Thursday, October 6, 2011

You've Got Mail! - The PFF File Format

My recent experimentation and blog post on the analysis of the Microsoft Extensible Storage Engine (ESE) database used by Microsoft Windows Desktop Search (WDS) prompted me to begin looking at other Microsoft database formats. Microsoft Outlook stores its data in the Personal Folder File (PFF) format. This includes the Personal Storage Table (PST) and Offline Storage Table (OST) files, which are commonly known as Outlook Data Files. The former (PST) is used in a non-enterprise setting when configuring Outlook with email services such as POP/SMTP, and the latter is created in enterprise environments when Outlook Cached Exchange Mode is enabled. Other forms of PFF include the Personal Address Book (PAB).

Joachim Metz has also done a fair amount of research on the PFF file structure as part of his libpff project. At the time of his research, the PFF file format was largely undocumented. In 2010, however, Microsoft published an open specification for the PFF format and made it available as part of the MSDN Library.

The first four bytes of the file header contain the file signature "!BDN" (0x21 0x42 0x44 0x4E). Bytes 9 and 10 contain the content type, which is 'SM' (0x534D) for PST and 'SO' (0x534F) for OST.
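As a quick illustration, that header check can be scripted. The sketch below is my own, not part of libpff, and assumes only the two fields described above: the signature at offset 0 and the content type as a little-endian 16-bit value at offset 8.

```python
import struct

def pff_type(header: bytes) -> str:
    """Classify a PFF header as PST or OST using the documented magic values."""
    if header[0:4] != b"!BDN":
        raise ValueError("not a PFF file")
    # The 16-bit content type is stored at offset 8 (bytes 9 and 10).
    content = struct.unpack_from("<H", header, 8)[0]
    return {0x534D: "PST", 0x534F: "OST"}.get(content, "unknown")

# A synthetic 16-byte header standing in for the start of a real PST file:
sample = b"!BDN" + b"\x00\x00\x00\x00" + struct.pack("<H", 0x534D) + b"\x00" * 6
print(pff_type(sample))  # PST
```

In practice you would read the first 16 bytes of the suspect file and pass them to the function.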

 
Metz's libpff pffexport utility will parse either file type. Once parsed, pffexport exports the following information for each message:
  • Internet Email Headers
  • Outlook Headers
  • Conversation Index
  • Recipients
  • Message Body 
  • Attachments 
Prior to Outlook 2007 there were three forms of file encryption available for PFF files: none, compressible, and high. Metz documents the following about the latter two options:
...actually more of a way to obfuscate the information in the PFF than real means to ensure confidentiality....
Microsoft's open specification document on the PST file structure also confirms Metz's findings on PFF encryption prior to Outlook 2007. Microsoft now recommends the use of the Encrypting File System (EFS) or BitLocker to secure these files. Versions of Outlook from 2007 onward use compressible encryption, and high encryption is no longer available.

Additionally, Microsoft Outlook allows users to set a password on their PST files. This password, however, is stored as a weak 32-bit Cyclic Redundancy Check (CRC-32) and is consequently subject to collisions. This has been known for quite some time, and Microsoft documents it as follows:
The PST Password, which is stored as a property value in the Message store, is a superficial mechanism that requires the client implementation to enforce the stored password. Because the password itself is not used as a key to the encoding and decoding cipher algorithms, it does not provide any security benefit to preventing the PST data to be read by unauthorized parties.
Metz clarifies this a bit further in his research: applications such as Microsoft Outlook conform to the password protection, but in reality none of the data is actually protected by the password. Consequently, the libpff pffexport utility can export all items stored in a PFF file without supplying the password.
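To see why a CRC-32 check value makes a poor password store, consider that any string whose CRC-32 matches the stored value will be accepted. The sketch below is illustrative only: it uses Python's zlib.crc32 as a stand-in for the exact CRC variant described in the MS-PST specification, and brute-forces short lowercase candidates.

```python
import itertools
import string
import zlib

def find_matching_password(stored_crc, max_len=3):
    """Return any lowercase string whose CRC-32 equals the stored check value."""
    for length in range(1, max_len + 1):
        for chars in itertools.product(string.ascii_lowercase, repeat=length):
            word = "".join(chars)
            if zlib.crc32(word.encode()) & 0xFFFFFFFF == stored_crc:
                return word
    return None

# Pretend the PST stores the CRC-32 of the real password "cat":
stored = zlib.crc32(b"cat") & 0xFFFFFFFF
print(find_matching_password(stored))  # cat
```

The recovered string need not be the original password; any colliding input satisfies the check, which is exactly the weakness being described.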

The libpff utility was able to parse the email headers and content on both the PST and OST files during my testing.


This certainly could be useful to forensic practitioners. The aforementioned lack of security of these files, however, got me thinking more about the use of products such as Outlook Anywhere (RPC over HTTP) in the corporate world. Outlook Anywhere allows users to access corporate email on their personal computers using Microsoft Outlook. Consequently, corporate email would be stored in a local PFF file on the user's home system. Unless whole disk encryption or other means are used to secure the file system, the potential risk to a corporation's intellectual property could be significant.

Happy Hunting!

Thursday, September 8, 2011

Windows Desktop Search Index

The Microsoft Extensible Storage Engine (ESE) database format is used by a variety of Microsoft services including Exchange, Windows Mail, Active Directory, and Windows Desktop Search. I recently began wondering what forensic artifacts might be indexed by Windows Desktop Search (WDS) and available to an analyst. By default, user documents and Internet Explorer history are indexed, but Outlook 2007/2010 also integrates with WDS. Consequently, this might be an additional source of email artifacts. While there can be a wealth of information available to a responder in an enterprise that utilizes Microsoft Exchange and any of a variety of email archiving solutions, the WDS ESE database may still be useful in non-enterprise settings.


After some searching, I came across Joachim Metz's research on the ESE format and WDS as part of the libesedb project. Metz documents the ESE database structure, data obfuscation, and compression thoroughly, so I am not going to summarize all of his research but fully recommend you read it if interested.

The libesedb project contains two tools: esedbinfo and esedbexport. esedbinfo provides detail about the structure of an ESE file, and esedbexport allows you to extract its tables for analysis. The following is an example of running esedbexport on the WDS database (the default location is C:\ProgramData\Microsoft\Search\Data\Applications\Windows\Windows.edb). Note that the Windows Search (WSearch) service needs to be stopped to access this file on a live system.


The SystemIndex_0A table contains the bulk of useful information. The following is an example of the Outlook Welcome email obtained from the parsed table.


To the best of my knowledge, it is unknown how long indexed data is kept, but I was able to obtain previously deleted emails from several days prior without issue. This included the full body of the email (see update below). Again, I am unsure how often a forensicator would need to utilize these artifacts. In addition to the aforementioned resources available in an enterprise, Microsoft Outlook also utilizes the Personal Folder File (PFF) format for Personal Storage Table (PST) and Offline Storage Table (OST) files, both commonly known as Outlook Data Files. The former (PST) is used in a non-enterprise setting when configuring Outlook with email services such as POP/SMTP, and the latter is created in enterprise environments when Outlook Cached Exchange Mode is enabled.

In addition to the libesedb project, Joachim Metz also runs the libpff project. His research there provides a tremendous amount of insight into the PFF file structure and usefulness.

So what do you say? Is the Microsoft ESE file format a useful artifact for file forensics?

Happy Hunting!

Updated: September 09, 2011

Dave Hull was kind enough to post a comment and share some of his experiences with WDS and deleted files. This got me to revisit my testing with a larger poking stick. After several hours I determined a few things about deleted emails and their effects on the WDS index.

First and foremost, I could not duplicate finding deleted emails in the index. I am unsure if my initial testing was flawed or if there are internal workings unknown to me. I did, however, note the following behavior when deleting emails.

When an email is sent to the Deleted Items folder in Outlook, the "System_IsDeleted" value is marked True and the "System_ItemFolderPathDisplay" value is changed to reflect the new location. This comes as no surprise, and it was the case with my initial testing and the example I gave of the Outlook Welcome email.

Once the email is removed from the Deleted Items folder, the index record is removed very quickly. I confirmed this multiple times. This leaves a missing DocID in the table which is eventually re-used for another index record. This is very similar to the behavior of the NTFS Master File Table when files or folders are deleted.

I re-read Joachim Metz's initial research, and he does mention that the WDS index can contain deleted file information and content but was unsure how long it is kept. He also mentions a table called "SystemIndex_DeletedDocIds" which contains the deleted DocIds in Windows Vista and above. Unfortunately, the esedbexport tool does not yet extract this table.

All things considered, a very interesting experiment.

Thursday, August 4, 2011

Carving Symantec VBN Files

Those of you who perform IT support or incident response are most likely intimate with corporate antivirus products. While the usefulness of antivirus can be debated, the purpose of this post is to provide some insight into the file structure of Symantec's quarantine files. It is not uncommon for an IT practitioner or an incident responder to restore a quarantined file and perform further analysis on it to verify the attacker's intent. Someone recently posted to the Windows Forensics email group about having issues restoring quarantined files from Symantec Endpoint Protection (SEP) 11, which prompted me to put together this quick post.

Symantec provides a utility called QExtract that allows for the extraction of quarantined files. Documentation on the syntax of the command line utility can be found in Symantec's online knowledge base. As an example, the following is the output obtained from using the /DETAILED switch with qextract.exe on a system on which the Mebroot rootkit payload was detected.



QExtract can restore the malicious file by using the session ID, file name, or risk name obtained from this output (see the aforementioned documentation for syntax). The utility works, but it is limited. It only runs on Windows, and you cannot point QExtract at an alternate source location. If SEP is not installed, the default path to the quarantine files must be created manually. Moreover, when restoring something from a quarantine file, the original path of the file must exist or restoration will fail.

The file structure of the quarantine files in Symantec's AV products has been known for some time, however; since 2007 there has been an EnCase script available that will extract these files. SEP quarantine files, also known as Virus Bin (VBN) files, are located in the C:\ProgramData\Symantec\Symantec Endpoint Protection\Quarantine folder. For the purpose of this post, I am looking at the detection of the aforementioned Mebroot rootkit. Some details, including hashes and statistics from VirusTotal, are as follows.
Symantec: Trojan.Mebroot
MD5: fd543137a51fc24e07e00f9bc7c3c06e
SHA1: 357ac149ba2c864a5f0fc2276c2fa437b5c5533b
http://www.virustotal.com/file-scan/report.html?id=43cafc4464ac08a6b1be53958be377c70ded28ed6f0602449fbd7872604074fe-1303095131
Looking at a VBN file using X-Ways WinHex Editor we see the file begins with the original location of the detected malware. At offset 0x00000184 (byte 388) SEP stores additional information on detection of the malicious file including the system name, original location/name of file, time of detection, and Symantec unique record ID.


At offset 0x00000E68 (byte 3688) we see something else: the data appears to have been obfuscated or encrypted. Note that the value 0x5A is common throughout the file. What are the chances that these bytes are actually spaces (0x20) and the data was XORed with the value 0x5A?


Using WinHex to XOR the data with the value 0x5A gives us the malicious file. Note the file signature 0x4D5A (MZ), which marks a Windows/DOS executable.


To carve out the Mebroot payload, simply copy the selected block to a new file and save it.
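The same carve can be scripted. This is a minimal sketch of my own, not a Symantec tool; the 0xE68 offset is what I observed in SEP 11 VBN files and may differ in other versions.

```python
XOR_KEY = 0x5A
PAYLOAD_OFFSET = 0xE68  # observed in SEP 11; other versions may differ

def carve_vbn(vbn_bytes, offset=PAYLOAD_OFFSET):
    """XOR-decode the payload region of a VBN file and carve from the MZ header."""
    decoded = bytes(b ^ XOR_KEY for b in vbn_bytes[offset:])
    start = decoded.find(b"MZ")  # DOS/Windows executable signature
    if start < 0:
        raise ValueError("no MZ signature found after XOR decode")
    return decoded[start:]

# Synthetic demonstration: obfuscate a fake payload the way SEP does, then carve it.
fake_vbn = b"\x00" * PAYLOAD_OFFSET + bytes(b ^ XOR_KEY for b in b"MZ\x90\x00fake-payload")
print(carve_vbn(fake_vbn)[:2])  # b'MZ'
```

Against a real VBN you would read the whole file into vbn_bytes and write the returned buffer out for analysis.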

I would imagine this will work with previous versions of Symantec AntiVirus Corporate Edition, but the offsets may differ. If anyone has any experience in that regard, let me know.

Happy Hunting!

Friday, July 29, 2011

Dear Diary: AntiMalwareLab.exe File_Created

I have previously posted about the usefulness of parsing the NTFS Master File Table during static malware analysis. The Master File Table ($MFT) is only one of the twelve metadata files in the NTFS file system, however. The $Extend object ($MFT record entry 11) is used for optional extensions to NTFS. Beginning with Windows 2000, Microsoft added change journaling ($UsnJrnl) to this list of NTFS extensions. $UsnJrnl is turned on by default in Windows Vista and 7 and records all changes that are made to the file system. It should be noted that the recorded changes do not include what specific data changed, just the type of change and a timestamp of when the change occurred. This can still be useful, however, when attempting to establish a timeline of malicious changes to a system.

The $UsnJrnl is stored on the root of the volume in the \$Extend\$UsnJrnl file. The file has two $DATA attributes: the $Max attribute, which contains general information about the journal, and the $J attribute, which contains the actual list of changes. Each journal record varies in size and includes an Update Sequence Number (USN). The USN is 64 bits in size and is stored in bytes 64-71 of the $STANDARD_INFORMATION ($SI) attribute of the file's $MFT record. The following output is an example xxd dump of the $SI attribute of a file named malicious.dll.

Searching a dd (raw) image for a suspected malicious file called malicious.dll with the Sleuth Kit (TSK) tool "fls" produces the $MFT record number of the file.
fls -f ntfs -r /media/Passport/Images/Image001.dd | grep malicious.dll

 ++ r/r 1618-128-1:    malicious.dll
Using this entry number (1618), we can display the $SI attribute (type 16) from the $MFT record with the TSK "icat" tool.
icat -f ntfs /media/Passport/Images/Image001.dd 1618-16 | xxd
The USN in the above example represents the record's byte offset in the $UsnJrnl (remember, each record varies in size). It should also be noted that the $UsnJrnl is a sparse file: it has a maximum size, but old records are overwritten with zeros and any updates are written to the end of the file, perpetually increasing the USN (which is based on the byte offset from the beginning of the file).

Microsoft MSDN has a fair amount of documentation on the structure of the $UsnJrnl $J file and what fields it stores. Additionally, Brian Carrier does a great job of breaking down the data structure and byte offsets in his book File System Forensic Analysis. The following is an example of a $UsnJrnl record structure.
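To make the layout concrete, here is a minimal sketch of my own (not one of the existing utilities) that parses a single USN_RECORD_V2 at a known offset, following the field order and sizes in the MSDN documentation.

```python
import struct
from datetime import datetime, timedelta, timezone

# A subset of the reason flags documented on MSDN.
REASONS = {
    0x00000001: "DATA_OVERWRITE",
    0x00000002: "DATA_EXTEND",
    0x00000100: "FILE_CREATE",
    0x00000200: "FILE_DELETE",
    0x00000800: "RENAME_NEW_NAME",
    0x80000000: "CLOSE",
}

def parse_usn_record_v2(buf, pos=0):
    """Parse one USN_RECORD_V2 beginning at byte offset pos."""
    (length, major, minor, frn, parent_frn, usn, filetime,
     reason, source, sec_id, attrs, name_len, name_off) = struct.unpack_from(
        "<IHHQQQQIIIIHH", buf, pos)
    name = buf[pos + name_off:pos + name_off + name_len].decode("utf-16-le")
    # FILETIME: 100-nanosecond ticks since 1601-01-01 UTC.
    when = datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=filetime // 10)
    return {"usn": usn, "name": name, "timestamp": when,
            "reasons": [n for bit, n in REASONS.items() if reason & bit]}
```

Feeding it a record built to the same layout (file name at offset 60, reason FILE_CREATE combined with CLOSE) returns the file name, timestamp, and decoded reason flags.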

We can obtain the $MFT entry address of the $Usnjrnl $J file by using the TSK "fls" tool (note: the $Extend Object will always be $MFT entry 11).
fls -f ntfs /media/Passport/Images/Image001.dd 11
Once the location of the $J file is obtained, the contents can be displayed by using the TSK "icat" tool as follows. Please note that the -h option skips holes in the sparse file.
icat -h -f ntfs /media/Passport/Images/Image001.dd 41455-128-3 | xxd
A quick search for our "malicious.dll" provides a good example of the structure a $UsnJrnl record.
Bytes 40-43 hold the USN reason (change) flags, which are well documented on MSDN. For reference purposes, the following table summarizes the types of flags and their hexadecimal values recorded in the $UsnJrnl.


There are a few utilities and scripts available to automate the parsing of these records, but for the purpose of this post I am using one I recently became aware of through the Windows Forensic Analysis email list. The Windows Journal Parser (JP) is available for Windows, Linux, and Mac. JP pulls the allocated clusters from the sparse file and parses the records. The information pulled includes the time/date of the change, the file/folder affected, and the type of change; the verbose option (-v) adds the $MFT entry number and sequence number. JP can parse the $UsnJrnl from a live volume, a dd image, or a carved $J file and export to a variety of formats.

I recently came across a compromised Windows 7 system and had the opportunity to use JP during analysis. The following is the location, hash values, and Virus Total stats of the malicious (unsigned) process that was found on the system.
File name: VD90c_2121.exe
Submission date: 2011-07-21 14:13:39 (UTC)
Result: 14 /43 (32.6%)
MD5   : c8a695e4c411af859fa358eabb4127d1
SHA1  : 78e10150b3fd91b199adf0457a2e3902bc70eaf6
SHA256: 54e80b6d08bedf9210e6a0cead297a36d34f12170568c672e70ff6f750a69a00
After parsing the $UsnJrnl with JP, I searched for the aforementioned malicious process and was quickly able to obtain a timeline of changes made during infection.

Within a few minutes of analyzing the output from the $UsnJrnl, I recognized some of the files and locations created as similar to those of a malicious program I analyzed last November and outlined here. This significantly reduced the time necessary to find the origin, payload, and other infection locations on disk.

It should be noted again that $UsnJrnl records are not kept indefinitely. Moreover, if a file is deleted, related $MFT entries may be overwritten. More information on carving old $UsnJrnl records from unallocated space, and on other $UsnJrnl parsing utilities, is posted over at the Forensics from the Sausage Factory blog. I recommend you check it out.

Happy Hunting!

References:

Carrier, Brian (2005). File System Forensic Analysis. Addison Wesley.
Microsoft MSDN USN Record Structure.

Friday, May 27, 2011

Virtualizing Raw Disk Images

I have heard a lot of people ask how to forensically handle raw (dd) disk images of systems that have been encrypted with whole disk encryption. Both PGP and TrueCrypt support the use of recovery/rescue ISOs to decrypt drives without booting the OS (note: an administrator passphrase is still required). So if you could boot the raw image in VMware, for example, you could mount the ISO and decrypt the image.

One Windows tool, Live View, can be used to convert dd images to a vmdk (Virtual Machine Disk) file. Live View was created at Carnegie Mellon University in 2009 but unfortunately has not been updated since. Consequently, there is no support for recent versions of Windows or of VMware Workstation and Server.

Fortunately, Tasos Laskos expanded on that work and created the raw2vmdk utility. raw2vmdk is an open-source, OS-independent (it requires JRE 1.6.0_18 or higher) command line utility that creates a vmdk file with the appropriate disk type parameters to let you boot directly from a dd image.

The readme outlines the syntax of the utility (note: if the disk type is not specified, it defaults to IDE).
java -Dtype=<ide|buslogic|lsilogic|legacyESX> -jar raw2vmdk.jar <raw image> <vmdk outfile>
Note the doubled backslashes when running the command on a Windows system.
java -jar raw2vmdk.jar D:\\data001.dd D:\\data001.vmdk
Once run, the analysis and creation of the vmdk file only takes a few seconds.


Raw2vmdk creates a properly formatted vmdk with the appropriate path to the raw image, disk type, and parameters.
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=5c643bba
parentCID=ffffffff
isNativeSnapshot="no"
createType="monolithicFlat"

# Extent description
RW 156301488 FLAT "D:\data001.dd" 0

# The Disk Data Base
#DDB

ddb.virtualHWVersion = "7"
ddb.longContentID = "bf304434123a064225efde635c643bba"
ddb.uuid = "60 00 C2 91 8e 73 27 62-43 58 3b f8 05 ae 2e a0"
ddb.geometry.cylinders = "1023"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "19"
ddb.adapterType = "ide"
The monolithicFlat disk type is a pre-allocated disk stored in a single file, which also accommodates raw dd images. Once the descriptor file has been created, create a new virtual machine as you normally would within VMware Workstation or Server and point the hard disk to the newly created vmdk file.
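Because the descriptor is plain text, generating one by hand (or with a few lines of script) is straightforward. The sketch below is my own and deliberately minimal; a fuller descriptor would also carry the CID, geometry, and UUID entries shown above, which raw2vmdk computes for you.

```python
DESCRIPTOR_TEMPLATE = """# Disk DescriptorFile
version=1
encoding="UTF-8"
parentCID=ffffffff
createType="monolithicFlat"

# Extent description
RW {sectors} FLAT "{image}" 0

# The Disk Data Base
#DDB

ddb.virtualHWVersion = "7"
ddb.adapterType = "{adapter}"
"""

def make_vmdk_descriptor(image_path, image_size, adapter="ide"):
    """Build a monolithicFlat descriptor pointing at a raw dd image."""
    if image_size % 512:
        raise ValueError("raw image is not a whole number of 512-byte sectors")
    # The extent line counts 512-byte sectors, matching raw2vmdk's output.
    return DESCRIPTOR_TEMPLATE.format(
        sectors=image_size // 512, image=image_path, adapter=adapter)

# The same ~80 GB image as above produces the identical extent line:
print(make_vmdk_descriptor(r"D:\data001.dd", 156301488 * 512).splitlines()[7])
```

Save the returned text with a .vmdk extension next to (or, better, away from) the raw image and point VMware at it.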


You should now be able to boot your image within VMware (assuming it includes the boot partition). A word of caution, however: always follow IR and forensics best practices and work from a second copy of your raw image. I also like to create the virtual machine and vmdk in a separate folder from the raw dd image, so that if the VM is accidentally deleted it does not take your raw disk image with it.

Happy Hunting.

Wednesday, May 18, 2011

Herding Cats: Windows Object Access Analysis on a Budget



I recently had to deal with a lot of archived Windows Security logs (evtx files) spanning a fairly lengthy period of time. The evtx binary format was introduced with Windows Vista and can be found on all modern versions of Windows. The author of EVTX Parser has posted his work documenting the evtx file structure here; his EVTX Parser utility parses evtx binaries and stores them as XML. A good overview of his research and tool is in a slide deck from the 2010 SANS Forensic Summit.

There are a few additional free tools available to search and filter Windows event logs if you don't have a log management product. While the Windows event viewer supports the import of multiple evtx files, I can tell you from experience that the MMC will choke if you feed it a large number of files. Moreover, there is limited support for many of the XPath string functions, such as "contains" and "starts-with", which can be a hindrance. All the same, I managed to come up with some useful expressions to query object access logs from Windows 7 and Windows Server 2008 R2.

Microsoft provides a decent spreadsheet of Windows security event IDs and some documentation on the schema of events. Looking at the XML of a few events, however, will certainly give you what you need.


When dealing with object access logs, you need to distinguish between the types of access granted on the file system and the registry. After much googling and experimentation, I managed to scrape together the following access mask values and their associated bitwise equivalents as used in the Windows event log. These are the permissions that were exercised on the audited object(s); the first number in each row is the ID of the %%-resolution string shown in the rendered event message, and the value in parentheses is the access mask bit.

1537 (0x10000) = Delete
4416 (0x1) = ReadData (or ListDirectory)
4417 (0x6) = WriteData (or AddFile) (0x2 on Windows Server 2008)
4418 (0x4) = AppendData (or AddSubdirectory)
4432 (0x1) = QueryKeyValue
4433 (0x2) = SetKeyValue
4434 (0x4) = CreateSubKey
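A small helper can translate these masks back into names when reviewing exported logs. This is an illustrative sketch of my own using the standard Windows access-mask bit values; note that an AccessMask of 0x6 is the WriteData bit (0x2) combined with the AppendData bit (0x4).

```python
FILE_ACCESS_BITS = {
    0x00000001: "ReadData/ListDirectory",
    0x00000002: "WriteData/AddFile",
    0x00000004: "AppendData/AddSubdirectory",
    0x00010000: "Delete",
}

REGISTRY_ACCESS_BITS = {
    0x00000001: "QueryKeyValue",
    0x00000002: "SetKeyValue",
    0x00000004: "CreateSubKey",
    0x00010000: "Delete",
}

def decode_access_mask(mask, registry=False):
    """Expand an event 4663 AccessMask into readable access names."""
    table = REGISTRY_ACCESS_BITS if registry else FILE_ACCESS_BITS
    return [name for bit, name in table.items() if mask & bit]

print(decode_access_mask(0x6))      # ['WriteData/AddFile', 'AppendData/AddSubdirectory']
print(decode_access_mask(0x10000))  # ['Delete']
```

The registry table applies to the same 4663 events when the audited object is a registry key, since the low bits carry different meanings there.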

So, for example, here is an expression to see all successful and failed modifications by a particular user on files and folders.
<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='SubjectUserName']='bugbear' and Data[@Name='AccessMask']='0x6']]</Select>
  </Query>
</QueryList>
After playing with different variations of this query, I got creative during dynamic analysis of the Renocide worm and its effects on the System32 folder and HKLM registry keys. After enabling auditing on both objects, I came up with the following query to produce all changes made by the payload and malicious process. Note the syntax used when working with an externally saved evtx file.
<QueryList>
  <Query Id="0" Path="file://C:\Worm.evtx">
    <Select Path="file://C:\Worm.evtx">*[System[Provider[@Name='Microsoft-Windows-Security-Auditing'] and EventID=4663 and (Task = 12800 or Task = 12801)] and EventData[Data[@Name='ProcessName']='\Device\HarddiskVolume2\02MAY2011\scffog.exe' or Data='C:\Windows\System32\csrcs.exe']]</Select>
  </Query>
</QueryList>
This produced some interesting logs I used for further analysis.


If filtering multiple archived evtx files, you can import the files into the MMC event viewer, create a view that includes them, and filter on that view. Don't expect to be able to work with a large amount of data, though; in fact, Microsoft will generate a warning if you attempt to import more than ten evtx files. Fortunately, there are faster and more flexible alternatives. Microsoft Log Parser will parse the binary format (specify evt as the input type). Specifying a wildcard in the filename will parse multiple files located in a specified folder, and Log Parser provides additional flexibility by allowing statements such as "LIKE". The following are valid data fields that can be used when parsing evt/evtx binaries.


Note: if filtering by user you will need to use the SID, and much of the event data, such as access masks, is combined into a single string in the "Message" field. The following is an example of a query that pulls events from multiple evtx binaries containing the specified WriteData and Delete access mask values.

LogParser.exe -i:evt -o:csv "Select * from C:\Logs\*.evtx where EventID=4663 and (Message Like '%Access Mask: 0x6%' or Message Like '%Access Mask: 0x10000%')" > C:\Logs\Out.csv

Another alternative is Windows PowerShell. The following is a similar example to the one given above (all WriteData and Delete access masks) using the Get-WinEvent cmdlet and a Where-Object filter.

get-winevent -path "C:\Logs\Comp1.evtx", "C:\Logs\Comp2.evtx" | where { $_.Id -eq 4663 -and ($_.Message -like "*0x10000*" -or $_.Message -like "*0x6*") } > C:\Logs\Out.csv

Using "| Format-List" provides a view of the data fields available for use in the "where" filter.


While not ideal, the IT practitioner or incident responder can certainly wrangle evtx files without a SIEM or log management system. The recent release of the Verizon DBIR report (2011) included a statement on page 60 that notes an interesting but not unexpected finding.

"...discovery through log analysis and review has dwindled down to 0%. So the good news is that things are only looking up from here..." - Verizon DBIR 2011

Happy Hunting!

Updated May 19, 2011

I intentionally did not provide any detail on enabling Object Access auditing in Windows since there is a fair amount of documentation available on that. In retrospect, however, I did want to mention a few things and share a few tips.

First, choose carefully which accesses you audit. Accesses such as "List Folder/Read Data" are very noisy and will only increase the amount of logs you have to parse; they may also fill up the event log completely so that it begins to overwrite itself (note: the maximum size of the log is configurable as well).

Second, consider carefully which user or group you audit access for. The "Users" group may be fine for auditing access to files stored on a file server, but consider using the "Everyone" group if auditing changes made by malicious code, since that group includes the System account.

Lastly, enabling auditing of changes to the system folders or registry may become resource intensive and unmanageable in a production environment, so use it with caution. That said, I do believe it can be useful during analysis of malicious code. I would include a few more locations than just System32 and HKLM, however; C:\Users, C:\ProgramData, and the HKCU keys come to mind.

Friday, May 13, 2011

Renocide Worm: Hiding in Plain Sight

I recently came across a sample of Renocide, which has been circulating for some time now. Microsoft recently published some of its infection numbers on the MSRT blog if you are interested. The malicious code takes advantage of the AutoRun settings in Windows and spreads via mapped drives and USB storage devices. VirusTotal shows decent coverage by the AV industry. While not particularly unique, I did note something interesting when I parsed the NTFS $MFT during analysis: the malicious code seems to manipulate NTFS $MFT timestamps on several malicious files it creates in the %windir%\System32 folder. The following screenshot shows the $MFT attributes for the process csrcs.exe, which the payload creates.

csrcs.exe (MD5: 989460dc5f8ac5c886078f50720d71e8)

A few things struck me about the time manipulation. While it is not unusual to find the $SI born (creation) and modified attributes altered, I have never seen the $FN born attribute changed. A closer look at the hex values of the $SI born attribute revealed something else.


The $SI born time of "20e6 980c a303 ca01" converts to 2009-07-13 06:16:55.938000. The sub-second value is not zero, which is unusual. My first thought was that the date/time values were copied from another file, but while the date mirrors other system files, the time correctly coincides with the time of infection. Things that make you go hmm.
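For reference, that conversion is a straight little-endian FILETIME decode (100-nanosecond ticks since 1601-01-01). The snippet below yields the timestamp in UTC, which corresponds to the 06:16:55 local (UTC-4) time above.

```python
import struct
from datetime import datetime, timedelta, timezone

def filetime_bytes_to_datetime(raw):
    """Convert 8 little-endian FILETIME bytes to a UTC datetime."""
    ticks = struct.unpack("<Q", raw)[0]  # 100 ns ticks since 1601-01-01
    return datetime(1601, 1, 1, tzinfo=timezone.utc) + timedelta(microseconds=ticks // 10)

born = bytes.fromhex("20e6980ca303ca01")  # the $SI born bytes above
print(filetime_bytes_to_datetime(born))   # 2009-07-13 10:16:55.938000+00:00
```

The non-zero 938000-microsecond component falls straight out of the low-order ticks, which is what makes a hand-set timestamp stand out.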

Sunday, March 27, 2011

An Overdue Rant: The RSA Compromise

OK I haven't had a good rant in a while on the blog, so be warned, there may be some pent up rage in the paragraphs ahead. Read on at your risk.


I do not usually write posts on the latest compromise, as I always feel there is enough coverage, speculation, and commentary from people smarter than I. There is a lot of speculation about the recently announced RSA breach, both on the technical details of the compromise and on who may have been behind the attack. Yes, everyone is throwing three-letter acronyms around again. The Digital Underground podcast recently posted a great discussion on the technical side here, and there have been some good posts on mitigation techniques.

The part I really take issue with is RSA's lackluster disclosure of this compromise. Some have suggested that they should be praised for publicly announcing the breach. I'm not sure when we set the bar so low. Since when did posting a written notification with vague details and little to no information on when and what was compromised, and who is affected, become acceptable?

A lot of organizations have paid a lot of money to increase the security of their information systems and data by purchasing the RSA SecurID solution. Don't forget, even if you're not a customer of RSA (disclosure: I am not), it is still your family's data being protected by such solutions. In short, I find RSA's actions post-compromise disgusting and inept.

While knowing the technical details of the compromise would benefit the security community by giving everyone an opportunity to learn where things went wrong, the reality is we will probably never know the details, and this is OK with me. What RSA needs to do, however, is step up and fix where things went wrong, notify affected clients, and offer them replacements or fixes for the technology they already purchased. Thus far, the advice given by RSA is nothing more than best practice and common sense. I would like to think those implementing RSA's authentication solutions are already familiar with such administrative controls.

To use a bad analogy: this is the equivalent of a new homeowner hiring a master locksmith to replace all the locks in their new home with a more secure solution, only to have the locksmith keep a copy of the keys and tell the customer at a later date that the keys have been stolen and that the customer should go buy a bigger guard dog or better alarm system at their own expense. Would this be acceptable?

Not the greatest analogy, but I did say there were more intelligent people than I posting about this, didn't I?

The truth is, everyone gets owned at some time or another. It is the actions of the compromised organization during the aftermath that distinguish it from its competitors. Asking other security solution providers to sign an NDA to learn more about the compromise is not looking out for the best interests of your customers.

/Rant

Updated June 01, 2011

It appears that there may have been several attacks against U.S. defense contractors that leveraged information from the RSA compromise. Last Friday, Reuters reported that there was a breach at Lockheed Martin Corporation. On Monday, Wired reported that L-3 Communications had also been targeted, and a leaked memo suggested the attackers were using inside information on its SecurID system gained from the RSA hack. Today, Fox News is reporting a possible attack against Northrop Grumman. With all these reports flooding the internet it is difficult to know how much is based on fact, but I did want to share a gem of a quote from the Wired report.
Asked if the RSA intruders did gain the ability to clone SecurID keyfobs, RSA spokeswoman Helen Stefen said, “That’s not something we had commented on and probably never will.”
Updated June 7, 2011

It appears RSA has updated its Open Letter to RSA SecurID Customers. The update provides verification of the Lockheed Martin attack and offers long-awaited replacements of SecurID tokens, although for what appears to be a limited subset of SecurID customers. Thanks to Wim Remes for the heads-up on the updated post.

Thursday, March 24, 2011

Pauldotcom Security Weekly: I am Talking about What?

On Thursday, March 24, 2011 I will be presenting the tech segment on Episode 236 of PaulDotCom Security Weekly. The segment will cover the use of NTFS MFT timeline forensics in the static analysis of malware. This is a geekier version of my NAISG BOS presentation back in January and will cover some additional tools and techniques. The podcast begins around 8:00 PM and a live feed is available at http://www.pauldotcom.com/live. So if you are around, kick back with a beer and a cigar, and listen live! I am looking forward to it.

Updated March 24, 2011 3:30 PM

As part of the tech segment this evening, Mark McKinnon of RedWolf Computer Forensics has released the Windows beta of mft_parser, which supports $MFT $SI and $FN bodyfile output from both the CLI and GUI. Big thanks to Mark from the incident response and forensics community.
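If you have not worked with bodyfile output before, the idea is simple: each row carries a file's four NTFS timestamps, and flattening those into one sorted stream is what gives you the timeline. Below is a minimal sketch of that step, assuming input in the standard Sleuth Kit 3.x bodyfile format (MD5|name|inode|mode|UID|GID|size|atime|mtime|ctime|crtime, times as Unix epoch seconds); the function name and sample row are my own, not mft_parser's.

```python
from datetime import datetime, timezone

def parse_bodyfile(lines):
    """Flatten TSK 3.x bodyfile rows into a sorted list of (epoch, label, name)."""
    events = []
    labels = ["atime", "mtime", "ctime", "crtime"]
    for line in lines:
        fields = line.rstrip("\n").split("|")
        if len(fields) != 11:
            continue  # skip malformed rows
        name = fields[1]
        # Fields 7-10 are atime, mtime, ctime, crtime
        for label, raw in zip(labels, fields[7:11]):
            ts = int(raw)
            if ts <= 0:
                continue  # unset timestamps are recorded as 0
            events.append((ts, label, name))
    return sorted(events)

# Hypothetical row for illustration only
sample = [
    "0|/WINDOWS/system32/evil.dll|1042|r/rrwxrwxrwx|0|0|24576|1295900000|1295900000|1295900100|1295899900",
]
for ts, label, name in parse_bodyfile(sample):
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(), label, name)
```

In practice you would feed this the bodyfile emitted by mft_parser (or fls) and then eyeball clusters of creation times around the suspected infection window; comparing $SI against $FN timestamps is what exposes timestomping.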

Thursday, January 27, 2011

Shmoocon or Bust

What would you do to get to Shmoo?

Woke up at 4:00 AM                  
2.5 hours shovelling snow
1 hour to get to train station
1 hour on local commuter rail to BOS
2 minutes to find my train to PVD cancelled
1 hour on the first commuter rail to PVD
Finding my 12:50 train to DC only five minutes late = priceless

In about four hours I will be at Shmoocon and it will be epic. This year's schedule contains a lot of fresh blood and new faces (which is not a bad thing IMHO). The schedule is so packed with goodness that I am going to have to make some tough decisions on which sessions to attend. In addition, the after-hours action is packed full of awesomesauce. There is the return of Firetalks on both Friday and Saturday evening, the Podcasters meetup (including free booze), Jason Scott is previewing his new documentary called Get Lamp on Saturday evening (the first computer program I wrote was a text-based Adventure game on my TI-99/4A), and of course there are the parties and meet-ups that will certainly include scotch and cigars.

On Friday we begin with Gone in 60 Minutes: Stealing Sensitive Data from Thousands of Systems Simultaneously with OpenDLP with Andrew Gavin. Leveraging enterprise defense products = sexy in my book. Following that there are several cool sessions, including a long-awaited update from Johnny Long (who is back in the states for the con), and a keynote by Peiter "Mudge" Zatko of DARPA.

On Saturday, I am hoping that Jon Oberheide and Zach Lanier have the cure for my much anticipated hangover with their talk, TEAM JOCH vs. Android: The Ultimate Showdown, which will highlight their work on subverting the Android OS. I plan to follow up with Hard Drive Paperweight: Recovery from a Seized Motor! being delivered by Scott Moulton. Scott is a super smart dude who never disappoints. I am guaranteed to learn something there.

Printers Gone Wild! with Ben Smith is in the next slot. Next I need to make some of those tough decisions I mentioned earlier. There is Attacking 3G and 4G mobile telecommunications networks with Enno Rey & Daniel Mende, and An Evite from Surbo? Probably an invitation for trouble with Trent Lo aka "Surbo" from i-hacked.com. There is no doubt that mobile tech has definitely come of age and consequently will become a target, but Trent is also a smart, entertaining dude. Then at 16:00 there is Defeating mTANs for profit with Axelle Apvrille and Kyle Yang (mTAN = one-time bank password by SMS) and G W Ray Davidson's talk on designing a network for a conference, entitled ShmooCon Labs Goes To College. Both decisions will most likely come down to the wire. On Sunday, the talk that seems to be on everyone's agenda is Georgia Weidman's Transparent Botnet Control for Smartphones Over SMS, in which she will release a POC for an SMS-controlled botnet.

Total estimated time to get to the con = 15 hours (and worth it). See you in a few hours, Shmoocon.

Friday, January 14, 2011

NAISG: Leveraging NTFS Master File Table Timeline Forensics in the Analysis of Malware

What is in your incident response kit?

Next week I am delivering a talk at the Boston chapter of the National Information Security Group (NAISG) on Thursday, January 20, 2011. I will be speaking on the use of NTFS Master File Table timeline forensics in the analysis of malware. The meeting and talk are open to everyone and more information can be found here. If you are in the Boston area, come down and check it out. NAISG will post the talk and slides at a later date and I will make sure I link back to them here.

Updated: February 1, 2011

NAISG has posted the video for my presentation here. The slide deck can be found on Slideshare here. I also wanted to say thank you to NAISG Boston chapter for having me. It was a blast!