Tuesday, November 26, 2013

Finding Cryptolocker Encrypted Files using the NTFS Master File Table

For the most part, everyone seems to be familiar with the new variants of Cryptolocker making the rounds these days. To quickly summarize, this form of ransomware encrypts documents and pictures found on local and mapped network drives in an attempt to obtain payment for the decryption keys. The attackers are using decent encryption and the malware is very efficient. A good write-up can be found here.


Recently, I dealt with an infection and during forensic analysis noted that the NTFS Master File Table $SI Creation and Modified dates remained unchanged on the encrypted files. I made a note of this for later and circled back around during post analysis.


Since the infection not only encrypted all the documents on the user's local drive but also files located on mapped file shares, I decided to grab the MFT from the Windows file server. Using analyzeMFT and MFTParser, I was able to parse the 9 GB $MFT in a reasonable time frame. Identifying some known encrypted files by the $FN file name, I noted that the only date in the MFT record that coincided with the infection was the MFT Entry Date, or the date the MFT record itself was modified. Using this, I filtered out all records whose $SI or $FN time stamps preceded it.
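The filtering step above is easy to script against a parsed MFT export. The sketch below assumes a CSV export (e.g. from csv.DictReader over an analyzeMFT-style file); the column names and timestamp format are assumptions you would adjust to match whatever your parser actually emits.

```python
from datetime import datetime

DATE_FMT = "%Y-%m-%d %H:%M:%S"  # assumed timestamp format in the export

def suspect_records(rows, infection_start):
    """Given parsed MFT rows as dicts, return file names whose $SI/$FN
    timestamps all predate the record's own entry-modified date, with
    that entry date falling inside the infection window. Column names
    here are hypothetical; match them to your parser's header."""
    hits = []
    for row in rows:
        entry = datetime.strptime(row["mft_entry_date"], DATE_FMT)
        stamps = [datetime.strptime(row[c], DATE_FMT)
                  for c in ("si_created", "si_modified",
                            "fn_created", "fn_modified")]
        # Tunneled/encrypted files: only the MFT record itself was
        # touched during the infection window.
        if entry >= infection_start and all(s < entry for s in stamps):
            hits.append(row["filename"])
    return hits
```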


As a result, I was able to identify over 4,400 encrypted files on the file share. Not bad for an infection that only lasted a few hours before being caught by the most recent antivirus signature. Load up the backup tapes, boys!

Happy Hunting!

Updated November 27, 2013 2:15 PM

After exchanging a few emails with some people in the industry, I think what we are seeing here is an example of File System Tunneling. Specifically, if a file is removed and replaced with a file of the same name (in the same folder) on an NTFS drive within fifteen seconds (the NTFS default), it retains the original NTFS attributes. I have seen this before with other Trojans as a way to avoid detection. Just an educated hunch. More info on File System Tunneling can be found here. Thanks to all who responded to me.

Updated November 27, 2013 3:00 PM

Just a quick update with some of the IOCs (Indicators of Compromise: MD5, SHA1, Location) for this particular variant:

2a790b8d3da80746dde3f5c740293f3e    7d27c048df06b586f43d6b3ea4a6405b2539bc2c    \\.\PHYSICALDRIVE1\Partition 2 [305043MB]\NONAME [NTFS]\[root]\ProgramData\Symantec\SRTSP\Quarantine\APEA53C866.exe
f1e2de2a9135138ef5b15093612dd813    ea64129f9634ce8a7c3f5e0dd8c2e70af46ae8a5    \\.\PHYSICALDRIVE1\Partition 2 [305043MB]\NONAME [NTFS]\[root]\Users\%userprofile%\AppData\Local\Temp\e483.tmp.exe
714e8f7603e8e395b6699cea3928ac81    36f40d0be83410e911a1f4231eeef4e863551cee    \\.\PHYSICALDRIVE1\Partition 2 [305043MB]\NONAME [NTFS]\[root]\Users\%userprofile%\AppData\Roaming\dhsjabss\dhsjabss
17610024a03e28af43085ef7ad5b65ba    77f9d6e43b8cb1881396a8e1275e75e329ca7037    \\.\PHYSICALDRIVE1\Partition 2 [305043MB]\NONAME [NTFS]\[root]\Users\%userprofile%\AppData\Roaming\dhsjabss\egudsjba.exe
621f35fd095eff9c5dd3e8c7b7514c1e    f03233e323f9a49354f2d6c565b6ec95595cc950    \\.\PHYSICALDRIVE1\Partition 2 [305043MB]\NONAME [NTFS]\[root]\Users\%userprofile%\Desktop\Iqbcxbvszzgdxjvbp.bmp
I also wanted to comment on using software restriction policies in Windows to block executables from running from locations such as C:\Users\%userprofile%\AppData\Local\Temp. With no local admin rights, users only have the ability to write to three locations on modern versions of Windows (by default). These are:

C:\$Recycle.Bin
C:\ProgramData
C:\Users\%Userprofile%

The attackers know this and 99% of infections I see in my environment are using these locations efficiently (including this one).

Unfortunately, a lot of legitimate software also uses these locations. So suggestions such as Software Restriction Policies to stop execution from these locations may or may not be realistic in a large enterprise environment. I suspect adding rules to check whether the executable is legitimately signed would reduce false positives, though I do see malicious code signed on occasion. In conclusion, there is no silver bullet here, but I personally plan to explore these defenses more and will update what I find as I do.

Lastly, some online posts about this malware have mentioned the use of the HKEY_CURRENT_USER\Software\CryptoLocker location in the Windows registry as a way to determine which files have been encrypted. I just wanted to mention that I did carve the ntuser.dat file from the compromised system and noted that this location did exist in the registry. It did not, however, contain any entries on what files were encrypted.

Updated December 05, 2013 3:00 PM

Since Michael Mimoso over at Threatpost was kind enough to link back to my post, I thought I would return the favor: Forensics Method Quickly Identifies CryptoLocker Encrypted Files

Thursday, May 23, 2013

ZAccess/Sirefef.P Artifacts

I wanted to share a few interesting artifacts from two ZAccess/Sirefef.P compromises I recently had to deal with. In both infections, malicious files were written to hidden subdirectories located in the User and System accounts' $Recycle.Bins. Much like other variants of this Trojan, these files were injected into legitimate processes including explorer.exe and services.exe. At first I thought the infection had mucked with the permissions of the hidden subdirectories within the $Recycle.Bin, but then I noticed the S-1-5-18 SID, indicating the use of the SYSTEM account.





The first compromise went a step further and overwrote the Wdf01000.sys driver under \SystemRoot\System32\Drivers. I would have missed this had I not dumped the NTFS Master File Table and used the $SI Entry Date when creating my timeline. Because the existing file was overwritten, it would appear the other NTFS timestamps were preserved due to File System Tunneling (ref: KB172190 and the WIR Blog). A very interesting artifact indeed.


The first variant loaded some typical fake antivirus into the C:\ProgramData folder. Nothing new there, but with the second variant, I noted the creation of a lot of Internet cache files under \SystemRoot\System32\Config\Systemprofile\AppData in what appeared to be click fraud.


Overall, a couple of interesting variants that I enjoyed playing with. Here are the hashes for reference.
[root]\$Recycle.Bin\S-1-5-18\$a032773db1be215125d280696ec7b357\n
MD5: 3aaac8a9352dde4e2073a7814514bd9d
SHA1: 321132983c3fc25448e19ae63e65cb127f28c5b7 

[root]\$Recycle.Bin\S-1-5-18\$f58c247e192203c97063e19c12229833\n
MD5: cfaddbb43ba973f8d15d7d2e50c63476
SHA1: 34206a971fe3cbb1acf2ce8bb9f145bfd78e256e 
Happy Hunting!

Tuesday, February 5, 2013

The WTF of the Week

Last week someone mentioned to me that you can retrieve your wireless SSID and encryption key for the Verizon FIOS Actiontec router via the support menu of the TV guide. Somehow this did not surprise me, and since the router is most likely sitting next to your TV, physical access has been gained anyway. But it did get me curious about whether Verizon allowed this functionality via the customer's online account. Five minutes later... well, I'll just let these screen shots speak for themselves.





Now I could insert a rant here, but really, what's the point? We all know this is just a bad idea. So remember to shred your bills or, as I prefer, put them to good use in the fire pit on a Saturday night.




Happy Hunting!

Wednesday, October 17, 2012

Incident Response in 3.08 MB

I don't normally post anything on specific software products but occasionally I come across a commercial tool that truly excites me. One recent example is a tool called Carbon Black from Kyrus. I had participated in the beta testing of the product last year and I recently decided to revisit the production release.

For years, defensive strategies I helped to implement, such as least privilege, patch management, user account control, and system hardening, have kept the majority of malicious binaries off the hosts I have supported. Recently, however, these defenses seem to be working less and less. The bad guys are getting better, and I suspect this is because organizations are implementing the aforementioned strategies much more efficiently and consistently, which has forced the attackers to adapt.

Attackers have graduated to using exploits against third-party software and browser plugins such as Java and Flash. They are writing to the Microsoft Windows user profile and HKCU registry keys when local administrator rights are not present. It seems to be working well, and organizations I speak with are left relying on lagging AV and IPS signatures for detection and prevention. The issue is compounded for smaller companies that do not have a full-time IR team in place.

The idea behind Carbon Black (CB) is to monitor code execution. A small Windows agent is deployed to each host throughout the enterprise. This agent hashes each process and monitors its sub-processes, module loads, registry edits, file writes, and network connections. Digital signatures and the activity of each binary are stored on the CB server.

The interface is well thought out and intuitive. You can quickly filter and drill up or down through the relational data based on any of the aforementioned data points. Once potential indicators have been identified, it is easy to correlate the related activity.

For example, there was a recent string of well-done phishing emails that got past my org's spam filters. Claiming to be from ADP Internet Services, the email contained a malicious link that brought the unsuspecting user to a web server hosting a JAR file.


The user, realizing the error of her actions, forwarded the email to me. Our corporate AV and IPS never detected the incident. Using CB to filter for unsigned files, I determined that an exe was dropped to the temp folder in the Windows user profile.


From there I was able to quickly drill down to the sub-processes loaded, file writes, and registry edits. Not only did I know exactly what was changed on the system, but I now had the MD5s of the indicators.




Using these hashes to filter for processes and sub-processes on all my hosts, I could determine if anyone else clicked the link and was compromised.
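Outside of CB, the same hash sweep can be done with a short script. This is a minimal sketch, not CB's mechanism; the IOC value shown is a placeholder (the MD5 of an empty file), standing in for the real indicators pulled from the drill-down.

```python
import hashlib

# Placeholder IOC set: d41d... is the MD5 of an empty file, used here
# as a stand-in for real indicator hashes.
IOC_MD5S = {"d41d8cd98f00b204e9800998ecf8427e"}

def md5_of(path):
    """Hash a file in chunks so large binaries don't load into memory."""
    h = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path):
    return md5_of(path) in IOC_MD5S
```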

CB FTW!

The team at CB has also started to add some plugins to the toolkit. These include an autoruns checker, VirusTotal submission using the VT API, and CSV data exports, to list just a few. These have great potential and I cannot wait to see more developed. Additionally, I would like to see support for *nix and OS X. But overall, I think the tool is a fantastic asset and I am looking forward to demoing it to the rest of my team.

Happy Hunting!

Thursday, October 6, 2011

You've Got Mail! - The PFF File Format

My recent experimentation and blog post on the analysis of the Microsoft Extensible Storage Engine (ESE) database used by Microsoft Windows Desktop Search (WDS) prompted me to begin looking at other ways Microsoft utilizes the ESE file format. Microsoft Outlook also utilizes the ESE in the form of the Personal Folder File (PFF) format. This includes the Personal Storage Table (PST) and Offline Storage Table (OST) files, which are commonly known as Outlook Data Files. The former (PST) is used in a non-enterprise setting when configuring Outlook with email services such as POP/SMTP, and the latter is created in enterprises when Outlook cached mode is enabled. Other forms of PFF include the Personal Address Book (PAB).

Joachim Metz has also done a fair amount of research on the PFF file structure as part of his libpff project. During the time of his research, the PFF file format was largely unknown. In 2010, however, Microsoft published the open specification on the PFF format and made it available as part of the MSDN Library.

The first four bytes of the file header contain the file signature "!BDN" (0x2142444E). The 9th and 10th bytes contain the content type, which is 'SM' (0x534D) for PST and 'SO' (0x534F) for OST.
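Those two header fields are enough to triage an unknown Outlook data file; a minimal sketch using the offsets just described:

```python
def pff_type(path):
    """Classify an Outlook data file by its PFF header: bytes 0-3 hold
    the '!BDN' signature and bytes 8-9 the content type
    ('SM' = PST, 'SO' = OST)."""
    with open(path, "rb") as fh:
        header = fh.read(10)
    if len(header) < 10 or header[:4] != b"!BDN":
        return None  # not a PFF file
    return {b"SM": "PST", b"SO": "OST"}.get(header[8:10], "unknown")
```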

 
Metz's libpff pffexport utility will parse either file type. Once parsed, pffexport exports the following information for messages:
  • Internet Email Headers
  • Outlook Headers
  • Conversation Index
  • Recipients
  • Message Body 
  • Attachments 
Prior to Outlook 2007 there were three forms of file encryption available for PFF files: none, compressible, and high. Metz documents the following about the two latter options:
...actually more of a way to obfuscate the information in the PFF than real means to ensure confidentiality....
Microsoft's Open Specification document on the PST file structure also confirms Metz's findings on PFF encryption prior to Outlook 2007. Microsoft now recommends the use of the Encrypting File System (EFS) or BitLocker to secure these files. Versions of Outlook after 2007 use compressible encryption, and high encryption is no longer available.

Additionally, Microsoft Outlook allows users to set a password on their PST files. This password, however, is a weak 32-bit Cyclic Redundancy Check (CRC-32) and consequently is subject to collisions. This has been known for quite some time, and Microsoft has documented it:
The PST Password, which is stored as a property value in the Message store, is a superficial mechanism that requires the client implementation to enforce the stored password. Because the password itself is not used as a key to the encoding and decoding cipher algorithms, it does not provide any security benefit to preventing the PST data to be read by unauthorized parties.
Metz clarifies this a bit more in his research: applications such as Microsoft Outlook conform to the password protection, but in reality none of the data is actually protected by the password. Consequently, the libpff pffexport utility can export all items stored in the PFF file without supplying the password.

The libpff utility was able to parse the email headers and content on both the PST and OST files during my testing.


This certainly could be useful to forensics practitioners. The aforementioned lack of security of these files, however, got me thinking more about the use of products such as Outlook Anywhere (RPC over HTTP) in the corporate world. Outlook Anywhere allows users to access corporate email on their personal computers using Microsoft Outlook. Consequently, corporate email is stored in a local PFF file on the user's home system. Unless whole disk encryption or other means are being used to secure the file system, the potential risk to the corporation's intellectual property could be significant.

Happy Hunting!

Thursday, September 8, 2011

Windows Desktop Search Index

The Microsoft Extensible Storage Engine (ESE) database is used by a variety of Microsoft services including Exchange, Windows Mail, Active Directory, and Windows Desktop Search. I recently began wondering what forensic artifacts might be indexed by Windows Desktop Search (WDS) and available to an analyst. By default, user documents and IE internet history are indexed, but Outlook 2007/2010 also integrates with WDS. Consequently, this might be an additional source of email artifacts. While there can be a wealth of information available to a responder in an enterprise that utilizes Microsoft Exchange and any of a variety of email archiving solutions, the WDS ESE database may still be useful in non-enterprise settings.


After some searching, I came across Joachim Metz's research on the ESE format and WDS as part of the libesedb project. Metz documents the ESE database structure, data obfuscation, and compression thoroughly. Consequently, I am not going to summarize all of his research, but I fully recommend you read it if interested.

The libesedb project contains two tools: esedbinfo and esedbexport. Esedbinfo provides detail about the structure of the ESE file, and esedbexport allows you to extract the tables for analysis. The following is an example of running esedbexport on the WDS database (the default location is C:\ProgramData\Microsoft\Search\Data\Applications\Windows\Windows.edb). It should be noted that the Windows Search (WSearch) service needs to be stopped to access this file on a live system.
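Before feeding a copied or carved file to the libesedb tools, it can be worth a quick header sanity check. A minimal sketch, assuming the documented ESE header layout (a 4-byte checksum followed by the 0x89ABCDEF file signature at offset 4):

```python
import struct

ESE_SIGNATURE = 0x89ABCDEF  # stored little-endian at offset 4 of the header

def looks_like_ese(path):
    """Sanity-check a file before handing it to esedbinfo/esedbexport:
    an ESE database header begins with a 4-byte checksum followed by
    the 0x89ABCDEF signature."""
    with open(path, "rb") as fh:
        header = fh.read(8)
    return (len(header) == 8 and
            struct.unpack("<I", header[4:8])[0] == ESE_SIGNATURE)
```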


The SystemIndex_0A table contains the bulk of useful information. The following is an example of the Outlook Welcome email obtained from the parsed table.


To the best of my knowledge, it is unknown how long indexed data is kept, but I was able to obtain previously deleted emails from several days prior without issue. This included the full body of the email (see update below). Again, I am unsure how often a forensicator would need to utilize these artifacts. In addition to the aforementioned resources available in an enterprise, Microsoft Outlook also utilizes the Personal Folder File (PFF) format for Personal Storage Table (PST) and Offline Storage Table (OST) files. These are both commonly known as Outlook Data Files. The former (PST) is used in a non-enterprise setting when configuring Outlook with email services such as POP/SMTP, and the latter is created in enterprises when Outlook cached mode is enabled.

In addition to the libesedb project, Joachim Metz also runs the libpff project. His research there provides a tremendous amount of insight into the PFF file structure and usefulness.

So what do you say? Is the Microsoft ESE file format a useful artifact for file forensics?

Happy Hunting!

Updated: September 09, 2011

Dave Hull was kind enough to post a comment and share some of his experiences with WDS and deleted files. This got me to revisit my testing with a larger poking stick. After several hours I determined a few things about deleted emails and their effects on the WDS index.

First and foremost, I could not duplicate finding deleted emails in the index. I am unsure if my initial testing was flawed or if there are internal workings unknown to me. I did, however, note the following behavior when deleting emails.

When an email is sent to the Deleted Items folder in Outlook, "System_IsDeleted" is marked True and the "System_ItemFolderPathDisplay" value is changed to reflect the new location. This comes as no surprise. This was the case in my initial testing and the example I gave of the Outlook Welcome email.

Once the email is removed from the Deleted Items folder, the index record is removed very quickly. I confirmed this multiple times. This leaves a missing DocID in the table, which is eventually re-used for another index record. This is very similar to the behavior of the NTFS Master File Table when files or folders are deleted.

I re-read Joachim Metz's initial research, and he does mention that the WDS index can contain deleted file information and content, but he was unsure how long this is kept. He also mentions a table called "SystemIndex_DeletedDocIds", which contains the deleted DocIds in Windows Vista and above. Unfortunately, the esedbexport tool does not seem to extract this table as of yet.

All things considered, a very interesting experiment.

Thursday, August 4, 2011

Carving Symantec VBN Files

Those of you who perform IT support or incident response are most likely intimate with corporate antivirus products. While the usefulness of antivirus can be debated, the purpose of this post is to provide some insight into the file structure of Symantec's quarantine files. It is not uncommon for an IT practitioner or an incident responder to restore and perform further analysis on a malicious file to verify the attacker's intent. Someone recently posted to the Windows Forensics email group about having issues restoring quarantined files from Symantec Endpoint Protection (SEP) 11, which prompted me to put together this quick post.

Symantec does provide a utility called QExtract that allows for the extraction of quarantined files. Documentation on the syntax of the command line utility can be found in Symantec's online knowledge base. As an example, the following is the output obtained from using the /DETAILED switch with qextract.exe on a system that the Mebroot rootkit payload was detected on.



QExtract can restore the malicious file by using the session ID, file name, or risk name obtained from this output (see the aforementioned documentation for syntax). The utility works, but it is limited. It only runs on Windows, and you cannot point QExtract to an alternate source location. If SEP is not installed, the default path to the quarantine files must be created manually. Moreover, when restoring something from a quarantine file, the original path of the file must exist or restoration will fail.

The file structure of the quarantine files in Symantec's AV products has been known for some time, however. Since 2007 there has been an EnCase script available that will extract these files. SEP quarantine files, also known as Virus Bin (VBN) files, are located in the C:\ProgramData\Symantec\Symantec Endpoint Protection\Quarantine folder. For the purpose of this post, I am looking at the detection of the aforementioned Mebroot rootkit. Some details, including hashes and statistics from VirusTotal, are as follows.
Symantec: Trojan.Mebroot
MD5: fd543137a51fc24e07e00f9bc7c3c06e
SHA1: 357ac149ba2c864a5f0fc2276c2fa437b5c5533b
http://www.virustotal.com/file-scan/report.html?id=43cafc4464ac08a6b1be53958be377c70ded28ed6f0602449fbd7872604074fe-1303095131
Looking at a VBN file using the X-Ways WinHex editor, we see the file begins with the original location of the detected malware. At offset 0x00000184 (byte 388), SEP stores additional information on the detection, including the system name, original location/name of the file, time of detection, and Symantec's unique record ID.


At offset 0x00000E68 (byte 3688) we see something else. It appears that the data has been obfuscated or encrypted. Note that the value 0x5A is common throughout the file. What are the chances that these are actually spaces (0x20) and the data was XOR'd with the value 0x5A?


Using WinHex to inverse XOR with the value 0x5A gives us the malicious file. Note the file signature 0x4D5A (MZ), which marks a Windows/DOS executable file.


To carve out the Mebroot payload, simply copy the selected block to a new file and save it.
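The manual WinHex steps can also be scripted. A minimal sketch: the 0xE68 payload offset and single-byte 0x5A key are taken from this particular sample and may differ between SEP versions.

```python
def carve_vbn(vbn_path, out_path, offset=0xE68, key=0x5A):
    """Seek to the payload offset, XOR every byte with the key, verify
    the MZ signature, and write the decoded executable out. The offset
    was observed in this sample and is not guaranteed elsewhere."""
    with open(vbn_path, "rb") as fh:
        fh.seek(offset)
        decoded = bytes(b ^ key for b in fh.read())
    if decoded[:2] != b"MZ":
        raise ValueError("decoded data does not begin with an MZ header")
    with open(out_path, "wb") as fh:
        fh.write(decoded)
    return len(decoded)
```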

I would imagine this will work with previous versions of Symantec Corporate Edition, but the offsets may differ. If anyone has any experience in that regard, let me know.

Happy Hunting!

Friday, July 29, 2011

Dear Diary: AntiMalwareLab.exe File_Created

I have previously posted about the usefulness of parsing the NTFS Master File Table during static malware analysis. The Master File Table ($MFT) is only one of the twelve metadata files in the NTFS file system, however. The $Extend object ($MFT record entry 11) is used for optional extensions to NTFS. Beginning with Windows 2000, Microsoft added change journaling ($UsnJrnl) to this list of NTFS extensions. $UsnJrnl is turned on by default in Windows Vista and 7, and records all changes made to the file system. It should be noted that the recorded changes do not include what specific data changed, just the type of change and a time stamp of when it occurred. This can still be useful when attempting to establish a timeline of malicious changes to a system.

The $UsnJrnl is stored on the root of the volume in the \$Extend\$UsnJrnl file. The file has two $DATA attributes: the $Max attribute, which contains general information about the journal, and the $J attribute, which contains the actual list of changes. Each journal record varies in size and includes an Update Sequence Number (USN). The USN is 64 bits in size and is stored in bytes 64-71 of the $STANDARD_INFORMATION ($SI) attribute in the $MFT. The following output is an example of the $SI xxd dump of a file named malicious.dll.
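Pulling that USN out of a dumped $SI attribute is a one-liner once you have the raw bytes (e.g. from icat, as shown below). A small sketch, assuming the Vista-and-later 72-byte $SI layout with the USN in bytes 64-71:

```python
import struct

def usn_from_si(si_attr):
    """Extract the 64-bit USN stored at bytes 64-71 of a raw
    $STANDARD_INFORMATION attribute body."""
    return struct.unpack_from("<Q", si_attr, 64)[0]
```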

Searching a dd (raw) image for a suspected malicious file called malicious.dll with The Sleuth Kit (TSK) tool "fls" produces the $MFT record number of the file.
fls -f ntfs -r /media/Passport/Images/Image001.dd | grep malicious.dll

 ++ r/r 1618-128-1:    malicious.dll
Using this entry number (1618), we can display the $SI attribute (type 16) from the $MFT record with the TSK "icat" tool.
icat -f ntfs /media/Passport/Images/Image001.dd 1618-16 | xxd
The USN, in the above example, represents the record's byte offset in the $UsnJrnl (remember each record varies in size). It should also be noted that the $UsnJrnl is a sparse file, meaning it has a maximum size, but old records are overwritten with zeros and any updates are written to the end of the file, perpetually increasing the USN (which is based on the byte offset from the beginning of the file).

Microsoft MSDN has a fair amount of documentation on the structure of the $UsnJrnl $J file and the fields it stores. Additionally, Brian Carrier does a great job of breaking down the data structure and byte offsets in his book File System Forensic Analysis. The following is an example of a $UsnJrnl record structure.
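That record structure can be decoded directly with struct. A sketch of a USN_RECORD_V2 parser per the MSDN layout (fixed fields in the first 60 bytes, reason flags at bytes 40-43, then a UTF-16 file name at FileNameOffset); the reason-flag names are a subset of those MSDN documents:

```python
import struct

# USN reason flags from MSDN's USN_RECORD documentation (a subset).
USN_REASONS = {
    0x00000001: "DATA_OVERWRITE",
    0x00000002: "DATA_EXTEND",
    0x00000100: "FILE_CREATE",
    0x00000200: "FILE_DELETE",
    0x00001000: "RENAME_OLD_NAME",
    0x00002000: "RENAME_NEW_NAME",
    0x80000000: "CLOSE",
}

def parse_usn_record(buf):
    """Parse one USN_RECORD_V2 from a raw $J buffer and expand the
    packed reason DWORD into readable flag names."""
    (rec_len, major, minor, file_ref, parent_ref, usn, timestamp,
     reason, source_info, security_id, attrs,
     name_len, name_off) = struct.unpack_from("<IHHQQQQIIIIHH", buf, 0)
    return {
        "length": rec_len,
        "usn": usn,
        "filetime": timestamp,  # raw FILETIME, not converted here
        "reasons": [n for bit, n in sorted(USN_REASONS.items())
                    if reason & bit],
        "filename": buf[name_off:name_off + name_len].decode("utf-16-le"),
    }
```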

We can obtain the $MFT entry address of the $UsnJrnl $J file by using the TSK "fls" tool (note: the $Extend object will always be $MFT entry 11).
fls -f ntfs /media/Passport/Images/Image001.dd 11
Once the location of the $J file is obtained, the contents can be displayed by using the TSK "icat" tool as follows. Please note that the -h option skips holes in the sparse file.
icat -h -f ntfs /media/Passport/Images/Image001.dd 41455-128-3 | xxd
A quick search for our "malicious.dll" provides a good example of the structure of a $UsnJrnl record.
Bytes 40-43 are the USN_CHANGE flag, which is well documented on MSDN. For reference purposes, the following table summarizes the types of flags and their hexadecimal values recorded in the $UsnJrnl.


There are a few utilities and scripts available to automate the parsing of the records, but for the purpose of this post I am using one I recently became aware of through the Windows Forensic Analysis email list. The Windows Journal Parser (JP) is available for Windows, Linux, and Mac. JP pulls the allocated clusters from the sparse file and parses the records. Information pulled includes the time/date of the change, the file/folder affected, and the type of change; the verbose option (-v) adds the $MFT entry number and sequence number. JP is able to parse the $UsnJrnl from a live volume, a dd image, or a carved $J file and export to a variety of formats.

I recently came across a compromised Windows 7 system and had the opportunity to use JP during analysis. The following are the location, hash values, and VirusTotal stats of the malicious (unsigned) process that was found on the system.
File name: VD90c_2121.exe
Submission date: 2011-07-21 14:13:39 (UTC)
Result: 14 /43 (32.6%)
MD5   : c8a695e4c411af859fa358eabb4127d1
SHA1  : 78e10150b3fd91b199adf0457a2e3902bc70eaf6
SHA256: 54e80b6d08bedf9210e6a0cead297a36d34f12170568c672e70ff6f750a69a00
After parsing the $UsnJrnl with JP, I searched for the aforementioned malicious process and was quickly able to obtain a timeline of changes made during infection.

Within a few minutes of analyzing the output from the $UsnJrnl, I recognized some of the files and locations created as being similar to a malicious program I analyzed last November and outlined here. This significantly reduced the time necessary to find the origin, payload, and other infection locations on disk.

It should be noted again that $UsnJrnl records are not going to be kept indefinitely. Moreover, if a file is deleted, related $MFT entries may be overwritten. More info on carving old $UsnJrnl records from unallocated space, and on other $UsnJrnl parsing utilities, is posted over at the Forensics From the Sausage Factory blog. I recommend you check it out.

Happy Hunting!

References:

Carrier, Brian (2005). File System Forensic Analysis. Addison-Wesley.

Microsoft MSDN USN Record Structure.

Friday, May 27, 2011

Virtualizing Raw Disk Images

I have heard a lot of people ask how to forensically handle raw (dd) disk images of systems that have been encrypted with whole disk encryption. Both PGP and TrueCrypt support the use of recovery/rescue ISOs to decrypt drives without booting the OS (note: an administrator passphrase is still going to be required). So if you can boot the raw image in VMware, for example, you can mount the ISO and decrypt the image.

One Windows tool, Live View, can be used to convert dd images to a VMDK (Virtual Machine Disk Format) file. Live View was created at Carnegie Mellon University in 2009, but it unfortunately has not been updated since then. Consequently, there is no support for modern versions of Windows or VMware Workstation or Server.

Fortunately, Tasos Laskos expanded on that work and created the raw2vmdk utility. Raw2vmdk is an open source, OS-independent (requires JRE 1.6.0_18 or higher) command line utility that can create a vmdk file with the appropriate disk type parameters to let you boot directly from a dd image.

The readme outlines the syntax of the utility (note: if the disk type is not specified, it defaults to IDE).
java -Dtype=<ide|buslogic|lsilogic|legacyESX> -jar raw2vmdk.jar <raw image> <vmdk outfile>
Note the doubled backslashes when running the command on a Windows system.
java -jar raw2vmdk.jar D:\\data001.dd D:\\data001.vmdk
Once run, the analysis and creation of the vmdk file only takes a few seconds.


Raw2vmdk creates a properly formatted vmdk with the appropriate path to the raw image, disk type, and parameters.
# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=5c643bba
parentCID=ffffffff
isNativeSnapshot="no"
createType="monolithicFlat"

# Extent description
RW 156301488 FLAT "D:\data001.dd" 0

# The Disk Data Base
#DDB

ddb.virtualHWVersion = "7"
ddb.longContentID = "bf304434123a064225efde635c643bba"
ddb.uuid = "60 00 C2 91 8e 73 27 62-43 58 3b f8 05 ae 2e a0"
ddb.geometry.cylinders = "1023"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "19"
ddb.adapterType = "ide"
The monolithicFlat disk type is a pre-allocated disk type stored in one file, and this format also supports raw dd images. Once the creation of the file is complete, create a new virtual machine as you normally would within VMware Workstation or Server and point the hard disk to the newly created vmdk file.
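If you cannot run raw2vmdk, a descriptor like the one above can also be generated directly. This is a minimal sketch, not raw2vmdk's implementation: it assumes 512-byte sectors, and the CHS geometry values are placeholders that a real tool would derive from the image.

```python
import os

DESCRIPTOR_TEMPLATE = """# Disk DescriptorFile
version=1
encoding="UTF-8"
CID=fffffffe
parentCID=ffffffff
createType="monolithicFlat"

# Extent description
RW {sectors} FLAT "{raw_path}" 0

# The Disk Data Base
#DDB

ddb.adapterType = "{adapter}"
ddb.geometry.cylinders = "1023"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
"""

def write_descriptor(raw_path, vmdk_path, adapter="ide"):
    """Emit a minimal monolithicFlat descriptor pointing at a raw dd
    image. Sector count assumes 512-byte sectors."""
    sectors = os.path.getsize(raw_path) // 512
    with open(vmdk_path, "w") as fh:
        fh.write(DESCRIPTOR_TEMPLATE.format(
            sectors=sectors, raw_path=raw_path, adapter=adapter))
    return sectors
```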


You should now be able to boot your image within VMware (assuming it includes the boot partition). A word of caution, however: always follow IR and forensics best practices and use a second copy of your raw image. I also like to create the virtual machine and vmdk in a separate folder from the raw dd image, so that if the VM is accidentally deleted it does not also delete your raw disk image.

Happy Hunting.

Wednesday, May 18, 2011

Herding Cats: Windows Object Access Analysis on a Budget



I recently had to deal with a lot of archived Windows Security logs (evtx files) spanning a fairly lengthy period of time. The evtx binary format was introduced with Windows Vista and can be found on all modern versions of Windows. The author of EVTX Parser has posted his work documenting the evtx file structure here and has created a utility that will parse evtx binaries and store them as XML. A good overview of his research and tool is posted in a slide deck from the SANS Forensic Summit in 2010.

There are a few additional free tools available to search and filter Windows event logs if you don't have a log management product. While the Windows Event Viewer supports the import of multiple evtx files, I can tell you through experience that the MMC will puke if you feed it a large number of files. Moreover, there is limited support for many of the XPath string functions, such as "contains" and "starts-with", which can be a hindrance. All the same, I managed to come up with some useful expressions to query object access logs from Windows 7 and 2008 R2 Server.

Microsoft provides a decent spreadsheet on Windows Security Event IDs and some documentation on the schema of events. Looking at the XML of a few events, however, will certainly give you what you need.


When dealing with object access logs, you are going to need to distinguish between the types of access granted on the file system and registry. After much googling and experimentation, I managed to scrape together the following Access Mask values and their associated bitwise equivalents used in the Windows event log. These are the permissions that were exercised on the audited object(s).

1537 (0x10000) = Delete
4416 (0x1) = ReadData (or List Directory)
4417 (0x6) = WriteData (or Add File)  (0x2 on Windows 2008 Server)
4418 (0x4) = AppendData (or AddSubdirectory)
4432 (0x1) = Query Key Value
4433 (0x2) = Set Key Value
4434 (0x4) = Create Sub Key
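As a sanity check, these masks can be decoded programmatically. The following Python sketch assumes the standard single-bit file-system values (on that reading, the 0x6 mask seen on Windows 7 would be WriteData combined with AppendData); it covers only the subset listed above.

```python
# Decode a Windows object-access Access Mask into access names.
# Only the file-system subset listed above is covered; the full set
# of access rights in Microsoft's documentation is larger.
FILE_ACCESS_BITS = {
    0x1: "ReadData (or List Directory)",
    0x2: "WriteData (or Add File)",
    0x4: "AppendData (or AddSubdirectory)",
    0x10000: "Delete",
}

def decode_access_mask(mask):
    """Return the names of all known access bits set in mask."""
    return [name for bit, name in sorted(FILE_ACCESS_BITS.items()) if mask & bit]

print(decode_access_mask(0x6))      # WriteData and AppendData
print(decode_access_mask(0x10000))  # Delete
```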

So, for example, if you need to write an expression to see all successful and failed modifications by a particular user on files and folders:
<QueryList>
<Query Id="0" Path="Security">
<Select Path="Security">*[EventData[Data[@Name='SubjectUserName']='bugbear' and Data[@Name='AccessMask']='0x6']]</Select>
</Query>
</QueryList>
After playing with different variations of this query, I began to get creative during dynamic analysis of the Renocide worm and its effects on the System32 folder and HKLM registry keys. After enabling auditing on both objects, I came up with the following query to produce all changes made by the payload and malicious process. Note the syntax used when working with an externally saved evtx file.
<QueryList>
<Query Id="0" Path="file://C:\Worm.evtx">
<Select Path="file://C:\Worm.evtx">*[System[Provider[@Name='Microsoft-Windows-Security-Auditing'] and EventID=4663 and (Task = 12800 or Task = 12801)] and EventData[Data[@Name='ProcessName']='\Device\HarddiskVolume2\02MAY2011\scffog.exe' or Data='C:\Windows\System32\csrcs.exe']]</Select>
</Query>
</QueryList>
This produced some interesting logs I used for further analysis.
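If the events have been exported to XML (for example with EVTX Parser), the same kind of filter can be reproduced outside the Event Viewer. A rough Python sketch follows; the trimmed sample record is hypothetical, though the element names follow the Windows event schema.

```python
import xml.etree.ElementTree as ET

# Filter exported event XML for 4663 (object access) events generated by a
# specific process -- roughly what the XPath query above does inside the
# Event Viewer. The sample below is a hypothetical, heavily trimmed record.
NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

sample = r"""<Events xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <Event>
    <System><EventID>4663</EventID><Task>12800</Task></System>
    <EventData>
      <Data Name="ProcessName">C:\Windows\System32\csrcs.exe</Data>
      <Data Name="AccessMask">0x2</Data>
    </EventData>
  </Event>
  <Event>
    <System><EventID>4624</EventID></System>
  </Event>
</Events>"""

def matching_events(xml_text, event_id, process_name):
    """Return Event elements with the given EventID and ProcessName."""
    root = ET.fromstring(xml_text)
    hits = []
    for event in root.findall("e:Event", NS):
        if event.findtext("e:System/e:EventID", namespaces=NS) != str(event_id):
            continue
        for data in event.findall("e:EventData/e:Data", NS):
            if data.get("Name") == "ProcessName" and data.text == process_name:
                hits.append(event)
    return hits

print(len(matching_events(sample, 4663, r"C:\Windows\System32\csrcs.exe")))  # 1
```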


If filtering multiple archived evtx files, you can import them into the MMC Event Viewer, create a custom view that includes them, and filter on that view. But don't expect to be able to work with a large amount of data; in fact, Microsoft will generate a warning if you attempt to import more than ten evtx files. Fortunately, there are faster and more flexible alternatives. Microsoft Log Parser will parse the binary format (specify evt as the input type). Specifying a wildcard in the filename will parse multiple files located in a specified folder, and Log Parser also provides additional flexibility by allowing the use of statements such as "LIKE". The following are valid data fields that can be used when parsing evt/evtx binaries.


Note: if filtering by user you will need to use the SID, and much of the event data, such as access masks, is combined into a single string in the "Message" data field. The following is an example of a query that will pull events from multiple evtx binaries that contain the specified WriteData and Delete Access Mask values.

LogParser.exe -i:evt -o:csv "Select * from C:\Logs\*.evtx where EventID=4663 and (Message Like '%Access Mask: 0x6%' or Message Like '%Access Mask: 0x10000%')" > C:\Logs\Out.csv
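Since the access mask only appears inside the free-text "Message" field, the exported CSV can also be post-processed with a regular expression. A small Python sketch, using hypothetical message strings in place of real rows:

```python
import re
from collections import Counter

# Pull the "Access Mask: 0x..." value back out of each Message string and
# tally events per mask. The sample messages below are made up for
# illustration; real 4663 messages carry much more text.
MASK_RE = re.compile(r"Access Mask:\s*(0x[0-9A-Fa-f]+)")

messages = [
    "An attempt was made to access an object. ... Access Mask: 0x6 ...",
    "An attempt was made to access an object. ... Access Mask: 0x10000 ...",
    "An attempt was made to access an object. ... Access Mask: 0x6 ...",
]

counts = Counter()
for msg in messages:
    match = MASK_RE.search(msg)
    if match:
        counts[match.group(1)] += 1

print(dict(counts))  # {'0x6': 2, '0x10000': 1}
```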

Another alternative is Windows PowerShell. The following is a similar example to the one given above (all WriteData and Delete Access Masks) using the Get-WinEvent cmdlet and a Where-Object filter.

get-winevent -path "C:\Logs\Comp1.evtx", "C:\Logs\Comp2.evtx" | where {$_.Id -eq 4663 -and ($_.message -like "*0x10000*" -or $_.message -like "*0x6*")} > C:\Logs\Out.csv

Using "| Format-List" provides a view of the data fields available for use with the "Where" statement.


While not ideal, the IT practitioner or incident responder can certainly wrangle with evtx files without a SIEM or log management system. The recent release of the Verizon DBIR report (2011) included a statement on page 60 that notes an interesting but not unexpected finding.

"...discovery through log analysis and review has dwindled down to 0%. So the good news is that things are only looking up from here..." - Verizon DBIR 2011

Happy Hunting!

Updated May 19, 2011

I intentionally did not provide any detail on enabling Object Access auditing in Windows since there is a fair amount of documentation available on that. In retrospect, however, I did want to mention a few things and share a few tips.

First, choose which accesses you audit carefully. Accesses such as "List Folder/Read Data" are very noisy and will only increase the amount of logs you have to parse; they may even fill the event log completely so that it begins to overwrite itself (note: there are settings for the size of the log too).

Second, consider carefully which user or group you audit access for. The "Users" group may be fine for auditing access to files stored on a file server, but consider using the "Everyone" group if auditing changes made by malicious code, since that group includes the System account.

Lastly, enabling auditing of changes to the system folders or registry may become resource intensive and non-manageable in a production environment. Use with caution. That said, I do believe it can be useful during analysis of malicious code. I would include a few more locations than just the System32 and HKLM however. The C:\Users, C:\ProgramData, and HKCU keys come to mind.