Monday, December 19, 2011

DFIR Online Meetup

Last week, I had the opportunity to attend the first DFIR online meetup.  The meetup was hosted by Mike Wilkinson of Champlain College and featured a presentation from Mike on an interesting assault case, as well as a presentation from Harlan Carvey on accessing VSCs.  I really liked the way the technology was set up.  There were two chat areas: one directed toward the speaker/presenter and a general chat area for everything else.  As Mike mentioned in his blog, the conversation died down after the recording started, but I think that may also have been because the presentations had begun and people were paying attention to the speakers instead.

Mike’s case study was an interesting assault case involving a machine that had four OS’s on two hard drives.  The main issue in the case centered on computer use during a specific timeframe.  I won’t go into details, but the presentation really drove home the fact that you should know your tools and understand how they present data to you as an examiner.  Time conversions from UTC to local time, or vice versa, can have a significant impact on a case if you’re not aware that they’re happening in the background.

Harlan’s presentation went through the steps necessary to mount and access data stored in volume shadow copies.  He’s outlined the steps on his blog before, but it’s nice to hear it straight from the source to help reinforce the process.  What made this even better is that if you had a specific question about mounting and accessing VSCs, or an issue you’d run into in the past when working with them, you could ask it during or after the presentation and receive an answer from Harlan or one of the other attendees who may have dealt with a similar situation.

There were around 30 people present at the meetup, which was nice in the sense that it had more of a small-group feel than the huge-seminar atmosphere many of us are used to when attending online events.  On the other hand, it would be nice to have more attendees in the future to increase the pool of knowledge for questions and answers.  Overall, it was a great experience and I’ll be ready for another one on January 19th.

Tuesday, November 29, 2011

Using Log Parser in Timeline Analysis

Timeline analysis has become a key component of many, if not most, forensic examinations nowadays.  Whether you're using the four-step process detailed by Chris Pogue at The Digital Standard (or one of the many other great sources online), using fls for file system timestamps, or a mix somewhere in between, the output will typically be the same: you'll end up with a csv file, either as direct output from log2timeline or from running mactime against a bodyfile encompassing your timeline data.

Corey Harrell from Journey Into IR posted a great article on using Excel filtering and advanced filters to drill down into the timeline for relevant or key information, and there are a few other posts out there discussing a similar approach.  There are surely many ways that an examiner can filter or otherwise eliminate irrelevant data from the timeline, but I would like to discuss one in particular here.

For those who haven't used it, Log Parser is a free tool published by Microsoft (written by Gabriele Giuseppini) that interprets data files as records that can be readily queried using SQL commands.  Although Log Parser can interpret many types of logs and other data files, what specifically interests me in the realm of timeline analysis is its ability to query csv files.  Given the ability to treat a timeline as a SQL table, a forensic examiner who has an idea of what they're targeting can easily, and more importantly quickly, drill down to the timeline information that is relevant to the case.

For those of you who may not be very well versed when it comes to SQL, rest assured that by no means do I consider myself an expert in constructing SQL queries.  That's the beauty here - you don't have to be.  You do need a basic understanding of building SQL queries and the fundamentals of using Log Parser, but in the end we're just performing basic queries.  If you need to gain a better understanding of SQL, there are tons of resources on the web (not to mention several books on the topic).  There are also plenty of resources on Log Parser (and at least one book that I know of).

One nuance that you have to deal with when using Log Parser on a csv created by mactime is column header naming.  As a side note, Log Parser provides two virtual columns that can be used if needed: "FileName" and "RowNumber".  "FileName" refers to the csv file taken as input and processed by Log Parser, while "RowNumber" refers to the actual row number within the csv file where a match is found.  Mactime, meanwhile, creates a column header titled "File Name".  The space between "File" and "Name" can be a nuisance to deal with in SQL, so I'll simply open the csv file in Notepad++ and change "File Name" to "File_Name".  This makes the queries easier and a bit cleaner.
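If you'd rather not open each timeline in an editor, the header rename is easy to script.  Here's a minimal Python sketch of the same fix; the function name and file paths are just examples, not anything from Log Parser or mactime themselves.

```python
def fix_header(in_path, out_path):
    """Rewrite a mactime csv so the "File Name" header becomes "File_Name".

    Only the first line (the header row) is touched; the data rows are
    copied through unchanged.
    """
    with open(in_path, "r", encoding="utf-8") as src:
        lines = src.readlines()
    if lines:
        lines[0] = lines[0].replace("File Name", "File_Name")
    with open(out_path, "w", encoding="utf-8") as dst:
        dst.writelines(lines)

# Example (paths are illustrative):
# fix_header("timeline.csv", "timeline_fixed.csv")
```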

Let's move into an example.  We'll assume that our timeline is already in csv format and that we used mactime against a bodyfile for the conversion (as opposed to the csv output module of log2timeline).  Note that the timeline used in this example is only a file system dump.  Suppose you were interested in files referencing CCleaner from any time in 2011.  By running the following query, you could view the matching rows in a table (which Log Parser calls a datagrid).  Note that I specified only four columns in order to make the screenshot easier to read.
logparser -o:datagrid "SELECT File_Name, Date, Size, Type FROM timeline.csv WHERE File_Name LIKE '%ccleaner%' AND Date LIKE '%2011%'"

Alternatively, you could export the result directly into another csv file by using the command below.
logparser "SELECT * INTO C:\timelineData\CCleaner2011.csv FROM timeline.csv WHERE File_Name LIKE '%ccleaner%' AND Date LIKE '%2011%'"

I will often start by viewing the results in the datagrid provided by Log Parser, and then export the rows to a separate csv file after I have verified that the query does indeed return the data that I'm interested in.  
The ability to run a SQL query against my timeline often greatly reduces the time and effort that I need to find relevant information.
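Log Parser is Windows-only, but the underlying idea - loading a timeline csv into a table and querying it with SQL - is easy to reproduce elsewhere.  As a rough sketch (not part of the Log Parser workflow above), Python's standard-library sqlite3 module can do the same kind of filtering; the function and file names here are illustrative.

```python
import csv
import sqlite3

def query_timeline(csv_path, sql, params=()):
    """Load a mactime csv into an in-memory SQLite table named "timeline"
    and run an arbitrary SQL query against it.

    Spaces in header names (e.g. "File Name") are replaced with
    underscores, mirroring the manual rename described above.
    """
    con = sqlite3.connect(":memory:")
    with open(csv_path, newline="", encoding="utf-8") as fh:
        reader = csv.reader(fh)
        headers = [h.replace(" ", "_") for h in next(reader)]
        cols = ", ".join('"%s"' % h for h in headers)
        con.execute("CREATE TABLE timeline (%s)" % cols)
        placeholders = ", ".join("?" for _ in headers)
        con.executemany(
            "INSERT INTO timeline VALUES (%s)" % placeholders, reader
        )
    rows = con.execute(sql, params).fetchall()
    con.close()
    return rows

# Usage, mirroring the CCleaner example (SQLite's LIKE is
# case-insensitive for ASCII, so '%ccleaner%' matches 'CCleaner'):
# rows = query_timeline(
#     "timeline.csv",
#     "SELECT File_Name, Date, Size, Type FROM timeline "
#     "WHERE File_Name LIKE '%ccleaner%' AND Date LIKE '%2011%'",
# )
```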

While neither timeline analysis nor Log Parser is anything new, I think the two coupled together make for an efficient means of analyzing timeline data.  This of course assumes that the examiner has an idea of what they're targeting.  So the next time you're working with a timeline, consider running a few queries against it using Log Parser.