Exchange Forensics

Introduction

Email is the number one form of communication in corporate environments. In 2015 alone, the number of business emails sent and received per day was estimated at over 112 billion [1], and employees spend on average 13 hours per week in their email inbox [2]. Unfortunately, emails are at times also misused for illegitimate communication. When the concept of email was designed, security was not the main focus of its inventors, and some of the design shortcomings are still problematic today. Senders rarely use encryption, and receivers cannot check the integrity of unprotected emails. Not even the metadata in the header of an email can be trusted, as an attacker can easily forge this information. Even though many attempts have been made to secure email communication, a lot of unsecured emails are still sent every day. This is one of the reasons why attackers continue to exploit weaknesses in email communication. In our experience, many forensic investigations involve an attacker either stealing or leaking information via email, or an employee unintentionally opening malware received via email. Once this has happened, there is no way around a forensic investigation to answer key questions such as: who did what, when, and how? Because many corporate environments use Microsoft Exchange as their mailing system, we cover some basics on the forensic artifacts the Microsoft Exchange environment provides.

Microsoft Exchange Architecture

In order to understand the different artifacts, we first take a look at the basic Microsoft Exchange architecture and the components involved. The diagram below shows the architectural concepts of the on-premises version of Exchange 2016. Edge Transport servers form the perimeter of the email infrastructure; they handle external email flow and apply antispam and email flow rules. Database availability groups (DAGs) form the heart of the Exchange environment: they contain a group of Mailbox servers and host a set of databases. The Mailbox servers contain the transport services used to route emails. They also contain the client access services, which are responsible for routing or proxying connections to the corresponding backend services on a Mailbox server; clients never connect directly to the backend services. When a client sends an email through the Microsoft Exchange infrastructure, it always traverses at least one Mailbox server.

Figure: Exchange 2016 architecture (Microsoft) [3]

Compliance Features

Microsoft Exchange provides multiple compliance features. Each of these compliance features provides a different set of information to an investigator, and it is important to have a basic understanding of their behavior in order to know which feature can answer which question. The most important compliance features are covered in the following paragraphs.

Message Tracking

The message tracking compliance feature writes a record of all activity into a log file as emails flow through Mailbox servers and Edge Transport servers. These logs contain details about the sender, the recipients, the message subject, and the date and time. By default, message tracking logs are kept for a maximum of 30 days, or less if the log directory grows beyond its maximum size of 1000 MB, in which case the oldest log files are overwritten first.
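Both limits can be adjusted per server in the Exchange Management Shell. A minimal sketch, assuming a Mailbox server named "MBX01" (a placeholder), that inspects the current settings and extends the retention to improve forensic readiness:

# Inspect the current message tracking settings ("MBX01" is a placeholder)
Get-TransportService "MBX01" | Format-List MessageTrackingLog*
# Keep tracking logs for 60 days or up to 2 GB instead of the defaults
Set-TransportService "MBX01" -MessageTrackingLogMaxAge 60.00:00:00 -MessageTrackingLogMaxDirectorySize 2GB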

The following example shows the message tracking log entries created when the user "alice@csnc.ch" sends a message with the MessageSubject "Meeting" to the user "bob@csnc.ch". Note that in this example both users have their mailboxes on the same server.

EventId    Source      Sender        Recipients    MessageSubject
-------    ------      ------        ----------    --------------
NOTIFYMAPI STOREDRIVER               {}
RECEIVE    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
SUBMIT     STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
HAREDIRECT SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
RECEIVE    SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
AGENTINFO  AGENT       alice@csnc.ch {bob@csnc.ch} Meeting
SEND       SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
DELIVER    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting

The message content is not stored as part of the message tracking logs. By default, the subject line of an email message is stored in the tracking logs; however, this can be disabled in the configuration settings. [4]
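Entries like the ones above can be retrieved with the Get-MessageTrackingLog cmdlet. A sketch using the sender and subject from the example (the server name "MBX01" and the start date are placeholders), followed by the setting that disables subject logging:

# Search the tracking logs for the example message
Get-MessageTrackingLog -Server "MBX01" -Sender "alice@csnc.ch" -MessageSubject "Meeting" -Start "06/01/2016 00:00:00"
# Disable storing subject lines in the tracking logs
Set-TransportService "MBX01" -MessageTrackingLogSubjectLoggingEnabled $false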

Single Item Recovery

Single Item Recovery is a compliance feature that essentially allows you to recover individual emails without having to restore them from a full database backup. If a user deletes an email in Outlook, it goes to the “Deleted Items” folder. When the user deletes this email from the “Deleted Items” folder, the email will be placed into the “Dumpster” (soft delete). The following screenshots show how the “Dumpster” can be accessed:

Figure: Recover deleted items in Outlook (Microsoft) [5]

When clicking on the "Recover Deleted Items" trash symbol, the "Dumpster" is opened as shown in the following screenshot:

Figure: The "Recover Deleted Items" dialog in Outlook (Microsoft) [5]

From the "Dumpster", messages can either be recovered or purged completely (hard delete). Purged messages can of course still be recovered if a backup of the mailbox is available. When Single Item Recovery is enabled, emails remain recoverable by administrators even if the mailbox owner deletes the messages from the inbox, empties the "Deleted Items" folder, and then purges the content of the "Dumpster". Single Item Recovery is not enabled by default and must already have been enabled at the time the emails of interest were deleted. In order to recover a message, the following information is needed [6]:

  • The source mailbox that needs to be searched.
  • The target mailbox into which the emails will be recovered.
  • Search criteria such as sender, recipient or keywords in the message.

With the information above, an email can be found using the Exchange Management Shell (EMS), as shown in the following example.

Search-Mailbox "Alice" -SearchQuery "from:Bob" -TargetMailbox "Investigation Search Mailbox" -TargetFolder "Alice Recovery" -LogLevel Full
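Single Item Recovery itself is enabled per mailbox. A minimal sketch, assuming the mailbox "Alice" and a 30-day retention period for deleted items:

# Enable Single Item Recovery and keep deleted items for 30 days
Set-Mailbox -Identity "Alice" -SingleItemRecoveryEnabled $true -RetainDeletedItemsFor 30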

In-Place Hold

In-Place Hold can be used to preserve mailbox items. If this compliance feature is enabled, an email is kept even if it was purged by a user (deleted from the "Dumpster"). Also, if an item is modified, a copy of the original version is retained. In-Place Hold is usually activated during investigations in order to preserve the mailbox content of an individual; the individuals do not notice that they are "on hold". A query with parameters can be used to granularly define the scope of items to hold. By default, In-Place Hold is disabled, and if neither Single Item Recovery nor In-Place Hold is enabled, an email is permanently deleted once a user purges (deletes) it from the "Dumpster".
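An In-Place Hold is created with the New-MailboxSearch cmdlet. A sketch, assuming the mailbox alice@csnc.ch should be held for one year; a -SearchQuery parameter could narrow the hold to matching items only:

# Place alice@csnc.ch on hold; items are preserved for 365 days
New-MailboxSearch -Name "InvestigationHold-Alice" -SourceMailboxes "alice@csnc.ch" -InPlaceHoldEnabled $true -ItemHoldPeriod 365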

Mailbox Auditing

Mailboxes can contain sensitive information, including personally identifiable information (PII). Therefore, it is important to track who logged on to a mailbox and which actions were taken. It is especially important to track access to mailboxes by users other than the mailbox owner, the so-called delegates.

By default, mailbox auditing is disabled, and enabling it requires additional space in the corresponding mailbox. If enabled, one can specify which user actions (for example accessing, moving, or deleting a message) are logged per logon type (administrator, delegate user, or owner). Audit log entries also include further important information such as the client IP address, host name, and the processes or clients used to access the mailbox. If the auditing policy is configured to only include key records, such as sending or deleting items, there is no noticeable impact in terms of storage and performance.
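Auditing is configured per mailbox, and the resulting entries can be queried from the EMS. A sketch, assuming the mailbox "Alice" and an investigation of delegate access (the date range is an example):

# Audit delegate actions on the mailbox
Set-Mailbox -Identity "Alice" -AuditEnabled $true -AuditDelegate FolderBind,SendAs,SoftDelete,HardDelete
# Later, retrieve the recorded delegate activity
Search-MailboxAuditLog -Identity "Alice" -LogonTypes Delegate -StartDate "06/01/2016" -EndDate "06/30/2016" -ShowDetails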

Administrator Auditing

This compliance feature is used to log changes that an administrator makes to the Exchange server configuration. By default, administrator audit logging is enabled, and the entries are kept for 90 days. Changes to the administrator auditing configuration are always logged. The log entries are stored in a hidden, dedicated mailbox which cannot be opened in Outlook or OWA.
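The log is therefore queried through the EMS. A sketch that lists configuration changes made with a given cmdlet during an example period:

# Find configuration changes made with Set-Mailbox in June 2016
Search-AdminAuditLog -Cmdlets Set-Mailbox -StartDate "06/01/2016" -EndDate "06/30/2016"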

Others

Exchange email flow rules, also known as transport rules, can be used to look for specific conditions in messages that pass through an Exchange server. These rules are similar to the inbox rules many email clients offer. The main difference is that email flow rules act on messages while they are in transit, as opposed to after the message is delivered. Furthermore, email flow rules have a richer set of conditions, exceptions, and actions, which provides the flexibility to implement many types of messaging policies. [7]
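As an illustration only (the rule name, keyword, and rejection text are made up), a transport rule that rejects outbound messages containing a keyword could look like this:

# Reject external messages whose subject or body contains "confidential"
New-TransportRule -Name "BlockConfidentialOutbound" -SentToScope NotInOrganization -SubjectOrBodyContainsWords "confidential" -RejectMessageReasonText "Message blocked by company policy."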

Journaling allows recording a copy of all email communications and sending it to a dedicated mailbox on an Exchange server. Archiving, on the other hand, can be used to back up data, removing it from its native environment and storing a copy on another system. Finally, there is always the option of a full backup of an Exchange database, which creates and stores a complete copy of the database file as well as the transaction logs.
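A journal rule is created in the EMS as well. A sketch, assuming all messages of alice@csnc.ch should be copied to a hypothetical journaling mailbox journal@csnc.ch:

# Copy all messages sent or received by alice@csnc.ch to the journaling mailbox
New-JournalRule -Name "Journal-Alice" -Recipient "alice@csnc.ch" -JournalEmailAddress "journal@csnc.ch" -Scope Global -Enabled $true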

Summary

As we have seen, Microsoft Exchange provides various compliance features that help during forensic investigations involving email analysis. Having an understanding of which artifacts are available is key. The following table summarises the compliance features discussed in this post:

Compliance Feature       Enabled by Default   Notes
------------------       ------------------   -----
Message Tracking         Yes                  Logs kept for 30 days / max. 1000 MB
Single Item Recovery     No                   Keeps purged emails recoverable by administrators
In-Place Hold            No                   Preserves purged and modified items
Mailbox Auditing         No                   Logs owner, delegate, and administrator access
Administrator Auditing   Yes                  Entries kept for 90 days

Courses and Beer-Talk Reference

In order to share our experience in this field directly, we chose "Exchange Forensics" as the topic for our upcoming Beer-Talks. Don't hesitate to sign up if you are interested; for more information, click on the link next to the location you would like to attend.

If you would like to dive even deeper, we provide the Security Training: Forensic Investigations. It covers:

  • Introduction to forensic investigations
  • Chain of custody
  • Imaging
  • Basics of file systems
  • Traces in slack space
  • Traces in office documents
  • Analysis of Windows systems
  • Analysis of network dumps
  • Analysis of OS X systems
  • Analysis of mobile devices
  • Forensic readiness
  • Log analysis

If you are interested please visit our “Security Trainings” section to get more information: https://www.compass-security.com/services/security-trainings/kursinhalte-forensik-investigation/ or get in touch if you have questions.

Sources and References:

[0] E-mail Forensics in a Corporate Exchange Environment, Nuno Mota, http://www.msexchange.org/articles-tutorials/exchange-server-2013/compliance-policies-archiving/e-mail-forensics-corporate-exchange-environment-part1.html

[1] Email-Statistics-Report-2015-2019, The Radicati Group, Inc., http://www.radicati.com/wp/wp-content/uploads/2015/02/Email-Statistics-Report-2015-2019-Executive-Summary.pdf

[2] the-social-economy, McKinsey & Company, http://www.mckinsey.com/industries/high-tech/our-insights/the-social-economy

[3] Exchange 2016 Architecture, Microsoft, https://technet.microsoft.com/de-ch/library/jj150491(v=exchg.160).aspx

[4] Message Tracking, Microsoft, https://technet.microsoft.com/en-us/library/bb124375(v=exchg.160).aspx

[5] Recover deleted items in Outlook, Microsoft, https://support.office.com/en-us/article/Recover-deleted-items-in-Outlook-2010-cd9dfe12-8e8c-4a21-bbbf-4bd103a3f1fe

[6] Recover deleted messages in a user’s mailbox, Microsoft, https://technet.microsoft.com/en-us/library/ff660637(v=exchg.160).aspx

[7] Mail flow or transport rules, Microsoft, https://technet.microsoft.com/en-us/library/jj919238(v=exchg.150).aspx

APT Detection & Network Analysis

Until recently, the majority of organizations believed that they did not have to worry about targeted attacks because they considered themselves to be "flying under the radar". The common belief has been: "We are too small; only big organizations like financial service providers, the military industry, energy suppliers, and government institutions are affected".

However, this assumption has been proven wrong at least since the detection of Operation Shady RAT[0], DarkHotel[1], or the recent "RUAG Cyber Espionage Case"[2]. The analysis of the Command & Control (C&C) servers of Shady RAT revealed that a large-scale operation was run from 2006 to 2011, during which 71 organizations (private and public) were targeted and spied on. It is assumed that these so-called Advanced Persistent Threats (APTs) will only increase in the near future.

We at Compass Security are often asked to help find malicious actions or traffic inside corporate networks.

The infection is, in most cases, a mix of social engineering methods (for example spear phishing) and the exploitation of vulnerabilities; the details vary from case to case. Often we observe in proxy logs that employees were lured into visiting phishing sites designed to look exactly like the corporation's Outlook Web Access (OWA) or similar applications/services used by the targeted company.

Typically, this is not something you can prevent with technical measures alone – user awareness is key here. Nevertheless, we are often called in to investigate while there is still malware activity in the network. APT traffic detection can then be achieved by correlating DNS, mail, proxy, and firewall logs.

Network Analysis & APT Detection

To analyze a network, Compass analysts first have to know the network's topology to get an idea of how malware (or a human attacker) might communicate with external servers. Almost every attacker is going to exfiltrate data at some point in time; this is the nature of corporate/industrial espionage. Further, it is important to find out whether the attacker gained access to other clients or servers in the network.

For the analysis, log files are crucial. Many companies already collect logs on central servers [3], which speeds up the investigation process, since administrators don't have to collect the logs from many different sources (which sometimes takes weeks), and off-site logs are more difficult for attackers to clear.

To analyze logs and sometimes traffic dumps, we use different tools such as ELK (Elasticsearch, Logstash, Kibana), Splunk, and Moloch.

ELK offers many advantages when it comes to clustering and configuration, but it doesn't ship with many pre-configured log parser rules. If you are lucky, you can find some for your infrastructure on GrokBase[4]. Otherwise, there are plenty of tools that help you build them on your own, such as the Grok Debugger[5].

However, when an analysis has to be kick-started quickly and there is no time to configure large rulesets, Splunk comes with a wide range of pre-configured parsers.

After we have gathered all logs (and in some cases traffic dumps), we feed them into Splunk/ELK/Moloch for indexing.

In a first step, we try to clean the data set by removing noise. To achieve this, we identify known-good traffic patterns and exclude them. Of course, it is not always straightforward to distinguish between normal and suspicious traffic, as some malware uses, for example, Google Docs for exfiltration. It takes some time to understand what the usual traffic in a network looks like. To clean the data set even more, we then look for connections to known malware domains.

There are plenty of publicly available lists of known malware domains and IP addresses for this purpose.

If we are lucky, the attacker used infrastructure provided by known malware service providers (individuals and organizations sell services just for the purpose of hosting malware infrastructure). More sophisticated attackers, however, will most likely use their own infrastructure.

After cleaning the data sets, we look for anomalies in the logs (e.g. large numbers of requests, single requests, big DNS queries, etc.). Some malware is really noisy and, as a consequence, easy to find: some samples connect to their C&C servers at a high frequency. Other samples request commands from their C&C servers at regular time intervals (Friday 20:00, for example). Others connect just once. A simple frequency ranking, as sketched below, often surfaces the noisy candidates.
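As a toy illustration only (our actual analysis runs in Splunk/ELK, and the CSV file and column names here are made up), ranking proxy log entries by destination quickly highlights hosts that receive conspicuously many requests:

# Hypothetical proxy log export with columns Timestamp, SourceIp, DestinationHost
# Rank destinations by request count to surface beaconing candidates
Import-Csv .\proxy_log.csv |
    Group-Object DestinationHost |
    Sort-Object Count -Descending |
    Select-Object -First 20 Count, Name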

Sometimes we also detect anomalies in the network that are caused by employees, for example heavy usage of cloud services such as Google Drive or Dropbox. Often, these turn out to be so-called false positives.

To share our experiences and knowledge in this field, we created the Security Training: Network Analysis & APT.

This training will cover:

  • Configuration of evidence (What logs are needed?)
  • Static and Dynamic Log Analysis with Splunk
    • Splunk Basics and Advanced Usage
    • Detecting anomalies
    • Detecting malicious traffic
  • Attack & Detection Challenges

If you are interested please visit our “Security Trainings” section to get more information: https://www.compass-security.com/services/security-trainings/kursinhalte-network-anlysis-apt/ or get in touch if you have questions.

The next training takes place on 22 and 23 September 2016 in Jona; click here to register.

Sources & References:
[0] Operation Shady RAT, McAfee, http://www.mcafee.com/us/resources/white-papers/wp-operation-shady-rat.pdf
[1] The Darkhotel APT, Kaspersky Lab Research, https://securelist.com/blog/research/66779/the-darkhotel-apt/
[2] Technical Report about the Malware used in the Cyberespionage against RUAG, MELANI/GovCERT, https://www.melani.admin.ch/melani/en/home/dokumentation/reports/technical-reports/technical-report_apt_case_ruag.html
[3] “Challenges in Log-Management”, Compass Security Blog, http://blog.csnc.ch/2014/10/challenges-in-log-management/
[4] GrokBase, http://grokbase.com/
[5] Grok Debugger, https://grokdebug.herokuapp.com/
[x.0] APT Network Analysis with Splunk, Compass Security, Lukas Reschke, https://www.compass-security.com/fileadmin/Datein/Research/White_Papers/apt_network_analysis_w_splunk_whitepaper.pdf
[x.1] Whitepaper: Using Splunk To Detect DNS Tunneling, Steve Jaworski, https://www.sans.org/reading-room/whitepapers/malicious/splunk-detect-dns-tunneling-37022

How Do You Steal an SME's Secrets?

A background article on the SRF Einstein broadcast of Thursday, 3 September 2015, 21:00, on the topic "Cybercrime: how secure is Switzerland's know-how?" (trailer online).

In this article, we show you the approaches of attackers who try to gain unauthorized access to third-party systems, for example in the network of an SME. Schematically, these are the same approaches that were used in the attacks against SO Appenzeller documented by SRF Einstein. The article is meant not only to raise your awareness of the attack side, but also offers six simple defense tips.


Direct Attacks

Direct attacks target a company's IT infrastructure directly. Typically, an attacker looks for vulnerabilities in a perimeter system that is exposed to the Internet.

Figure: Direct attacks

  1. An attacker tries to gain unauthorized access to internal systems.
  2. The attacker, for example coming from the Internet, scans for open services that he might be able to exploit to get in.
  3. An insufficiently protected service allows the attacker to access internal systems.

Indirect Attacks

In contrast to direct attacks, indirect attacks do not directly exploit a vulnerability in a system exposed to the Internet. Instead, indirect attacks try to bypass a company's perimeter security.

Man-in-the-Middle / Phishing Attacks

Figure: Indirect attacks (man-in-the-middle / phishing)

  1. An attacker inserts himself into the communication path of two parties. This allows him to read sensitive information.
  2. The attacker uses the obtained information to access internal systems unnoticed.

Malware / Mobile Devices / WLAN

Figure: Indirect attacks (malware / mobile devices / WLAN)

  1. An attacker infects a device with malware.
  2. Through the malware, the attacker gains control over the infected device, which has access to other internal systems.
  3. In addition, an attacker can get into the internal network through other access points, for example via insecure wireless LAN access points.

Covert Channel
Figure: Indirect attacks (covert channel)

  1. An attacker prepares a medium such as a USB stick or CD-ROM with malware.
  2. The attacker gets his victim to use the medium.
  3. The malware executes automatically and silently connects back to the attacker. The attacker gains control over the infected device.

Six Defense Tips

  1. Regularly update the operating system, browser, and application software
  2. Protect yourself with a firewall and anti-virus software
  3. Use strong passwords and change them regularly
  4. Delete emails from unknown senders and be careful when opening attachments
  5. Be cautious when using unknown media such as USB sticks or CD-ROMs
  6. Create backups regularly

How Can Compass Security Support Your Company?

We would be happy to check whether your secrets are safe as well!

References

The following references provide tips and suggestions on frequently asked questions.

Network Traffic and APT Analysis

Compass Security is increasingly contacted by customers about suspected Advanced Persistent Threats (APT). The term "APT" covers complex, targeted, and extremely effective attacks against critical and sometimes even business-critical computer systems and the information stored on them.

Analyzing potentially infiltrated networks and systems is enormously laborious, however, because huge amounts of data sets and logs have to be evaluated. Compass has therefore repeatedly examined various aspects of APTs, forensics, and incident response. On the one hand, we conduct research in internal "Hack Labs" and "Research Weeks", where our specialists engage with the latest findings of the scene and push them further; on the other hand, Compass works on such topics together with the security departments of a number of universities.

We no longer want to withhold from the public one such duly recognized Matura thesis from last summer, so we are publishing its results in this post. The whitepaper focuses on the analysis of APTs using Splunk, a specialized software for analyzing large amounts of machine-generated log data. It also evaluates alternative analysis approaches and proposes a standard procedure for APT cases. The paper also describes Compass's usual approach to forensic analyses and, in typical Compass manner, provides the technical reader with many technical details, including some ideas on how the logging of certain services could be optimized.

Would you like to know more about this topic? Would you also like to learn what this means in practice? Then we can recommend our next hands-on seminar, "Network Analysis & Advanced Persistent Threats", on 25 and 26 August 2015 in Bern.

In the meantime, we wish you happy reading. Keep a cool head: compass_security_schweiz_whitepaper_apt_network_analysis_w_splunk_v1.1.pdf.

Compass Security Crew,
Cyrill Brunschwiler

Challenges in Log Management

Recently, the SANS Institute published its 9th log management survey (2014). The paper identifies strengths and weaknesses in log management systems and practices, and it provides advice on improving visibility across systems with proper log collection, normalization, and analysis. Log management is very important to Compass, as it heavily influences forensic investigations: evidently, accurate information needs to be available to track down incidents. This post provides a short summary of the paper and reflects Compass's research and experience in these fields.

TL;DR: On the positive side, most companies have some sort of log management; at least, most collect logs in some form, and many log to a central log server. In summary, log management is a well-established control within companies, but there are challenges (e.g. cloud services, differences in logging between vendors) that companies cannot solve on their own; they depend on vendors and hosting providers. Distinguishing between "good" and "bad" traffic is one of the biggest challenges.

The respondents of the survey rated the following activities as the biggest challenges in log management:

  • Distinguish between normal and suspicious traffic
  • Analysis of "big data" (large volumes and many types of logs and events)
  • Normalization and categorization of logs and security information
  • Correlation of logs from various sources
  • Cloud causes log management headaches
  • Vendors log similar events differently

The first point, distinguishing between normal and suspicious traffic, is clearly a problem, especially if the infrastructure includes different technologies and vendors and exceeds a small environment. To make things worse, malware also uses "good" channels to communicate with its C&C servers. Here, baselining your logs can help. You might also want to understand your applications in depth and derive meaning from user behavior; network analysis of the relevant segments helps you understand what the 'average' traffic looks like.

The challenges of cloud log management are rather new, but behind the scenes the same challenges exist. Look at cloud systems as if they were systems managed "by others" and not simply "by the cloud". Challenge yourself with the same questions you would ask of a hosted Unified Communications (UC) or storage provider: What is logged? Where is it logged? How long are the logs kept? Are the logs collected by the central log server? Will they be processed by the security information and event management (SIEM) system?

The respondents in the survey clearly stated that collecting logs from the cloud is still difficult. Around half of the respondents say that they feel no need to monitor apps in the cloud, and many say they rely on their cloud operator's ISMS and security services, management, and controls. Compass Security has some concerns with this view – one should log and monitor all the required information just as one would with in-house services. Graham Cluley said in a blog post: "Don't call it 'the cloud'. Call it 'someone else's computer'." Moreover, with the shift to the "cloud", forensic analysis is becoming a big challenge for companies. If a cloud provider is not willing or simply unable to provide logs, you might want to evaluate another one. Some cloud providers actually allow exporting logs; see Amazon and CloudStack.

The top three reasons to collect logs are:

  • detect and/or track suspicious behavior (e.g. unauthorized access, insider abuse)
  • support IT/Network routine maintenance and operations
  • support forensic analysis

Unfortunately, the respondents have trouble making meaningful use of the logs for:

  • detection/tracking of suspicious behavior
  • detection of APT-style malware
  • prevention of incidents

In a recent presentation in Jona, Compass Security highlighted the difficulty of detecting suspicious behavior and thus APTs. We presented how monitoring and APT traffic detection can be achieved by correlating DNS, mail, proxy, and firewall logs. For this purpose, the logs were enriched with external data such as IP reputation lists, the ZeuS Tracker, DNS blacklists, and mail black- and greylists to identify potentially malicious traffic. There are lots of other tricks that help to identify malicious traffic.

Besides the challenges and difficulties, the survey pointed out that SIEM infrastructures have become widely used to gain some form of automated processing and/or alerting on suspicious events. Automation is the key to managing and analyzing large amounts of data. In recent years, normalization has improved, but fully "normalized" log information is still not available. Log engines help to normalize and categorize events and log information from many different formats.

It is also interesting to see how much time the respondents spend on analyzing their logs. Most of them spend around 4-8 hours a week on log analysis (this depends on the company size, of course). Not surprising was the fact that regulatory compliance has been one of the main drivers for determining log data retention policies.

Regarding the current SSL (padding vulnerability) discussions, here are two examples of SSLv3 logging that can be used to identify downgrade attacks or simply to see which clients still use SSLv3. For Apache, the following could be used:

CustomLog logs/ssl_request_log "%t %h \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x"

For nginx, the following line could be used within your nginx configuration:

log_format ssl '$remote_addr "$http_user_agent" $ssl_cipher $ssl_protocol';

Off-topic: there is a good overview of different products and how to disable SSLv3.

How Compass can help you

If you would like some hands-on practice and deeper insight into how APTs work and how to detect them, Compass has the following upcoming courses on this hot topic:

These trainings use our Hacking-Lab to practice with log engines and analyze real-world examples. Furthermore, our classic "Beer-Talk" series in September was about APTs.

Compass Security can help you by testing your log environment, by simulating directed attacks or APT-style malware, or by analyzing your log management concept.

Conclusion

While companies have implemented log management with some basic log search functionality, detecting malware in real environments or collecting logs from the cloud is still difficult. Environments grow over time, and understanding the traffic within the infrastructure is key, but a somewhat tedious and time-consuming task. Log engines (e.g. the IDH Framework, Splunk, the Log Correlation Engine (LCE) from Tenable, ELK) help to collect and analyze log information. SIEM systems help to match and correlate different events. Script languages are needed to normalize data where the log engines reach their limits. Cloud providers must support companies in logging the relevant information or provide connectors for log engines. Furthermore, there is also a "Splunk in the cloud" solution.

Please comment if I missed any challenges or difficulties. I would also be interested in your experiences with log management.

Keywords: SIEM, log management, logging, normalization, APT, cloud, SANS


Forensic Investigation Course in Bern

Participants learn the basics of forensic investigations based on a fictitious hacker attack. The seminar starts with a scenario that is then solved step by step, with various exercises involving different technologies and systems. This November, Compass Security is running the Forensic Investigation course in Bern for the first time.

Are you interested in computer forensics? Then this course is just right for you!

The Compass courses combine theory with many practical case studies that you can practice in the protected lab environment (Hacking-Lab). Registration is possible until the beginning of November 2014.

Further Security Trainings at Compass

APT Detection Engine based on Splunk

Compass Security is working on an APT Detection Engine based on Splunk within the Hacking-Lab environment. Hacking-Lab is a remote training lab for cyber specialists, used by more than 22'000 users worldwide and run by Security Competence GmbH.

An advanced persistent threat (APT) is a network attack in which an unauthorized person gains access to a network and stays there undetected for a long period of time. The intention of an APT attack is to steal data. APT attacks target high-profile individuals and organizations in sectors with highly valuable information assets, such as manufacturing, the financial industry, national defense, and critical infrastructure.

Although APT attacks are difficult to identify, the theft of data can never be completely invisible. Our prototype of an APT Detection Engine detects anomalies in outbound data, helping your company discover that your network has been the target of an APT attack.

We will present our efforts and findings at the upcoming Beer-Talk (September 25, 2014) in Rapperswil-Jona. If you are near Switzerland, drop in for a chat on APT and to enjoy some beer and steaks.

  • Where? Rapperswil-Jona Switzerland
  • When? September 25, 2014
  • Time? 18:00 (6pm)
  • Costs? Free (including beer & steak)

Get a glimpse of our Beer-Talk flyer and spread the word. The Compass crew is looking forward to meeting you.

Embedded devices and cell phone flash memory acquisition using JTAG

Back in Black (back from Black Hat with a bag full of schwag and branded black shirts). 

Black Hat and DEF CON again allowed insights into the latest research and concerns. Where some topics lose grip (vulnerability scanning, IPv4, DNS, general web issues), others gain momentum (DDoS, mobile computing, smart energy, industrial control, and embedded systems). I myself spoke on the advanced metering infrastructure and specifically on the security of the wireless M-Bus protocol. The slide deck and whitepaper are available for download from the Compass Security news page[1].

With that, I would like to let you know about a little invention that makes reversing of embedded systems and industrial control devices partially easier: the JTAGulator [2]. It is a device designed by Joe Grand, aka Kingpin, the former DEF CON badge designer, with the sole purpose of identifying JTAG pins and UART serial lines on printed circuit boards (PCBs). There is no need to unmount or desolder components. The JTAGulator can be configured to run on a range of voltages (1.2-3.3V) and features 24 I/Os that are arbitrarily connected to the board in order to identify the relevant pins. Note that testing for the valid pinout might cause your little device to behave strangely while the JTAGulator pulls lines up and down. Thus, make sure you stay at a safe distance 🙂

Now you wonder: !!!@#$ JTAG!!! …understandably. Joint Test Action Group[3] is the name of a standardized hardware interface (IEEE 1149.1) that allows testing and debugging of integrated circuits. Most embedded devices (cell phones, wireless routers, …) nowadays implement this interface. With enough information about the target device, the chip and its peripherals can be initialized and accessed using the JTAG interface. Specifically, the interface can allow access to flash memory contents. Thus, the technology comes in handy for acquiring cell phone data on a low level or for extracting the firmware of embedded devices.

JTAG adapters are small boxes that sit between the embedded hardware and a common computer. For example, the Swiss-based company Amontec[4] provides a high-speed general-purpose adapter at low cost (120 Euros). The box and its drivers are compatible with the OpenOCD software[5], an on-chip debugger that allows programming and debugging of embedded devices using a specific command set and the GNU debugger[6]. The Android community[7] has adopted this approach for debugging the Android kernel [8].

With that, I leave you for the moment and I promise we get back to you soon with more summaries on topics of interest.

References
[1] Slides and Whitepaper wireless M-Bus Security, http://www.csnc.ch/en/modules/news/news_0088.html
[2] JTAGulator, http://www.grandideastudio.com/portfolio/jtagulator/
[3] JTAG, http://standards.ieee.org/findstds/standard/1149.1-1990.html
[4] Amontec, http://www.amontec.com/
[5] OpenOCD, http://openocd.sourceforge.net/
[6] GNU Debugger, http://www.gnu.org/software/gdb/
[7] Android Kernel, http://source.android.com/source/building-kernels.html
[8] Video Android Kernel Debugging, http://www.youtube.com/watch?feature=player_embedded&v=JzMj_iU4vx

Blogilo Forensics

The analysis of social media apps gains more and more weight as these applications gain momentum with end users. Thus, forensic analysts must understand how to grab files and content not only from a suspect's computer but also from the suspect's online services (not to use the damn Cloud word). It is therefore crucial to understand the full functionality of online social media applications, since not only publicly published content but also hidden and drafted files may be of interest to investigatory entities.

In the end, investigators also need to understand how to recover passwords from supporting desktop software such as blog client programs. This article points out how to recover user accounts and passwords from the widely used Blogilo KDE (Linux) blog client.

All KDE applications store their configuration files within the user's home directory under ~/.kde/share/apps. Blogilo stores its configuration within that path as well.

cbrunsch@tubarao:~$ ls -laR .kde/share/apps/blogilo/
.kde/share/apps/blogilo/:
total 92
drwx------  4 cbrunsch cbrunsch  4096 2012-01-06 08:21 .
drwx------ 11 cbrunsch cbrunsch  4096 2011-12-29 16:10 ..
drwx------  2 cbrunsch cbrunsch  4096 2012-01-02 23:03 1
drwx------  2 cbrunsch cbrunsch  4096 2011-12-28 17:10 -1
-rw-r--r--  1 cbrunsch cbrunsch 62464 2012-01-06 08:21 blogilo.db

.kde/share/apps/blogilo/1:
total 48
drwx------ 2 cbrunsch cbrunsch  4096 2012-01-02 23:03 .
drwx------ 4 cbrunsch cbrunsch  4096 2012-01-06 08:21 ..
-rw-rw-r-- 1 cbrunsch cbrunsch 29586 2012-01-02 23:03 style.html

.kde/share/apps/blogilo/-1:
total 8
drwx------ 2 cbrunsch cbrunsch 4096 2011-12-28 17:10 .
drwx------ 4 cbrunsch cbrunsch 4096 2012-01-06 08:21 ..

Actually, the file of interest is the blogilo.db file. Let's see whether we can read the accounts directly from that file, for example by dumping its printable characters with the strings command.

From that output, we could try to guess what the username and password might be. However, there is also some binary content. Thus, let's have a closer look.

cbrunsch@tubarao:~/.kde/share/apps/blogilo$ file blogilo.db
blogilo.db: SQLite 3.x database

The file command reports an SQLite database. Storing application configuration in the file-based SQLite format is becoming very popular; Firefox, too, stores passwords and history information in SQLite databases. Luckily, these files can be queried very conveniently using an SQLite client. The schema information of this specific Blogilo database can be queried from the sqlite_master table contained within the same file; the schema also contains information on the existing tables.

cbrunsch@tubarao:~/.kde/share/apps/blogilo$ sqlite3 blogilo.db
SQLite version 3.7.9 2011-11-01 00:52:41
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> select name from sqlite_master where type="table";
blog
post
comment
category
file
post_cat
post_file
local_post
local_post_cat
temp_post
temp_post_cat
sqlite> select * from blog;
1|30925834|https://cybrs.wordpress.com/xmlrpc.php|cybrs123|Ult1mate.PW!|http://cybrs.wordpress.com/|3|CYBR's Blog|0||
sqlite>

Here we go. For each configured blog, there is an entry in the blog table. Each record contains the XML-RPC interface URL as well as the username and password of the blog account. This logon information also grants access to the online service and would allow investigators to seize hidden and drafted evidence.

NOTE: You must install an SQLite 3.x client; otherwise you won't be able to query the file.