Wrap-up: Hack-Lab 2017#1

What is a Hack-Lab?

Compass Security provides a monthly, playful occasion for its security analysts to get together, hack new devices, dive into current technologies and share their skills with their fellows.

This also includes improving internal tools, researching newly published attacks, and analyzing the security of hardware and software we consider useful for future engagements.


Topics

The following topics, tools and technologies were discussed during this Hack-Lab:

  1. SharePoint Security
  2. Bypassing Android 7.0 HTTPS Apps Certificates Restriction
  3. JWT4B
  4. CodeInspect
  5. Smart Meter
  6. DNS Tunnel Debugging

Wrap-Up

Topic #1 – SharePoint Security Lab and Knowledge Sharing

SharePoint is a very popular browser-based collaboration and content management platform. Due to its high complexity, proprietary technology and confusing terminology, it is often perceived as a black box that IT and security professionals do not feel very comfortable with.

In a combination of talks and hands-on workshop sessions, Thomas Röthlisberger shared his research work with colleagues. They challenged his findings and shared their thoughts on the pros and cons of security-relevant settings. The outcome of this Hack-Lab session will be shared in a series of blog posts within the next couple of weeks.

The research in our very own hands-on SharePoint lab allows us to gain an in-depth understanding of any type of SharePoint environment, be it a purely internal collaboration web application, a platform to share information with external partners, or a publishing site hosting the company website. To build or assess a secure SharePoint environment, one needs to understand the importance of governance, logical and physical architecture, network topology, Active Directory considerations, authentication and authorization, segregation of classified data, hardening and, most importantly, the web security relevant settings that make sure the built-in protection measures are effective. Like other modern Microsoft products, an out-of-the-box SharePoint installation can be considered secure. However, there are many oddly named settings which heavily depend on each other, so misconfiguration is likely to happen, leaving the door wide open for unauthorized access by adversaries with SharePoint skills.
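As a small taste of how much depends on a single setting, the following SharePoint Management Shell sketch inspects and hardens the Browser File Handling mode of a web application (the URL is a placeholder, and this is only one of the many settings mentioned above):

# Inspect the Browser File Handling mode of a web application.
# "Permissive" lets uploaded HTML render in the site's origin (a stored
# XSS risk); "Strict" forces such files to be downloaded instead.
$webApp = Get-SPWebApplication "http://sharepoint.example.local"
$webApp.BrowserFileHandling

# Harden the setting
$webApp.BrowserFileHandling = "Strict"
$webApp.Update()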

TECHNOLOGY:

  • SharePoint Server 2010 & 2013
  • Web Applications, Site Collections, (Sub-)Sites, (Custom) Lists, Document Libraries, Web Part Pages, Web Parts, Apps
  • Web Security, Cross-site Scripting (XSS), Cross-site Request Forgery (CSRF)
  • Navigation Links
  • Web Sensitive Files, permission to Add & Customize Pages and Scriptable Web Parts, e.g. Content Editor and Script Editor (“SafeAgainstScript=False”)
  • Browser File Handling
  • Web Page Security Validation (aka Anti-CSRF token)
  • Lockdown Mode Feature
  • Remote Interfaces SOAP, CSOM, WCF Service, REST Interface
  • Server-Side Controls
  • .NET Sandboxing, Sandboxed Solutions and Apps
  • Self-Service Site Creation
  • Developer Dashboard
  • Audit Logs
  • People Picker

Topic #2 – Bypassing Android 7.0 HTTPS Apps Certificates Restriction

With Android 7.0, apps no longer trust user-imported CA certificates by default. Intercepting app network traffic with a proxy has therefore become more complicated.

The goal is to find or create a custom application explicitly developed for Android 7.0, and then to configure it with a network_security_config.xml file that re-enables user-defined certificates, thereby bypassing this restriction.
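A minimal sketch of such a configuration, following the documented network security config format (the file is placed under res/xml/ and referenced via android:networkSecurityConfig in the app's manifest):

<?xml version="1.0" encoding="utf-8"?>
<network-security-config>
    <base-config>
        <trust-anchors>
            <!-- keep trusting the pre-installed system CAs -->
            <certificates src="system" />
            <!-- additionally trust user-added CAs, e.g. an intercepting proxy's certificate -->
            <certificates src="user" />
        </trust-anchors>
    </base-config>
</network-security-config>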

Technology:

  • Android Studio
  • Android 7.0
  • Apktool

Topic #3 – JWT4B

Create a Burp plugin which supports the analyst when testing applications that use JSON Web Tokens (JWT, see JWT.IO).

The first step is to create a prototype which enables Burp to visualize the tokens. In future Hack-Labs it should become possible to automatically perform JWT attacks.

Technology:

  • Java
  • JJWT (library)
  • JWT

Topic #4 – CodeInspect

Evaluation of CodeInspect’s features.

Determine whether CodeInspect could be used to make future Android app analysis assessments more efficient.

Technology:

  • Java
  • Android

Topic #5 – Smart Meter

Description:

An Energy Monitoring System was provided for testing. It is used to measure power consumption and provides various interfaces, the main ones being a web interface (TCP/IP) and Modbus.

Assess the security of these interfaces: what can an attacker exploit when given network access to the device?

Technology:

  • TCP/IP
  • Modbus
  • HTTP Web Application

Topic #6 – DNS Tunnel Debugging

Compass Security maintains its own trojan toolkit, which we use for authorized phishing attacks on behalf of our customers, as well as for demos and proofs of concept. The trojan also implements DNS tunneling.

Analyze the source code and debug it to identify and fix reliability issues that occur when multiple clients perform DNS tunneling simultaneously.

Technology:

  • C++

Exchange Forensics

Introduction

The number one form of communication in corporate environments is email. In 2015 alone, the number of business emails sent and received per day was estimated at over 112 billion [1], and employees spend on average 13 hours per week in their email inbox [2]. Unfortunately, emails are at times also misused for illegitimate communication. Back when the concept of email was designed, security was not the main focus of its inventors, and some of those design shortcomings are still problematic today. Senders rarely use encryption, and receivers cannot check the integrity of unprotected emails. Not even the metadata in the header of an email can be trusted, as an attacker can easily forge this information. Even though many attempts have been made at securing email communication, a lot of unsecured emails are still sent every day. This is one of the reasons why attackers continue to exploit weaknesses in email communication. In our experience, a lot of forensic investigations involve an attacker either stealing or leaking information via email, or an employee unintentionally opening malware received via email. Once this has happened, there is no way around a forensic investigation in order to answer key questions such as: who did what, when, and how? Because many corporate environments use Microsoft Exchange as their mailing system, we cover some basics of the forensic artifacts a Microsoft Exchange environment provides.

Microsoft Exchange Architecture

In order to understand the different artifacts, we first take a look at the basic Microsoft Exchange architecture and the components involved. The diagram below shows the architectural concepts of the on-premises version of Exchange 2016. Edge Transport servers build the perimeter of the email infrastructure; they handle external email flow and apply antispam and email flow rules. Database availability groups (DAGs) form the heart of the Exchange environment: they contain a group of Mailbox servers and host a set of databases. The Mailbox servers contain the transport services used to route emails, as well as the client access services, which are responsible for routing or proxying connections to the corresponding backend services on a Mailbox server. Clients never connect directly to the backend services; when a client sends an email through the Microsoft Exchange infrastructure, it always traverses at least one Mailbox server.

[Figure: Exchange 2016 Architecture, Microsoft [3]]

Compliance Features

Microsoft Exchange provides multiple compliance features. Each of them provides a different set of information to an investigator, and it is important to have a basic understanding of their behavior in order to know which feature can answer which question. The most important compliance features are covered in the following paragraphs.

Message Tracking

The message tracking compliance feature writes a record of all activity into a log file as emails flow through Mailbox servers and Edge Transport servers. These logs contain details about the sender, recipients, message subject, and date and time. By default, message tracking logs are kept for a maximum of 30 days, and only as long as the log directory stays below 1000 MB.

The following example shows the message tracking log entries created when the user “alice@csnc.ch” sends a message with the MessageSubject “Meeting” to the user “bob@csnc.ch“. Note that in this example both users have their mailboxes on the same server.

EventId    Source      Sender        Recipients    MessageSubject
-------    ------      ------        ----------    --------------
NOTIFYMAPI STOREDRIVER               {}
RECEIVE    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
SUBMIT     STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
HAREDIRECT SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
RECEIVE    SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
AGENTINFO  AGENT       alice@csnc.ch {bob@csnc.ch} Meeting
SEND       SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
DELIVER    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting

The message content is not stored as part of the message tracking logs. By default, the subject line of an email message is stored in the tracking logs; however, this can be disabled in the configuration settings. [4]
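For searching these logs, the Exchange Management Shell provides the Get-MessageTrackingLog cmdlet. A minimal sketch of how the example above could be retrieved (server name and time window are placeholders):

# Find Alice's "Meeting" email in the tracking logs of server EX01
Get-MessageTrackingLog -Server "EX01" -Sender "alice@csnc.ch" -MessageSubject "Meeting" -Start "01/20/2017 08:00:00" -End "01/20/2017 18:00:00" | Format-Table EventId,Source,Sender,Recipients,MessageSubject

# Retention and subject logging are configured per transport service
Set-TransportService -Identity "EX01" -MessageTrackingLogMaxAge 60.00:00:00 -MessageTrackingLogSubjectLoggingEnabled $true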

Single Item Recovery

Single Item Recovery is a compliance feature that essentially allows you to recover individual emails without having to restore them from a full database backup. If a user deletes an email in Outlook, it goes to the “Deleted Items” folder. When the user deletes this email from the “Deleted Items” folder, the email will be placed into the “Dumpster” (soft delete). The following screenshots show how the “Dumpster” can be accessed:

[Screenshot: Recover deleted items in Outlook, Microsoft [5]]

When clicking on the “Recover Deleted Items” trash symbol, the “Dumpster” gets opened as shown on the following screenshot:

[Screenshot: the “Dumpster” opened via “Recover Deleted Items”, Microsoft [5]]

From the “Dumpster”, messages can either be recovered or purged completely (hard delete). Of course, they can still be recovered if a backup of the mailbox is available. When Single Item Recovery is enabled, emails remain recoverable for administrators even if the mailbox owner deletes the messages from the inbox, empties the “Deleted Items” folder and then purges the content of the “Dumpster”. Single Item Recovery is not enabled by default and has to be enabled prior to the date of an investigation. In order to recover a message, the following information is needed [6]:

  • The source mailbox that needs to be searched.
  • The target mailbox into which the emails will be recovered.
  • Search criteria such as sender, recipient or keywords in the message.

With the information above, an email can be found using the Exchange Management Shell (EMS) as shown in the following example.

Search-Mailbox "Alice" -SearchQuery "from:Bob" -TargetMailbox "Investigation Search Mailbox" -TargetFolder "Alice Recovery" -LogLevel Full
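Note that Single Item Recovery itself is switched on per mailbox, also via the EMS. A minimal sketch (mailbox name and retention period are examples):

# Enable Single Item Recovery for Alice and keep deleted items for 30 days
Set-Mailbox -Identity "Alice" -SingleItemRecoveryEnabled $true -RetainDeletedItemsFor 30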

In-Place Hold

In-Place Hold can be used to preserve mailbox items. If this compliance feature is enabled, an email is kept even if it was purged by a user (deleted from the “Dumpster”). Likewise, if an item is modified, a copy of the original version is retained. In-Place Hold is usually activated during investigations in order to preserve the mailbox content of an individual; the affected individuals do not notice that they are “on hold”. A query with parameters can be used to granularly define the scope of items to hold. By default, In-Place Hold is disabled, and if neither Single Item Recovery nor In-Place Hold is enabled, an email is permanently deleted once a user purges it from the “Dumpster”.
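In the EMS, an In-Place Hold is created as a mailbox search with the hold flag set. A minimal sketch (names and query are examples):

# Place Alice's mailbox on hold, limited to messages from Bob
New-MailboxSearch -Name "Investigation-Alice" -SourceMailboxes "alice@csnc.ch" -SearchQuery 'from:"bob@csnc.ch"' -InPlaceHoldEnabled $true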

Mailbox Auditing

Mailboxes can contain sensitive information, including personally identifiable information (PII). Therefore it is important to track who logged on to a mailbox and which actions were taken. It is especially important to track access to mailboxes by users other than the mailbox owner, the so-called delegates.

By default, mailbox auditing is disabled, and when enabled it requires additional space in the corresponding mailbox. If enabled, one can specify which user actions (for example accessing, moving, or deleting a message) are logged per logon type (administrator, delegate, or owner). Audit log entries also include further important information such as the client IP address, host name, and the process or client used to access the mailbox. If the auditing policy is configured to only include key records, such as sending or deleting items, there is no noticeable impact in terms of storage and performance.
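Enabling and querying mailbox audit logs is again done via the EMS. A minimal sketch (mailbox name, audited actions and dates are examples):

# Audit hard deletes by the owner, and SendAs/folder access by delegates
Set-Mailbox -Identity "Alice" -AuditEnabled $true -AuditOwner HardDelete -AuditDelegate SendAs,FolderBind

# Review delegate access to the mailbox
Search-MailboxAuditLog -Identity "Alice" -LogonTypes Delegate -StartDate "01/01/2017" -EndDate "01/31/2017" -ShowDetails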

Administrator Auditing

This compliance feature is used to log changes that an administrator makes to the Exchange server configuration. By default, this logging is enabled and the entries are kept for 90 days. Changes to the administrator auditing configuration itself are always logged. The log entries are stored in a hidden, dedicated mailbox which cannot be opened in Outlook or OWA.
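These entries are therefore queried via the EMS rather than a mail client. A minimal sketch (the cmdlet filter and dates are examples):

# Who changed mailbox settings during the period in question?
Search-AdminAuditLog -Cmdlets Set-Mailbox -StartDate "01/01/2017" -EndDate "01/31/2017"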

Others

Exchange email flow rules, also known as transport rules, can be used to look for specific conditions in messages that pass through an Exchange server. These rules are similar to the inbox rules many email clients offer. The main difference between an email flow rule and a rule one would set up in an email client is that email flow rules take action on messages while they are in transit, as opposed to after the message is delivered. Furthermore, email flow rules have a richer set of conditions, exceptions and actions, which provides the flexibility to implement many types of messaging policies. [7]
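For illustration, a flow rule that BCCs outbound messages containing a keyword to a review mailbox could look like this (rule name, keyword and address are examples):

# BCC outbound mails mentioning "confidential" to a review mailbox
New-TransportRule -Name "Review confidential outbound mail" -SentToScope NotInOrganization -SubjectOrBodyContainsWords "confidential" -BlindCopyTo "review@csnc.ch"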

Journaling allows recording a copy of all email communications and sending it to a dedicated mailbox on an Exchange server. Archiving, on the other hand, can be used to back up data, removing it from its native environment and storing a copy on another system. Finally, there is always the option of a full backup of an Exchange database, which creates and stores a complete copy of the database file as well as the transaction logs.
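A journal rule is again a one-liner in the EMS. A minimal sketch (rule name and journaling mailbox are examples):

# Record a copy of every message, organization-wide, to a dedicated mailbox
New-JournalRule -Name "Journal all mail" -JournalEmailAddress "journal@csnc.ch" -Scope Global -Enabled $true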

Summary

As we have seen, Microsoft Exchange provides various compliance features that help during forensic investigations involving email analysis. Having an understanding of which artifacts are available is key. The following table summarizes the compliance features discussed in this post:

[Table: summary of the discussed compliance features]

Courses and Beer-Talk Reference

In order to directly share our experience in this field, we chose “Exchange Forensics” as the topic for our upcoming Beer-Talks. Don’t hesitate to sign up if you are interested. For more information, click on the link next to the location you would like to attend:

If you would like to dive even deeper, we provide the Security Training: Forensic Investigations. It covers:

  • Introduction to forensic investigations
  • Chain of custody
  • Imaging
  • Basics of file systems
  • Traces in slack space
  • Traces in office documents
  • Analysis of Windows systems
  • Analysis of network dumps
  • Analysis of OS X systems
  • Analysis of mobile devices
  • Forensic readiness
  • Log analysis

If you are interested, please visit our “Security Trainings” section for more information: https://www.compass-security.com/services/security-trainings/kursinhalte-forensik-investigation/ or get in touch if you have questions.

Sources and References:

[0] E-mail Forensics in a Corporate Exchange Environment, Nuno Mota, http://www.msexchange.org/articles-tutorials/exchange-server-2013/compliance-policies-archiving/e-mail-forensics-corporate-exchange-environment-part1.html

[1] Email-Statistics-Report-2015-2019, The Radicati Group, Inc., http://www.radicati.com/wp/wp-content/uploads/2015/02/Email-Statistics-Report-2015-2019-Executive-Summary.pdf

[2] the-social-economy, McKinsey & Company, http://www.mckinsey.com/industries/high-tech/our-insights/the-social-economy

[3] Exchange 2016 Architecture, Microsoft, https://technet.microsoft.com/de-ch/library/jj150491(v=exchg.160).aspx

[4] Message Tracking, Microsoft, https://technet.microsoft.com/en-us/library/bb124375(v=exchg.160).aspx

[5] Recover deleted items in Outlook, Microsoft, https://support.office.com/en-us/article/Recover-deleted-items-in-Outlook-2010-cd9dfe12-8e8c-4a21-bbbf-4bd103a3f1fe

[6] Recover deleted messages in a user’s mailbox, Microsoft, https://technet.microsoft.com/en-us/library/ff660637(v=exchg.160).aspx

[7] Mail flow or transport rules, Microsoft, https://technet.microsoft.com/en-us/library/jj919238(v=exchg.150).aspx

Network Traffic and APT Analysis

Compass Security is increasingly being contacted by customers about suspected Advanced Persistent Threats (APT). The term “APT” covers complex, targeted and highly effective attacks against critical, and at times even business-critical, computer systems and the information stored on them.
Analyzing potentially infiltrated networks and systems, however, is enormously time-consuming, as vast amounts of data sets and logs have to be evaluated. Compass has therefore repeatedly examined various aspects of APT, forensics and incident response. On the one hand, we conduct research in internal “Hack-Labs” and “Research Weeks”, where our specialists study and advance the latest findings of the scene; on the other hand, Compass works on such topics in cooperation with the security departments of a number of universities.

We no longer want to withhold from the public one such acclaimed Matura thesis from last summer, and are therefore publishing its results as part of this post. The whitepaper focuses on the analysis of APTs using Splunk, a specialized software product for analyzing large amounts of machine-generated log data. It also evaluates alternative analysis approaches and proposes a standard procedure for APT cases. The paper further describes Compass’s usual approach to forensic analysis and, in typical Compass manner, provides the technical reader with plenty of technical details, including some ideas on how the logging of certain services could be optimized.

Would you like to know more about this topic, and what it means in practice? Then we can recommend our next hands-on seminar, “Network Analysis & Advanced Persistent Threats”, taking place on August 25 and 26, 2015 in Bern.

In the meantime, we wish you happy reading. Keep a cool head: compass_security_schweiz_whitepaper_apt_network_analysis_w_splunk_v1.1.pdf.

Compass Security Crew,
Cyrill Brunschwiler

Challenges in Log Management

Recently, the SANS Institute published its 9th log management survey (2014). The paper identifies strengths and weaknesses in log management systems and practices, and provides advice on improving visibility across systems with proper log collection, normalization and analysis. Log management is very important to Compass, as it heavily influences forensic investigations: evidently, accurate information needs to be available to track down incidents. This post provides a short summary of the paper and reflects Compass research and experience in these fields.

TL;DR: On the positive side, most companies have some sort of log management; at least most collect logs in some form, and many log to a central log server. In summary, log management is a well-established control within companies, but there are challenges (e.g. cloud services, differences in logging between vendors) which companies cannot solve on their own and for which they depend on vendors and hosting providers. Distinguishing between “good” and “bad” traffic is one of the biggest challenges.

The respondents of the survey rated the following activities as the biggest challenges in log management:

  • Distinguish between normal and suspicious traffic
  • Analysis of “big data” (large amount of volumes and types of log and events)
  • Normalization and categorization of logs and security information
  • Correlation of logs from various sources
  • Cloud causes log management headaches
  • Vendors log similar events differently

The first point, distinguishing between normal and suspicious traffic, is clearly a problem, especially if the infrastructure includes different technologies and vendors and exceeds a small environment. To make matters worse, malware also hides its malicious communication with C&C servers inside seemingly “good” traffic. Baselining your logs can help here. You might also want to understand applications in depth and derive meaning from user behavior; network analysis of the relevant segments can help you understand the “average” traffic.

The challenges with cloud log management are rather new, but behind the scenes the same challenges exist. Look at cloud systems as systems managed “by others” rather than simply “by the cloud”. Challenge yourself with the same questions you would ask a hosted Unified Communications (UC) or storage provider: What is logged? Where is it logged? How long are the logs kept? Are the logs collected by the central log server? Will they be processed by the security information and event management (SIEM) system?

The respondents in the survey clearly stated that collecting logs from the cloud is still difficult. Around half of the respondents say they feel no need to monitor apps in the cloud; many say they rely on their cloud operator’s ISMS, security services, management and controls. Compass Security has some concerns with this view: one should log and monitor all the required information just as one would with in-house services. As Graham Cluley put it in a blog post: “Don’t call it ‘the cloud’. Call it ‘someone else’s computer’.” Moreover, with the shift to the cloud, forensic analysis is becoming a big challenge for companies. If a cloud provider is not willing or simply unable to provide logs, you might want to evaluate another one. Some cloud providers actually allow exporting logs; see Amazon and CloudStack.

The top three reasons to collect logs are:

  • detect and/or track suspicious behavior (e.g. unauthorized access, insider abuse)
  • support IT/Network routine maintenance and operations
  • support forensic analysis

Unfortunately, the respondents struggle to make meaningful use of the logs for:

  • detection/tracking of suspicious behavior
  • detection of APT-style malware
  • prevention of incidents

In a recent presentation in Jona, Compass Security highlighted the difficulty of detecting suspicious behavior and thus of detecting APTs. We presented how monitoring and APT traffic detection can be achieved by correlating the logs of DNS, mail, proxy and firewall systems. For this purpose, the logs were enriched with external data such as IP reputation lists, the ZeuS Tracker, DNS blacklists, and mail black- and greylists to identify potentially malicious traffic. There are many other tricks which help to identify malicious traffic.

Besides the challenges and difficulties, the survey pointed out that SIEM infrastructures have become widely used to gain some form of automated processing and/or alerting of suspicious events. Automation is key to managing and analyzing the large amounts of data. In recent years normalization has improved, but fully “normalized” log information is still not available. Log engines help to normalize and categorize events and log information from many different formats.

It is also interesting to see how much time the different respondents spend analyzing their logs. Most of them spend around 4-8 hours a week on log analysis (this of course depends on the company size). Not surprising was the fact that regulatory compliance has been one of the main drivers for determining log data retention policies.

Regarding the current SSL discussions (the POODLE padding vulnerability), here are two examples of SSLv3 logging that help to identify downgrade attacks, or simply to see which clients still use SSLv3. For Apache, the following directive could be used:

CustomLog logs/ssl_request_log "%t %h \"%{User-agent}i\" %{SSL_PROTOCOL}x %{SSL_CIPHER}x"

For nginx, the following lines could be used within your configuration:

log_format ssl '$remote_addr "$http_user_agent" $ssl_cipher $ssl_protocol';
access_log /var/log/nginx/ssl_request.log ssl;

Off-topic: there is a good overview of different products and how to disable SSLv3.

How Compass can help you

If you would like to get some hands-on practice and a deeper insight into how APTs work and how to detect them, Compass has the following upcoming courses on this hot topic:

These trainings use our Hacking-Lab in order to practice with log engines and analyze real-world examples. Furthermore, our classic “Beer Talk” series in September was about APT.

Compass Security can help you test your log environment by simulating directed attacks or APT-style malware, or by analyzing your log management concept.

Conclusion

While companies have implemented log management with some basic log search functionality, detecting malware in real environments or collecting logs from the cloud is still difficult. Environments grow over time, and understanding the traffic within the infrastructure is key, but a somewhat tedious and time-consuming task. Log engines (e.g. the IDH Framework, Splunk, the Log Correlation Engine (LCE) from Tenable, ELK) help to collect and analyze log information. SIEM systems help to match and correlate different events. Script languages are needed to normalize data where the log engines reach their limits. Cloud providers must support companies in logging the relevant information, or provide connectors for log engines. Furthermore, there is also a “Splunk in the cloud” solution.

Please comment if I missed any challenges or difficulties. I would also be interested in your experiences with log management.

Keywords: SIEM, log management, logging, normalization, APT, cloud, SANS
