ASFWS – Cybercrime to Information Warfare & “Cyberwar”: a hacker’s perspective

Slides available on http://asfws12.files.wordpress.com/2012/11/asfws2012-raoul_chiesa-ioan_landry-infowar_and_infoops.pdf

Raoul Chiesa & Ioan Landry had the last words of AppSec Forum Western Switzerland for the concluding presentation. Let's be honest, trying to summarize Raoul and Ioan's presentation in a few lines is a hard task. And the task gets even more complicated, or even impossible, as they explicitly asked us not to disclose some of the information they presented. So instead of trying to sum up a very rich presentation and maybe disclosing things I should not, I prefer to refer you to the detailed slides of the presentation they gave us.

After this final talk, there was still time for a good apéritif before heading to the Château d'Yverdon for a final meal together. It was great having all remaining attendees around a single table and being able to exchange further on all the topics covered over the last two days. Unfortunately I couldn't stay long in the evening, as I had to catch the last train to Zürich, but I definitely enjoyed my time in Yverdon-les-Bains.

From a personal point of view, I really enjoyed the conference as well as the social events, which were truly enriching. I would have loved to cover further topics from the conference, such as news about mobile devices, web security or how to hack the Twitter API, but this was simply impossible, as I could not attend the presentations of both tracks at the same time.

As a conclusion, I hope you enjoyed this detailed series of articles about ASFWS 2012 as much as I enjoyed attending it. I look forward to the next edition and hope, as expressed in this slide from Raoul's presentation, that you will be keen to participate as well!

ASFWS – SuisseID talk

Due to a canceled presentation, a slot became available Thursday afternoon and Dominique Bongard used this time for an improvised talk about SuisseID. Without any slides, but by dynamically switching between different websites and documents, he started an interesting and interactive discussion with the audience around the goals, limitations and risks linked to SuisseID, an electronic signature device recognized by Swiss law.

Dominique started with an inventory of all the usages and abilities provided by SuisseID, as marketed on various websites. His focus was on how SuisseID could help a small Swiss startup authenticate users, and he confronted this view with the current reality of e-commerce in Switzerland. He explicitly excluded from the scope of his talk all advantages of SuisseID for e-government tasks, as these bring no benefit to a small company's B2C interactions.

His observation was that all e-commerce websites having implemented SuisseID used it at best to replace the username/password authentication scheme, without extracting any further data (e.g. name or address). His investigations also showed that several merchants who implemented SuisseID have since removed this feature or plan to do so in the near future.

So why use SuisseID as a mere login/password replacement, and not leverage all the information contained within this famous electronic ID card? Well, this is exactly the issue in his opinion: while the marketing of SuisseID tries to sell it as an electronic ID card, it contains just enough information to generate digital signatures, recognized by Swiss law as an equivalent of your handwritten signature.

Dominique investigated further and ordered SuisseID devices from several providers, doing some social engineering at a Swiss Post counter, at Mobilezone and at the desk of the local community administration. His conclusions are harsh, as some registration authorities did not run adequate identity checks before delivering a SuisseID.

The discussions and interactions with the audience were really interesting, as it's definitely a controversial topic. On the one hand there is the fear of a Big Brother where all our data is recorded; on the other, there are many reasons why such a device should include as much data as possible for the sake of simplicity and convenience.

Swiss government bodies are trying to push SuisseID, as it's (in my opinion) a required step for good e-government solutions. As a final thought, a participant mentioned that the probable entry of Swisscom as a SuisseID provider, combined with an offer based on mobile devices, may accelerate the trend and result in a tighter and more convenient integration between digital signatures, customer details and payment features.

ASFWS – OAuth: un protocole d’autorisation qui authentifie?

Slides available on http://asfws12.files.wordpress.com/2012/11/asfws2012-maxime_feroul-oauth_un_protocole_qui_authentifie.pdf

Maxime Feroul started his presentation with the observation that we all currently have many different identities on the Internet. Federating them into a common solution is far from easy, as it must be secure and simple for all stakeholders. Ideally, you would want to use your LinkedIn, Xing or Salesforce profile to identify yourself on other business-related sites, while logins to more personal websites may rely on Facebook.

Different identity federation protocols and providers are already available. On one side we find the more business-oriented SOAP/XML solutions (WS-* or SAML), while on the other the solutions are more "web"-oriented, based on REST and JSON (OpenID Connect, OAuth 2.0, UMA).

But let's come back to OAuth and compare it to OpenID Connect. While OpenID Connect authenticates users, OAuth is used to authorize them. Initially the goal was to allow applications or services to use a user's resources on their behalf, but without having to store their credentials. OAuth therefore does not specify how to authenticate the user; the identity provider (IDP) is solely responsible for this and is free to perform this duty as it sees fit.

OAuth 2.0 features several ways to authorize users, but the two most common are the authorization code and the implicit grant. Relying on the "implicit" grant can be dangerous, as a rogue application may impersonate its victim by reusing the token issued by the IDP. According to the speaker, this vulnerable "implicit" grant is often in use in the mobile world.

A look at the specification of the OAuth 2.0 "implicit" grant shows that the authors noticed this security loophole and signaled it in the RFC. Facebook's OAuth implementation is not 100% conformant with the protocol, but mitigates the security issue introduced by the "implicit" grant by adding additional checks on the server side.

As a practical take-away for your next "social" application, ensure your application does not just pass tokens around, but adds extra checks to verify for which services these tokens were issued. Double-check as well that the whole authentication/authorization process occurs over a secure channel, and protect your application against Cross-Site Request Forgery ([C|X]SRF) attacks.
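The take-away above can be sketched in a few lines. This is a minimal, hypothetical check on the resource-server side, assuming the IDP exposes a token-introspection endpoint returning at least `active` and `client_id` fields (all names are invented for illustration, not a specific IDP's API):

```python
# Hypothetical audience check for an OAuth 2.0 resource server: a token that is
# merely valid is not enough - it must also have been issued for *this* client.
# The token_info dict mimics what an IDP token-introspection endpoint might return.

def is_token_acceptable(token_info, expected_client_id):
    if not token_info.get("active", False):
        return False  # expired or revoked token
    # Reject tokens issued to another application, even if they are valid:
    # this is the check that defeats token reuse by a rogue application.
    return token_info.get("client_id") == expected_client_id

# A token stolen by a rogue app is still "active", but fails the audience check:
own_token = {"active": True, "client_id": "my-social-app"}
stolen_token = {"active": True, "client_id": "rogue-mobile-app"}
```

The point is that token validity and token audience are independent properties; checking only the former is exactly the "implicit" grant pitfall described above.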

ASFWS – Node.js Security – Old vulnerabilities in new dresses

Slides available on http://asfws12.files.wordpress.com/2012/11/node_security_presentation_v3_asfws.pdf

In a similar way to Alok's OPA presentation of the previous day, Sven Vetsch guided us through Node.js, a high-performance JavaScript web server based on Google's V8 engine. Node.js (abbreviated Node hereafter) features a fully non-blocking API. With a simple "Hello World!" example in JavaScript we got an understanding of how easily client-side scripting skills can be reused on the server side. Fun fact: console.log also works in Node, and printing character 0x07 (BEL) via this function will inevitably lead to a beeping chorus on your server.

But reusing insecure client-side scripting patterns may have much more dramatic results on the server side. Vulnerable functions such as eval() will not "just" result in DOM-based XSS flaws within a browser, but can now be leveraged to quickly compromise your whole web server. As a proof of concept, Sven developed his own Metasploit module to backdoor a Node server with a single request against a vulnerable script. And thanks to the magic of JavaScript, you can redefine any existing function to your needs, not only within the scope of your request but for the whole server, achieving persistence until the next server restart.
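Sven's examples were JavaScript, but the eval() anti-pattern he demonstrated exists in any dynamic language; here is the same class of flaw sketched in Python to keep one language throughout this write-up (function names invented):

```python
import ast

# The eval() anti-pattern demonstrated for Node, sketched in Python.
# A "calculator" endpoint that evaluates user input verbatim:
def vulnerable_calc(expression):
    return eval(expression)  # attacker-controlled input reaches eval()

# vulnerable_calc("6 * 7") does what the developer intended, but the very same
# entry point also executes e.g. "__import__('os').system('id')" - on a server,
# a single such request can compromise the whole process.

def safer_calc(expression):
    # ast.literal_eval accepts only literals: the os-import payload above is
    # rejected with an exception instead of being executed.
    return ast.literal_eval(expression)
```

The safe variant is deliberately restrictive: ast.literal_eval rejects any function call or attribute access, so "calculator" features would need a real expression parser rather than eval().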

As listed on slide 35, many features aren't natively supported by Node. But the package manager npm allows you to complement your installation and also takes care of package dependencies. A vulnerability or backdoor in a popular package may therefore impact the security of many websites. As often, the quality of the different modules varies enormously within the repository, making reviews of all involved dependencies tough. Many other vulnerable examples are provided in Sven's slide set, featuring code you absolutely don't want to see in any of your production applications.

ASFWS – Hash-flooding DoS reloaded: attacks and defenses

Slides available on http://asfws12.files.wordpress.com/2012/11/asfws2012-jean_philippe_aumasson-martin_bosslet-hash_flooding_dos_reloaded.pdf

As denial-of-service attacks based on hash flooding are not a new topic, Jean-Philippe Aumasson and Martin Boßlet started with an introduction. Hash tables are typically used to store any array-based information, such as the data sent in a GET or POST request to a website. Instead of relying on the parameter name for the array index, a hash of it is generated and stored for performance reasons. If an attacker is now able to generate many parameter names resulting in the same hash, the effort to search for a given value in the hash table degrades from linear time (O(n)) to the order of O(n²). As an example, a 2 MB POST request containing specially crafted data will keep a recent machine busy for about 10 seconds, as ~40 billion string comparisons need to be performed.
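A toy sketch of the mechanism (deliberately not the real MurmurHash collisions from the talk): when an attacker can force every key into a single bucket, each insert must compare against all previous keys, giving the quadratic blow-up described above.

```python
# Toy bucketed hash table that counts key comparisons, to make the cost of
# hash flooding visible. weak_hash stands in for any attacker-predictable hash.

class TinyHashTable:
    def __init__(self, n_buckets=64, hash_fn=hash):
        self.buckets = [[] for _ in range(n_buckets)]
        self.hash_fn = hash_fn
        self.comparisons = 0  # total key comparisons performed

    def insert(self, key, value):
        bucket = self.buckets[self.hash_fn(key) % len(self.buckets)]
        for i, (k, _) in enumerate(bucket):
            self.comparisons += 1
            if k == key:
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

weak_hash = len  # trivially collidable: all keys of equal length collide

normal = TinyHashTable(hash_fn=hash)
flooded = TinyHashTable(hash_fn=weak_hash)
for k in ["param%04d" % i for i in range(1000)]:  # 1000 equal-length keys
    normal.insert(k, 1)
    flooded.insert(k, 1)

# normal.comparisons stays small; flooded.comparisons reaches n*(n-1)/2 = 499500
```

With only 1000 keys the flooded table already performs half a million comparisons; scale the key count up and the 10-second figure from the talk becomes plausible.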

Such attacks aren't new; the CCC featured this topic back in 2011 as well. As a fix after these attacks, improved hash algorithms called MurmurHash2 and MurmurHash3 were released, involving better hash generation and the introduction of a random value in the calculation. Jean-Philippe decided to apply some differential cryptanalysis to these algorithms and discovered they were still vulnerable to the same root issues. Another hash algorithm, CityHash, developed by Google, was also found vulnerable to the same issues.

Armed with the theoretical knowledge on how to perform such an attack brought by Jean-Philippe, Martin decided to have a look at the Rails implementation to see if it was exploitable in real conditions. A first attempt to exploit this in a POST request failed due to encoding issues and finally due to size limitations implemented in the framework. Is Rails therefore safe? No, because other features such as the JSON parser use a vulnerable hash table implementation, and a demo showed us how submitting a request containing 2^11 (2048) chosen values took ~5 seconds to execute, while 2^14 (16384) chosen values took two minutes, close to the defined timeout of Rails. Facetious people might argue that this is just (yet another) example of Ruby being slow compared to other languages, so what about Java? A submission of 2^14 non-colliding values was handled by Java in 0.166 seconds. But 2^14 colliding values took 9 seconds…

What is the solution to fix this issue once and for all? Implement submission size limits, as was done for POST requests within Rails? While this may sound appealing at first, it turns out to be unrealistic, as many user-influenced values may rely on hash tables. Instead of fixing all usage scenarios involving hash tables, let's fix the algorithm generating the hashes. This is where Jean-Philippe and his new algorithm SipHash step in. SipHash is based on diffusion and confusion with 4 rounds and was recently implemented in Ruby by Martin. Oracle, alerted on September 11, had not yet answered the researchers at the time of the conference.
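The core idea behind SipHash can be illustrated with any keyed hash: seed the hash with a per-process random secret so that collisions cannot be precomputed offline. SipHash itself is not exposed in Python's hashlib, so blake2b's keyed mode stands in here purely to illustrate the principle:

```python
import hashlib
import os

# The fix SipHash brings: make the hash *keyed* with a per-process random secret,
# so attackers cannot precompute colliding inputs. This uses blake2b's keyed mode
# only as an illustration - it is not SipHash.

SECRET_KEY = os.urandom(16)  # picked at process start, never disclosed

def keyed_bucket(key: bytes, n_buckets: int = 1024) -> int:
    digest = hashlib.blake2b(key, key=SECRET_KEY, digest_size=8).digest()
    return int.from_bytes(digest, "little") % n_buckets

# Within one process the mapping is stable; across processes (fresh SECRET_KEY)
# it changes, so a set of inputs colliding today will not collide on another
# server or after a restart.
```

This is exactly why a fixed, unkeyed hash like the attacked MurmurHash variants can be broken once and exploited everywhere, while a properly keyed hash forces the attacker to recover the secret first.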

Update – the following timeline of events, which happened after the conference, illustrates the novelty of the presentation we had on 08.11.2012:

On 13.11.2012, CERTA issued an advisory http://www.certa.ssi.gouv.fr/site/CERTA-2012-AVI-643/CERTA-2012-AVI-643.html based on the Ruby security bulletin of 09.11.2012, crediting the authors for the discovery and the fix of the issue in this language: http://www.ruby-lang.org/en/news/2012/11/09/ruby19-hashdos-cve-2012-5371/.

On 23.11.2012, oCERT issued an advisory listing the affected software and their contact with Oracle and other vendors: http://www.ocert.org/advisories/ocert-2012-001.html

ASFWS – Keynote 2 – From Pay-TV to cyber security

Original Prezi presentation available on http://prezi.com/qhv0ra2qhxoz/asfws-2012-keynote-2/. Prezi converted slides available on http://asfws12.files.wordpress.com/2012/11/asfws2012_keynote2.pdf

Olivier Brique, VP Cybersecurity Technology of the Swiss company Kudelski, offered us an insightful and dynamic dive into the history of his company, which initially produced high-quality microphones before developing Pay-TV solutions and finally announcing, on the 21st of last month, a new cyber security division.

But before talking about the new division officially launched two weeks after this presentation, Olivier gave us more details on what a Pay-TV solution is composed of, how it evolved and how it was attacked. Following the first attacks back in the late 1990s, Kudelski developed an internal intelligence unit, gathering information on the Internet and monitoring forums discussing the reverse engineering of smart cards. At the same time, research and development efforts were made on various fronts, especially in the form of a lab to test internal products from an attacker's perspective before they are released.

Of course, the cat-and-mouse game between the company and hackers continued. Around 2005, Kudelski launched a new generation of smart cards considered secure, but the game did not end there either. With the progress of the Internet, attackers could now, using legitimate smart cards, decrypt given TV channels and distribute the clear-text signal over the network. The emergence of such "Piracy as a Service" platforms owned by organized crime triggered the need for further Internet monitoring at Kudelski and the development of internal competences in network forensics. A world-wide network of lawyers was also set up to be able to respond to the threat via legal means.

With such a history, starting with securing hardware and becoming an "insurance-safety service", this company featuring 1'000 security engineers out of a total of 3'000 employees certainly has some cards to play on today's market. But how does this specific Pay-TV knowledge apply to other, service-oriented companies such as banks? According to Olivier, strong similarities exist with issues such as migration to cloud services (how to secure data on the move and at rest on uncontrolled and partially untrusted equipment) or Bring-Your-Own-Device (where the device must be resilient against attacks).

The initial slide set, based on Prezi, gave a little added dynamic touch to the whole presentation, which got lost in the PDF version of the slides. Despite this, I recommend reading the slides for further details, especially about all the Pay-TV relevant data. Enjoy viewing the Prezi set, as it will lead you through a dynamic history of Kudelski and of Pay-TV.

[Updated on 10.12.2012 to include the link to the Prezi presentation Olivier submitted in the comments and alter slightly the conclusion]

ASFWS – A critical analysis of Dropbox software security

Slides available on http://asfws12.files.wordpress.com/2012/11/dropbox-asfws-version.pdf

It was a full (or even overfilled) room, in which several people did not find a seat, that listened to Nicolas Ruff and Florian Ledoux's presentation. The topic is certainly appealing, but the reputation of Nicolas Ruff aka newsoft ("Security researcher, hacker, blogger, serial speaker, troll herder, happy father & more" as he describes himself) is also a guarantee of an interesting and entertaining presentation. After a short introduction of their employer, EADS Innovation Works (a few hundred people within a holding of over 170'000 employees), we got right into the technical details of how Nicolas and his intern Florian approached the challenge of reversing the Dropbox client as well as its communication protocol.

But first, why start this investigation? Legend says it started due to strange broadcast packets on the LAN, sent by contractors located in the adjacent room. Furthermore, no in-depth analysis or proven track record of the Dropbox concept and protocol existed, leaving many questions open.

A first look at the client binaries across the supported OSes shows that the Linux, Mac and Windows clients all use a similar Python-based binary. All PYC files (compiled Python sources) are available on the client, stored within a zip embedded in the PE resources, but with scrambled content. The client also includes a full Python interpreter, not quite identical to the original one, as 53 files have been modified. It turns out the altered Python interpreter implements TEA encryption for the previously seen PYC files.
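For reference, textbook TEA operates on 64-bit blocks with a 128-bit key. The sketch below shows the standard algorithm only; Dropbox's exact variant, mode of operation and keys are of course not reproduced here.

```python
# Textbook TEA block cipher (64-bit block as two 32-bit halves, 128-bit key as
# four 32-bit words). Illustrative only - not Dropbox's actual implementation.

MASK = 0xFFFFFFFF
DELTA = 0x9E3779B9  # TEA's magic constant, derived from the golden ratio

def tea_encrypt(v0, v1, k):
    s = 0
    for _ in range(32):  # 32 rounds
        s = (s + DELTA) & MASK
        v0 = (v0 + (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        v1 = (v1 + (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
    return v0, v1

def tea_decrypt(v0, v1, k):
    s = (DELTA * 32) & MASK
    for _ in range(32):  # undo the 32 rounds in reverse order
        v1 = (v1 - (((v0 << 4) + k[2]) ^ (v0 + s) ^ ((v0 >> 5) + k[3]))) & MASK
        v0 = (v0 - (((v1 << 4) + k[0]) ^ (v1 + s) ^ ((v1 >> 5) + k[1]))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1

key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # arbitrary demo key
ct = tea_encrypt(0xDEADBEEF, 0xCAFEBABE, key)
assert tea_decrypt(ct[0], ct[1], key) == (0xDEADBEEF, 0xCAFEBABE)  # round-trips
```

TEA's appeal for an obfuscation layer like this is obvious: the whole cipher fits in a handful of lines and is trivial to bolt into a modified interpreter.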

We now have decrypted PYC files, which are still in compiled form. But all PYC decompilers only work for files compiled with version 2.7, not for version 2.5, which Dropbox currently uses. Never mind: instead of writing their own decompiler, Nicolas and Florian wrote two wrappers to upgrade a PYC file from Python 2.5 to 2.6 and from 2.6 to 2.7, and were thus finally able to get an insight into the source code of the Dropbox client.
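The visible first step of such a version upgrade is rewriting the .pyc header: its first two bytes are a little-endian, version-specific magic number (e.g. 62211 for CPython 2.7), followed by b"\r\n" and a timestamp. The hard part, remapping bytecode between versions, is what their wrappers actually do and is not shown here; this sketch only reads and patches the magic:

```python
import struct

# Read/patch the version magic at the start of a CPython 2.x .pyc file.
# Helper names are invented for illustration.

def read_pyc_magic(path):
    with open(path, "rb") as f:
        magic = struct.unpack("<H", f.read(2))[0]  # 2-byte little-endian magic
        assert f.read(2) == b"\r\n"                # fixed separator bytes
        return magic

def patch_pyc_magic(path, new_magic):
    # Overwrite only the first two bytes; the rest of the file is untouched.
    with open(path, "r+b") as f:
        f.write(struct.pack("<H", new_magic))
```

A decompiler checks this magic to decide whether it supports the file, which is why rewriting it is the entry point of any cross-version upgrade.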

Compared to previous versions, the current client uses an encrypted SQLite database (with the default value / license key…) and the Windows DPAPI for storing sensitive information. A custom but robust implementation for storing sensitive data is also available for Macs. From a network perspective, auto-updates of the client are signed, and a hard-coded list of root CAs is used for establishing the SSL communications.

While the SSL library wasn't the best in Python 2.5, Dropbox implemented their own solution based on OpenSSL and nCrypt. But this workaround also introduces security issues, as the two binaries date back to 2009 and 2007 respectively and have known security vulnerabilities which may be exploitable for remote code execution. No update of nCrypt has been published since, and therefore no easy patch exists for this issue.

When sending a file via Dropbox, its content gets split into chunks of 4 MB and a SHA-256 hash is calculated for each of them. Of all the network communication between the Dropbox client and the server, only one request is made over HTTP (statistics about the user account using non-sensitive data). All the remaining traffic is sent over an HTTPS connection. Note that this includes all errors and stack traces occurring on your client, which are sent to Dropbox automatically.
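The chunking scheme described above fits in a few lines (the real client uses 4 MB blocks; the block size is reduced in the demo comment only):

```python
import hashlib

# Sketch of the described scheme: cut a file into fixed-size blocks and
# identify each block by its SHA-256 hash.

def block_hashes(data: bytes, block_size: int = 4 * 1024 * 1024):
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

# Identical blocks yield identical hashes - the property that deduplication
# relies on, and that the original Dropship attack abused before per-user
# ACL checks were enforced.
```

The same property cuts both ways: it enables cross-user deduplication, but it also means knowing a block's hash was once enough to fetch the block, which is exactly what the Dropship attack below exploited.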

A review of the known attacks against Dropbox starts with Dropship, where access to blocks based on their SHA-256 hash was possible without any authentication. This attack was successfully mitigated by enforcing user ACL checks. A second attack, presented at USENIX 2011, was also successfully mitigated in the analyzed binary.

Nicolas and Florian's focus for attack was the LAN sync feature, relying on port udp/17500 for discovery (via broadcast) and tcp/17500 for data exchange. Data exchange is done over TLS with client certificates, as each Dropbox installation has a public and private key signed by Dropbox. Each installation (or node) can act as a server sending files or as a client gathering files. The only identified issue here is that no certificate check is performed by the client when connecting to a (potentially rogue) Dropbox server.
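The missing check can be illustrated with Python's ssl module: a context created without verification will accept any peer, which is exactly what makes a rogue LAN-sync server possible. This is a generic sketch of the flaw class, not Dropbox's actual code:

```python
import ssl

# A client context with verification disabled will complete a TLS handshake
# with any server, rogue or not - the flaw class described in the talk:
insecure_ctx = ssl._create_unverified_context()
# insecure_ctx.verify_mode is ssl.CERT_NONE, insecure_ctx.check_hostname is False

# A properly configured client verifies the server's certificate chain and
# hostname before exchanging any data:
strict_ctx = ssl.create_default_context()
# strict_ctx.verify_mode is ssl.CERT_REQUIRED, strict_ctx.check_hostname is True
```

Note the asymmetry in the described setup: the server demands a Dropbox-signed client certificate, but the client accepts any server, so mutual TLS only protects one direction.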

The conclusion of the talk was positive, as most issues identified in the past are fixed, and the protocol and the client, aside from a few flaws, are pretty robust. All aforementioned points were signaled to Dropbox, who recorded and acknowledged the bugs. On the other hand, their investigations mean we now have software to decompile PYC files generated by Python 2.5 & 2.6, as well as a DBX decryption routine for encrypted SQLite databases.

I really encourage all interested readers to have a look at both the slides (featuring many more technical details) and their online tool repositories (https://github.com/Mysterie/uncompyle2 & https://github.com/newsoft), where most of the software they wrote for this analysis is available.

 

After this great last presentation of the day, it was time for the traditional "apéritif" and then the departure for the evening's event, which took place in the heights of Lausanne at the Chalet Suisse. There's nothing like a good Swiss fondue and a huge dessert for having interesting discussions and exchanging experiences around the table. It was also a great occasion to meet new people or catch up with others, such as Bruno Kerouanton, who made the trip just for the evening. The bus trip back and the last drinks at the hotel were also precious moments at this kind of conference.

ASFWS – Bee Ware WAF

Slides available on http://asfws12.files.wordpress.com/2012/11/yverdon-2012-secweb-analyse-tech-vs-contextuelle.pdf

This talk by Matthieu Estrade (CTO of Bee Ware), officially entitled "Sécurité des applications web, analyse technique versus analyse contextuelle" ("Web application security: technical versus contextual analysis"), was in fact a kind of sales pitch for Bee Ware, a special kind of Web Application Firewall (WAF). Compass Security has extensive knowledge of the leading WAF products in the German-speaking part of Switzerland (think of AirLock, Secure Entry Server or Nevis), but I had never heard of Bee Ware until then. Let's try to understand the idea behind this product, which obviously never crossed the "Röstigraben" (yet).

The first part of the presentation focused on the current challenges of using "standard" WAFs based on a technical analysis of each request. Depending on the application, you may end up with numerous false positives handled by less trained security engineers who are not aware of what is relevant to the protected web application. The race between defenders and attackers is endless, and pattern blacklisting will always run behind innovative attackers. Furthermore, the quality and the attack surface of web applications vary a lot, and taking an informed decision isn't easy. While getting everything right is possible, it requires good communication between all stakeholders of a project, which unfortunately is not often the case.

This is where the idea behind Bee Ware comes into play. Instead of focusing only on technical aspects, Bee Ware includes a contextual analysis of each request and all previously related interactions. The focus is thereby put on the bottom two percent of abnormal web traffic not matching the usual usage patterns. Several agents analyze each request and keep track of a score per "client", based on the algorithms used by ants to find an optimal path. Some agents are of a very technical nature, e.g. ensuring that a client really sends the HTTP headers expected from its claimed user agent. Another agent may track navigation habits and timing, ensuring pages get viewed at a reasonable rhythm, loading all resources (e.g. pictures) adequately. Yet another agent will assess the navigation path, detecting uncommon navigation patterns (e.g. a direct POST to a form before a GET is done, or direct access to a hidden feature). Other information may be based on the geographical region (e.g. a Russian customer of a local French bank) or on the OS/browser used, for an intranet which is only accessible from managed machines. All these agents return a score to the engine, which correlates them and decides if the client is genuine, suspicious or a confirmed offender.
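As a purely illustrative sketch of such a correlation engine (agent names, scores and thresholds are all invented here, not Bee Ware's actual algorithm):

```python
# Toy correlation engine: each agent returns an anomaly score between
# 0 (normal) and 1 (highly suspicious); the engine averages them into the
# three verdicts mentioned in the talk. Thresholds are invented for the demo.

def correlate(agent_scores, thresholds=(0.3, 0.7)):
    overall = sum(agent_scores.values()) / len(agent_scores)
    if overall < thresholds[0]:
        return "genuine"
    if overall < thresholds[1]:
        return "suspicious"
    return "offender"

request_scores = {
    "header_consistency": 0.1,  # headers match the claimed user agent
    "navigation_timing": 0.2,   # pages viewed at a human rhythm
    "navigation_path": 0.9,     # POST to a form that was never GET-fetched
    "geolocation": 0.8,         # unusual region for this customer base
}
```

A real product would weight agents and accumulate reputation over time rather than average a single request, but the sketch shows why one noisy agent alone does not condemn a client: the verdict comes from the correlation.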

New resources will be treated with a high degree of vigilance at the beginning, but once a standard usage pattern has been learned, the vigilance level is reduced unless other agents signal uncommon properties. For security analysts this kind of WAF may become a challenge, as identical requests issued at different times – and therefore with different levels of client reputation – will yield different results.