Slides are available at http://asfws12.files.wordpress.com/2012/11/asfws2012-jean_philippe_aumasson-martin_bosslet-hash_flooding_dos_reloaded.pdf
As denial-of-service attacks based on hash flooding are not a new topic, Jean-Philippe Aumasson and Martin Boßlet started with an introduction to the subject. Hash tables are typically used to store array-like data, such as the parameters sent in a GET or POST request to a website. Instead of relying on the parameter name itself as the array index, a hash of it is generated and stored for performance reasons. If an attacker is able to generate many parameter names resulting in the same hash, each lookup or insertion degrades from expected constant time to linear time, so handling the n submitted parameters costs on the order of n² operations instead of O(n). As an example, a 2 MB POST request containing specially crafted data takes about 10 seconds to process on a recent machine, as roughly 40 billion string comparisons need to be performed.
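As a rough illustration of this degradation (not taken from the talk, names and numbers purely illustrative), the following sketch counts key comparisons in a minimal chained hash table when every key lands in the same bucket:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal chained hash table, reduced to a single bucket to model the worst
// case where every key collides: inserting n keys then costs ~ n^2 / 2
// comparisons instead of ~ n with a well-distributed hash function.
public class CollisionCost {
    static long comparisons = 0;
    static List<String> bucket = new ArrayList<>();

    static void insert(String key) {
        for (String existing : bucket) {
            comparisons++;                  // every stored key must be compared
            if (existing.equals(key)) return;
        }
        bucket.add(key);
    }

    public static void main(String[] args) {
        int n = 1 << 14;                    // 16384 "colliding" parameter names
        for (int i = 0; i < n; i++) insert("param" + i);
        System.out.println(comparisons);    // n*(n-1)/2 = 134209536 comparisons
    }
}
```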
Such attacks aren’t new either, the CCC having already featured this topic back in 2011. As a fix after those attacks, implementations switched to improved hash algorithms, MurmurHash2 and MurmurHash3, combining better mixing with a secret random seed introduced into the calculation. Jean-Philippe applied some differential cryptanalysis to these algorithms and discovered that collisions can still be produced regardless of the seed, so the root issue remains. Another hash function, CityHash, developed by Google, was found vulnerable to the same kind of attack.
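To show where the randomization enters, here is a simplified Java transcription of the public MurmurHash3 (x86, 32-bit) routine, restricted to inputs whose length is a multiple of 4 bytes (the tail handling is omitted). The seed only sets the initial state; the per-block mixing is fixed, which is what a seed-independent differential attack can exploit:

```java
// Simplified sketch of MurmurHash3 (x86, 32-bit), adapted from the public
// reference implementation; only whole 4-byte blocks are handled here.
public class Murmur3Sketch {

    static int murmur3_32(byte[] data, int seed) {
        final int c1 = 0xcc9e2d51;
        final int c2 = 0x1b873593;
        int h1 = seed;                          // the only place the seed enters

        for (int i = 0; i + 4 <= data.length; i += 4) {
            int k1 = (data[i] & 0xff)
                   | ((data[i + 1] & 0xff) << 8)
                   | ((data[i + 2] & 0xff) << 16)
                   | ((data[i + 3] & 0xff) << 24);

            k1 *= c1;                           // fixed block mixing,
            k1 = Integer.rotateLeft(k1, 15);    // independent of the seed
            k1 *= c2;

            h1 ^= k1;
            h1 = Integer.rotateLeft(h1, 13);
            h1 = h1 * 5 + 0xe6546b64;
        }

        h1 ^= data.length;                      // finalization: length, then avalanche
        h1 ^= h1 >>> 16;
        h1 *= 0x85ebca6b;
        h1 ^= h1 >>> 13;
        h1 *= 0xc2b2ae35;
        h1 ^= h1 >>> 16;
        return h1;
    }

    public static void main(String[] args) {
        byte[] input = "AAAABBBB".getBytes();
        // Different seeds give different digests for honest inputs...
        System.out.printf("%08x%n", murmur3_32(input, 0x12345678));
        System.out.printf("%08x%n", murmur3_32(input, 0x9747b28c));
        // ...but the attack presented constructs inputs that collide for every seed.
    }
}
```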
Armed with the theoretical knowledge brought by Jean-Philippe on how to perform such an attack, Martin had a look at the Ruby on Rails implementation to see whether it was exploitable under real conditions. A first attempt to exploit this through a POST request failed, initially because of encoding issues and ultimately because of the size limits implemented in the framework. Is Rails therefore safe? No, because other features such as the JSON parser rely on a vulnerable hash table implementation. A demo showed how a request containing 2^11 (2048) chosen values took about 5 seconds to process, while 2^14 (16384) chosen values took two minutes, close to the defined Rails timeout. Facetious people might argue that this is just (yet another) example of Ruby being slow compared to other languages, so what about Java? A submission of 2^14 non-colliding values was handled by Java in 0.166 seconds, but 2^14 colliding values took 9 seconds…
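The Java behaviour is easy to reproduce with the well-known collision "Aa".hashCode() == "BB".hashCode(): concatenating 14 such blocks yields 2^14 distinct strings that all share one hash code. The sketch below is an illustrative reconstruction, not the demo shown at the conference; note also that Java 8 and later soften the worst case by turning long collision chains into balanced trees, so timings on a current JVM will be far less dramatic than on the Java 7-era HashMap of 2012:

```java
import java.util.HashMap;
import java.util.Map;

// Build 2^k colliding keys by choosing "Aa" or "BB" for each of k slots:
// both blocks contribute the same amount to String.hashCode(), so all
// resulting strings share a single hash code.
public class JavaHashFlood {

    static String[] collidingKeys(int k) {
        String[] keys = new String[1 << k];
        for (int i = 0; i < keys.length; i++) {
            StringBuilder sb = new StringBuilder();
            for (int bit = 0; bit < k; bit++) {
                sb.append(((i >> bit) & 1) == 0 ? "Aa" : "BB");
            }
            keys[i] = sb.toString();
        }
        return keys;
    }

    public static void main(String[] args) {
        String[] keys = collidingKeys(14);          // 16384 keys, one hash code
        System.out.println(keys[0].hashCode() == keys[keys.length - 1].hashCode()); // true

        long start = System.nanoTime();
        Map<String, Integer> map = new HashMap<>();
        for (String key : keys) map.put(key, 1);
        System.out.printf("insertion took %.3f s%n", (System.nanoTime() - start) / 1e9);
    }
}
```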
What is the solution to fix this issue once and for all? Implement submission size limits, as was done for POST requests in Rails? While this may sound appealing at first, it turns out to be unrealistic, as many other user-influenced values end up in hash tables. Instead of fixing every usage scenario involving hash tables, let's fix the algorithm generating the hashes. This is where Jean-Philippe and his new algorithm SipHash step in again. SipHash is a keyed hash built on confusion and diffusion; the recommended variant, SipHash-2-4, applies two rounds per message block and four finalization rounds, and it was recently implemented in Ruby by Martin. Oracle, alerted on September 11, had not yet answered the researchers at the time of the conference.
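For reference, here is a Java sketch of SipHash-2-4 following the published specification; it is an illustrative transcription, not Martin's Ruby patch. The 128-bit secret key (k0, k1) makes digests unpredictable to an attacker, so colliding inputs can no longer be precomputed:

```java
// Illustrative SipHash-2-4 sketch: 2 SipRounds per 8-byte message block,
// 4 SipRounds at finalization, keyed with two 64-bit words.
public class SipHash24 {

    private static long rotl(long x, int b) {
        return (x << b) | (x >>> (64 - b));
    }

    // Read up to `count` bytes little-endian starting at `off` (missing bytes are zero).
    private static long word(byte[] d, int off, int count) {
        long m = 0;
        for (int i = 0; i < count; i++) m |= (d[off + i] & 0xffL) << (8 * i);
        return m;
    }

    public static long hash(long k0, long k1, byte[] data) {
        // Key the four 64-bit state words ("somepseudorandomlygeneratedbytes" constants).
        long v0 = k0 ^ 0x736f6d6570736575L;
        long v1 = k1 ^ 0x646f72616e646f6dL;
        long v2 = k0 ^ 0x6c7967656e657261L;
        long v3 = k1 ^ 0x7465646279746573L;

        int len = data.length;
        int tail = len & 7;
        int end = len - tail;

        // Absorb full 8-byte blocks, then a final block holding the leftover
        // bytes plus the message length (mod 256) in the top byte.
        for (int off = 0; off <= end; off += 8) {
            long m = (off == end)
                   ? word(data, off, tail) | ((long) (len & 0xff)) << 56
                   : word(data, off, 8);
            v3 ^= m;
            for (int r = 0; r < 2; r++) {       // 2 compression SipRounds
                v0 += v1; v1 = rotl(v1, 13); v1 ^= v0; v0 = rotl(v0, 32);
                v2 += v3; v3 = rotl(v3, 16); v3 ^= v2;
                v0 += v3; v3 = rotl(v3, 21); v3 ^= v0;
                v2 += v1; v1 = rotl(v1, 17); v1 ^= v2; v2 = rotl(v2, 32);
            }
            v0 ^= m;
        }

        v2 ^= 0xff;
        for (int r = 0; r < 4; r++) {           // 4 finalization SipRounds
            v0 += v1; v1 = rotl(v1, 13); v1 ^= v0; v0 = rotl(v0, 32);
            v2 += v3; v3 = rotl(v3, 16); v3 ^= v2;
            v0 += v3; v3 = rotl(v3, 21); v3 ^= v0;
            v2 += v1; v1 = rotl(v1, 17); v1 ^= v2; v2 = rotl(v2, 32);
        }
        return v0 ^ v1 ^ v2 ^ v3;
    }

    public static void main(String[] args) {
        // The same parameter name hashes differently under two different secret keys.
        byte[] name = "parameter_name".getBytes();
        System.out.printf("%016x%n", hash(0x0706050403020100L, 0x0f0e0d0c0b0a0908L, name));
        System.out.printf("%016x%n", hash(0x0123456789abcdefL, 0xfedcba9876543210L, name));
    }
}
```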
Update – the following timeline of events, which happened after the conference, illustrates the novelty of the presentation we attended on 08.11.2012:
On 13.11.2012, CERTA issued an advisory http://www.certa.ssi.gouv.fr/site/CERTA-2012-AVI-643/CERTA-2012-AVI-643.html based on the Ruby security bulletin of 09.11.2012, crediting the authors for the discovery and the fix of the issue in this language: http://www.ruby-lang.org/en/news/2012/11/09/ruby19-hashdos-cve-2012-5371/.
On 23.11.2012, oCERT issued an advisory listing the affected software and describing their contact with Oracle and other vendors: http://www.ocert.org/advisories/ocert-2012-001.html