Password Security Part 3

In the previous post, we discussed how offline dictionary attacks are carried out, and explained what the “best practice” countermeasures are: namely, using a public random salt (to prevent time-space tradeoff attacks like rainbow tables) and multiple hash iterations (to slow down attackers).
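To make this best practice concrete, here is a minimal Python sketch of salted, iterated hashing using the standard library’s hashlib.pbkdf2_hmac. The iteration count, salt length, and function names are my own illustrative choices, not a recommendation for any particular system.

```python
import hashlib
import hmac
import os

ITERATIONS = 10_000  # illustrative; set as high as your login latency budget allows

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the salt is public but unique per user."""
    salt = os.urandom(16)  # random public salt defeats rainbow-table style precomputation
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the iterated hash with the stored salt and compare."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored)  # constant-time comparison
```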

In this post, I want to begin by describing a few directions that have been suggested for making the attacker’s job harder. I stress at the outset that none of these measures makes the attack infeasible for the attacker, or even much harder. Nevertheless, making it even mildly harder for the attacker has advantages.

In the basic scheme, a fixed and known number of iterations is used. If 10,000 iterations are used, then this slows both the user and the attacker down by a factor of 10,000. Another possibility is to use an unspecified and unknown number of iterations: for example, choose a random number between 5,000 and 15,000 when the password is set, and do not store it. When the user logs in, the authenticating server iteratively hashes the password 5,000 times. Then, after each additional iteration, it compares the result to what is stored. If the results are equal, the authenticating server halts and authorizes the login. Otherwise, it continues until 15,000 iterations have been carried out (with no successful comparison) and concludes that the password is incorrect. This idea provides a mild benefit, since the legitimate user on average has to wait for 10,000 iterations, while the attacker needs to run 15,000 iterations to be sure that a password guess is incorrect. I have to admit that I don’t think the benefit of this method is very significant. However, it is worth mentioning that it is possible to slightly tilt the odds against the attacker. (Note that just raising everyone’s iteration count to 15,000 would be even better: if you can tolerate 15,000 iterations for any single user then you should be able to tolerate it for all users, and if you cannot tolerate it for a single user then this method isn’t for you anyway.)
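A minimal sketch of how this could look, assuming plain iterated SHA-256 and a per-user iteration count drawn uniformly from [5,000, 15,000] that is deliberately not stored; the function and constant names are my own illustrations, not part of any standard.

```python
import hashlib
import hmac
import os
import secrets

MIN_ITERS, MAX_ITERS = 5_000, 15_000

def register(password: str) -> tuple[bytes, bytes]:
    """Hash the password a secret, random number of times; only the salt and digest are stored."""
    salt = os.urandom(16)
    iters = MIN_ITERS + secrets.randbelow(MAX_ITERS - MIN_ITERS + 1)
    digest = salt + password.encode()
    for _ in range(iters):
        digest = hashlib.sha256(digest).digest()
    return salt, digest  # the iteration count itself is deliberately not stored

def verify(password: str, salt: bytes, stored: bytes) -> bool:
    """Keep hashing and check every candidate count between MIN_ITERS and MAX_ITERS."""
    digest = salt + password.encode()
    for i in range(1, MAX_ITERS + 1):
        digest = hashlib.sha256(digest).digest()
        if i >= MIN_ITERS and hmac.compare_digest(digest, stored):
            return True  # the legitimate user stops here, after ~10,000 iterations on average
    return False  # a wrong password always costs the full 15,000 iterations
```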

A more significant method is to use a hash scheme that is hard to parallelize, or that uses a lot of memory (or requires frequent memory accesses). This idea is especially attractive today, since the most powerful password crackers use ASICs, FPGAs and GPUs. (Note that an ASIC costing about $10,000 can compute 5 billion SHA-256 hashes per second, which is enormous.) This is the idea behind scrypt, proposed in 2009.
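For illustration, here is a short sketch of memory-hard password hashing using hashlib.scrypt from the Python standard library (available when Python is built against a sufficiently recent OpenSSL). The cost parameters are only examples: n=2^14 with r=8 forces roughly 16 MiB of memory per evaluation.

```python
import hashlib
import os

def scrypt_hash(password: str) -> tuple[bytes, bytes]:
    """Memory-hard hashing: each evaluation needs ~16 MiB of RAM with these parameters."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, dklen=32)
    return salt, digest
```

The point is not the specific numbers, but that memory, unlike raw hashing speed, is expensive to replicate across thousands of parallel cracking cores.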

Due to the massive damage incurred from password breaches, a group of concerned cryptographers and security professionals is running a password hashing competition. The competition website is here. One of the design principles for submissions is that the “speed-up or other efficiency improvement of cracking-optimized ASIC, FPGA, and GPU implementations compared to CPU implementations intended for password validation should be minimal”. Although hashing passwords will never be secure, such efforts can go a long way toward making it much harder for attackers.

I will complete this post by explaining why hashing passwords will never be secure, or at least not without fundamentally changing our biology. The bottom line is that human beings are not capable of remembering long random values. This problem is seriously exacerbated by the fact that we are now expected to remember many passwords, and many of these are passwords that we use rather infrequently and so are even harder to remember (most of us don’t access our bank website every day). Couple this with the fact that the amount of time users can legitimately wait to log on is not too great, and the fact that attackers will always be able to use much stronger hardware, and we have to conclude that offline dictionary attacks will continue to be devastating when hashed password files are stolen. (Just to make sure that I am not misunderstood: I am very much in favor of the type of work proposed in the password hashing competition, and see it as important work. However, it won’t ever make password hashing “secure” in the sense that AES-based encryption is secure as long as the secret key is not stolen. Rather, it will make the “best practice” of salting and iterated hashing better.)

This argument is well backed up by a number of studies that have been carried out on the passwords users choose, as revealed in publicly posted results of breaches. For example, Mark Burnett has compiled the 10,000 most common passwords and claims that, from his studies, 98.8% of users have a password appearing in this list of the 10,000 most frequent passwords. A more scientific study by Joseph Bonneau, while at Cambridge University, analyzed 70 million passwords and reached the conclusion that passwords on average provide protection at a level of about 20 bits (meaning that about a million guesses suffice), and in many concrete cases the situation is much worse.
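To see what this means in numbers, here is a rough back-of-the-envelope estimate combining the figures mentioned above (about 20 bits of effective entropy, roughly 5 billion raw hashes per second on a $10,000 ASIC, and 10,000 iterations per guess); the exact values are of course only illustrative.

```python
RAW_HASHES_PER_SEC = 5e9        # the ASIC figure quoted above
ITERATIONS = 10_000             # hash iterations per password guess
GUESSES = 2 ** 20               # ~1 million guesses for ~20 bits of entropy

guesses_per_sec = RAW_HASHES_PER_SEC / ITERATIONS     # 500,000 guesses per second
seconds_per_password = GUESSES / guesses_per_sec
print(f"~{seconds_per_password:.0f} seconds per password")   # about 2 seconds
```

Even multiplying this cost by several orders of magnitude with memory-hard hashing leaves typical human-chosen passwords within reach, which is exactly the point made above.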

Passwords are here to stay, and the best practice of iterated hashing with salt isn’t good enough (but is certainly far better than doing nothing, or just hashing without any iterations or salt).

One option is to use two-factor authentication methods. These are a good solution, but for many organizations (like eCommerce websites) they aren’t a viable alternative. In the next post, I’ll discuss what can nevertheless be done!

Prof. Yehuda Lindell

Yehuda Lindell is a professor of Computer Science at Bar-Ilan University in Israel. He is an expert on cryptography, has published over 90 scientific articles, and has authored one of the most widely used textbooks on the subject. He has years of industry experience in the application of cryptography to computer security.
