
InfoSec RTFM: Password Expiration Policies

This post is the first in a hopefully irregular series of articles on the consequences, in the information security industry, of decisions based not on recent research or even on basic threat modelling, but on common-sense faux-amis. Password Expiration Policies (PEPs from now on) are quite widespread these days, and are justified by various assumptions about attacker and defender behaviour. They consist of forcing you to change your password on a regular basis in order to keep accessing a service, often with restrictions on syntactically similar passwords.

PEPs may sound brilliant at first. However, a superficial economic overview and an attack/attacker breakdown of the problem help explain why PEPs do more harm than good in practice. Let me demonstrate.

First of all, let me remind our readers of an excellent and very relevant study of useless security measures: So Long, and No Thanks for the Externalities: The Rational Rejection of Security Advice by Users, by Cormac Herley. Even though I haven’t read “So Long” in a while, it does address passwords, and it’s one of the best reads ever on the economics of information security.

What are PEPs?

They’re policies that require you to change your password on a regular basis, usually every one to six months. They often require the new password to be significantly different from previous ones (going as far as using weaker storage methods to allow for more elaborate comparisons).

Why Expiration Policies?

There are a number of scenarios that PEPs have been advocated to address, with varying degrees of effectiveness:

  • Continuous attacker access: An attacker may get hold of a password at a certain time, but have an interest in accessing the system later. This is especially true when the attacker must remain stealthy, has no means to install a backdoor or repeat their attack, and benefits from repeated intrusions (for instance, spying on the e-mail account of an individual). Other countermeasures include intrusion detection systems that can flag statistically unlikely connections to an account.

  • Stolen hashes or backups: PEPs prevent later exploitation of stolen password hashes (obtained through intrusions or backup theft) once they are broken. Other mitigations include proper password storage with large salts (which, if not stolen along with the hashes, block rainbow tables), slower hash algorithms (brute-force mitigation), and stronger passwords with a larger character set or length (against rainbow tables, too). PEPs offer protection by bounding the time available to the attacker to break the cryptographic secrets protecting the password.

  • Password reuse and third-party compromise: As people have to maintain a large pool of authentication factors, they tend to write them down in silly places and to reuse them across services. Just as in the previous scenario, your password might get stolen from another service and used to access your account. However, this protection is very partial, as the time taken to exploit an account once the password has been compromised is in the range of minutes, while most PEPs require you to change your password on a monthly to quarterly basis. Besides, having to change your password often incentivizes reusing existing third-party service passwords because of the memory burden.

And finally, PEPs like many other “security” measures are often advocated because ‘everyone uses them’. When you’re a clueless CISO and need to look smart to your boss, you just accumulate security mechanisms and procedures and processes indiscriminately so that it looks like you’re doing something. It doesn’t really matter if these measures have anything to do with the threats you’re facing, apparently.

Are PEPs Good Security-wise?

It is quite common for CISOs to prioritise the actions they must take by performing a cost analysis of the risks in the environment where their information systems evolve. A very common or very costly risk will receive more attention than one that almost never occurs and causes little harm. Likewise, security measures should be evaluated post-deployment by verifying whether their expected outcome is of greater benefit to an organisation or system than their externalities (reduced range of possible actions, extra user effort, loss of backwards compatibility, etc.).

Knowledge of the assets and adversaries of a system, and of how its passwords are stored and its authentication managed, is necessary to evaluate whether the above scenarios may occur. We’ve already discussed the few cases where PEPs are useful. The following may cause a password to be compromised:

  • intrusions into a server/service through exploits combined with poor password storage
  • bad authentication interfaces that allow brute-force attacks
  • control and abuse of the client’s platform
  • phishing and other social engineering attacks, or password guesses

Of all these, only the last can be partly mitigated by PEPs, and only under the condition that no backdoor can be installed. As I’ve said, this is interesting for an attacker only if there is significant value in accessing the service again in a somewhat distant future. I don’t think there is much more to say about PEP security than that: points one and two are basic secure software engineering, and points three and four are partly or entirely addressed by two-factor authentication.

An interesting way to break down when PEPs are useful is to ask: what is the attacker looking for? I can think of four types of exploitable assets: current data; specific future data; fully exploratory attacks on future data; and availability/computing resources (where the data is of no interest). In the first case, any form of post-incident mitigation would fail. In the second, an attacker is more likely to strike close to the event of interest, if it is known; PEPs would only make sense if applied right before a sensitive operation is expected to be performed by the system (essentially putting it into a clean state). In the third case, PEPs would cap the amount of useful information obtained or altered by the attacker. Likewise, the fourth case would be bounded by PEPs.

Last but not least, PEPs often forbid reusing a password too similar to the expiring one (for instance, there must be a distance of at least two characters). But how exactly is that achieved? Yep, that’s correct: systems that implement such policies store passwords in the clear (or hash them character by character, which is pretty much the same). Now that is secure for sure!
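To make the problem concrete: an edit-distance check of this kind needs both the old and the new password in plaintext, which is precisely why it cannot coexist with proper one-way hashing. A minimal sketch, using the classic Levenshtein distance (not any particular vendor’s implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# The comparison only works on plaintexts: a proper one-way hash of the
# old password cannot be "partially" matched against the new one.
assert levenshtein("hunter2", "hunter3") == 1   # too similar: rejected
assert levenshtein("hunter2", "correcthorse") >= 2
```

A server that can run this check at any time other than the moment the user types both passwords is, by construction, keeping the old password recoverable.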

The Reality: PEPs have a very poor benefit/cost ratio

It is quite obvious that a measure that asks users to make an extra effort and increases their security-related cognitive load has a cost. I’ve already outlined (roughly) the limited benefits of PEPs, but what about their usability consequences if they were actually widespread?

Firstly, it should be noted that PEPs shift the responsibility for managing some security issues – password storage, detection of compromises and proper authentication interface design – straight onto the user. Users are expected to regularly replace their credentials in total ignorance of the fact that the time an attacker needs to exploit said credentials is several orders of magnitude shorter than the usually recommended (and never justified) duration before expiration.

In fact, the Verizon DBIR 2013 leads me to believe users should change their password at least every minute to reap the expected benefits of PEPs: it reports that attackers exploit any breach they find within minutes, and if that figure also applies to compromised credentials, then one should change one’s password very often indeed. Problem: I doubt changing your password in most information systems takes any less than a whole minute!

Truth is, if you never wanted to suffer the consequences of compromised credentials, you would have to replace them faster than it takes an attacker to log in – including automated login scripts. 24/7, naturally. And please do enjoy all the beauty of absurd security reasoning! Note that this is also what you should be doing if you didn’t trust a service provider to store your credentials properly, which rather defeats the argument of changing your password often out of distrust towards third parties. Who would accept such an amount of effort for relatively little extra security?

Secondly, users are still expected to use unique passwords on dozens of different online accounts. Yet who bothers to actively remember more than a handful of passwords, especially for services they rarely connect to? What tends to happen is that passwords are stored in a clear-text document or inside the browser, or worse, written down on your (virtual) desktop or (physical) desk. Assuming as few as twenty online services, if all of them imposed a quarterly password change, users would have to come up with (and remember!) a new password every four and a half days on average. Yes, that’s more than once a week! Fortunately not everyone imposes PEPs upon their users!
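The arithmetic above can be checked in a few lines (twenty services and quarterly expiration are the assumptions from the text, not measured figures):

```python
services = 20                       # assumed number of online accounts per user
changes_per_year = services * 4     # quarterly expiration on every service
interval_days = 365 / changes_per_year

# 365 / 80 = 4.5625: a fresh password to invent and memorise roughly
# every four and a half days, i.e. more than once a week.
print(round(interval_days, 1))
```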

Thirdly, notwithstanding practical user effort and cognitive load, PEPs are frustrating – especially when they force you to create an actually different password. That alone is enough to discourage deploying them, because frustrating security soon turns into bypassed security.

One may argue that it’s not fair to consider PEPs from a global perspective, yet authentication does not happen only within your own system’s realm but everywhere, every day, on a large scale. If some advocate a security practice to improve the security of their own service, and are serious about it, they should expect the practice to generalise to the point where everyone uses it. I personally think this is the best scenario for evaluating the usability and security of a measure, mechanism or practice, because it puts more stress on it. Any mechanism that strives for success out there must resist attacks from the most serious and motivated adversaries (and this is where the Windows vs Linux security comparison isn’t fair – even though it’s been fair for a handful of other reasons up until recently and possibly still is). It should also put a light enough burden on users that they could routinely tolerate it, up to hundreds of times a day. Anything that fails to meet these criteria is bound to fail at some point.

Some good reasons to use PEPs?

I’ve laid out enough arguments against PEPs so far. A CISO deploying such policies in full knowledge of all the above is doing what we like to call the “politique de l’autruche”: burying your head in the sand and denying your problems until they go away. Obviously it does not work very well, which may explain why ostriches are not very common animals in the French wilderness (or maybe it’s the climate and the different evolutionary paths of Africa and Europe, I’m not sure).

Yet, in spite of all the cons of password expiration policies, one may sometimes want to rely on them for extremely sensitive accounts, the ones that could get compromised on a daily basis. I’ve heard of security firm employees changing their password every day, though I would not do that to myself. I also still doubt that it mitigates that many threats.

One case remains where you might actually want to change passwords: when you want your system to be in a “clean state”. This may either be because you suspect previous intrusions or credential theft, or because you’re required to perform extremely sensitive operations that justify the cost of a password change (and generally of a whole change of operating systems, client devices, etc), for instance because your company just won a bid for a defence or intelligence contract. Otherwise, it might be better to just explore alternatives to PEP that will provide you with better security benefits.

Alternatives to PEPs

For stolen hash tables or backups, the following may be of help:

  • a slow hash function like PBKDF2, so that it takes ages (i.e. decades, not just a few weeks) to build any kind of rainbow table, of course with salt parameters that are specific to the system and to each user – this basically prevents the building of a generic rainbow table and forces the attacker to brute-force instead
  • proper encrypted storage of backups, which should be no easier to access than existing systems
  • general OS and application security measures: pen-testing, fuzzing the authentication interface and the rest of the UI, analysing apps to detect bugs and fixing them and to spot unexpected information flows, etc.
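As a sketch of the first point, Python’s standard library exposes PBKDF2 directly; the iteration count and salt size below are illustrative choices to tune against your own hardware, not a recommendation tailored to any particular system:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative: tune so one derivation takes ~100 ms

def hash_password(password, salt=None):
    """Derive a slow, salted hash; store the (salt, digest) pair per user."""
    if salt is None:
        salt = os.urandom(16)  # unique per user: defeats generic rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password, salt, expected):
    """Recompute the digest and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected)

salt, digest = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, digest)
assert not verify("Tr0ub4dor&3", salt, digest)
```

The per-user random salt is what forces an attacker who steals the table to brute-force each entry separately instead of precomputing once.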

And when it comes to password theft:

  • heuristics can be employed to detect connections from unknown devices or unexpected places, so that a second factor is required, or the account even blocked and the customer contacted. GMail saved me from a keylogger on a computer I used while travelling in China (indeed, I, a security engineer, use other people’s computers in China to read my emails – I make a point of exclusively favouring usability in my day-to-day digital activities), which had caught a password I had not changed in over six years
  • strong passwords, but with what metrics? A strong password is easy to remember and to type; long, to make brute-force attacks costlier (also because you can’t trust all service providers to use modern password hashing); different from your other passwords (i.e., built with a different system/logic); not based on any information known to be meaningful to you; and of course unique – at least for the things where you actually need security (anything that can be used to harm your reputation, health, money, computing devices, company, friends or family, or that can be used as a relay to usurp your identity or compromise further accounts – be particularly wary of email accounts)
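The device/location heuristic in the first point can be sketched as a toy rule – the account/device/country model here is entirely hypothetical, nothing like a production intrusion detection system:

```python
from collections import defaultdict

# account -> set of (device_id, country) pairs already seen for that account
seen = defaultdict(set)

def login_risk(account, device_id, country):
    """Return 'ok' for a known login context, 'challenge' for a new one.

    In a real system, 'challenge' would trigger a second factor or an
    out-of-band confirmation before the context is trusted.
    """
    context = (device_id, country)
    if context in seen[account]:
        return "ok"
    seen[account].add(context)  # remember the context after challenging once
    return "challenge"

assert login_risk("alice", "laptop-1", "FR") == "challenge"  # first sighting
assert login_risk("alice", "laptop-1", "FR") == "ok"         # known context
assert login_risk("alice", "cybercafe-1", "CN") == "challenge"
```

Crucially, this catches the stolen-password scenario within one login attempt, rather than within one expiration period.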


Password expiration policies are not a panacea. They’re pretty useless, actually. Passwords themselves are under heavy criticism, and sadly, most of what is blamed on them would equally apply to many an authentication factor. I would not mind a system suggesting that I change my password if it makes lazy/ignorant information security managers feel better about themselves, as long as I’m not harassed or subjected to unfair burdens for refusing to comply.

I’d sometimes like to see a more grounded approach to information security practice. We all have our areas of expertise and our grey zones where we’re not quite up-to-date or knowledgeable, yet I strongly believe one should always try to look at the big picture when designing an intervention of any kind. What are we trying to do with authentication? Verify the identity of a person so they can be given access to data or services stored on a specific system. With that in mind, what are the security and usability requirements of an authentication system? And then, what in the remaining design space fits the requirements best?

There’s a reason why I advocate this lengthy and costly approach to designing security interventions: I want to be able to trust the outcome rather than feeling at the end of a project that I have wasted my time. Interventions may or may not translate into actual large-scale beneficial changes, but surely it’s more likely to happen when you know that your design goes in the right direction at least on paper, rather than when the outcome expectation is still purely speculative.