Lessons From the apt Remote Code Execution Vulnerability

Well, it’s happened before, so it was bound to happen again: a remote code execution bug was found in APT. And it’s particularly interesting in the context of an age-old debate that has been dragging on in Debian-related circles about the use of HTTPS – a question that has been asked often enough that the answer has its own website now.

How bad was it? What is there to learn from this? And what does it tell us about the importance of HTTPS in package management security?

Let’s start with the root cause of this bug, because I think understanding this point is critical to understanding every question that this bug raises.

This bug happened because APT did not validate untrusted input. It is, strictly speaking, a vulnerability in the protocol used to communicate between the “master” APT process and the worker processes that handle downloads.

The critical problem wasn’t inherently related to HTTP per se (though HTTP made for a very convenient attack vector). The real problem was that APT would blindly trust, without verification, the hashes reported by the HTTP fetcher process. This enabled an attacker to bypass the signature verification process entirely and, therefore, to supply a forged package (containing the arbitrary code to execute).
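
To make the failure mode concrete, here is a toy sketch in Python of a parent process naively parsing line-based “Key: Value” messages from its fetcher. This is emphatically not APT’s actual code, and the host and field names are made up; the point is that once an attacker-controlled value is echoed into such a message unescaped, an embedded newline lets the attacker forge entire fields, hashes included.

    # A toy reconstruction of the failure mode -- not APT's actual code.
    # The parent parses newline-delimited "Key: Value" messages from its
    # fetcher child; host and field names here are illustrative.

    def parse_message(raw: str) -> dict:
        """Parse a line-based IPC message into a field dictionary."""
        fields = {}
        for line in raw.splitlines():
            key, _, value = line.partition(": ")
            fields[key] = value
        return fields

    # Honest case: the fetcher reports the URI it fetched and a hash,
    # and the parent trusts both without re-verifying anything.
    print(parse_message("URI: http://mirror.example/pool/foo.deb\n"
                        "SHA256-Hash: <hash of the real foo.deb>"))

    # Malicious case: the fetcher echoes an attacker-controlled value
    # (say, a redirect target) without stripping newlines, so the
    # attacker smuggles in a hash field matching their forged package.
    evil = ("http://mirror.example/pool/foo.deb\n"
            "SHA256-Hash: <hash of the forged foo.deb>")
    print(parse_message("URI: " + evil))
    # -> the parsed message now carries an attacker-chosen hash,
    #    and signature verification is never meaningfully consulted.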

What does this have to do with HTTP? Conceptually (but not practically) speaking, the most straightforward way to exploit this would be by compromising the mirror itself. So HTTPS wouldn’t have been much of a mitigation, right? You could trigger this bug without plain HTTP, too.

Right, but.

HTTP is a very convenient vector: if HTTP is used, you no longer need access to the mirror; you just need to be in a position where you can intercept HTTP traffic.

For an attacker who wants to go after a particular (and security-conscious) victim, gaining that privileged position may be as difficult as compromising a legitimate mirror or convincing their target to use a malicious one. But the world is full of opportunistic attackers who could exploit this bug. And, if they aren’t after someone or some network in particular, there are many targets that are far less security-capable (or security-conscious) than Debian mirrors, and therefore easier to compromise.

It’s a bit like this: compromising a legitimate Debian mirror is the equivalent of gaining access to the Death Star. This bug is the equivalent of making pint-sized nukes available for 9.99 credits at every shop in the Galaxy. If your objective is to blow up the Imperial Palace on Coruscant, it’s hard to pull off either way. But the latter option enables every bloke with a grudge to blow up his neighbourhood, and if your objective is to blow up your ex’s speeder, it has the remarkable advantage of not requiring you to take on umpteen squadrons of stormtroopers and Darth Vader himself. (I know you got it the first time, but I really wanted to get Star Wars involved here.)

Would HTTPS have helped? Definitely. There’s not much to debate here, really: the only mechanism that guaranteed package integrity failed, so having another one that worked would obviously have helped.

But the equation isn’t as simple as that. As far as we know, this bug hasn’t been exploited in the wild. Moving to HTTPS by default (let alone HTTPS only) would have been difficult and expensive to orchestrate, but would have protected no one. As far as we know, and up to this point, it would have been the virtual equivalent of Fort Alexander: a sound defense against an attack that never came.

Now, there are three points I want to make about this: one about HTTPS, one about threat models, and one about choosing what tools you trust.

About HTTPS: there’s something we’ve started forgetting about it. With privacy being a growing concern, we’ve come to treat HTTPS as a privacy-enhancing technology, forgetting its other useful properties: endpoint authentication and tamper-proof communication.
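
To make those two properties concrete, here is a minimal sketch using Python’s standard library (the host name is illustrative, and this is not how APT’s own HTTPS transport is implemented). Certificate validation is what authenticates the endpoint; the TLS record layer is what makes tampering detectable.

    # A minimal sketch of the two properties above, using Python's
    # standard library. The host is illustrative.
    import socket
    import ssl

    context = ssl.create_default_context()  # loads the system's root CAs

    # Endpoint authentication: the peer must present a certificate
    # chain for "deb.debian.org" signed by a CA we trust, or the
    # handshake fails before any data is exchanged.
    with socket.create_connection(("deb.debian.org", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="deb.debian.org") as tls:
            # Tamper-proof communication: every TLS record is
            # integrity-protected, so an on-path attacker who flips a
            # byte kills the connection instead of silently delivering
            # modified data.
            tls.sendall(b"HEAD / HTTP/1.1\r\n"
                        b"Host: deb.debian.org\r\n"
                        b"Connection: close\r\n\r\n")
            print(tls.recv(256))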

This bug also raises an interesting point about threat models. This is not merely a matter of defense in depth (which, despite what the armchair developers on /r/linux would have you believe, is a concept that Debian developers are perfectly aware of). It’s a matter of covering the many threat models of an audience as diverse as Debian’s.

When you are writing, say, the backend for an online store, the threats are relatively straightforward to model (note that I haven’t said trivial, so please bear with me). But when you are writing a package management tool, even enumerating the deployment scenarios, and the parties interested in compromising each of them, is a daunting task.

So at this point, the decade-old mantra of “no threat model, no threat” is something you may want to be flexible about. In principle, it’s true, but for all you know, you may just have forgotten about, or underestimated the importance of, a particular threat. Which is not something to be ashamed of when you have hundreds, if not thousands, of other models to consider.

(I don’t know if this is the case, but the Debian wiki, the that-should-settle-it website, and the bits and pieces of the code that I’ve read all give me the impression that threat modelling has focused largely on the mirrors or the upload process being compromised, with an attacker attempting to supply forged packages at the far end of the connection.)

There is also a big difference in threat research difficulty, and this is important when choosing what tools you trust. A Debian mirror exposes its functionality through (mostly) well-audited programs, with a lot of effort put into hardening – the likes of SSH, nginx or Apache, which garner the interest of many people outside the Debian project.

APT, by comparison, is largely a Debian- and Ubuntu-only affair. If someone announced a big bug bounty tomorrow, I’d definitely bet on scary zero-days being found in APT rather than on someone figuring out an easy way to compromise a server running nothing but sshd and nginx, or an easy way to MITM an HTTPS connection.

If you’re thinking this is me subtly saying you should stop trusting Debian, it suffices to say that I’m writing this from a Debian box. This is a lesson for developers first and foremost. If you’re designing a system that needs to guarantee data integrity, and you can leverage the likes of TLS, which has had far more effort put into it than you could ever hope to put into your own system, then you should. It might miserably fail Occam’s test, but it might just save you one day.

(Just to make things clear: “if you can” is system-specific. I don’t know if Debian can.)

Is there a solution to all this? I think there is, and I think it does involve eventually transitioning to a model where HTTPS is used by default (while making it even easier for large site administrators to configure local mirrors). But it’s worth remembering that:

  • Signing is still of paramount importance, even over HTTPS (see the sketch after this list). This bug would have been equally scary and equally important even if APT had worked only over HTTPS.
  • HTTPS is not magic security dust. It comes with its own complications (around root CAs, for example); a borked HTTPS deployment can end up as insecure as an HTTP deployment while looking more trustworthy, because it starts with the right magic letters. The current state of affairs might mean that the Debian team feels it is more likely to succeed at writing a good verification chain than at deploying a network of trustworthy HTTPS mirrors; when you look at it that way, it doesn’t seem entirely unreasonable.
  • HTTPS brings its own challenges in terms of performance, flexibility, and even security. An alternative already exists (apt-transport-https, and it is possible to set up HTTPS mirrors), but for it to be a credible default it needs credible answers to all of these problems.
  • There are many problems that HTTPS doesn’t solve (ironically, in this case, privacy is one of them). Adopting HTTPS won’t “finally” make APT secure, although it will make it more complex.
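
On the first point, here is a minimal sketch of the layer that HTTPS cannot replace (the file name and hash value below are hypothetical). The expected hash comes from archive metadata whose signature has already been verified, and it authenticates the contents themselves, whereas TLS only authenticates whichever mirror you happened to reach.

    # A sketch of the layer HTTPS cannot replace (file name and hash
    # are hypothetical). The expected hash must come from metadata
    # whose signature was verified first; TLS alone only tells you
    # that you reached *a* mirror, not that its contents are genuine.
    import hashlib

    def verify_download(path: str, expected_sha256: str) -> bool:
        """Compare a downloaded file against the hash listed in
        signature-verified archive metadata."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest() == expected_sha256

    # Usage (hypothetical values): even over HTTPS, a compromised or
    # malicious mirror can serve anything it likes; this check, rooted
    # in signed metadata, is what catches it.
    # verify_download("foo.deb", "9f86d081884c7d659a2feaa0c55ad015...")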

I have a feeling that these problems are tractable, if only because OpenBSD can do it – admittedly, for a smaller install base, but also with less funding and a smaller volunteer base.

So, even though I got a scare out of it (and manually verified package checksums for the first time in many, many years), I don’t think this was that gloomy a day.
