Hacker News

To anyone with any background at all in computer security, this is such a "duh" moment. If Sony et al can't secure their massively important corporate infrastructure, what are the odds your car's wireless computers are secure in any way? They aren't, they knew it, and you knew it. Sorry.

It'll be interesting to watch the fallout from these obviously-present vulnerabilities. I see three possible outcomes, in decreasing order of likelihood: status quo, where they just "fix" the bugs as they hit the news; some sort of massive push towards real computer security, in this and other industries; or a massive reduction in features to avoid the flaws.

This is really just another symptom of the current state of computer security, best described as "a joke." My guess is in 50 years we'll have decent computer security. There's nothing that precludes it in theory. But it's going to be an ugly, ugly couple of decades while we pay off the wave of computer-security-debt that we have been riding.



The attack vectors available against a corporation are vastly more numerous than those available against a car. It's not really a fair comparison.


Yeah, that's kind of a weird comparison. You can't really tailgate someone through a car door to gain physical access, or social engineer your way to the car's server closet, or spam the car's employees with phishing e-mails.


Not only that - but the 'duh' moment for me was the 96-bit key size.


96 bits by itself probably isn't within reach of brute forcing - I assume the algorithm itself had flaws.


What I want to know is why the car will continue to accept 100 trial keys per second after the first 100,000 attempts failed.

Shouldn’t there be some kind of exponential back-off after failures? If after the first 1000 failed keys it would only accept e.g. one new try every few seconds, it would then take 2–3 orders of magnitude more time to brute force.


It doesn't: according to the paper, when someone turns the ignition key, the car will generate about 20 challenges to the key fob, and if the fob fails to authenticate against all of them, the car will give up and not start.

The attack works by overhearing the exchange between the car and the key fob, and then doing a somewhat brute-force analysis to calculate what the secret key on the fob must have been.


Could someone explain why there is no delay after each failed attempt? The system allowed 197k brute force attempts in 30 minutes. I just cannot wrap my head around it.

I tried reading the paper (not an expert). In the recommendation section, it does not suggest implementing a delay either. Is it just not physically possible with RFID?

I mean, a 4-digit PIN with a 5-second delay would take about 14 hours for all combinations (far better than the half hour with Megamos)?

I have to be missing something.....It can't be this easy.....
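
The 14-hour figure checks out, by the way:

```python
# Back-of-envelope check on the 14-hour figure above.
pin_space = 10 ** 4        # 4-digit PIN: 10,000 combinations
delay_s = 5                # 5-second delay per attempt
hours = pin_space * delay_s / 3600
print(round(hours, 1))     # ~13.9 hours for an exhaustive search
```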


As the previous comment says, there's a requirement to eavesdrop on at least one successful authentication.

My guess is that they're then doing the brute-forcing "offline", not against the vehicle's system. If you know the algorithm and the key size, and you can see one successful authentication, you could ship the work of finding which key replicates that authentication off to AWS or custom hardware. (I wonder how readily Bitcoin mining ASICs can be tweaked to attack embedded or IoT authentication?) Though it seems there are flaws somewhere in the crypto anyway - they somehow broke a 96-bit key with under 2^18 attempts...
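
A toy version of that offline search, to make the idea concrete. The real Megamos cipher is proprietary, so a stand-in cipher (truncated HMAC) plays its role here, and the key space is shrunk to 2^16 so the demo finishes instantly; everything about this example is invented except the overall shape of the attack:

```python
# Toy illustration of offline key recovery from one overheard
# challenge/response pair. The stand-in 'cipher' and 16-bit key
# space are assumptions for the demo, not the real Megamos crypto.
import hashlib
import hmac

def cipher(key: bytes, challenge: bytes) -> bytes:
    """Stand-in for the fob's response function."""
    return hmac.new(key, challenge, hashlib.sha256).digest()[:4]

# Eavesdropped exchange (an attacker sees both values on the air):
secret = (4242).to_bytes(2, "big")
challenge = b"\x01\x02\x03\x04"
observed = cipher(secret, challenge)

# Offline search: no rate limit applies, the car is never contacted.
recovered = next(
    k.to_bytes(2, "big") for k in range(2 ** 16)
    if cipher(k.to_bytes(2, "big"), challenge) == observed
)
print(recovered == secret)  # True
```

The key point: once the exchange is recorded, the car's 20-challenge limit is irrelevant, because the search happens entirely on the attacker's hardware.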


Clearly, the fact that listening to an exchange helped them proves that the security is fundamentally flawed.


That helps. Thanks.


To a car owner, that would be a security flaw of its own. An attacker could deny an owner access to their car with a simple code spammer hidden nearby.


Surely that's better than having your car stolen, right?

Security is about trade-offs, after all.


> Surely that's better than having your car stolen, right?

Stranded in hostile environment (middle of nowhere in the arctic or a desert) could be a death sentence.


Who is going to DoS your car in the middle of the arctic or a desert?


Eh, I dunno, I guess a very not-nice person could attach the device to your car and activate it remotely/later.

I think a more realistic exploit would be a corrupt tow-truck driver / mechanic targeting an area where tourists stop.


That could be exploited to produce a trivial denial-of-service attack.


I can think of other, lower tech, DoS attacks against cars -- which are also not much exploited.


Someone once DoS'd my car, by slashing two tires. On a downtown public street.


That sucks to have happened to you, but that sentence made me lol.


People always say this about physical tech, but the difference is ease and scalability. Slashing all the car tires in a block is harder and more traceable than sending out a small RF signal.


Remediating slashed tires is a lot more difficult than waiting for your attacker to get bored and move on.


Which would be of no use at all to car thieves.


True, but it might be handy for kidnappers.


We're veering into movie-plot territory here.


(not original responder)

That's true. On the subject of movies, though, a plot point based on an actual vulnerability would be way better than typical Hollywood hacking.


Carjackers, and parking lot muggers.


Wouldn’t you need to have a device actively running within a few feet of the vehicle to run such an attack? Couldn’t the car start blaring an alarm or something in that case?

We’re not talking about a website here.


> Wouldn’t you need to have a device actively running within a few feet of the vehicle to run such an attack?

Nope. Just a high-gain antenna.

> Couldn’t the car start blaring an alarm or something in that case?

It could. But that might not help.

For example: you're driving your Maserati down the road when it suddenly stops and the alarm goes off. The next day you get a letter saying, "If you don't want yesterday's little incident to become a regular event, send BTC500 to the following address...."


If the car responds to RFID keys at all when driving, that is a flaw.


If I get out of my car with the engine still running it starts beeping. I don't know if it will actually turn the engine off, but it obviously knows that the key has departed the vehicle.


Or it detected your bum leaving its seat.


My car also beeps when it detects that the key has left the car. The engine keeps running, but you obviously cannot turn it on again once you turn it off.

I've had it happen without me leaving the seat (e.g. my wife has the keys in her bag/pocket, I had been driving, and she gets out of the car to disarm the home alarm). The car is turned on by pressing a button, not by turning a key.


That seems like a reasonable setup. I'm having difficulty imagining how that could be hacked into the blackmail situation described above, since the sure way to avoid the beep is to keep the fob in the car.


No, because if I toss the key into the seat it stops beeping even if I'm not there.


..plus a high-gain antenna, and suddenly you have an attack that sets off every car alarm in the city at once.


...and that achieves what, exactly?


What if the car was parked in a handicapped spot near the entrance of a football stadium? It could conceivably receive enough incorrect RFID signals to trigger a back-off.


Since this is an anti-theft system, not a safety system, I can totally see that VW made a rational decision "it's better for the anti-theft system to let a thief steal the car 50 times than for one person to legitimately get locked out of their car."

You can make up for car thefts with dollars.


I don't know anything about the protocols involved, but it would be possible for the first message to be "I'm a key that would like to unlock the vehicle with VIN# 123abc...". In that case there would be no mistaken protocol runs.
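
A minimal sketch of that idea: if the fob's first message names the target vehicle, the car can ignore hellos meant for other cars and never count them as failed attempts. All identifiers here are made up:

```python
# Sketch of a fob hello that addresses a specific vehicle, so that
# stray fobs in a crowded lot never trigger a protocol run at all.
# The message shape and identifiers are assumptions for illustration.
CAR_ID = "VIN123ABC"

def should_run_protocol(hello: dict) -> bool:
    """Only start an authentication run for fobs addressing this car."""
    return hello.get("target") == CAR_ID
```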


There is no key as such. The fob for my Hyundai never leaves my pocket. Just by standing next to the car, the unlock button on the door is enabled. So if I walk up and push the button, it unlocks. If I'm not around, the button does nothing.

So there's no discernible event from the fob, as far as I can see. It's just a "this is me" signal.


I guess the Hyundais I've driven were different, in that the unlock button was on the fob rather than on the car door. Could you say, if you have multiple cars, does the fob work with all of them? I doubt that's the case, so I don't see why your "this is me" signal couldn't actually be a "this is me, fob 123ABC..., and I can authenticate with the vehicle with VIN# 123abc...".


> your "this is me" signal couldn't actually be a "this is me, fob 123ABC..., and I can authenticate with the vehicle with VIN# 123abc..."

You're right, that might be it.

> the unlock button was on the fob rather than on the car door

Let me clarify: the fob does have buttons for lock, unlock, panic, and open trunk. But I don't normally use them. My normal usage is as I described: just walk up with the fob in my pocket, and press the button on the door handle.


I don't think a high traffic spot could ever cause an issue with this, unless each person tried to start your car. I believe this is referring to the immobilizer chip in the key that allows turning the key to start the engine. Incidentally, this is probably the chip that means that if you lose your key, you can't just get a new one cut, you need to get a new chip too.


I assume the software/hardware is so simple and specific that adding something like back-off blocking would require memory, software, timers, etc., increasing the complexity dramatically.


According to TFA, they "overheard 2 communications between the keyfob and the transponder", which reduced the number of possible keys to 196,607. This was brute-forceable in half an hour. So the answer is both - the algorithm was flawed enough to reduce the strength, but they were brute forcing it the rest of the way.

2 communications isn't much at all. Getting something from your car and locking it back up is all it takes.
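
Those numbers imply a fairly modest search rate, consistent with the ~100 keys/second mentioned upthread:

```python
# Implied search rate from the figures quoted above.
candidates = 196_607     # keys left after overhearing 2 exchanges
seconds = 30 * 60        # "half an hour"
rate = candidates / seconds
print(round(rate))       # ~109 candidate keys tested per second
```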


Worse, the paper says the cipher has only 56 bits of internal state (made me think of DES, but that isn't at play here)

Even worse, they get it down to 48 bits.
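
To put that reduction in perspective (48 bits is my reading of the comment above, applied to the nominal 96-bit key):

```python
# How far below the nominal 96-bit strength that lands.
nominal_bits = 96    # advertised key size
reduced_bits = 48    # effective strength per the comment above
factor = 2 ** (nominal_bits - reduced_bits)
print(factor)        # 281474976710656 -- 2^48 times weaker than advertised
```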


It was a general point about the state of computer security. In 2015, if you're connecting a computer to the internet, you're vulnerable. If your computer has non-trivial wireless functionality (in this case, keyless entry), you're vulnerable. The only question is whether someone cares enough to hack you, in particular.


"I see three possible outcomes, in decreasing order of likelihood: status quo, where they just "fix" the bugs as they hit the news; some sort of massive push towards real computer security, in this and other industries; or a massive reduction in features to avoid the flaws."

Only one of those three is the correct answer, and it is the third one.

Your car does not need a wireless computer - you already carry a newer, nicer one in your pocket, replaced every 18 months.

Neither does your refrigerator nor your smoke detector.

These are self-inflicted problems and they're easy to solve - just remove the gratuitous complexity.


If they're easy to solve, why hasn't that been done already?


Because the solution precludes the emergence of the multimillion dollar market called "The Internet of Things".


The article isn't about people remotely taking over cars or disabling cars. It's that the anti-theft system has a flaw. That's not nothing, but it doesn't put anyone's safety at risk.


In particular, it's not necessarily worse than the status quo ante. Cars had mechanical locks, which were pickable. A "slim jim" could unlock many cars. Once you were in the door, you could hotwire the ignition. So to be able to defeat a computerized anti-theft system... no gain from the computerization, but is there any loss from it?


The difference is that physical attacks require 1) individual skills and 2) prolonged physical contact in compromising pose.

Where every single thief had to be a skilled lockpicker before, now you just need a few specialized crackers and then you can mass-produce user-friendly hacking devices or even downloadable software.

Where a thief had to spend several minutes in a compromising pose near the car, often carrying suspicious tools, now he can just sit on a bench nearby, wait for the magic click and then choose the right moment to stroll in. A passerby might as well think he's the owner.


> a thief had to spend several minutes in a compromising pose near the car

A couple of weeks ago I saw someone using a slimjim, and I did nothing. I have never done anything in reaction to a car alarm. I'm not sure that thief's pose is sufficiently compromising.


Exactly. And it's insanely easy to physically break into a car. If you have the chance, watch a pop-a-lock guy. They essentially bend your car door backward at the upper portion, and then stick something into your car to pull the lock up. Not difficult.


No gain implying that lockpicking a car and hacking it is of the same level of difficulty.


It's not a matter of difficulty, but of reproducibility.

Mechanical locks imply that each individual thief needs to learn how to pick locks. With computerized locks, you only need one hacker to crack the security for each model of car, and then a bunch of two-bit thugs just need to download the app to get going in the car-thief franchise.


True, and they probably aren't. So which do you think is harder?


What's the phrase? "Locks keep honest people honest." This has always been the case.


Most people do not have a background in security so this is anything but a 'duh' moment for them. The more news like this that makes it to the mainstream media, the more hopeful I am that regular folks will learn the state of security today.



