I mean… self driving cars probably will. Just not as soon as they think. My guess, at least another decade.
Maybe, or maybe like harnessing fusion it will always be “just a few more years away!”
Not until a self driving car can safely handle all manner of edge cases thrown at it, and I don’t see that happening any time soon. The cars would need to be able to recognize situations that haven’t been explicitly programmed into them, and figure out a safe way to deal with them.
Just like all humans can do right now, right?
I never see any humans on the road staring at their phone and driving like shit.
The problem with self-driving cars isn’t that they’re worse than human drivers on average, it’s that they’re SO INCREDIBLY BAD when they’re wrong that no company would ever assume the liability for the worst of their mistakes.
But if the average is better, then we will clearly win by using it. I’m not following the logic of tracking the worst case scenarios as opposed to the average.
A better average means fewer incidents overall. But when there are incidents, the damages for those incidents tend to be much worse. This means the victims are more likely to lawyer up and go after the company responsible for the AI that was driving, and that means that the company that makes the self-driving software better be prepared to pay for those worst case scenarios, which will now be 100% their fault.
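That tail-risk argument can be made concrete with a toy expected-cost calculation. All the numbers below are made up purely to illustrate the shape of the argument, not real crash statistics:

```python
# Toy illustration of the tail-risk argument: even with half the crash
# rate, a heavier tail of catastrophic payouts can dominate the
# liability a single company must carry. All numbers are invented.

def expected_liability(crashes_per_million_miles, cost_distribution):
    """Expected payout per million miles driven.

    cost_distribution is a list of (probability, cost_in_dollars) pairs
    describing how bad a crash is when one happens.
    """
    mean_cost = sum(p * c for p, c in cost_distribution)
    return crashes_per_million_miles * mean_cost

# Human drivers: more crashes, but mostly minor fender-benders,
# and the payouts are spread across millions of individual insurers.
human = expected_liability(
    crashes_per_million_miles=4.0,
    cost_distribution=[(0.95, 10_000), (0.05, 500_000)],
)

# Hypothetical AI fleet: half the crash rate, but a far larger share
# of failures are catastrophic, and every payout lands on one company.
ai = expected_liability(
    crashes_per_million_miles=2.0,
    cost_distribution=[(0.60, 10_000), (0.40, 5_000_000)],
)

print(f"human: ${human:,.0f} per million miles")
print(f"ai:    ${ai:,.0f} per million miles")
```

With these invented numbers, the AI fleet crashes half as often yet carries roughly 29 times the expected liability per mile, all concentrated on a single defendant. That’s the gap between "better on average" and "cheaper to insure."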
Uber can avoid liability for crashes caused by their human drivers. They won’t be able to do the same when their fleet is AI. And when that happens, AI sensibilities will be measured by human metrics because courts are run by humans. The mistakes that they make will be VERY expensive ones, because a minor glitch can turn an autonomous vehicle from the safest driving experience possible into a rogue machine with zero sense of self-preservation. That liability is not worth the cost savings of getting rid of human drivers yet, and it won’t be for a very long time.
“handle” is doing a lot of heavy lifting there. The signs are already there that all of these edge cases will just be programmed as “safely pull over and stop until conditions change or a human takes control”. Which isn’t a small task in itself, but it’s a lot easier than figuring out how to continue (e.g.) on ice.
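That “pull over and wait” fallback amounts to a simple supervisory state machine. Here’s a minimal sketch of the idea, where all state names, parameters, and the confidence threshold are invented for illustration and don’t reflect any vendor’s actual system:

```python
# Minimal sketch of a "minimal risk maneuver" fallback policy:
# when the planner reports low confidence, the vehicle doesn't try to
# solve the edge case; it degrades to pulling over and waiting.
# Names and thresholds are illustrative only.

from enum import Enum, auto

class Mode(Enum):
    DRIVING = auto()
    PULLING_OVER = auto()
    STOPPED = auto()        # parked, waiting for conditions to change
    HUMAN_CONTROL = auto()

def next_mode(mode, planner_confidence, at_curb, human_took_over,
              confidence_floor=0.7):
    """One step of the supervisory loop; returns the next Mode."""
    if human_took_over:
        return Mode.HUMAN_CONTROL
    if mode is Mode.DRIVING and planner_confidence < confidence_floor:
        return Mode.PULLING_OVER   # give up on the edge case
    if mode is Mode.PULLING_OVER and at_curb:
        return Mode.STOPPED
    if mode is Mode.STOPPED and planner_confidence >= confidence_floor:
        return Mode.DRIVING        # conditions improved, resume
    return mode
```

The hard engineering is hidden inside “pull over safely” and inside deciding when confidence is low, but the top-level policy really can be this blunt: stop rather than improvise.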
Those self-driving cars are called trains. They can already be self-driving, in a situation where the computational complexity and required precision are somewhat controlled: that is, on train tracks.
there will be a massive building in like india with many thousands of atrociously paid workers donning VR goggles who spend their long hours constantly Quantum Leap finding themselves in traumatizing last second emergency situations that the AI gives up on. Instantly they slam on the brakes as hard as they can. They drink tea. there’s suicide netting everywhere. they were the lowest bidder this quarter.
I wish I could give this comment more than a simple upvote. I want to mail you a freshly baked cinnamon bun.
Self driving taxis are definitely happening, but the people getting rich in a gold rush are the people selling shovels.
Uber has no structural advantage because their unique value proposition is the army of cheap drivers.
We’re a century away from self-driving cars that can handle snowfall
Just this year, farmers with self-driving tractors got screwed because a solar flare made GPS inaccurate; the tractors went wild because they were programmed on the assumption that GPS is 100% reliable and accurate, with no way to override.
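The failure mode being described, trusting GPS absolutely with no override, is essentially a missing plausibility check. A hedged sketch of the kind of guard that would help, where the speed limit, coordinate handling, and function names are all invented for illustration:

```python
# Sketch of a GPS plausibility guard: reject a fix that implies an
# impossible jump since the last accepted fix, and hold position /
# stop instead of steering toward the bad reading.
# Thresholds and names are illustrative only.

import math

MAX_SPEED_M_S = 15.0  # a tractor can't plausibly move faster than this

def plausible_fix(prev_fix, new_fix, dt_seconds, max_speed=MAX_SPEED_M_S):
    """prev_fix/new_fix are (x, y) positions in metres on a local grid."""
    if dt_seconds <= 0:
        return False
    dist = math.hypot(new_fix[0] - prev_fix[0], new_fix[1] - prev_fix[1])
    return dist / dt_seconds <= max_speed

def update_position(prev_fix, new_fix, dt_seconds):
    """Accept a sane fix; on a glitch, keep the last good position
    and tell the controller to stop rather than chase the bad fix."""
    if plausible_fix(prev_fix, new_fix, dt_seconds):
        return new_fix, "ok"
    return prev_fix, "hold_and_stop"
```

A solar-flare-style glitch that teleports the reported position kilometres away would fail the speed check and trigger the stop, instead of being fed straight into steering, which is the "no way to override" problem in the comment above.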
Way longer. Roads will have to be designed and maintained with them in mind.