I came across this story where George Hotz (aka GeoHot), the famous hacker known for ‘jailbreaking’ the iPhone and the PlayStation 3, is hoping to sell self-driving car kits. He has been using machine learning (where a computer observes human driving and then writes its own rules to copy those actions) to program self-driving cars. I’ve written about this here; further info.
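The parenthetical above describes what is often called behavioural cloning: the computer is shown pairs of (what the human saw, what the human did) and fits its own rule to reproduce the human's behaviour. A toy sketch of that idea, with entirely made-up numbers (comma.ai's real pipeline is of course far larger and uses neural networks):

```python
# Toy sketch of "learning by imitation" (behavioural cloning).
# Pretend observations: lateral offset from the lane centre (metres).
# Pretend human actions: steering correction (degrees), roughly -10 * offset.
demonstrations = [(-0.5, 5.0), (-0.2, 2.0), (0.0, 0.0), (0.3, -3.0), (0.6, -6.0)]

# Fit a one-parameter linear policy, action = w * offset, by gradient
# descent on the mean squared error against the human's actions.
w = 0.0
for _ in range(1000):
    grad = sum(2 * (w * x - y) * x for x, y in demonstrations) / len(demonstrations)
    w -= 0.1 * grad

# The learned policy now imitates the human: steer against the offset.
print(round(w, 2))           # converges to about -10.0
print(round(w * 0.4, 1))     # steering for a 0.4 m offset: about -4.0
```

The key point for the liability discussion that follows: nobody hand-wrote the rule "steer -10 degrees per metre of offset". The machine inferred it from human examples, which is part of what makes assigning responsibility for its mistakes so interesting.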
The interesting thing about Hotz’s new venture is that his start-up, comma.ai, now hopes to sell this self-driving technology as an add-on kit for certain cars that already have front radar. The system uses a front-facing camera, mounted in the position of your rear-view mirror, to observe the road.
Hotz says his system is ‘about on par with Tesla Autopilot’, which is a remarkable feat for a guy basically building this technology in his garage. However, the self-driving capability will only work between Mountain View and San Francisco, California, an area well tested by Hotz himself. You can see lots of interesting visuals on the comma.ai blog here.
The accountability issues for both Tesla’s Autopilot and Hotz’s system, the ‘Comma One’, are the same. Both are marketed as driver assistance devices, where the driver is required to be in a position to take over at all times. Therefore, in accountability terms, they come under the ‘Cruise Control Paradigm’, where the driver is wholly responsible for using the system, and for anything that happens during its use.
Indeed, Tesla itself noted after the death of Joshua Brown, who died whilst driving a Tesla Model S using Autopilot:
“…Tesla disables Autopilot by default and requires explicit acknowledgment that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.”
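The control flow Tesla describes in that last part of the quote (frequent hands-on checks, alerts, then a gradual slowdown) can be sketched as a simple loop. To be clear, this is my own illustrative reconstruction of the logic as described, not Tesla's actual code, and the speeds and step sizes are invented:

```python
# Illustrative sketch of the hands-on-wheel logic the quote describes:
# check frequently, alert if hands are not detected, then gradually
# slow the car until hands-on is detected again. Not Tesla's real code.

def autopilot_tick(hands_on: bool, speed: float, alert: bool):
    """One control cycle. Returns (new_speed, alert_active)."""
    if hands_on:
        return speed, False              # hands detected: keep speed, clear alert
    if not alert:
        return speed, True               # first miss: visual/audible alert only
    return max(speed - 5.0, 0.0), True   # still hands-off: slow down gradually

speed, alert = 70.0, False
for hands in [True, False, False, False, True]:
    speed, alert = autopilot_tick(hands, speed, alert)
print(speed, alert)  # 60.0 False -- two slowdown steps, then hands back on
```

Note that even in this simplified form, the system's response to driver inattention is gradual degradation rather than an instant handover, which fits the 'driver assistance' framing: the human remains the responsible party throughout.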
Clearly, with driver assistance devices, the onus for using them safely lies with the driver.
However, properly autonomous cars, where the ‘driver’ is not expected to do anything and is effectively just another passenger, are on the horizon. Volvo, Mercedes, and Google have all said they will take full responsibility for crashes caused by their systems, with a Volvo senior technical leader saying: “If we made a mistake in designing the brakes or writing the software, it is not reasonable to put the liability on the customer.” Whilst manufacturers would already be a big presence in the accountability chain due to the product liability paradigm, it is nice to see a manufacturer taking its future responsibilities seriously.
However, Google’s self-driving car has an emergency stop button: does this mean the passengers are in control of the vehicle? This blog looks at the issue.
Therefore, while we can hope for a clear-cut, two-paradigm system of responsibility for errors made by driver assistance devices or properly autonomous cars, the ability of humans to take emergency control adds some really difficult issues. These might not be possible to solve until the exact nature of that control is known. It’s certainly an area ripe for research.
Until next time!