Google Self-Driving Cars Hit California Roads


Google’s self-driving cars are currently undergoing trials on California roads, with plans to officially introduce them to the public market in 2020.

July 6, 2015

Androids may dream of electric sheep, but we humans aren’t asleep.

Google really has put self-driving cars onto public roads, and has been doing so for the past six years. Now the company is preparing to introduce the automatons to the public market by 2020.

On the Road

Google started testing self-piloted cars six years ago and has already logged over 1,011,388 autonomously driven miles using a fleet of Lexus SUVs programmed to drive themselves.

Now, Google is bringing its own car model to the streets. It looks like a sheep on wheels. The prototype has an autonomous driving system and typical mechanical parts like braking, lighting, and steering.

During the road trials, all the normal controls are present, including a steering wheel, an accelerator, and a brake pedal, in case the passenger needs to take control.

Though the cars were designed to run without the safety fallback of driver controls, the state of California has to pass legislation, or the California DMV has to ease its requirements for drive-ready vehicles, before the Google cars can run purely driverless.

However, regulation of drive-ready designs isn’t the only issue confronting Google’s driverless vehicle program. Public safety is another major concern.

How safe is too safe?

Google has fielded plenty of questions about the accidents its cars have been involved in over the course of the driverless project, from the custom sheep prototype to the modified hatchbacks and SUVs it initially outfitted for driverless mode. Google co-founder Sergey Brin has taken questions about those accidents himself.

Getting rear-ended accounts for almost all of the reported accidents involving autonomously driven Google cars. The self-driving Google cars are limited to 25 MPH, so the real worry is not the cars themselves, but the angry drivers that get stuck behind them. You can read the report yourself, but I promise it doesn’t read anything like a page-turning car chase.

All told, there have been eleven accidents. Google claims that in none of them was its car at fault. (Sounds kind of screwy, right?) Seven amounted to the Google car getting rear-ended by another car, two were sideswipes, and one resulted from another car running a red light. Google emphasized two points when it published the accident report.

First, the sensors on the sheep cars and the algorithms that govern them pay far closer attention and make far fewer errors than human drivers, statistically speaking. Second, the error-prone behavior of the people driving around the sheep cars has been feeding back into the algorithms, so the cars are evolving into safer machines.

Really though, the problem isn’t whether Google cars are safe enough, but rather: are they too safe? Mike Tinskey, Ford’s head of electrification and infrastructure, recently spoke about a non-traditional program at Ford called Smart Mobility.

One of the Smart Mobility projects, “Remote Repositioning,” allows a remote driver at a computer terminal to drive a vehicle using an LTE connection and the cameras and sensors outfitted on the car. The point of the project, apart from practical uses like remote valets or delivery services, is to overcome the safety excess in self-driving cars like Google’s.

As Mike Tinskey put it, “If you’ve ever had the pleasure to go to, for instance, China, if you’re not aggressive to try to turn left, there will be people that will walk in front of you all day long. And an autonomous vehicle would end up sitting there forever. And a driver normally just has to kind of say, ‘Alright, I’m going,’ and the people will stop and the car heads through.”

The point is that there are certain situations in which a human driver will actually improve driving performance by being “unsafe,” whether that human is directly behind the wheel or located remotely a thousand miles away.

However, Google’s sheep cars and other automated vehicle designs seem to be moving toward full automation rather than remote-access piloting.

New Laws

The idea of automated driving raises a number of regulatory issues, the biggest one being liability. With drivers handing over control to an automated system, who can we blame if something bad happens?

Some states, such as Nevada, as well as the District of Columbia, have put regulations in place allowing the testing of autonomous vehicles. University courses examining vehicle automation have begun to crop up as well, such as those at the University of Nevada’s Advanced Autonomous Innovations Center.

As the motor vehicle industry seems to be moving towards automated piloting, the question of automaton liability will only loom larger and larger.

For example, in May, Daimler’s subsidiary Freightliner unveiled an autonomous big rig in Las Vegas, designed to save lives, mitigate driver fatigue and stress, and reduce CO2 emissions by up to five percent.

Ninety percent of truck crashes result from driver error, and nearly one in eight of those cases involve driver fatigue. After more than 10,000 miles of testing, Freightliner’s Inspiration Truck now sports Nevada’s first “Autonomous Vehicle” license plate for a commercial truck.

The Inspiration Truck is categorized as “Level 3” on the National Highway Traffic Safety Administration’s automation scale. That second-highest level of automation, the same level at which Google’s sheep cars operate, means that in certain traffic or environmental conditions a driver can grant total control to the vehicle.

While drivers can still regain control, vehicles must be allowed a “comfortable transition time,” which means that the window of opportunity for regaining driver control in emergency situations may not be large enough to prevent accidents.

Clearly, more testing and development is needed. Additionally, more states must get on board with automated piloting before any comprehensive regulation can take shape.

Public Art in the Streets

As Google begins phasing its self-driving sheep cars onto the streets of Mountain View, California, it has initiated a project to transform the bland sheep into public art. In a project called “Paint the Town,” Google has invited California artists to submit art to be featured on its cars.

In addition, the art will be displayed on Google’s website and at a community event, where up to 10 artists will be selected to get a ride in a self-driving car. Thus, the Google sheep carry the potential to transform our roadways into roving multi-colored dream coats for sleeping androids.

Why is Google doing this?

Why is Google, an online tech empire and revolutionary cultural force, shepherding the pack with its Self-Driving Car Project? As discussed above, plenty of automobile companies are exploring automated vehicles right now, and Google seems to want to stay on top of it all.

While many regulatory obstacles and technical questions remain to be surmounted before we see self-driving cars on the streets of most American cities, the most pressing question at the moment is: are we headed for dystopia?

Isn’t vehicular automation a fixture of Philip K. Dick’s post-apocalyptic prophecies? It’s entirely possible that Google’s self-piloted sheep cars signal the beginning of the end, but really, the whole idea is just too cool not to get excited about.

