Robotics
If a Driverless Car Goes Bad We May Never Know Why
It’s incredibly difficult to figure out why some of the AI used by self-driving cars does what it does.
- by Will Knight
- July 7, 2016
Two recent accidents involving Tesla’s Autopilot system may raise questions about how computer systems based on learning should be validated and investigated when something goes wrong.
A fatal Tesla accident in Florida last month occurred when a Model S controlled by Autopilot crashed into a truck that the automated system failed to spot. Tesla tells drivers to pay attention to the road while using Autopilot, and explains in a disclaimer that the system may struggle in bright sunlight. Today the National Highway Traffic Safety Administration said it was investigating another accident, in Pennsylvania last week, in which a Model X hit the barriers on both sides of a highway and overturned. The driver said his car was operating in Autopilot mode at the time.
Tesla hasn’t disclosed precisely how Autopilot works. But machine learning techniques are increasingly used to train automotive systems, especially to recognize visual information. Mobileye, an Israeli company that supplies technology to Tesla and other automakers, offers software that uses deep learning to recognize vehicles, lane markings, road signs, and other objects in video footage.
A technician examines a Tesla using a laptop computer.
Machine learning can provide an easier way to program computers to do things that are incredibly difficult to code by hand. For example, a deep learning neural network can be trained to recognize dogs in photographs or video footage with remarkable accuracy provided it sees enough examples. The flip side is that it can be more complicated to understand how these systems work.
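To make that concrete, here is a minimal sketch of what "training on enough examples" looks like in code. It assumes the PyTorch library; the tiny network and the random tensors standing in for labeled dog photos are placeholders for illustration, not anything Tesla or Mobileye has disclosed.

```python
# Minimal sketch of training an image classifier, in the spirit of the
# dog-recognition example above. Assumes PyTorch; the random tensors stand
# in for a real labeled photo dataset, which this sketch does not include.
import torch
import torch.nn as nn

# Toy binary classifier: "dog" vs. "not dog" on 64x64 RGB images.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                 # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),      # raw scores for the two classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                  # "enough examples," in miniature
    images = torch.randn(8, 3, 64, 64)   # placeholder batch of photos
    labels = torch.randint(0, 2, (8,))   # placeholder dog / not-dog labels
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```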
A neural network can be designed to provide a measure of its own confidence in a categorization, but the complexity of the mathematical calculations involved means it’s not straightforward to take the network apart to understand how it makes its decisions. This can make unintended behavior hard to predict, and if failure does occur, it can be difficult to explain why. If a system misrecognizes an object in a photo, for instance, it may be hard (though not impossible) to know what feature of the image led to the error. Similar challenges exist with other machine learning techniques.
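For readers curious what such a confidence measure looks like in practice, here is a hedged sketch, again assuming PyTorch: a softmax turns the network's raw output scores into one probability per category, and the largest of those probabilities is commonly read as the network's confidence.

```python
# Minimal sketch of a softmax "confidence" readout, assuming PyTorch.
# The untrained stand-in model below is illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 4))  # 4 categories

logits = model(torch.randn(1, 3, 64, 64))  # raw scores for one image
probs = F.softmax(logits, dim=1)           # one probability per category
confidence, category = probs.max(dim=1)
print(f"category {category.item()}, confidence {confidence.item():.2f}")

# Note the limit described above: a high score reports that the network
# favors a category, not which features of the image led it there.
```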
As these algorithms become more common, regulators will need to consider how they should be evaluated. Carmakers are aware that increasingly complex and automated cars may be difficult for regulators to probe. Toyota is funding a research project at MIT that will explore ways for automated vehicles to explain their actions after the fact, one of several projects the Japanese automaker is backing to address the challenges of self-driving cars.
…
A Shot in the Arm for Obama’s Precision Medicine Initiative
The White House says a huge new database and better genetic testing will help realize the enormous promise of personalized medicine.
- by Mike Orcutt
- July 7, 2016
Precision medicine is a big idea. Tailoring drugs and therapies to a patient’s individual disease, lifestyle, environment, and genes could touch off a health-care revolution, or so the thinking goes. But first there is much we need to learn about what all of those factors mean for a person’s health. That’s why the Obama administration announced Wednesday evening that it is devoting $55 million this year to the creation of a public database containing detailed health information about a million or more volunteers. It’s also why it’s trying to figure out how to better regulate the fast-growing genetic testing market.
President Obama, discussing precision medicine at an event in February.
Called the Precision Medicine Cohort, the database will be the “largest, most ambitious research project of this sort ever undertaken,” said Francis Collins, director of the National Institutes of Health, during a call with reporters. It will contain medical records, sequenced genomes, blood and urine tests, and even data from mobile health tracking devices and applications. Collins stressed that the database will represent people from all races, ethnicities, and socioeconomic classes, and said it will track participants over many years.
In a separate but related project, also announced Wednesday, the U.S. Food and Drug Administration published draft guidance documents on how it might police the exploding field of genetic testing. The agency is concerned that a new generation of genetic tests could put patient safety at risk. The technology underlying these tests can quickly and inexpensively sequence an entire genome and identify millions of genetic abnormalities at a time. But interpreting the results is still a work in progress.
…
Tesla’s Dubious Claims About Autopilot’s Safety Record
Figures from Elon Musk and Tesla Motors probably overstate the safety record of the company’s self-driving Autopilot feature compared to humans.
- by Tom Simonite
- July 6, 2016
Tesla Motors’s statement last week disclosing the first fatal crash involving its Autopilot automated driving feature opened not with condolences but with statistics.
Autopilot’s first fatality came after the system had driven people over 130 million miles, the company said, more than the 94 million miles on average between fatalities on U.S. roads as a whole.
Soon after, Tesla’s CEO and cofounder Elon Musk threw out more figures intended to prove Autopilot’s worth in a tetchy e-mail to Fortune (first disclosed yesterday). “If anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available,” he wrote.
Tesla Motors cofounder and CEO Elon Musk.
Tesla and Musk’s message is clear: the data proves Autopilot is much safer than human drivers. But experts say those comparisons are worthless, because the company is comparing apples and oranges.
“It has no meaning,” says Alain Kornhauser, a Princeton professor and director of the university’s transportation program, of Tesla’s comparison of U.S.-wide statistics with data collected from its own cars. Autopilot is designed to be used only for highway driving, and may well make that safer, but standard traffic safety statistics include a much broader range of driving conditions, he says.
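The arithmetic behind the dispute is easy to lay out. Below is a short calculation in Python using only the figures quoted above; the worldwide miles-per-fatality rate Musk relied on is not given in the article, so his exact result cannot be reproduced here, and the variable names are ours.

```python
# Working through the quoted figures (Python as a calculator). The 130M,
# 94M, and 1M numbers come from the article; the worldwide miles-per-fatality
# rate does not.
autopilot_miles_per_death = 130e6  # Tesla's reported Autopilot record
us_miles_per_death = 94e6          # US average, per the article
worldwide_deaths_per_year = 1e6    # Musk's worldwide figure

# If Autopilot cut the per-mile fatality rate from the US level to its
# reported level everywhere, annual deaths would fall by:
reduction = 1 - us_miles_per_death / autopilot_miles_per_death
print(f"{reduction:.0%} -> {reduction * worldwide_deaths_per_year:,.0f} lives")

# Prints: 28% -> 276,923 lives, well short of half a million. Saving
# 500,000 of 1,000,000 deaths would require a worldwide rate of one death
# per 65 million miles (130M / 2), i.e. driving roughly 1.4 times deadlier
# per mile than in the US. And none of this addresses the apples-and-oranges
# problem Kornhauser describes.
```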
…
…
EYE SPY? Google reveals plans to put ‘eyes in machines’ as digital surveillance fears reach boiling point
Campaigners slam tech giant's scheme as 'creepy' and urge Brits to cover up cameras on smartphones and computers
by Jasper Hamill
7th July 2016, 10:58 am
Google is planning to put “eyes in machines” and boost computers’ ability to automatically recognise people, places or objects.
The tech giant has just revealed plans to purchase a French firm called Moodstocks, which builds software capable of working out what’s happening in a photo – a trick called image recognition.
This buyout is likely to conclude within weeks, although it’s not known exactly how much Google paid to buy the company.
We know what you did last summer (because you searched for it on Google)
“Ever since we started Moodstocks, our dream has been to give eyes to machines by turning cameras into smart sensors able to make sense of their surroundings,” the French firm wrote.
Google said it would use this system to help identify pictures so they can be easily found through a search engine.
But the development is likely to stoke privacy fears, as many people are concerned that allowing computers to “see” like humans will one day enable the construction of a surveillance state in which our every move can be monitored by governments, cops or corporations. The news comes just weeks after it was revealed that Facebook founder Mark Zuckerberg tapes over his MacBook camera and microphone.
These fears are now bubbling over into the real world. Earlier this week, a man allegedly threw Molotov cocktails at Google Street View cars parked outside its California headquarters.
In an affidavit, police officers said the man later told them “he felt Google was watching him and that made him upset”.
Renate Samson, chief executive of the campaign group Big Brother Watch, said people should be aware of the surveillance potential of their computers.
“All connected devices now have a camera and microphone in them, often these can be turned off and on without us knowing,” she told The Sun.
“Making these eyes intelligent will be great for identifying random objects and helping our smart devices to become even smarter, but not so good for keeping your personal life personal.
…