WHEN Toyota, the biggest and most successful car company in the world (ranked by group production volume), says something about the state of play of the auto industry, it pays to listen.
After all, Toyota got to its pre-eminent position by judging the mood and demand of the market and producing high-quality vehicles that have satisfied that demand over and over again all around the world.
So, what has Toyota said that’s so important? Well, it’s basically rubbished the idea that fully driverless cars will be here anytime soon.
Speaking at the giant CES consumer electronics show in Las Vegas, USA, in early January, Toyota Research Institute CEO Gill Pratt said, “None of us in the automobile or IT industries are close to achieving true Level Five autonomy.” By Level Five autonomy Pratt means fully self-driving cars: vehicles you can jump into and tell where you want to go, and which will then drive you there without constraints of location or particular road and traffic situations.
Pratt then went on to say, “It will take many more years of machine learning and many more miles than anyone has logged of both simulated and real-world testing to achieve the perfection required”.
Pratt is one bloke who should know, as the Toyota Research Institute was specifically set up in 2015 with a big chequebook to recruit top US brains in artificial intelligence, robotics and materials science, with a view to developing autonomous cars, among other things. Pratt was previously a senior robotics engineer for the US military, which has played a big part in the whole driverless vehicle push.
Toyota’s announcement is notable because it’s the first voice of caution from an auto industry that otherwise seems to be falling over itself to get on the self-driving bandwagon, for fear of missing out on what could be the next big thing.
Pratt’s naming of the IT industry alongside the automobile industry is of course a reference to Google and its position at the front of the charge to driverless cars. Google has said it will bring a fully autonomous vehicle to market as early as 2018, although that timeline has more recently been pushed back to 2020.
In essence, what’s happened is that Google, along with Tesla to a much lesser extent, has thrown a spanner in the works of the otherwise methodical and cautious car industry. Google is scary to the auto industry because it’s big and cashed-up, while Tesla is worrying because, without any auto industry background, it has still brought a ‘new-world’ electric car with self-driving functions to market in relatively quick time.
Pratt says Level Two autonomy is what the auto industry is realistically going to achieve in the immediate term (the next couple of years). Level Two autonomy means the ability to follow the car in front up to a pre-selected speed, automatically brake if need be, and self-steer to keep in the lane.
Most manufacturers, even on relatively inexpensive models, already offer Level One autonomy via technologies such as radar cruise control, auto braking, lane-keeping assistance or blind-spot monitoring. Combine a few of these Level One technologies and Level Two isn’t all that difficult to achieve.
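As a rough illustration of how those building blocks stack up, here is a minimal sketch of a single Level Two control step that combines radar cruise control (follow the car in front, up to a set speed, braking if need be) with lane keeping (steer back towards the lane centre). All of the sensor inputs, gains and thresholds are hypothetical and chosen purely for readability; a real system is far more sophisticated.

```python
# Illustrative sketch only: two Level One aids (radar cruise control and
# lane keeping) combined into one Level Two control step. Inputs, gains and
# thresholds are hypothetical, not taken from any real system.

SET_SPEED = 27.0      # driver-selected cruise speed, m/s (~100 km/h)
SAFE_GAP_TIME = 2.0   # desired time gap to the car in front, seconds

def level_two_step(own_speed, gap_to_lead, lead_speed, lane_offset):
    """Return (acceleration, steering) commands for one control step."""
    # Radar cruise control: hold SET_SPEED unless the gap demands slowing.
    desired_gap = SAFE_GAP_TIME * own_speed
    if gap_to_lead < desired_gap:
        # Close the speed difference and restore the gap (may brake).
        accel = 0.5 * (lead_speed - own_speed) + 0.1 * (gap_to_lead - desired_gap)
    else:
        accel = 0.3 * (SET_SPEED - own_speed)

    # Lane keeping: steer back towards the lane centre.
    steering = -0.2 * lane_offset   # lane_offset in metres, positive = drifted right

    return accel, steering

# Example step: travelling at 25 m/s, lead car 30 m ahead at 24 m/s,
# drifted 0.3 m to the right of the lane centre.
print(level_two_step(25.0, 30.0, 24.0, 0.3))
```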
Level Three autonomy is where the car can move out of lanes and navigate through traffic in an active way, rather than the essentially passive/reactive Level Two mode. However, a human driver has to be ready to take control in an emergency. That handover between autonomous mode and human control is fraught with problems, according to Pratt, and he feels Toyota and other automakers will bypass Level Three and go directly to Level Four, which is full autonomous control, but only on roads specifically approved or designed for the purpose.
Limiting where an autonomous vehicle can go gives it a big leg-up in terms of knowing where it is, which is obviously a key prerequisite for the successful operation of any driverless vehicle. Instead of relying solely on its GPS system, with all its associated vagaries, the autonomous car will have a photographic record of what the street looks like at that particular spot, which can be matched against what the on-board cameras are ‘seeing’.
If, for example, you want to go to 350 Fifth Avenue, New York, and the GPS has taken you to the right place, what the on-board camera sees (ie, the Empire State Building) can be matched to the built-in photographic image of the Empire State Building. On this count Google is already there with its ‘Street View’, and Level Four autonomy has obvious applications for a driverless taxi service that operates within fixed city or town boundaries.
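To make that idea concrete, here is a minimal sketch, in Python, of how a geofenced vehicle might cross-check a noisy GPS fix against a stored photographic record. The coordinates, the image ‘signatures’ and the matching step are all stand-ins invented for the example, not a real computer-vision pipeline.

```python
# Illustrative sketch only: confirming a GPS fix against a stored image of the
# same spot. The 'signatures' below are stand-ins for real image matching.

import math

# Hypothetical pre-built record: surveyed location -> reference image signature.
REFERENCE_IMAGES = {
    (40.7484, -73.9857): "empire_state_building_signature",
}

def nearest_reference(gps_fix):
    """Find the stored reference location closest to the (noisy) GPS fix."""
    return min(REFERENCE_IMAGES, key=lambda loc: math.dist(loc, gps_fix))

def confirm_position(gps_fix, camera_signature):
    """True if what the camera 'sees' matches the record for this spot."""
    ref_loc = nearest_reference(gps_fix)
    return REFERENCE_IMAGES[ref_loc] == camera_signature

# GPS says we are near 350 Fifth Avenue; the camera signature agrees.
print(confirm_position((40.7485, -73.9856), "empire_state_building_signature"))
```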
Having a driverless vehicle operate within fixed boundaries also means it can be pre-programmed to ‘know’ what traffic conditions to expect at any given time of day, at any point in its area of operation. All of which means that Level Four autonomy should be achievable, and the more limited the operating area, the easier it will be to achieve. That leaves Level Five autonomy as a pipe dream, which means most of us will be driving ourselves for a good while yet.
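(For the curious, that pre-programmed traffic ‘knowledge’ can be pictured as nothing more than a lookup table keyed by zone and hour of day; the zones, hours and ratings below are made up purely for illustration.)

```python
# Illustrative sketch only: a geofenced service carrying a simple timetable of
# expected traffic for each zone it serves. All entries are invented.

EXPECTED_TRAFFIC = {
    ("midtown", 8):  "heavy",
    ("midtown", 14): "moderate",
    ("midtown", 23): "light",
}

def expected_conditions(zone, hour):
    """Look up the pre-programmed expectation; default to 'unknown'."""
    return EXPECTED_TRAFFIC.get((zone, hour), "unknown")

print(expected_conditions("midtown", 8))   # -> 'heavy'
```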
IN THE PIPELINE
CURRENTLY Audi, BMW, Ford, General Motors, Hyundai, Mercedes-Benz, Nissan, Peugeot, Renault, Toyota, Tesla and Volvo (among a few others) are all developing autonomous vehicles, or at least autonomous technologies. Automotive component giants like Bosch and Delphi are also working on off-the-shelf autonomous technology to sell to car companies in much the same way as they sell electronic chassis-control systems.
Aside from Google, other tech-based rather than automotive-based companies are also working on autonomous vehicles. In Singapore, for example, the tech company NuTonomy plans to have self-driving taxis commercially available as early as 2018.