That would be one of the most direct solutions, if it were placed in a location (and in a format) that the car’s ‘vision system’ could read.
A follow-on question would be whether money exists in your local budget to replace these signs with a particular 'standardized' form that would simplify autonomous-vehicle programming, as I suspect little more than that would suffice. Presumably one response local law enforcement could use would be to put similar standardized signs out in locations where flooding was being experienced. But I suspect a better answer is going to be some form of centrally-coordinated geofencing, where a given GIS area is marked 'likely to flood' and this information is broadcast to autonomous vehicles, which then treat that area as 'off limits' until further notice and reroute around it following normal GPS navigation procedure.
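To make the geofencing idea concrete, here is a minimal sketch of the vehicle-side check. Everything here is hypothetical: the class names, the use of a simple bounding box in place of a real GIS polygon, and the coordinates are all mine, purely for illustration.

```python
# Hypothetical sketch: a central service broadcasts 'likely to flood' areas,
# and the vehicle treats any route waypoint inside one as off-limits.
# Real systems would use proper GIS polygons, not bounding boxes.

from dataclasses import dataclass

@dataclass
class FloodZone:
    # Axis-aligned bounding box in (lat, lon) for simplicity.
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.lat_min <= lat <= self.lat_max
                and self.lon_min <= lon <= self.lon_max)

def route_is_clear(waypoints, zones) -> bool:
    """True if no waypoint falls inside a broadcast flood zone."""
    return not any(z.contains(lat, lon)
                   for lat, lon in waypoints
                   for z in zones)

zones = [FloodZone(29.70, 29.80, -95.45, -95.35)]   # made-up flagged area
route = [(29.65, -95.50), (29.75, -95.40), (29.85, -95.30)]

if not route_is_clear(route, zones):
    print("rerouting around flagged area")
```

A waypoint inside a flagged zone would trigger the same kind of reroute the normal navigation stack already performs for a road closure.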
The default would be to recognize the changed 'sight picture' of a road with more than a few inches of standing water across it. I'm tempted to think that an extension of something like 'Google Street View' might be used as a reference source for this or other unanticipated changes, for example spills or changes in signage that might lead to guidance failure or critical ambiguity if encountered 'autonomously'. In a sense this is an extension of the TERCOM (terrain-contour-matching) navigation historically employed in cruise-missile guidance… who says no good came out of cruise-missile development? [;)]
If you’re assuming ‘defensive driving’ on the part of other drivers, you probably won’t care much about whether given vehicles are self-driving or not. I confess that I would be eyeing known autonomous vehicle operation with something of a weather eye toward unexpected strange behavior … for example, if “passengers” suddenly disengage auto and respond the wrong way in panic. But I wouldn’t be in a permanent half-funk of prescient terror expecting that at any moment, either … I’d just leave a bit more of a cushion around the vehicle in question.
If you’re assuming, as I did, that some ‘manual drivers’ will try to exploit the self-driving cars, by aggressively cutting in or ‘playing with’ closing distances, then I suspect the situation will hinge very materially on whether or not the self-driving car is prepared to report any such behavior promptly and believably to ‘enforcement’. I had my fill of nasty California drivers trying to beat me to crossings, or assert the right of way by pretending not to stop at four-way stops, and I do not assume that those people would be anything other than delighted to try their skill on robots programmed to be terrified of even the remote possibility of acting in a way that would produce lawsuits or summonses. You can judge for yourself what ‘correct countermeasures’ against that sort of evolved driver behavior could, or should, be.
The trouble I see coming after the Musk/Tesla fiasco, though, is what happens if self-driving cars begin to be frequently encountered …
@Overmod… it's called a California roll for a reason, and legally, if you can see 500' in either direction at a four-way stop, you do not have to come to a complete stop if the vehicle in front of you already did so. Granted, this doesn't happen very often, but it is perfectly legal in that situation. As for "self-driving" cars, the Tesla Autopilot isn't a true self-driving car like the Google vehicle… you still have to maintain some road awareness.
Automation requires control. As long as most cars are driven by humans there will be accidents. The autopilot operated car failed when it could not react to something stupid that a human in another vehicle did. A transition to autodrive cars would take decades.
It’s bad enough that so many people are driving distracted now. Can you imagine their lack of readiness when called upon to perform a sudden override of the autopilot?
After this, we can worry about an even scarier prospect, the flying car. It’s in the works, per a recent story in the Wall Street Journal.
There are idiots and fools everywhere. Even at Google.
Anyone familiar with industrial control logic (let alone consumer-grade or crApple OS and environment “management”) will know how pointless it would be to implement a safety-critical system in a vehicle capable of high momentum without providing capable backup. That backup won’t stop at redundant electronic control systems.
The 'catch' for many years has been removing controls that either must be continuously monitored or that provide the 'temptation' to go to manual control unpredictably. The latter was a particular bugaboo in some of the early 'inductive control' experiments in the late Forties, where a user in an "emergency" might inadvertently grab for the wheel, whack on the brake, and otherwise bung up any semblance of control that the electronics might be able to assert over the situation. There were similar discussions in military applications over the degree to which pilots should be able to override high-G systems that were monitoring key airframe stresses.
There is no better solution for operating a conventional motor vehicle than a steering wheel, but it poses a number of restrictions for a proper ‘autonomous vehicle’. The problem is that most of the ‘other’ backup-control modalities, most notably anything with a sidestick proportional controller, do not work well without power boost of some kind, which means that they are functionally useless or worse in a great range of potential failure situations.
Interestingly enough, most of the 'older' systems tacitly accepted the idea that the automatic control was intended for long, continuous sections of 'cruise', or enablement for "autobahn"-style …
I would not want a Tesla equipped with their Autopilot. Here is why: one of their cars on Autopilot ran under a truck that was across the road because the autopilot could not tell that a 53-foot trailer was not the sky. It literally took the owner's head off. That person was watching a DVD instead of the road. NTSB is now looking into the accident. That should be a fun one for Tesla to get out of.
One has to look no further than "fly-by-wire" control systems for aircraft. Even though there is a physical control stick and pedals, the computer controls "how much" and "when" things happen. Of course there are multiple systems and overrides, but elimination of a steering wheel in a car may be more a matter of eliminating the direct mechanical linkage than removing the physical wheel itself.
Compare the complexity of a modern car's anti-lock braking system to an older car's. I have a few brake spoons in my tool box that are now there for history lessons for my kids.
Automation is a journey where things that can be automated get automated. How long have we had cruise control? Now we have adaptive cruise control, headlights, blind spot monitoring, lane departure … etc.
When I was a freshman in college, the joke was that our job was to create an airplane that had a pilot and a dog in the cockpit. The pilot's job is to feed the dog, and the dog's job is to bite the pilot if he touches any of the controls.
I agree Google really advanced the concept, this will be fun to watch.
A big factor was the design of the side-stick control on the plane, which did not give feedback as to what the other pilot was doing with the controls. With the traditional control yokes, the captain would have known immediately that the first officer was yanking back on the yoke and thus prolonging the stall.
All the more reason to keep the driver informed of what the car was doing, in particular feedback through the steering wheel.
A co-worker suggested that the “auto-pilot” mode should do what’s done on locomotives, have an alerter function that requires interaction from the driver at intervals to check if the driver is paying attention.
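For concreteness, here is a toy sketch of the locomotive-style alerter being described: if the driver produces no input within a set window, the system warns, then escalates to a penalty action (a train applies the brakes; a car might slow to a safe stop). The interval, grace period, and state names are illustrative assumptions, not taken from any real system.

```python
# Toy locomotive-style alerter: silence past a window triggers a warning,
# and failure to acknowledge within a grace period triggers a penalty.
# All timing values are illustrative.

import time

class Alerter:
    def __init__(self, window_s=25.0, grace_s=5.0):
        self.window_s = window_s          # silence allowed before the warning
        self.grace_s = grace_s            # time to acknowledge after the warning
        self.last_input = time.monotonic()
        self.warned_at = None

    def driver_input(self):
        """Any control touch or acknowledgment resets the timer."""
        self.last_input = time.monotonic()
        self.warned_at = None

    def poll(self):
        """Returns 'ok', 'warn', or 'penalty'."""
        now = time.monotonic()
        if now - self.last_input < self.window_s:
            return "ok"
        if self.warned_at is None:
            self.warned_at = now
            return "warn"
        if now - self.warned_at >= self.grace_s:
            return "penalty"              # e.g. slow to a safe stop
        return "warn"
```

The next comment in the thread argues exactly why this kind of periodic challenge is a poor fit for car drivers.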
Every single person I know in human factors engineering, IxD and artificial consciousness thinks that the premise and execution of alerters is pointless and, basically, more dangerous than effective. And is prepared to back that up with evidence.
The way to ‘check that the driver is paying attention’ is to monitor their attentiveness in the background, and periodically interact with them through ‘normal’ methods (such as conversation). This permits some rather simple confirming analytics, such as voice stress analysis, as well as confirming the right level of ‘high functioning’ that is necessary for safe response to “emergent situations” (which is how we now have to redefine ‘emergencies’ with the original word having lost that technical meaning).
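A minimal sketch of that background-monitoring alternative might fuse a few passive signals into a single attentiveness score and interact only when it sags. The specific signals, weights, and threshold here are entirely my assumptions for illustration; real systems would use camera-based gaze tracking and far richer analytics like the voice-stress analysis mentioned above.

```python
# Hypothetical attentiveness fusion: combine passive signals into one score
# and only start an interaction when the score drops below a threshold.
# Weights and threshold are illustrative assumptions.

def attentiveness(gaze_on_road: float,       # fraction of last minute, 0..1
                  steering_activity: float,  # normalized micro-input rate, 0..1
                  reply_latency_s: float     # response time to last prompt, s
                  ) -> float:
    latency_score = max(0.0, 1.0 - reply_latency_s / 5.0)  # 5 s or more -> 0
    return 0.5 * gaze_on_road + 0.3 * steering_activity + 0.2 * latency_score

def next_action(score: float, threshold: float = 0.6) -> str:
    return "stay quiet" if score >= threshold else "start a voice interaction"
```

The point of the design is that the driver is never challenged on a fixed clock; the system only speaks up when the fused evidence says engagement is actually fading.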
An issue with skids is that feedback 'through the wheel' may be inadequate or wrong for many drivers, who will overcompensate or just plain freeze when presented with it. The same has been true of antilock brakes since the early days of hydraulic servomotor actuation, where the default 'advice' to the uninitiated has been 'stomp and steer' even as some of the instantiations have made that response deadly under what may be fairly common circumstances. (It happened directly to me, so I know the issue …)
Of course running the entire road system under centralized control would deprive governments of a nice, steady revenue stream: speeding fines. Any toll-by-the-mile system would also be able to monitor average speed and automatically ticket speeders.
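The 'automatically ticket speeders' point is just average-speed enforcement: with timestamped toll reads at two points a known distance apart, average speed falls out of distance over time, no radar required. The numbers below are illustrative.

```python
# Average-speed enforcement from two timestamped toll reads.
# Distances, times, and the 65 mph limit are illustrative.

def average_speed_mph(distance_miles: float,
                      entry_t_s: float,
                      exit_t_s: float) -> float:
    hours = (exit_t_s - entry_t_s) / 3600.0
    return distance_miles / hours

# 10 miles covered in 8 minutes (480 s) is a 75 mph average.
speed = average_speed_mph(10.0, 0.0, 480.0)
ticket = speed > 65.0   # over a hypothetical 65 mph limit
```

Because the calculation needs only the gantry timestamps the tolling system already records, the 'ticketing' step is essentially free once toll-by-the-mile exists.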
Given that vehicles with self-steer also (in all cases AFAIK) have "smart" cruise control systems which automatically detect traffic and operate both the vehicle's throttle and brakes, I would say that for highway operation they qualify as "self-driving". In the next few years there will be more advanced versions of the system that will automatically change lanes in response to traffic.
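A toy sketch of the gap-keeping logic behind that kind of "smart" cruise control: hold the driver's set speed unless a lead vehicle closes inside the desired time gap, then command a speed that restores the gap. The gain, the 2-second time gap, and the low-speed floor are assumptions for illustration; production systems use radar/camera fusion and far more careful control laws.

```python
# Toy adaptive-cruise target-speed law (all units metric, m and m/s).
# Gains and the 2-second time gap are illustrative assumptions.

def acc_target_speed(set_speed: float,       # driver's cruise setting, m/s
                     lead_speed: float,      # lead vehicle speed, m/s
                     gap_m: float,           # current following distance, m
                     time_gap_s: float = 2.0,
                     k_gap: float = 0.3) -> float:
    desired_gap = max(lead_speed * time_gap_s, 5.0)  # floor for low speeds
    if gap_m >= desired_gap:
        return set_speed                     # road clear enough: just cruise
    # Too close: match the lead and trim speed in proportion to the gap error.
    return max(0.0, lead_speed + k_gap * (gap_m - desired_gap))
```

With steering, throttle, and braking all closed-loop, the "highway self-driving" claim in the comment above amounts to stacking this controller on top of lane keeping.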
Certainly part of it, and I am a much bigger fan of Boeing’s approach to FBW and Otto. However, I think that the most important outcome of this crash was the understanding of the complex human factors that played a major role.
But then what problem are you trying to solve with automation? Unlike in aircraft, there are far fewer efficiency gains besides increasing the capacity of roads, which would probably reduce vehicle spacing to the point where a human reaction would be too slow anyway. The premise for these systems has mostly been to avoid accidents caused by drunks, texters, etc., who are not paying attention to the road, and to allow more productive use of time in the vehicle by letting the occupants do other things instead of drive. If the driver has to constantly monitor the vehicle, then why not simply drive the thing instead of dealing with the problems of sustained attention, complacency, and the startle factor producing improper reactions when the car suddenly hands control back?
My observations and experiences: either the 'machine' has total control or the human has total control. Shared control generally means that the human is in no position (mental and/or physical) to assume control when the machine decides to relinquish it. A human deciding to disengage machine control is one thing; a machine deciding to cede control is a different animal entirely.
I have no doubt that self-driving cars can be technologically perfected. However, the concept seems to be about so much more than transportation. It seems to be a facet of the green movement, just like renewable energy. That virtue seems to be responsible for its being pushed so hard by the public sector. It has that coercive feel of the banning of the incandescent light bulb.
Foxx [of USDOT] said the government believes self-driving vehicles could eventually cut traffic deaths, decrease highway congestion and improve the environment. He encouraged automakers to come to the government with ideas about how to speed their development.
“In 2016, we are going to do everything we can to promote safe, smart and sustainable vehicles. We are bullish on automated vehicles,” Foxx said during an appearance at the North American International Auto Show in Detroit.
Bryant Walker Smith, a law professor at the University of South Carolina and an expert on the legal issues surrounding self-driving cars, said the government's action is aggressive and ambitious. He said regulators are following …
In addition to saving lives, driverless cars may also help save our planet.
Because autonomous vehicles are built to optimize efficiency in acceleration, braking, and speed variation, they help increase fuel efficiency and reduce carbon emissions.
According to McKinsey, adoption of autonomous cars could help reduce car CO2 emissions by as much as 300 million tons per year. To put that into perspective, that’s the equivalent of half of the CO2 emissions from the commercial aviation sector.
Clearly the author of that article lives in the city.
Out here in the sticks, I don't see many of those touted advantages as being advantages. Even if you can call for a vehicle and have it arrive at your house, such functionality will rely on sufficient availability of the vehicles. Will my decision to use one for a two-hour trip to somewhere mean someone doesn't make it to work? Will one still be available ten hours from now when I want to return? What if someone at my destination wants to use one of the vehicles to travel even further from my home? Could one of these vehicles theoretically travel coast to coast, in bits and pieces, driven by a number of individuals?
And if individuals don’t own the vehicles, who does?
One way that I can see this working is on private toll roads and HOV lanes. Imagine that we could repurpose abandoned railroad corridors into robot roads, with driverless electric trucks and buses serving industries and stations along the way.