By Lance Eliot, the AI Trends Insider
According to compiled statistics, every 90 minutes in the United States a vehicle and a train collide.
That’s a staggering and somber statistic. Train-vehicle crashes are far more common than most people assume. Plus, sadly, injuries and deaths are the likely result when a car and a train collide. There are about 500 deaths each year in the United States due to failures to safely navigate a railroad crossing, amounting to about 20,000 deaths over the last forty years.
Remember this sage advice that was drummed into our heads when first learning to drive: Stop, look, and listen.
Meanwhile, here’s perhaps another surprise for you, namely that railroad crossings are much more prevalent than you might believe. In the United States, there are approximately 128,000 public railroad crossings. Throughout the U.S. there are over 180,000 miles of railroad track.
Overall, railroads are still a significant part of the transportation fabric of our country.
I realize that some of you might be eager to point out that all a driver needs to do is obey a railroad crossing gate. If the gate is down, don’t go around it, and just wait for the train to pass. That seems to take care of having to be concerned about railroad crossings.
Sorry to say that only about one-third of all railroad crossings have a gate.
That means that two-thirds of railroad crossings do not have a gate. Numerically, two-thirds of the total number of railroad crossings comes out to roughly 85,000 crossings that do not have any gate.
Some people seem to think that if there isn’t a gate, it implies the railroad crossing isn’t dangerous. The logic seems to be that certainly the railroad bosses would put up a gate where it is needed, and therefore not put up gates where they are not needed.
That’s not compelling logic.
When you come upon a railroad crossing that lacks a gate, it is on your shoulders as a driver to proceed safely.
I know that you might tend to assume that a train will certainly stop in time if your car perchance ends up on the tracks, but a train moving at 55 miles per hour takes a mile or more to come to a stop.
Don’t expect a locomotive engineer to save your hide; it’s a lousy bet.
Since human drivers seem to be lacking in vigilance when crossing over railroad tracks, maybe we can solve the problem via the advent of self-driving cars.
This brings up an interesting question: Will AI-driven self-driving cars be safer than human drivers when it comes to dealing with railroad crossings?
The answer is maybe, rather than an outright and unequivocal yes, which I realize seems astonishing, so let’s unpack the matter.
For my framework about AI autonomous cars, see the link here: https://aitrends.com/ai-insider/framework-ai-self-driving-driverless-cars-big-picture/
Why this is a moonshot effort, see my explanation here: https://aitrends.com/ai-insider/self-driving-car-mother-ai-projects-moonshot/
For more about the levels as a type of Richter scale, see my discussion here: https://aitrends.com/ai-insider/richter-scale-levels-self-driving-cars/
For the argument about bifurcating the levels, see my explanation here: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/
Self-Driving Cars And Railroad Crossings
I’ll be focusing on true self-driving cars, rather than cars that perchance have advanced automation but still require a human driver at the wheel.
True self-driving cars are ones that have the AI exclusively doing the driving. Usually referred to as Level 4 and Level 5, there isn’t a human driver involved in the driving act.
In contrast, Level 2 and Level 3 cars are considered semi-autonomous, requiring a human driver to co-share the driving task. This co-sharing arrangement unfortunately has some inherent problems and concerns, and drivers need to be wary of over-reliance on their ADAS (Advanced Driver-Assistance Systems).
Some pundits seem to assume that an AI system will be all-knowing and therefore magically be able to cope with any kind of driving situation.
That’s just not the case.
Consider the nature of dealing with railroad crossings.
Yes, you could program an AI system to undertake certain prescribed steps when encountering a railroad crossing, and it likely would unfailingly try to carry out those steps. In that sense, you might argue that the AI would be more reliable than a human driver, since humans are bound to be inconsistent in their driving practices or might be drunk or simply inattentive.
The first hurdle though that the AI has to overcome is detecting that a railroad crossing even exists.
For humans, we are apt to see a sign that says railroad crossing ahead, along with noticing markings on the roadway that also forewarn about the crossing. We can usually see the railroad tracks implanted into the roadway surface. We might also be able to see down the railroad tracks and observe that they stretch some distance down to the left or right of where the tracks cross the road.
These are all vital clues that a railroad crossing exists, and we ought to be driving cautiously, accordingly.
Getting an AI system to readily figure out that a railroad crossing exists is harder than it is for humans.
The cameras of the self-driving car need to stream images of the driving scene and then scour the digitized video to find any railroad warning signs and railroad tracks that are on the roadway ahead.
Regrettably, the radar and LIDAR sensors are not especially helpful during this detection effort, since they are less likely to pick up the telltale subtleties of a railroad crossing. That’s a shame, because self-driving cars are best suited to holistically ascertain their surroundings, using sensor fusion to bring together multiple forms of sensory input into a synergistic whole.
Whenever a self-driving car is reliant on only one kind of sensor, it can be a myopic way of fully gleaning what is going on.
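To make the fusion point concrete, here is a minimal sketch of combining per-sensor confidences that a crossing lies ahead, treating each sensor as an independent witness. The sensor names and numbers are purely illustrative, not drawn from any production self-driving stack.

```python
def fuse_crossing_confidence(detections):
    """Combine independent per-sensor confidences that a railroad
    crossing lies ahead. Treating each sensor as an independent
    witness, the chance that all of them missed a real crossing is
    the product of their individual miss probabilities."""
    p_all_miss = 1.0
    for confidence in detections.values():
        p_all_miss *= (1.0 - confidence)
    return 1.0 - p_all_miss

# Camera alone, say partly obscured by weather: confidence stays modest.
camera_only = fuse_crossing_confidence({"camera": 0.6})

# Adding weak radar and LIDAR cues nudges confidence up only a little,
# echoing the point that those sensors contribute less here.
multi_sensor = fuse_crossing_confidence(
    {"camera": 0.6, "radar": 0.10, "lidar": 0.15}
)
```

Notice that even the fused value falls short of certainty, which is the broader point: relying on one sensor is myopic, and even fusing weak supplementary cues does not guarantee detection.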
Suppose too that the cameras are somewhat obscured by bad weather and cannot get sharp and clean images of the roadway ahead. Perhaps also the railroad markings and signs are in a dilapidated state, having been worn down over the years or marred with graffiti.
All in all, the point is that it is not assured with 100% certainty that a self-driving car will always detect that a railroad crossing is coming up ahead.
Humans can certainly also fail to detect railroad crossings, so don’t misunderstand me as suggesting that humans are infallible. Instead, the crux of the matter is that an AI system won’t necessarily be perfect at identifying a railroad crossing either.
We could though speculate that at least we could program the AI to not do stupid things like try to go around a railroad gate that is blocking cars from proceeding across the railroad tracks.
As such, one would certainly hope that the number of train-vehicle crashes would lessen by merely ensuring that the AI abides by the rules-of-the-road, which we know humans are prone to not strictly observe.
For why remote piloting or operating of self-driving cars is generally eschewed, see my explanation here: https://aitrends.com/ai-insider/remote-piloting-is-a-self-driving-car-crutch/
To be wary of fake news about self-driving cars, see my tips here: https://aitrends.com/ai-insider/ai-fake-news-about-self-driving-cars/
The ethical implications of AI driving systems are significant, see my indication here: http://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
Be aware of the pitfalls of normalization of deviance when it comes to self-driving cars, here’s my call to arms: https://aitrends.com/ai-insider/normalization-of-deviance-endangers-ai-self-driving-cars/
Self-Driving Tech Added Advantages
One notable capability that would help the AI do a reasonably good job at detecting the existence of a railroad crossing would be the use of advanced GPS and hyper mapping systems.
Presumably, self-driving cars are going to have topnotch GPS capabilities and be armed with the latest and greatest high-def maps. In that case, the self-driving car could consult the map and GPS, and hopefully the railroad crossing is already electronically marked on the map.
The AI system could then reaffirm that a railroad crossing exists up ahead, having been forewarned by referring to the GPS and the electronic on-board maps, and then use its own sensors to verify that the railroad crossing is there.
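A rough sketch of that map-consultation step might look like the following. The map format, the mile-marker scheme, and the crossing records are all invented for illustration; the idea is simply to pull map-marked crossings within a lookahead window so perception can be primed to verify them with the on-board sensors.

```python
# Hypothetical on-board map records for railroad crossings along a route.
CROSSINGS_ON_MAP = [
    {"id": "xing-0142", "mile_marker": 12.7, "has_gate": False},
    {"id": "xing-0187", "mile_marker": 18.2, "has_gate": True},
]

def upcoming_crossings(current_mile, lookahead_miles, crossings):
    """Return map-marked crossings within the lookahead window,
    so the perception system can be primed to confirm them."""
    return [
        x for x in crossings
        if current_mile <= x["mile_marker"] <= current_mile + lookahead_miles
    ]

# At mile 12.0 with a one-mile lookahead, only the ungated
# crossing at mile 12.7 falls inside the window.
alerts = upcoming_crossings(12.0, 1.0, CROSSINGS_ON_MAP)
```

The design choice here is that the map supplies a prior, not a verdict: the sensors still have to confirm what the map claims is ahead, and the map fills in when the sensors are obscured.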
Furthermore, we’ll eventually have V2I (vehicle-to-infrastructure) electronic communications, in which our roadway infrastructure sends out electronic signals to alert self-driving cars about roadway artifacts and conditions. If a bridge is out, the bridge itself might emit a warning, via an Internet of Things (IoT) device, that any approaching cars should pursue a different path.
Railroad crossings that today are outfitted with flashing lights and gate arms could be augmented with V2I devices that broadcast the existence of the crossing and report the status of any nearby trains.
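Here is a hypothetical sketch of what such a beacon’s payload and a conservative in-car policy might look like. The message schema is invented; real V2X deployments use far more involved formats, such as the SAE J2735 message set.

```python
# Invented beacon payload for a gated railroad crossing.
beacon = {
    "type": "railroad_crossing",
    "crossing_id": "xing-0142",
    "gate_state": "down",          # "up", "down", or "fault"
    "train_approaching": True,
    "train_eta_seconds": 45,
}

def should_stop(msg):
    """Conservative policy: stop unless the gate is confirmed up
    and no train is reported approaching."""
    return msg["gate_state"] != "up" or msg["train_approaching"]

decision = should_stop(beacon)   # gate down and a train is due, so stop
```

Note that a faulted gate also triggers a stop; when the infrastructure cannot vouch for itself, the safe default is to treat the crossing as active.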
Another detection factor that human drivers often use is the fact that the surrounding traffic is likely also coming to a halt due to a railroad crossing.
Thus, even if you fail to realize that a railroad crossing exists, you can see other cars around you that “mysteriously” are coming to a stop, and you might logically deduce that something is afoot, perhaps then noticing a huge angry train that’s barreling down the tracks toward the crossing.
Likewise, AI that is properly programmed would notice the traffic surrounding the self-driving car and possibly piece together that if other cars are stopping, it might suggest that the self-driving car needs to also come to a stop.
Yet another upcoming feature of self-driving cars and connectedness consists of V2V (vehicle-to-vehicle) electronic communication.
This will be a means of having the AI system of a self-driving car communicate with the AI system of other nearby self-driving cars.
Potentially, a self-driving car that has come upon a railroad crossing could send out a message to other self-driving cars approaching the same spot and alert those AI systems that a train is coming along.
In theory, the trains themselves might also be able to do V2V, meaning that a rushing train would be broadcasting to nearby vehicles that the train is coming to town and watch out. Any V2V equipped car, truck, van, or other kind of vehicle would get an electronic notification from the train itself.
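One simple way to picture the V2V side is an alert that hops from vehicle to vehicle with a bounded relay budget, so that one car’s (or the train’s) warning ripples outward without circulating forever. The message fields and the broadcast callback are invented for illustration.

```python
def relay_alert(alert, broadcast):
    """Forward a received alert to neighbors until its hop budget is
    exhausted, decrementing the budget at each relay. Returns True if
    the alert was rebroadcast, False if its budget was spent."""
    if alert["hops_remaining"] <= 0:
        return False
    relayed = dict(alert, hops_remaining=alert["hops_remaining"] - 1)
    broadcast(relayed)
    return True

# Collect rebroadcast messages in a list standing in for the radio.
sent = []
relay_alert(
    {"event": "train_approaching", "crossing_id": "xing-0142",
     "hops_remaining": 2},
    sent.append,
)
```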
For more details about ODDs, see my indication at this link here: https://www.aitrends.com/ai-insider/amalgamating-of-operational-design-domains-odds-for-ai-self-driving-cars/
On the topic of off-road self-driving cars, here’s my details elicitation: https://www.aitrends.com/ai-insider/off-roading-as-a-challenging-use-case-for-ai-autonomous-cars/
I’ve urged that there must be a Chief Safety Officer at self-driving car makers, here’s the scoop: https://www.aitrends.com/ai-insider/chief-safety-officers-needed-in-ai-the-case-of-ai-self-driving-cars/
Expect that lawsuits are going to gradually become a significant part of the self-driving car industry, see my explanatory details here: http://aitrends.com/selfdrivingcars/self-driving-car-lawsuits-bonanza-ahead/
Edge Cases Are Tough
On the surface, it appears that the AI driving system has all the advantages for properly coping with railroad crossings.
If so, why are there any qualms or potential dangers for self-driving cars that come upon a railroad crossing?
Time to consider the myriad of edge cases that can arise.
An edge case, sometimes referred to as a corner case, involves aspects that lie beyond the core elements of a given task. You might set those aspects aside temporarily, but they can ultimately occur and prove devastating precisely because they weren’t fully considered upfront.
Here’s one: Imagine that a self-driving car safely proceeded to cross a set of railroad tracks, and then got stuck amid the railroad tracks.
If this seems farfetched, I’d say that it could happen, admittedly rarely; there’s always the chance that any car could suffer a mechanical problem and suddenly break down or otherwise falter once it is on the tracks.
Let’s agree it is an edge case.
What happens now that the self-driving car is stuck on the tracks?
Well, the AI system might not be able to figure out that it is stuck.
Or, it might determine that it cannot move, yet have no provision in its action planning for what to do next.
As an edge case, the AI developers might have deferred this seemingly obscure use case to be dealt with someday, and unfortunately the fielded AI has no contingencies ready for the circumstance.
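What might such a contingency look like if it were spelled out? Here is a deliberately simple sketch of an explicit playbook for the stuck-on-the-tracks state. The state names and actions are invented; a real minimal-risk-maneuver design would be far richer and validated by safety engineers.

```python
# Invented contingency playbook, keyed by vehicle state. Each entry is
# an ordered list of actions to attempt, most preferred first.
CONTINGENCIES = {
    "stuck_on_tracks": [
        "retry_motion",            # try to clear the tracks under power
        "unlock_doors_and_warn",   # tell occupants to evacuate
        "call_emergency_number",   # notify the railroad / 911
        "broadcast_v2x_hazard",    # warn the train and nearby vehicles
    ],
}

def next_action(state, attempted):
    """Return the first contingency action for this state that has
    not yet been attempted, or None when the playbook is exhausted."""
    for action in CONTINGENCIES.get(state, []):
        if action not in attempted:
            return action
    return None
```

The point is less the specific actions than having any explicit next step at all, rather than the AI silently halting on the tracks.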
For a human passenger inside the self-driving car, they might be perplexed about why the self-driving car has come to a halt. The AI Natural Language Processing (NLP), akin to a Siri or Alexa conversational capability, might be insufficiently programmed and unable to explain what has taken place.
We don’t even know whether the AI could ascertain that a train is coming. If the self-driving car is facing forward, and the train is coming from the left or right, the camera sensors on the sides of the car might not be as robust as the ones on the front of the self-driving car.
Yikes, it’s like one of those old-time movies of the infamous “car stuck on the railroad tracks” dramatic scenes.
What maybe makes this worse is that at least a human driver might get other passengers out of the car.
Or, maybe the driver flags down another nearby car to help push their car off the tracks.
These are unlikely options for an AI self-driving car, at least for the foreseeable future.
I’ll not dig more deeply into this specific use case herein; suffice to say that it poses notable, potentially life-or-death consequences if it were to occur.
There are numerous other kinds of edge cases that could be envisioned.
For example, suppose the railroad crossing gate comes down to warn that a train is coming. You’ve maybe seen this happen and then patiently waited for the arm to go up. Sometimes, a train doesn’t come along, and yet the gate arm stays down.
This is obviously a dicey situation, since a human driver might be tempted to drive around the gate arm, hopefully first double-checking that there’s no train in sight for miles.
What would a self-driving car do?
The odds are that most AI self-driving cars would dutifully sit there, stopped at the gate arm, and wait until doomsday, since the AI assumes it can only proceed once the gate arm goes up.
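A safer alternative to waiting forever is a timeout-and-escalate policy. Here is a hypothetical sketch; the thresholds are illustrative, and note that the escalation paths are rerouting or summoning help, never driving around the gate.

```python
def gate_wait_policy(seconds_waited, train_detected):
    """Decide what to do while stopped at a lowered gate arm.
    If a train is sensed, always keep waiting; otherwise escalate
    after increasingly long waits, without ever crossing the gate."""
    if train_detected:
        return "keep_waiting"
    if seconds_waited < 300:          # normal wait, up to 5 minutes
        return "keep_waiting"
    if seconds_waited < 900:          # prolonged wait: seek another route
        return "attempt_reroute"
    return "request_remote_assist"    # likely gate fault: summon help
```

The design choice is conservative on purpose: the policy never emits a "drive around the gate" action, so a sensor glitch or a stuck arm cannot talk the car onto active tracks.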
Again, the myriad of edge cases can all be eventually figured out, and I’m not suggesting that they are insurmountable.
The gist of the matter is that the current crop of AI self-driving cars gradually appearing on our public roadways has little if any depth of capability in dealing with railroad crossings. This is due to a combination of not having been programmed for it and a lack of sufficiently relevant training data for the Machine Learning and Deep Learning aspects of the AI system.
You might be familiar with the famous line that when you look in your side view mirror, objects tend to be closer than they appear.
Equally notable is that trains are closer than they appear, and oftentimes are moving fast towards you, but from a head-on perspective the rate of closure is not completely apparent.
Though the number of train-vehicle crash deaths is small in comparison to car-on-car crash deaths, the stats show that you are thirty times more likely to die in a train-vehicle crash than a car-on-car crash.
This makes sense in that a train is a very big thing and your car is going to lose any kind of fight involving the train and vehicle hitting each other.
Please drive carefully when you come to railroad crossings.
Eventually, we’ll have fully operational AI self-driving cars that have been robustly prepared for such matters, though in the meantime it is prudent to be wary about any so-called self-driving car that comes upon a railroad crossing. Even for AI, living up to the everyday mantra of stop, look, and listen is a tough hurdle.
Copyright 2020 Dr. Lance Eliot
This content is originally posted on AI Trends.
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]