Linear No-Threshold (LNT) and the Lives Saved-Lost Debate of AI Self-Driving Cars
By Lance Eliot, the AI Trends Insider
The controversial Linear No-Threshold (LNT) graph has been in the news recently.
LNT is a type of statistical model that has been used primarily in health-related areas such as dealing with exposures to radiation and other human-endangering substances such as toxic chemicals. Essentially, the standard version of an LNT graph posits that any exposure at all is too much, and therefore you should avoid even the tiniest bit of exposure. You might say it is a zero-tolerance condition (using modern-day phrasing). Strictly speaking, if you believe the standard version of an LNT graph, there isn't any level of exposure that is safe.
The line is linear, meaning it is straight rather than curved, typically rising at a steep angle, say 45 degrees, as it proceeds left-to-right, indicating that the more exposure you receive, the worse things will get for you. This linear aspect is usually not the part of the graph that gets the acrimonious arguments underway; instead, it is where the line starts on the graph that gets people boiling mad. In the classic LNT graph, the line starts at the origin point, and the moment the line begins to rise it is indicating that you are immediately being endangered, since any exposure is considered bad. That's the "no-threshold" part of the LNT. There isn't any kind of initial gap or buffer portion that is considered safe. Any exposure is considered unsafe and ill-advised to encounter.
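To make the shape of the argument concrete, here is a minimal sketch in Python of the standard LNT model next to a threshold-based variant. The slope and threshold numbers are made-up illustrative values of my own, not figures from any regulatory or scientific source.

```python
def lnt_risk(dose: float, slope: float = 1.0) -> float:
    """Standard Linear No-Threshold: risk rises in a straight line
    from the origin, so any dose above zero carries some risk."""
    return slope * dose

def threshold_risk(dose: float, threshold: float = 10.0, slope: float = 1.0) -> float:
    """A threshold-based variant: doses at or below the threshold are
    treated as safe (zero modeled risk); above it, risk rises linearly."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

# Compare the two models at a few illustrative dose levels.
for dose in (0.0, 1.0, 10.0, 25.0):
    print(f"dose={dose:5.1f}  LNT risk={lnt_risk(dose):5.1f}  "
          f"threshold risk={threshold_risk(dose):5.1f}")
```

The only difference between the two functions is that buffer near the origin, which is precisely the piece that the debate is about.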
Nobel Prize winner Hermann Muller, the discoverer of the ability of radiation to cause genetic mutation, put it succinctly in the 1940s when he emphatically stated that radiation is a "no threshold dose" kind of contaminant.
The standard LNT has been a cornerstone of the EPA (Environmental Protection Agency), and the latest twist could be that the classic linear no-threshold might instead become a threshold-based variant. That's kicking up a lot of angst and controversy. By and large, the EPA has typically taken the position that any exposure to pollution such as carcinogens is a no-threshold danger, meaning that the substance is dangerous at any level, assuming it is dangerous at some level.
Regulations are usually built on the basis of the no-threshold principle. The EPA has been bolstered by scientists and scientific associations echoing that the standard version of the LNT is a sound and reasonable way to govern on such matters. There is a huge body of research underlying the LNT. It is the foundation of many environmental efforts and safety-related programs in the United States and throughout the globe.
You might at first glance think that this LNT makes a lot of sense. Sure, any exposure to something deadly would seem risky and unwise. Might as well avoid the toxic or endangering item entirely. Case closed.
Not so fast, some say.
There is an argument to be made that sometimes a minor amount of exposure to something is not necessarily that bad, and indeed in some instances it might be considered good.
What might that be, you wonder?
Some would cite drinking and alcohol as an example.
For a long time, health concerns have been raised that drinking alcohol is bad for you, including that it can ruin your liver, it can harm your brain cells, it can become addictive, it can make you fat, it can lead to getting diabetes, it can increase your chances of getting cancer, you can blackout, and so on. The list is rather lengthy. Seems like something that should be avoided, entirely.
Meanwhile, you've likely heard or seen the studies that now say that alcohol can possibly increase your life expectancy, overcome undue shyness so that you can be bolder and more dynamic, and might reduce your risk of getting heart disease. There are numerous bona fide medical studies indicating that drinking red wine, for example, might be able to prevent coronary artery disease and therefore lessen your chances of getting a heart attack. In essence, there are presumably health-positive benefits to drinking.
I assume that you are quick to retort that those “benefits” of drinking are only when you drink alcohol in moderation and with care. Someone that drinks alcohol too much is certainly likely to experience the “cost” or bad sides of drinking and will be less likely to receive any “gains” regarding the otherwise beneficial aspects of drinking.
One concern you might have about touching on the benefits of drinking is that it might be used by some to justify over-drinking, such as those wild college drinking binges that seem to occur (as a former professor, I had many occasions of students showing up to class who had obviously opted to indulge the night before and were zombies while in the classroom).
If I asked you to create a graph that indicated how much you would recommend that others can drink, what kind of graph line would you make?
The problem you likely would wrestle with is the notion that if you provide a threshold of drinking, maybe one that allows for a low dosage, say a glass of red wine per day, it could become the proverbial snowball that rolls down the snowy hill and becomes an avalanche. By allowing any kind of signaling that drinking is Okay, you might be opening up Pandora's box. Someone who feels comfortable drinking one glass of wine per day, prompted by your graph, might take it upon themselves to gradually enlarge it to two glasses, then it morphs into an oh-so-easy four glasses per day, and onward toward an untoward end.
Perhaps it might be better to just state that no drinking is safe, and therefore you can close off any chance of others trying to wiggle their way into becoming alcoholics and claiming you led them down that primrose path. If you have any kind of allowed threshold, others might try to drive a Mack truck through it and later on say they got hooked into drinking and it ultimately ruined their lives.
You might be tempted therefore to make your graph show a no-threshold indication. This maybe seems harsh as you mull it over, yet if you are trying to “do the right thing” it seems to be the clearest and safest way to portray the matter.
That's pretty much the logic used by the EPA. Historically, the EPA has tended to side with the no-threshold perspective since they have been concerned that allowing any amount of threshold, even a small one, could open the floodgates. They also point out that when trying to make national policy, it is hard to say how exposures will impact any particular person, depending upon personal characteristics such as age, overall health, and other factors. Thus, the best bet is to make an overarching proclamation that presumably covers everyone, and the no-threshold LNT is the way to do so.
The counter-argument is that this is like the proverbial tossing out the baby with the bath water. You are apparently willing to get rid of the potential “good” for the sake of the potential “bad,” and therefore presumably won’t have any chance at even experiencing the good. Is the health-positive of having a glass of wine per day so lowly in value that it is fine to discard it and instead make the overly simplified and overly generalized claim that alcohol in any amount is bad for you?
The other counter-argument is that the no-threshold approach oftentimes fails to take into account the costs involved in going along with a no-threshold notion. What kind of cost might there be to enforce the no-threshold rule? It could be extremely expensive to deal with even the small-doses portion, yet the small doses might either be not so bad or possibly even good.
You are therefore not only undermining the chances of gaining the good, which we're assuming for the moment arises at the smaller doses, but you are also raising the overall costs to attain the no-threshold line-in-the-sand.
Some say that if you allowed for a some-threshold model (versus the stringent no-threshold), you could bring the good parts of the matter back into the picture, plus you could tremendously cut the costs that had gone toward achieving the no-threshold burden at the lower threshold level. Those reduced costs might then be placed toward other greater goods, rather than being further "wasted" on dealing with the small threshold of something that had a goodness in it anyway.
There's a word commonly used to refer to this phenomenon of having a (relatively) small initial threshold, namely hormesis.
Linear No-Threshold (LNT) Graph and Hormesis Process
We could take a traditional Linear No-Threshold (LNT) graph and place onto it an indication of a hormesis process, meaning something that allows for a neutral or possibly positive reaction at small levels of exposure. The first part of the hormesis line or curve would showcase that at low doses the result is neutral or possibly positive. The area of the line or curve that contains this neutral or positive result is considered the hormetic zone.
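As a rough illustration, assuming a simple quadratic dose-response shape (one of many forms discussed in the hormesis literature, with toy coefficients of my own choosing), the hormetic zone is the low-dose region where the modeled response is neutral or beneficial:

```python
def hormetic_response(dose: float, benefit: float = 2.0, harm: float = 0.1) -> float:
    """Toy J-shaped dose-response: negative values denote net benefit,
    positive values denote net harm. Coefficients are illustrative only."""
    return harm * dose ** 2 - benefit * dose

# With these toy numbers, doses up to benefit/harm = 20 fall in the hormetic zone.
for dose in range(0, 31, 5):
    r = hormetic_response(dose)
    label = "hormetic zone" if r <= 0 else "harmful"
    print(f"dose={dose:2d}  response={r:6.1f}  {label}")
```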
There is an entire body of research devoted to hormesis and it is a popular word among those that study these kinds of matters. You can be a hormesis scientist, or if not a scientist of it or devoted to it, you can perhaps be a supporter of the hormesis viewpoint.
It's not something that most of us use or would hear on a daily basis. I'm introducing it herein so that henceforth I can refer to the hormetic zone, and you'll know I am referring to that part of a graph indicating a neutral or positive reaction or result when exposed to something that at higher levels is considered unsafe or heightened in risk.
Tying this back to the discussion about the EPA, some worry about an emerging rising tide of hormesis supporters that are now beginning to reshape how the EPA does its environmental efforts and makes its regulations. The traditional no-threshold LNT camp is fiercely battling to keep the hormesis supporters at bay. This comes down to preventing any kind of some-threshold or inclusion of a hormetic zone in the work of the EPA.
I'm not going to wade into that debate about the EPA and related policy matters (you can track the daily news to keep up with it, if you wish).
Here’s why I brought it up.
I wanted to bring your attention to the overall notion of the LNT, along with opening your eyes to the debate that can sometimes be waged about whether to allow for any threshold, which some say is inherently and automatically bad, for the reasons I've mentioned earlier, versus insisting on a no-threshold, which at times is implied to be inherently and exclusively good.
Of course, as I've now mentioned, the no-threshold has its own advantages and disadvantages. This is a crucial aspect to realize, since at times the no-threshold is put in place without any realization of it being both a positive and a negative, depending upon what kind of matter we might be discussing. You should be cautious about falling into the mental trap that the contest between the no-threshold and the some-threshold is always to be won by the no-threshold, and instead ponder the tradeoffs in a given matter as to whether the no-threshold or the some-threshold seems the better choice.
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. I am a frequent speaker at industry conferences and one of the most popular questions that I get has to do with the societal and economic rationale for pushing ahead on AI self-driving cars. The crux of the matter involves lives saved versus lives lost. As you’ll see in a moment, this is quite related to the Linear No-Threshold (LNT) that I’ve introduced you to.
Allow me to elaborate.
I'd like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It's all on the shoulders of the AI to drive the car.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/
For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/
For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/
For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Let's focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.
Here are the usual steps involved in the AI driving task (a simplified code sketch follows the list):
- Sensor data collection and interpretation
- Sensor fusion
- Virtual world model updating
- AI action planning
- Car controls command issuance
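For a rough sense of how those steps chain together, here is a highly simplified, purely illustrative Python sketch of one pass of that processing loop; every function and data structure here is a placeholder of my own invention, not any actual auto maker's or tech firm's code.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    # Virtual world model: the AI's internal picture of the roadway scene.
    obstacles: list = field(default_factory=list)

def collect_and_interpret(raw_sensors: dict) -> dict:
    # Sensor data collection and interpretation (camera, radar, LIDAR, etc.);
    # drop sensors that returned nothing usable.
    return {name: reading for name, reading in raw_sensors.items() if reading is not None}

def sensor_fusion(interpreted: dict) -> list:
    # Sensor fusion: reconcile the separate sensor streams into detected objects.
    return [obj for reading in interpreted.values() for obj in reading]

def update_world_model(model: WorldModel, detections: list) -> WorldModel:
    # Virtual world model updating.
    model.obstacles = detections
    return model

def plan_actions(model: WorldModel) -> str:
    # AI action planning: decide what the car should do next.
    return "brake" if model.obstacles else "maintain_speed"

def issue_controls(action: str) -> None:
    # Car controls command issuance (accelerator, brakes, steering).
    print(f"issuing control command: {action}")

# One pass of the driving loop, with toy sensor readings.
model = WorldModel()
raw = {"camera": ["pedestrian"], "radar": [], "lidar": None}
detections = sensor_fusion(collect_and_interpret(raw))
issue_controls(plan_actions(update_world_model(model, detections)))
```

In a real system each of these stages is vastly more elaborate and runs continuously, but the ordering of the stages is the point here.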
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are over 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.
Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.
For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/
See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/
For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/
For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/
Returning to the topic of the Linear No-Threshold (LNT) model, let’s consider how the LNT might apply to the matter of AI self-driving cars.
One of the most noted reasons to pursue AI self-driving cars involves the existing dismal statistic that approximately 37,000 deaths occur in conventional car accidents each year in the United States alone, and it is hoped or assumed that the advent of AI self-driving cars will reduce or perhaps completely do away with those annual deaths.
In one sense, pursuit of AI self-driving cars can be likened to a noble cause.
There are of course other reasons to seek the adoption of AI self-driving cars. One often cited reason involves the mobility that could presumably be attained by society as a result of readily available AI self-driving cars. Some suggest that AI self-driving cars will democratize mobility and have a profound impact on those that today are without mobility or have limited access to mobility. It is said that our entire economic system will be reshaped into a mobility-as-a-service economy, and we'll see an incredible boon in ridesharing, far beyond anything that we have seen to date.
Let’s though focus on the notion of AI self-driving cars being a life saver by seemingly ensuring that we will no longer have any deaths due to car accidents.
You might ponder for a moment what it is about AI self-driving cars that will apparently avoid deaths via car accidents. The usual answer is that there won’t be any more drunk drivers on the roads, since the AI will be doing the driving, and therefore we can eliminate any car accidents resulting from humans that drink and drive.
Likewise, we can seemingly eliminate car accidents due to human error, such as failing to hit the brakes in time to avoid crashing into another car or perhaps into a pedestrian. These human errors might arise because a human driver is distracted while driving, looking at their cell phone or trying to watch a video, and thus they are not attentive to the driving situation. It could also be that humans get into deadly car accidents by getting overly emotional and failing to make dispassionate decisions while driving. And so on.
For the moment, I'll hesitantly say that we can agree that those kinds of deaths due to car accidents can be eliminated by the use of AI self-driving cars, though I make this concession with reservations.
My reservations are multi-fold.
For example, as mentioned earlier, we are going to have a mixture of human driven cars and AI self-driving cars for quite a long time to come, and thus it will not be as though there are only AI self-driving cars on the public roadways. The assumption about the elimination of car accidents is partially predicated on the removal of human drivers and human driving, and it doesn’t seem like that will happen anytime soon.
Even if we somehow remove all human driving and human drivers from the equation of driving, this does not mean that we would necessarily end-up at zero fatalities in terms of AI self-driving cars. As I've repeatedly emphasized in my writings and presentations, goals of having zero fatalities sound good, but the reality is that there is a zero chance of it. When an AI self-driving car is going down a street at 45 miles per hour, let's assume completely legally doing so, and a pedestrian steps suddenly and unexpectedly into the street, with only a split second before impact, the physics preclude any action that the AI self-driving car can take to avoid hitting and likely killing that pedestrian.
You might right away object and point out that the frequency of those kinds of deadly car accidents will certainly be a lot lower than it is today with conventional cars and human drivers. I would tend to agree. Let's be clear, I am not saying that the number of car related deaths won't likely decrease, hopefully by a wide margin. Instead, I am saying that having zero car-related deaths is the questionable proposition.
If you accept that premise, it should then suddenly seem familiar, since it takes us back to my earlier discussion about Linear No-Threshold (LNT) graphs and models.
For my article about the various human foibles of driving, see: https://www.aitrends.com/selfdrivingcars/ten-human-driving-foibles-self-driving-car-deep-learning-counter-tactics/
For more about noble cause aspects, see my article: https://www.aitrends.com/selfdrivingcars/noble-cause-corruption-and-ai-the-case-of-ai-self-driving-cars/
For my article about aspects of ridesharing and the future, see: https://www.aitrends.com/selfdrivingcars/ridesharing-services-and-ai-self-driving-cars-notably-uber-in-or-uber-out/
For how mobility of the AI self-driving car might impact the elderly, see my article: https://www.aitrends.com/ethics-and-social-issues/elderly-boon-bust-self-driving-cars/
For the notion of zero fatalities, which I contend is zero chance, see my article: https://www.aitrends.com/selfdrivingcars/self-driving-cars-zero-fatalities-zero-chance/
I’d like to walk you through the type of debate that I usually encounter when discussing this aspect of car-related deaths and AI self-driving cars.
Logical Perspectives on Matters of Life and Death and AI Self-Driving Cars
Much of the time, those involved in the debate are not considering the full range of logical perspectives on the matter. Obviously, any discussion about life or death is bound to be fraught with emotionally laden qualms. It is hard to consider in the abstract the idea of deaths that might be due to car accidents. When I get into these discussions, I usually suggest that we think of this as though we are actuaries, tasked with considering how to establish rates for life insurance. It might seem ghoulish, but the role of an actuary is to dispassionately think about deaths, such as their frequency and how they arise.
Take a look at my Figure 1 that shows the range of logical perspectives on this matter of lives and deaths related to AI self-driving cars.
We'll start this discussion by considering those that insist on absolutely no deaths being permitted by any AI self-driving car. Ever. Under no circumstances do they see a rationalization for an AI self-driving car being involved in the death of a human. These are diehards that typically say that only once AI self-driving cars have proven themselves to never lead to a human death will they support the possibility of AI self-driving cars being on our roadways.
That’s quite a harsh position to take.
You could say that it is a no-threshold position. This is comparable to suggesting that the toxicity (in a sense) of an AI self-driving car must be zero before it can be allowed on our roads. The person taking this stance is standing on the absolutely and utterly “no risks” allowed side of things. For them, a Linear No-Threshold (LNT) graph would be a fitting depiction of their viewpoint about AI self-driving cars.
I'd like to qualify that the LNT aspect in their case is somewhat different than, say, radiation or a toxic chemical. They are willing to allow AI self-driving cars once they have presumably been "perfected" and are guaranteed (somehow?) to not cause or produce any car-related deaths.
This position would be that you can keep trying to perfect AI self-driving cars in other ways, just not on the public roadways.
Test those budding AI self-driving cars on special closed-tracks that are made for the purposes of advancing AI self-driving cars. Use extensive and large-scale computer-based simulations to try and iron out the kinks. Do whatever can be done, except for being on public roadways, and when that’s been done, and in-theory the AI self-driving car is finally ready for death-free driving on the public streets, it can be released into the wild.
The auto makers and tech firms claim that without using AI self-driving cars on the public roadways, there will either not be viable AI self-driving cars until a far distant future, or they might not ever come to pass at all. Without the rigors of being on public roadways, it is assumed that there is no viable way to fully ready AI self-driving cars for public roadways. It is a kind of Catch-22. If you won't allow AI self-driving cars on public roadways, you either won't ever have them there or it will be many moons from now.
Those in the camp of no-deaths reply to go ahead and take whatever time you need. If it takes 20 years, 50 years, a thousand years, and you still aren't ready for the public roadways, so be it. That's the price to pay for ensuring the no-deaths perspective.
But this seems reminiscent once again of the LNT argument.
Suppose that while you wait for AI self-driving cars to be perfected, those 37,000 deaths per year with conventional cars continue unabated. If you wait say 50 years for AI self-driving cars to be perfected, you are also presumably offering that you are willing to have nearly 1.9 million people die during that period of time (50 years x 37,000 deaths per year). This usually causes the no-deaths camp to become irate, since they are certainly not saying that they are callously discounting those deaths.
This hopefully moves the discussion into one that attempts to see both sides of the equation. There are presumably lives to be saved as a result of the adoption of AI self-driving cars, though it is conceivable that some amount of car-related deaths will nonetheless be attributable to those AI self-driving cars.
Are you willing or not to seek the “good” savings of lives (or reductions in deaths), in exchange for the lives (or deaths) that will be lost while AI self-driving cars are on our roadways and being perfected (if there is such a thing)?
If you could get to AI self-driving cars sooner, say in 10 years, during which in-theory without any AI self-driving cars on the roadways you would have lost some 370,000 lives, would you do so, if you also were willing to allow for some number of car-related deaths attributable to the still-being-perfected AI self-driving cars? That's the rub.
Refer again to my Figure 1.
The "showstopper" perspective, shown as the first row of my chart, would continue to embrace the notion of no-deaths permitted via AI self-driving cars, and either not see the apparent logic of the aforementioned, or be dubious that there is any kind of net savings of lives to be had. They might argue that it could horribly turn out that the number of lives lost, as a result of the initial tryout and perfecting period of AI self-driving cars, overwhelms the number of lives that were presumably going to be saved.
I'd like to also broaden the idea of AI self-driving cars and the car-related deaths that might occur, just to get everything clearly onto the table. I'm going to consider direct deaths and also indirect deaths.
There are direct deaths, such as an AI self-driving car that rear-ends another car, and either a human in the rammed car dies or a human passenger in the AI self-driving car dies (or, of course, it could be multiple human deaths), and for which we could investigate the matter and perhaps agree that it was the fault of the AI self-driving car. Maybe the AI self-driving car had a bug in it, or maybe it was confused due to a sensor that failed, or a myriad of things might have gone wrong.
There are indirect deaths that can also occur. Suppose an AI self-driving car swerves into an adjacent lane on the freeway. There’s a car in that lane, and the driver gets caught off-guard and slams on their brakes to avoid hitting the lane-changing AI self-driving car. Meanwhile, the car behind the brake-slamming car is approaching at a fast rate of speed and collides with the braking car. This car, last in the sequence, rolls over and the human occupants are killed.
I refer to this as an indirect death. The AI self-driving car was not directly involved in the death, though it was a significant contributing factor. We'd need to sort out why the AI self-driving car made the sudden lane change, and why and how the other cars were being driven, to figure out the blame aspects. In any case, I'm going to count this kind of scenario as one in which an AI self-driving car gets involved in a death-related incident, even though it might not have been the AI self-driving car that directly generated the human death.
For safety aspects of AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/
For my article about what happens when sensors fail, see: https://www.aitrends.com/selfdrivingcars/going-blind-sensors-fail-self-driving-cars/
For my article about fail-safe considerations, see: https://www.aitrends.com/selfdrivingcars/fail-safe-ai-and-self-driving-cars/
For bugs that could be in AI systems of self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/
Okay, let’s return to my Figure 1.
There's the first row, the showstopper, consisting of the no-deaths perspective. This viewpoint is that under no circumstances at all will they be satisfied with having AI self-driving cars on the public roadway, until or unless they are assured that doing so will cause absolutely no deaths. This encompasses both no indirect deaths and no direct deaths. This viewpoint is also blind to the net lives that might be saved during an interim period of AI self-driving cars being on the roadway, and won't consider either the net lives saved or the net less-deaths possibilities.
That’s about as pure a version of a no-threshold belief as you can find.
Some criticize that camp and use the old proverb that perfection is the enemy of good. By not allowing AI self-driving cars to be on our public roadways until they are somehow guaranteed not to produce any deaths, indirect or direct, you are apparently seeking perfection and will meanwhile be denying a potential good along the way. Plus, maybe the good won’t ever materialize because of that same stance.
For the remainder of the chart, I provide eight variations of those that would be considered the some-threshold camp. This takes us into the hormetic zone.
Yes, I bring up once again the hormetic zone. In this case, it would be the zone during which AI self-driving cars might be allowed onto the roadways and doing so might provide a “good” to society, and yet we would acknowledge that there will also be a “bad” in that those AI self-driving cars are going to produce car-related deaths.
There are four distinct stances or positions about indirect deaths (see the chart rows numbered as 2, 3, 4, 5), all of which involve a willingness to "accept" the possibility of incurring indirect deaths due to AI self-driving cars being on the roadways during this presumed interim period.
For the columns, there is the situation of a belief that there will be a net savings of lives (the number of lives “saved” from the predicted number of usual deaths is greater than the number of indirect deaths generated via the AI self-driving cars), or there will be a net less-deaths (the number of indirect deaths will be greater than the number of lives “saved” in comparison to the predicted number of usual deaths).
One tricky and argumentative aspect about the counting of net lives or net deaths is the time period that you would use to do so.
There are some that would say they would only tolerate this matter if the aggregate count in any given year produces the net savings. Thus, if AI self-driving cars are allowed onto our roadways, it means that in each year that this takes place, the net lives saved must bear out in that year. Every year.
This though might be problematic. If we picked a longer period of time, say some X number of years (use 5 years as a plug-in example), maybe the net savings would come out as you hoped, though there might have been particular years within those five in which the net savings was actually a net loss.
Would you be so restrictive that it had to be just per-year, or would you be willing to take a longer time period of some kind and be satisfied if the numbers came out over that overall time period? You decide.
Per my chart, we have these four positions about indirect deaths (a small numerical sketch follows the list):
- Highly Restrictive = indirect deaths with net life savings each year mandatory (savings > losses)
- Medium Restrictive = indirect deaths with net life savings over X years (where X > 1, savings > losses)
- Low Allowance = indirect deaths with net less deaths each year mandatory (losses > savings)
- Medium Allowance = indirect deaths with net less deaths over X years (where X > 1, losses > savings)
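To see how the per-year versus over-X-years distinction plays out numerically, consider the following sketch; every figure is hypothetical, chosen solely to show how a tryout period could fail the yearly test yet pass the multi-year test.

```python
def net_savings_by_year(baseline_deaths, ai_attributable_deaths):
    """Per-year net lives saved: predicted conventional-car deaths avoided
    minus deaths attributable to AI self-driving cars (all hypothetical)."""
    return [b - a for b, a in zip(baseline_deaths, ai_attributable_deaths)]

# Hypothetical five-year tryout period (not real data).
baseline = [37_000] * 5  # predicted usual deaths each year, absent AI cars
ai_deaths = [39_000, 38_000, 36_000, 30_000, 20_000]  # attributable to AI cars

yearly = net_savings_by_year(baseline, ai_deaths)
print("per-year net savings:", yearly)                    # first two years are net losses
print("savings every year?", all(n > 0 for n in yearly))  # the per-year mandatory test
print("net over X=5 years:", sum(yearly))                 # the over-X-years test
```

With these made-up numbers, the per-year mandatory test fails (years one and two are net losses), while the five-year aggregate comes out well ahead, which is exactly the tension between the restrictive and allowance positions above.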
I realize you might be concerned and confounded about the notion of having net less deaths. Why would anyone agree to something that involves the number of losses due to AI self-driving cars being greater than the number of lives “saved” by the use of AI self-driving cars? The answer is that during this hormetic zone, we are assuming that this is something that might indeed occur, and we are presumably willing to allow it in exchange for the future lives savings that will arise once we get out of the hormetic zone.
Without seeming to be callous, take the near-term pain to achieve the longer-term gain, some might argue.
To get a “fairer” picture of the matter, you should presumably count the ongoing number of lives saved, forever after, once you get out of the hormetic zone, and plug that back into your numbers.
Let’s say it takes 10 years to get out of the hormetic zone, and then thereafter we have AI self-driving cars for the next say 100 years, and during that time the number of predicted deaths by conventional cars would be entirely (or nearly so) avoided. If so, using a macroscopic view of the matter, you should take the 100 years’ worth of potential deaths that were avoided, so that’s 100 x 37,000, which comes to 3,700,000 deaths avoided, and add those back into the hormetic zone years. It certainly makes the hormetic zone period likely more palatable. This requires a willingness to make a lot of assumptions about the future and might be difficult for most people to find credible.
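Expressed as quick arithmetic, here is that macroscopic view using the article's own 37,000-per-year figure; the 50,000 hormetic-zone toll is purely a placeholder assumption of mine.

```python
# Macroscopic ledger: add the long-run avoided deaths back into the picture.
hormetic_zone_years = 10
post_zone_years = 100
annual_conventional_deaths = 37_000   # the roughly 37,000 U.S. deaths per year
assumed_zone_deaths = 50_000          # hypothetical total toll during the zone

deaths_avoided_later = post_zone_years * annual_conventional_deaths  # 3,700,000
net_ledger = deaths_avoided_later - assumed_zone_deaths
print(f"deaths avoided after the zone: {deaths_avoided_later:,}")
print(f"net over {hormetic_zone_years + post_zone_years} years: {net_ledger:,}")
```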
Four Categories of Direct and Indirect Deaths
The remaining four positions are about direct deaths. It would seem that anyone willing to consider direct deaths would likely also be willing to consider indirect deaths, and thus it makes sense to lump the indirect and direct deaths together for these remaining categories.
Here they are:
- Mild Restrictive = direct + indirect deaths with net life savings each year mandatory (savings > losses)
- Low Restrictive = direct + indirect deaths with net life savings over X years (where X > 1, savings > losses)
- High Allowance = direct + indirect deaths with net less deaths each year mandatory (losses > savings)
- Upper Allowance = direct + indirect deaths with net less deaths over X years (where X > 1, losses > savings)
You can use this overall chart to engage someone in a hopefully intelligent debate about the advent of AI self-driving cars, doing so without a lot of hand waving and yelling, which tends to be amorphous, lacks structure, and often generates more heat than substance.
I usually begin by ferreting out whether the person is taking the showstopper stance, in which case there is not much to discuss, assuming that they have actually thought about all of the other positions. It could be that they have not considered those other positions, and upon doing so, they will adjust their mindset and begin to find another posture that fits their sensibility on the matter.
If someone is more open to the matter and willing to discuss the hormetic zone notion of the adoption of AI self-driving cars, you are likely to then find you and the other person anguishing over trying to identify the magic number X.
The person is potentially willing to be in any of the camps numbered 2 through 9, but they are unsure because of the time period that might be involved. Were X to be a somewhat larger number, such as say a dozen years, they might find it very hard to go along with the net less deaths and be only willing to go along with the net savings, and they might also say they want this to be per year, rather than aggregated over the entire time period. For a smaller X, perhaps 5 or less, they are at times more open to the other positions.
For why public perception of AI self-driving cars is like a roller coaster, see my article: https://www.aitrends.com/selfdrivingcars/roller-coaster-public-perception-ai-self-driving-cars/
For my article about my Top 10 predictions regarding AI self-driving cars, see: https://www.aitrends.com/ai-insider/top-10-ai-trends-insider-predictions-about-ai-and-ai-self-driving-cars-for-2019/
For the role of ethics and AI self-driving cars, see my article: https://www.aitrends.com/selfdrivingcars/ethics-review-boards-and-ai-self-driving-cars/
For my article about the role of the media in propagating fake news about AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/ai-fake-news-about-self-driving-cars/
The handy thing about this chart overall and the approach is that it gets the debate onto firmer ground. No wild finger pointing needed. Instead, calmly try to see this as an actuarial kind of exercise. In doing so, consider what each of the chart positions suggests.
I usually hold back a few other notable aspects that I figure can regrettably turn the discussion nearly immediately upside down.
For example, suppose that we never reach this nirvana of perfected AI self-driving cars, in spite of perhaps having allowed them to be used on our public roadways, and in the end they are still going to be involved in car-related deaths. That's a lot to take in.
Also, as I earlier mentioned, it is not especially realistic to suggest that AI self-driving cars won’t be involved in any car-related deaths, ever, even once so-called perfection is reached, per the example I gave of the pedestrian that unexpectedly steps in front of an AI self-driving car coming down the street.
In that instance, I'm saying that the AI self-driving car was working "perfectly" and noticed the pedestrian, ascertained that the pedestrian was standing still on the curb, and the self-driving car was going to drive past the pedestrian, just as any of us human drivers would, and then the pedestrian, without any sufficient warning, jumps into the street.
The physics involved preclude the AI self-driving car from doing anything other than hitting the pedestrian. Yes, maybe the brakes are applied or the AI self-driving car attempts to swerve away, but if the pedestrian does this with no warning and only a few feet in front of the AI self-driving car, there's no braking or swerving that can be done in sufficient time to avoid the matter.
How many of those kinds of “perfected” AI self-driving car related car-deaths will we have? It’s hard to say. I’ve previously warned that we are going to have pranks by humans toward AI self-driving cars, which indeed has already been occurring. It could be that some humans will try to trick an AI self-driving car and get killed while doing so. There might be other instances of humans not paying attention and getting run over, not because of a prank and simply because it happens in the real-world that we live in.
For my article about pranking of AI self-driving cars, see: https://www.aitrends.com/selfdrivingcars/pranking-of-ai-self-driving-cars/
For nutty things like jumping out of moving cars, see my article: https://www.aitrends.com/selfdrivingcars/shiggy-challenge-and-dangers-of-an-in-motion-ai-self-driving-car/
For the Uber self-driving car incident that killed a pedestrian, see my article: https://www.aitrends.com/selfdrivingcars/initial-forensic-analysis/
For my follow-up article about the Uber self-driving car incident, see: https://www.aitrends.com/selfdrivingcars/ntsb-releases-initial-report-on-fatal-uber-pedestrian-crash-dr-lance-eliot-seen-as-prescient/
For my article exposing the myths about back-up drivers, see: https://www.aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/
Conclusion
Whether you know it or not, we are currently in the hormetic zone. AI self-driving cars are already on our public roadways.
So far, most of the tryouts include a human back-up driver, but as I’ve repeatedly stated, a human back-up driver does not translate into a guarantee that an AI self-driving car is not going to be involved in a car-related death. The Uber self-driving car incident in Arizona is an example of that unfortunate point.
Per my predictions about the upcoming status of AI self-driving cars, we are headed toward an inflexion point. There are going to be more deaths involving AI self-driving cars, including direct and indirect deaths.
How many such deaths will be tolerated before the angst causes the public and regulators to decide to bring down the hammer on AI self-driving cars tryouts on our roadways?
If the threshold is going to be a small number such as one death or two deaths, it pretty much means that AI self-driving cars will no longer be considered viable on our public roadways. It would then be up to closed-tracks and simulations to try and "perfect" AI self-driving cars. Yet, as per my earlier points, a pell-mell rush to get AI self-driving cars off the roadways could dampen the pace of advancing them, which, as mentioned, could imply that we'll be incurring conventional car deaths that much longer.
Can the public and regulators view this advent of AI self-driving cars as an LNT type of problem? Is there room to shift from a no-threshold to a some-threshold? Can the use of hormesis approaches give guidance for looking at the larger picture?
As an aside, one unfortunate element of referring to LNT is that questioning it is, in a sense, not well regarded by those dealing with truly toxic substances, who tend to make the case that no-threshold is indeed the way to go. I don't want to overplay the LNT analogy, since I don't want others to somehow ascribe to AI self-driving cars that they are a type of radiation or carcinogen needing to be abated. Please do keep that in mind.
Can the scourge of any deaths at the “hands” of an AI self-driving car be tolerated as long as there is progress toward reducing conventional cars deaths?
It's a lot for anyone to consider. It certainly isn't going to lend itself to a debate conducted 140 characters at a time. It's more complex than that. Might as well start thinking about the threshold problem right now, since we'll soon enough find ourselves completely immersed in the soup of it. Things are undoubtedly going to come to a boil.
Copyright 2019 Dr. Lance Eliot
This content is originally posted on AI Trends.