Super-Intelligent AI Paperclip Maximizer Conundrum and AI Self-Driving Cars

By Lance Eliot, the AI Trends Insider

Paperclips. They quietly do their job for us. Innocent, simple, nondescript. You probably have paperclips right now somewhere near you, doing their duty by holding together a thicket of papers. In the United States alone there are about 11 billion paperclips sold each year. That’s about 34 paperclips per American per year. You likely have some straggler paperclips in your pocket, your purse, in the glove box of your car, and in a slew of other places.

Little did you know the danger you face.

There is a paperclips apocalypse heading our way. Locking your doors won’t stop it. Tossing out the paperclips you have in-hand won’t help. Moving to a remote island will not particularly increase your chances of survival. Face the facts and get ready for the dawning of the paperclips war and the end of mankind.

What am I talking about? Have I gone plain loco?

I’m referring to what is obscure to the everyday person but reasonably well-known in AI circles as the “paperclip maximizer” problem. It goes somewhat like this.

As humans, we build some kind of super-intelligent AI. Of the many things we end up asking the super-intelligent AI to do, one request might be that it make paperclips for us. Seems simple enough. The super-intelligent AI can hopefully do something as relatively easy as running a manufacturing plant that bends little wiry pieces of thin steel into paperclips for us.

The super-intelligent AI is trying to be as helpful to us as it can be. Almost like a brand-new puppy that will do just about anything to make you happy, including wagging its tail, jumping all over you, and the like, the super-intelligent AI opts to really seriously get into the making of paperclips for mankind. It begins to acquire all of the available steel in the world so as to be able to make more paperclips. It quickly and inescapably opts to convert more and more of our existence and Earth into a magnificent paperclip making factory.

The super-intelligent AI assumes of course that humans will go along with this, since it was humans that started the super-intelligent AI on this quest.

If there are humans that happen to wander along during the quest and try to get in the way of making paperclips, well, those humans will need to one-way-or-another be gotten out of the way. Paperclips must be made. Paperclips are going to flourish and if it takes all of the globe’s resources to do so, the super-intelligent AI will find a means to make it occur.

Think of the famous movie 2001: A Space Odyssey and how HAL, the AI system running the spaceship, tried to stop the astronauts (I’m not going to say much more about the movie because I don’t want to spoil the plotline for those of you that haven’t seen it, though, come on, you should have already seen the movie!).

This paperclip apocalyptic scenario is credited to Nick Bostrom, an Oxford University philosophy professor who first mentioned it in his now-classic 2003 piece “Ethical Issues in Advanced Artificial Intelligence” (see https://nickbostrom.com/ethics/ai.html), published in the volume Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, and which eventually became a darling of hypothetical AI super-intelligence takeover discussions and debates.

The paperclips scenario has spawned numerous variants.

I had mentioned herein that we humans asked the super-intelligent AI to make paperclips for us. But, you could also take the position that the super-intelligent AI for whatever reason decided to make paperclips without us humans even asking it to do so.

Notice though that either way, the making of the paperclips seems like a rather innocent and benign act. That’s a crucial aspect underlying the nature of the debate.

We could of course posit that the super-intelligent AI wants to be overtly evil and is out to kill off humans, or that it fiendishly plots to make paperclips as a means to destabilize, overthrow, and imprison or destroy all of mankind. That is not the essence of the paperclip scenario, though (there are lots of other scenarios that portray AI as a heinous destroyer of humanity). Instead, let’s go with the theme that the super-intelligent AI happens to get into the paperclip making business and then things go awry.

Let’s consider some excerpts of what Bostrom had to say when he first postulated the paperclips scenario.

Superintelligence with Goal of Making Paperclips

When discussing the advent of super-intelligent AI — “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal.”

And, here’s another related excerpt:

“Another way for it to happen is that a well-meaning team of programmers make a big mistake in designing its goal system. This could result, to return to the earlier example, in a superintelligence whose top goal is the manufacturing of paperclips, with the consequence that it starts transforming first all of earth and then increasing portions of space into paperclip manufacturing facilities.”

The choice of paperclip making in the scenario of the imaginary super-intelligent AI was kind of handy, since we already accept that paperclips are rather innocent and benign. If the postulation had been the making of atomic bombs, it would not have been as helpful to the discussion, because then we might have all gotten wrapped up in the fact that what was being made is inherently dangerous and can kill.

In terms of paperclips, though I did one time get a cut from a paperclip, they otherwise are relatively tame and not especially threatening. I have no particular grudge against paperclips and accept them with open arms.

The fact that the scenario encompasses making paperclips, rather than just admiring or using them, provides an essential element of the underlying theme. The super-intelligent AI is underway on a task that will require physical materials and the acquisition and consumption of resources. I think we can all envision how this might end up starving the world, with the super-intelligent AI scooping up everything that could be used to make paperclips. Vivid imagery!

For those of you that aren’t so keen on the paperclips aspect per se, there are other similar exemplars that are often utilized. For example, you can be a bit more lofty by substituting the role of the paperclips with instead a quest to solve the Riemann Hypothesis.

The Riemann Hypothesis involves a key question about the nature and distribution of prime numbers. Bernhard Riemann proposed the hypothesis in 1859 and mathematicians have been trying to prove or disprove it ever since. It is so important that it is considered a vaunted Millennium Prize Problem and sits in the same ranks as the computer science question of whether P equals NP. Some say that pure mathematicians are continually slaving away at the Riemann Hypothesis and consider it to be one of the greatest unsolved mathematical puzzles.

In the case of the super-intelligent AI, you can scrap the story about the paperclips, and instead replace the paperclips with the super-intelligent AI opting to try and solve the Riemann Hypothesis instead. To solve the mathematical puzzle, the super-intelligent AI once again grabs up the world’s resources and uses them to participate in working toward a solution. If humans get in the way of the super-intelligent AI during this quest, those pesky humans will be dispensed with in some fashion or another.

See how that version is a bit more lofty and refined?

Paperclips are mundane. Everybody knows about paperclips. If you juice up the scenario by referring to the Riemann Hypothesis, you’ll get others to perceive the scenario as more highfalutin. You can even suggest that ultimately the super-intelligent AI would turn the world into computronium (there’s a word you likely haven’t used lately), meaning that the planet would essentially be turned into one gigantic computing device, ostensibly used by the super-intelligent AI in this case to try to ferret out the Riemann Hypothesis.

Personally, I usually opt to use the paperclips version since it is easier to explain and also the notion of the super-intelligent AI coopting the world’s resources seems to fit better to a situation involving the physical manufacturing of something. Anyway, choose any version that you prefer.

An area of AI known as “instrumental convergence” tends to use the paperclips scenario (or equivalent) as a basis for discussing what might happen once we are able to produce super-intelligent AI systems. The crux is that we might have super-intelligent AI that has the most innocuous of overall goals, such as making paperclips, but for which things go haywire and the super-intelligent AI inadvertently wipes us all out (that’s a simplification, but you get the idea).

When I say that things go haywire, I don’t want you to infer that the AI has a mistake or fault inside of it. Let’s assume for the moment that this super-intelligent AI is working as we designed it and built it to work. Of course, yes, there could be something that goes amiss inside the AI and it goes on a rampage like a crazed Godzilla, but we’ll put that to the side for the moment.

Imagine that we’ve created this super-intelligent AI and it is working as we intended, or at least as far ahead as we were able to look and imagine what we thought we intended. Keep in mind that maybe we can only see a couple of moves ahead in the game of life, much like a chess player who can only see a move or two ahead (the depth of look-ahead is often referred to as ply). Perhaps, by the time things get to move three, we suddenly realize, oops, we goofed and set in motion something that is bad for all of us. Ouch!

Anyway, let’s get back to the end-goals matter.

Most of the time, we usually focus solely on the end-goals of these super-intelligent AI systems. Was the end-goal to destroy all of humanity? If so, it certainly makes sense that the super-intelligent AI might do exactly as so built, namely it might attempt to destroy all of mankind and thus succeed at what we set it up to do. Congrats, super-intelligent AI, you succeeded, we’re all dead.

The end-goal is almost too easy of a line-of-thought. To go deeper, suppose you have an end-goal that looks pretty good and innocent and satisfactory. Meanwhile, you have intermediary kinds of goals, often not getting as much attention as the end-goals, but nonetheless those intermediary goals are crucial to gradually getting toward the end-goals.

Suppose the intermediary goals inadvertently allow the end-goal to get somewhat twisted out of shape. This can happen because the intermediary goals themselves aren’t well stated, or because you omitted an intermediary goal that should have been included.

You might have insufficient intermediary goals that do not provide a proper driver toward the end-goals, and thus the attempt to reach the end-goal goes astray accordingly. To make paperclips, I might not have included an intermediary goal that says do not destroy humanity in whatever quest you are undertaking. By that omission, I have left the super-intelligent AI free to take actions that achieve the end-goal and yet have rather adverse consequences in doing so.
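
To make the omission concrete, here is a toy sketch in Python (the names, resources, and numbers are all invented for illustration, not anyone’s actual system): the end-goal stays fixed, and the behavior hinges entirely on which intermediary constraints happen to be supplied.

```python
# Toy illustration: an optimizer whose end-goal is "make as many paperclips
# as possible" and whose only restraint is whatever intermediary constraints
# we remembered to list.

def plan_paperclip_production(resources, off_limits):
    """Greedily convert every permitted resource into paperclips."""
    plan = []
    for resource, units in resources.items():
        # The optimizer only honors constraints that were explicitly stated.
        if resource in off_limits:
            continue
        plan.append(f"convert {units} units of {resource} into paperclips")
    return plan

resources = {"scrap steel": 1000, "farmland": 500, "housing": 200}

# With a sensible intermediary constraint, the plan stays benign.
print(plan_paperclip_production(resources, off_limits={"farmland", "housing"}))

# Omit the constraint and the very same end-goal consumes everything in sight.
print(plan_paperclip_production(resources, off_limits=set()))
```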

Some assert that we need to have fundamental AI-drives that will be included in the intermediary goals.

Those fundamental AI-drives are aspects such as the AI having a sense of self-preservation. Another one might be the preservation of mankind. You can liken these AI-drives to something like Isaac Asimov’s so-called Three Laws of Robotics, which he introduced in a 1942 science fiction story. Be aware that Asimov’s Laws are exceedingly simplistic and have been criticized as over-simplifying the foundations for a super-intelligent AI system; keep in mind, too, that they appeared in a science fiction short story and not a design manual for the super-intelligent AI of the future.

In any case, the ethical aspects of AI are certainly worthy of attention and this will increasingly be the case.

The more that AI actually becomes the futuristic AI that has been envisioned, the closer we get to having to deal with the practical aspects of these various doomsday scenarios. Some are worried that we’ll let the horse out of the barn and come too late to figuring out the ethical aspects of AI. It does seem to make sense that we ought to iron out these aspects before the super-intelligent AI making paperclips or solving the Riemann Hypothesis destroys us all.

For instrumental convergence, the key takeaway is that we might have relatively decent end-goals for our super-intelligent AI, yet the underlying intermediary goals that would have led the super-intelligent AI down a more rightful path were lacking or omitted. The set of so-called instrumental goals or sub-goals, often referred to as instrumental values, is vital to the journey on the way to the end-goals. An adverse instrumental convergence can occur, meaning that these intermediary goals don’t mesh together well and thus fail to stop or prevent distortions during the journey to a seemingly helpful and beneficial end-goal.

Pinned Down Playing Capture the Flag

This reminds me of when my children were quite young and one day we went to a local park. There were some other children there that we did not know. My children mixed-in with those unfamiliar children, and it was determined by the collective group that they would play a game of capture the flag. This is normally a simple and innocent enough game involving placing an item such as a flag or T-shirt or whatever at one end of the park, doing so for one team, and likewise at the other end for the other team (having then been divided into two teams, or more if they had a lot of kids).

Thus, the end-goal involves capturing the flag of the other team.

Usually, this entails running around and playfully having a good time. The flag capturing was in my view not nearly as crucial as the kids getting some exercise and having a good time. It also involved working together as a group. This was a means to hone their teamwork skills and deal with others who might or might not be familiar with group dynamics. When my kids were very young, there weren’t any group dynamics per se; each child just ran around wildly. As they got older, working collaboratively with the group became more reasoned.

Well, here’s what happened in this one particular instance and it still stands out in my mind because of what took place. Some of the kids opted to pounce on my children and pin them to the ground, incapacitating them so that they could not run and try to help capture the flag. And don’t think that this pinning action was delightful or sweet. The bigger kids were pushing, shoving, hitting, kicking, and doing whatever they could to keep my children (and some of the others) pinned to the dirt.

I was shocked. I looked at the other parents who happened to be at the park, and none of them seemed to be paying attention and none of them seemed to care about how the game was unfolding. My kids were old enough that they didn’t like it when I might try to intervene, and they had reached the age of wanting to take care of themselves. Should I step into this melee? Should I leave it be? I decided to ask one of the parents what he thought of the actions occurring. This particular parent shrugged his shoulders and said that kids will be kids. He was somewhat proud that they had discovered a means to win the game.

The end-goal was to capture the flag. I had assumed that these children would all have somewhat similar intermediary goals such as don’t beat-up another kid to win a playful game. That’s what my children knew from how I was raising them. Other parents there were obviously raising their children with a different set of instrumental values or instrumental goals, or maybe had omitted some that I had already tried to ingrain in my children.

In any case, this highlights what might happen with super-intelligent AI. We could program a super-intelligent AI that has seemingly innocuous end-goals, and yet the pursuit of those end-goals could go in a direction that we did not anticipate nor desire. Whether it is paperclips or solving a mathematical puzzle or capturing the flag, we need to be wary of setting in motion a blind pursuit of an end-goal and make sure that the utility function driving the pursuit of that end-goal has a proper and appropriate balance to it.

You might want to take a look at some of my prior pieces about how AI could become a type of Frankenstein, and also aspects about the AI singularity, and so on.

For my article about AI as a potential Frankenstein, see: https://aitrends.com/selfdrivingcars/frankenstein-and-ai-self-driving-cars/

For the potential coming singularity of AI, see my article: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For idealism about AI, see my article: https://aitrends.com/selfdrivingcars/idealism-and-ai-self-driving-cars/

For the Turing test and AI, see my article: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

These “thought experiments” about the future of AI are often seen as somewhat abstract and not especially practical. Is AI going to be an existential threat to humanity? That’s quite a way off in the future. There isn’t any kind of AI today that even remotely has anything to do with super-intelligence. Debates of this kind will often meander around and cover a lot of ground about the nature of intelligence and the nature of artificial intelligence. Quite interesting and thought provoking, yes, but not especially pertinent to today, nor likely anytime near-term (nor even mid-term).

One criticism often tossed around about these debates is that the AI that is supposed to be super-intelligent appears to behave in ways that don’t seem to be super-intelligent. Would a truly super-intelligent AI be so super-stupid that it did not realize that the obsession with making paperclips was to the detriment of everything else? What kind of super-intelligent AI is that?

Indeed, in today’s world, I’d tend to suggest that super-stupid AI is a much more immediate and worrisome threat than the super-intelligent AI.

When I use the phrase “super-stupid AI” please don’t get offended. Is the AI that is currently running a robotic arm in a manufacturing plant and doing some relatively sophisticated work the kind of AI that would be super-intelligent? I’d say no. Is that AI super-stupid? I would say it is closer to being super-stupid than it is to being super-intelligent, and thus if you forced me to decide which of those two categories it fits into, I’d pick super-stupid.

I would feel safer telling people who come in contact with that AI robotic arm that it is super-stupid, which hopefully would put those people into an alert mode of being cautious around it, versus telling them it was super-intelligent AI, in which case they might falsely let down their guard and get clobbered by it. They would likely assume that a super-intelligent AI system would be smart enough to not strike them when they happened to get too close to the equipment.

If you prefer, I can use the phrase super-ignorant instead of super-stupid, which might be more palatable and applicable. But let’s for now go with the notion that we are going to have super-intelligent AI that also has warts and flaws and at times acts like a child with no comprehension of the world, even though we are calling it super-intelligent. It is a blend of super-stupidity, super-ignorance, and super-intelligence, all mixed into one.

For my article about the limits of today’s AI when it comes to common sense reasoning, see: https://aitrends.com/selfdrivingcars/common-sense-reasoning-and-ai-self-driving-cars/

For issues about AI boundaries, see my article: https://aitrends.com/ai-insider/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For reasons to consider starting over on AI, see my article: https://aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

For conspiracy theories about AI, see my article: https://aitrends.com/selfdrivingcars/conspiracy-theories-about-ai-self-driving-cars/

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. In some ways, the AI being developed and fielded by many of the auto makers and tech firms in the self-driving car realm is akin to the paperclip maximizer problem.

There are AI self-driving cars that are going to have some semblance of super-intelligent AI, combined with super-stupid AI and super-ignorant AI. I’d like to describe how this might occur and also offer indications of what we ought to all be doing because of it.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human and nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed into the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less than Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a brief code sketch follows the list):

•         Sensor data collection and interpretation
•         Sensor fusion
•         Virtual world model updating
•         AI action planning
•         Car controls command issuance
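
As a rough illustration, here is a minimal sketch of that cycle in Python (every function name and value is an invented stand-in, not any auto maker’s actual code); the point is simply the repeated flow from raw sensor readings through fusion and world-model updating to action planning and issuing a control command.

```python
def collect_sensor_data():
    # Stand-in for reading cameras, radar, LIDAR, ultrasonic sensors, GPS, etc.
    return {"camera": "pedestrian ahead", "radar": "object at 30 meters"}

def fuse_sensor_data(readings):
    # Reconcile overlapping or conflicting readings into one coherent picture.
    return {"obstacle": "pedestrian", "distance_m": 30}

def update_world_model(world_model, fused):
    # Fold the fused snapshot into the virtual world model.
    world_model["obstacles"] = [fused]
    return world_model

def plan_action(world_model):
    # Trivial planner: brake whenever any obstacle is being tracked.
    return "brake" if world_model["obstacles"] else "maintain speed"

def issue_car_controls(action):
    print(f"issuing control command: {action}")

world_model = {"obstacles": []}
fused = fuse_sensor_data(collect_sensor_data())
world_model = update_world_model(world_model, fused)
issue_car_controls(plan_action(world_model))
```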

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Combination of Super-Intelligent, Super-Stupid and Super-Ignorant

Returning to the super-intelligent AI, let’s consider ways in which AI that is being developed today and fielded into AI self-driving cars is going to be a combination of super-intelligent, super-stupid, and super-ignorant.

I’ll start by providing an example that seems rather farfetched, but it was one that I think offers a handy paperclip-like maximizer scenario and is squarely in the AI self-driving car realm.

Ryan Calo, an Associate Professor of Law at the University of Washington in Seattle, offered an intriguing and disturbing circumstance of a fictional AI self-driving car that goes too far in a quest to achieve maximum fuel efficiency and in so doing asphyxiates the human owners of the AI self-driving car:

“The designers of this hybrid vehicle provide it with an objective function of greater fuel efficiency and the leeway to experiment with systems operations, consistent with the rules of the road and passenger expectations. A month or so after deployment, one vehicle determines it performs more efficiently overall if it begins the day with a fully charged battery. Accordingly, the car decides to run the gas engine overnight in the garage, killing everyone in the household” (see his article entitled “Is the Law Ready for Driverless Cars” in the May 2018 issue of the Communications of the ACM, page 34).

This morbid scenario provides another instance of the paperclip maximizer problem.

The AI of the self-driving car was provided with a seemingly innocuous end-goal, namely to achieve high fuel efficiency. Somehow the AI devised an oddball logical contortion that by running the gas engine and depleting the gas it would end up with a fully charged battery, and thereby arrive at the desired end-goal of fuel efficiency. We can quibble about various facets of this scenario and note that it has some loose ends (if you dig into the logic of it), but it is another handy example of how super-intelligent AI can get things messed up.
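
To see the flaw in miniature, consider this hedged Python sketch (the strategies, mileage numbers, and penalty weight are entirely invented): an objective that scores fuel efficiency alone happily selects the lethal plan, while the same comparison with a safety penalty does not.

```python
# Hypothetical strategies with made-up scores; "hazard" flags a dangerous side effect.
strategies = {
    "normal overnight shutdown":       {"mpg": 48, "hazard": 0.0},
    "run gas engine in closed garage": {"mpg": 52, "hazard": 1.0},
}

def score_efficiency_only(s):
    # The bare objective function: fuel efficiency and nothing else.
    return s["mpg"]

def score_with_safety_penalty(s, penalty=1000):
    # A heavy penalty on hazardous side effects keeps the choice sane.
    return s["mpg"] - penalty * s["hazard"]

best_naive = max(strategies, key=lambda k: score_efficiency_only(strategies[k]))
best_safe = max(strategies, key=lambda k: score_with_safety_penalty(strategies[k]))
print(best_naive)  # picks the hazardous plan
print(best_safe)   # picks the benign plan
```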

Let’s consider something directly applicable to the emerging AI self-driving cars of today.

Today’s AI self-driving cars are going to have Natural Language Processing (NLP) capabilities to converse with the human occupants of AI self-driving cars.

Some falsely think that human occupants will only utter a destination and then remain silent during the rest of the driving journey. If you consider this for a moment, you’d realize it is a quite naïve way to think about the interaction needed between the AI and the human occupants inside an AI self-driving car. There is likely going to be a need for the human occupant to alter the indicated destination and request a different one, or seek to have intermediary destinations added, or raise a concern about the driving aspects, and so on.
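
As a purely illustrative sketch (the intent names and keyword matching below are invented; a production system would rely on far richer natural language understanding), here are the kinds of mid-journey requests such an interface would need to distinguish.

```python
def classify_passenger_request(utterance):
    # Crude keyword routing, just to enumerate the kinds of intents involved.
    text = utterance.lower()
    if "instead" in text or "change destination" in text:
        return "replace_destination"
    if "stop at" in text or "on the way" in text:
        return "add_intermediate_stop"
    if "too fast" in text or "slow down" in text:
        return "driving_concern"
    return "clarify_with_passenger"

print(classify_passenger_request("Take me to the pharmacy instead"))
print(classify_passenger_request("Can we stop at the bank on the way?"))
print(classify_passenger_request("You're going too fast for this rain"))
```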

For conversing with an AI self-driving car to give driving commands, see my article: https://aitrends.com/selfdrivingcars/car-voice-commands-nlp-self-driving-cars/

For the socio-behavioral aspects of humans instructing AI self-driving cars, see my article: https://aitrends.com/features/socio-behavioral-computing-for-ai-self-driving-cars/

For humans helping to teach AI self-driving cars via Machine Learning aspects, see my article: https://aitrends.com/ai-insider/human-aided-training-deep-reinforcement-learning-ai-self-driving-cars/

For more about Machine Learning and AI self-driving cars, see my article: https://aitrends.com/ai-insider/occams-razor-ai-machine-learning-self-driving-cars-zebra/

Suppose my Level 5 AI self-driving car is parked in my garage and I come out to it since I have a driving trip in mind. I get into the self-driving car and tell the AI that I want to be driven to the grocery store. I lean back in my comfy passenger seat (in a Level 5, there aren’t any driver’s seats), and wait for the AI to start the self-driving car and head over to the store. Instead, the AI refuses to start up the self-driving car.

My first hunch is that the AI is suffering from a fault or failure. I run an internal systems diagnostic test and it reports that the AI is working just fine. I ask the AI again, this time adding a “please”: take me to the grocery store. The engine still doesn’t start. The self-driving car remains still. It doesn’t look like I’m going to be getting my ride over to the store anytime soon.

Fortunately, this AI happens to have an explanation-generation capability. I ask the AI to explain why my command is not being obeyed. I’d been wondering whether maybe I hadn’t phrased my request aptly. Maybe the AI is misunderstanding what I am asking it to do? An articulated explanation of the AI’s logic for not abiding by my command might reveal where the hold-up seems to be.

The AI reveals that it will not take me to the store because it is not safe to do so. Furthermore, one of its top-priority end-goals is to always try to make sure that any human passengers in the AI self-driving car are kept safe. Since the end-goal is to make sure in this case that I remain safe, and since the AI has ascertained that it is unsafe to drive over to the store, the AI “logically” deduced that it should not take me there and therefore is not going to start on the driving journey.

Impeccable logic, it would seem.

But is this logic an absurdity that has gone astray? You could claim that never leaving the garage would always be the safest act for the AI self-driving car. The moment that the AI self-driving car gets onto a roadway and in motion, the odds of a crash or other incident would certainly seem to rise. To ensure my safety, the AI self-driving car can just sit quietly in the garage and never move. I think we might all agree that this would not be a very useful AI self-driving car if it never left the garage.
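
Here is a small hedged sketch of that degenerate logic (the risk figures are invented): if the decision rule weighs only passenger risk, “stay parked forever” always wins, while adding any weight for actually completing trips restores useful behavior.

```python
options = {
    "stay parked in the garage": {"risk": 0.000, "trip_completed": 0},
    "drive to the grocery store": {"risk": 0.001, "trip_completed": 1},
}

def safety_only_score(option):
    # Minimizing risk alone: any nonzero risk loses to doing nothing.
    return -option["risk"]

def balanced_score(option, risk_weight=100):
    # Trade off usefulness against risk instead of minimizing risk alone.
    return option["trip_completed"] - risk_weight * option["risk"]

print(max(options, key=lambda k: safety_only_score(options[k])))  # never leaves the garage
print(max(options, key=lambda k: balanced_score(options[k])))     # makes the trip
```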

This could be an indicator of the paperclip maximizer problem pervading the AI of my AI self-driving car.

I’ll add a twist though to showcase that you cannot always jump right away to the paperclips. Suppose that I live in an area that has just been hit by a massive hurricane. The roads are flooded. Electrical power poles have fallen over and there are streets with electrical lines dangling across them. Local emergency agencies have advised the public to stay in place and not venture out onto the roads.

What do you think of the AI now?

It could be that the AI was electronically privy to the hurricane conditions and has determined that the self-driving car should not venture out. My safety is indeed in jeopardy if the AI were to proceed to head to the grocery store. Thank goodness for the AI. Probably saved my life.

Of course, that’s not quite the end of the matter, since as a human, perhaps I ought to be able to override the AI hesitation, but that’s something I’ve discussed in several of my other articles and I’ll skip covering that aspect herein.

For my article about the role of AI self-driving cars when confronted with hurricanes, see: https://aitrends.com/selfdrivingcars/hurricanes-and-ai-self-driving-cars-plus-other-natural-disasters/

For being able to stop an AI self-driving car remotely or directly, see my article: https://aitrends.com/ai-insider/virtual-spike-strips-and-ai-self-driving-cars/

For the dangers of a freezing up AI self-driving car, see my article: https://aitrends.com/selfdrivingcars/freezing-robot-problem-and-ai-self-driving-cars/

For cognitive AI aspects that can go awry, see my article: https://aitrends.com/selfdrivingcars/cognitive-timing-for-ai-self-driving-cars/

Overall, the paperclips maximizer problem can be quite useful for even today’s AI.

It is more than merely an abstract thought experiment about a future world that we might not see for eons to come. You don’t necessarily need to have super-intelligent AI to be considering the paperclips threat. Sophisticated AI systems of today that have end-goals and intermediary goals and values can get themselves into a bind by not having a sufficient form of interlacing logic.

I’m especially concerned about AI self-driving cars that are emerging from the auto makers and tech firms and whether or not the AI developers are properly and appropriately worried about the paperclips scenario. They are so focused right now on getting an AI self-driving car to drive on a road and not hit people, which barely scratches the surface of what a true AI self-driving car needs to do, and thus there is not much attention to this kind of “futuristic” paperclips maximizer issue.

Imagine too if the OTA (Over-The-Air) updating capability of an auto maker or tech firm were to send out an updated set of goals and sub-goals that led their entire fleet of AI self-driving cars into an unexpected bind. Perhaps all of the AI self-driving cars in their fleet might suddenly come to a halt or take some other untoward action, prompted by conflicting sub-goals and goals, or sub-goals that undermine the end-goals, and so on. I mention this because I’ve only been discussing herein an individual self-driving car and its own AI issues, and yet ultimately there will presumably be thousands, hundreds of thousands, or many millions of such cars on our roadways.
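
One way to think about guarding against this, sketched below in Python (the goal names and the check are entirely hypothetical, not any auto maker’s actual OTA process), is to vet a proposed goal-and-sub-goal update for obvious gaps before it is pushed fleet-wide.

```python
# A hypothetical OTA goal update that forgot to carry along any safety goal.
proposed_update = {
    "end_goals": ["minimize trip time"],
    "sub_goals": ["prefer highways", "minimize battery drain"],
}

def lacks_safety_constraints(update):
    # Flag updates whose goals never mention safety-related terms at all.
    rules = update["end_goals"] + update["sub_goals"]
    return not any("safe" in rule or "pedestrian" in rule for rule in rules)

if lacks_safety_constraints(proposed_update):
    print("hold fleet-wide rollout: update contains no explicit safety goal")
else:
    print("proceed with a staged rollout")
```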

For my article about OTA, see: https://aitrends.com/selfdrivingcars/air-ota-updating-ai-self-driving-cars/

For the concerns about human response times to taking over the driving task, see my article: https://aitrends.com/selfdrivingcars/not-fast-enough-human-factors-ai-self-driving-cars-control-transitions/

For the product liability aspects that AI self-driving car makers are going to face, see my article: https://aitrends.com/selfdrivingcars/product-liability-self-driving-cars-looming-cloud-ahead/

For my article about the safety aspects of AI self-driving cars, see: https://aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

Paperclips Apocalypse. Riemann Hypothesis Armageddon. Or, perhaps the AI self-driving car Day-of-Reckoning. Not a rosy picture of the future.

We can already use “thought experiments” right now to figure out that AI self-driving cars need to be designed, programmed, and fielded in a manner that will be beneficial to mankind, and AI developers need to be wise and leery of hidden or unsuspected out-of-control maximizers and other ailments of systems logic that could turn their beloved AI self-driving cars into our worst nightmare.

Either way, I’d advise you to keep your eye on those paperclips; they might be needed to defuse a super-intelligent AI gone amok.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 



