Chaff Bugs and AI Autonomous Cars

By Lance Eliot, the AI Trends Insider

In the remake of the movie The Thomas Crown Affair, the main character goes into an art museum to ostensibly steal a famous work of art, and does so attired in the manner of the artwork The Son of Man (a man wearing a bowler hat and an overcoat).

Spoiler alert: he arranges for dozens of other men to come into the museum dressed similarly to him, thus confounding the efforts of the waiting police, who had been tipped off that he would come there to commit his thievery. By having many men serve as decoys, he pulls off the heist, and the police are exasperated at having to check the numerous decoys yet are unable to nab him (he sneakily changes his clothes).

This ploy was a clever use of deception.

During World War II, there was the invention of chaff, which was also a form of deception.

Radar had just emerged as a means to detect airplanes in flight and thereby shoot them down more accurately. The radar device would send out a signal that bounced off the airplane and returned to the device, thus revealing where the airplane was.

It was hypothesized that there might be a means to confuse the radar device by putting something into the air that would seem like an airplane but was not an airplane. At first, the idea was to have something suspended from an airplane or maybe have balloons or parachutes that could contain a material that would bounce back the radar signals.

A flying airplane could potentially release the parachutes or balloons that had the radar reflecting material.

After exploring this notion, the idea was further advanced by the discovery that pieces of metal foil could be dropped directly from the airplane, an easier way to create this deception.

The first versions were envisioned as acting in a double-duty fashion, being the size of a sheet of paper and containing propaganda written on them. This would provide a twofer (two-for-one): it would confuse the radar, and once landed on the ground it would serve as a propaganda leaflet.

It turns out that strips of aluminum foil were much more effective.

This nixed the propaganda element of the idea. With the strips, you could dump out hundreds or even thousands at once, bundled together but intended to float apart from each other once airborne. The strips would flutter around and the radar would ping off of them. With a cloud of them floating in the air, the radar would be overwhelmed and unable to identify where the airplane was.

Interestingly, chaff was considered such a significant defensive weapon that neither the Allies nor the Germans were willing to use it at first, even after each had independently discovered the invention.

It was thought that once it was used, even just one time, the other side would also discover it and be able to use the same trick.

We have two examples then of the use of decoys for deception purposes.

One is the bowler-hat-attired character in a movie; the other is the real-world use of chaff during World War II.

Of course, there are many other ways in life that you might come across these kinds of ploys.

One relatively new such use applies to computer software.

Use Of Deception And Chaff Bugs In Software

Researchers at NYU posted an innovative research paper about their use of chaff bugs in software (work done by Zhenghao Hu, Yu Hu, and Brendan Dolan-Gavitt).

This reuse of the old “decoys trickery” is an intriguing modern approach to trying to bolster computer security.

Some of you will find it curious.

Some of you will think it genius.

Some of you will think it silly and unworkable.

Here’s the deal.

We already know that computer hackers (in this case, the word “hackers” is being used to imply thieves or hooligans) will often try to find some exploitable aspect of a software program so that they can then get the program to do their bidding or otherwise act in an untoward manner.

Famous exploits include being able to force a buffer overflow, which might allow the program to suddenly access areas of memory it normally should not. Or, it might be that the exploit causes the program to come to a halt or crash. This might be desirable to the hacker, either because it allows them to take some other action (perhaps the program was acting as a guardian), or because it stokes the havoc or confusion that the hacker is hoping for.
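To make the notion concrete, here is a minimal, hypothetical C sketch of the classic stack buffer overflow flaw described above; the function and buffer names are invented purely for illustration and are not taken from the research paper.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical parser: it copies an incoming field into a fixed-size
 * stack buffer without checking the field's length. Any input longer
 * than 15 characters (plus the terminator) overruns the buffer and can
 * corrupt adjacent memory -- the classic opening an attacker looks for. */
static void parse_field(const char *incoming)
{
    char field[16];
    strcpy(field, incoming);  /* no bounds check: this is the exploitable flaw */
    printf("parsed: %s\n", field);
}

int main(void)
{
    parse_field("short input");                                   /* fine */
    parse_field("a-far-too-long-input-that-overruns-the-buffer"); /* overflow */
    return 0;
}
```

A safer version would use a bounded copy such as snprintf or strncpy; the point here is simply what an exploitable spot tends to look like.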

Indeed, in the news, it was revealed that certain HP All-in-One printers could be sent a rigged image via the fax portion of the printer, causing a buffer overflow that could allow someone to remotely achieve code execution on the device.

Once remote code execution was achieved, the nefarious person could get the printer to do other potentially bad things by sending it additional code and commands. Plus, if the printer sat behind a firewall and had internal network access, the attacker could possibly sneak into other devices connected to that network too.

Skilled hackers are continually on the lookout for exploits in software.

These potential exploits are usually small snippets of code that can be turned toward aiding the evil doings of the hacker.

Generally, finding exploits and then devising a means to leverage them is highly skilled work. I say that because most of the time when you hear in the news about some “hacker” that got into a person’s computer or email, it was something very low-tech, such as guessing that the person used a password like “12345” and simply using that to break in.

By the way, these simpleton kinds of break-ins by guessing passwords are somewhat demeaning to the highly skilled hackers. If you are a hunter that is trained and has years of experience in how to use a gun and hunt for wild boar, you are pretty irked when someone at a campground puts out some strips of bacon and a wild boar lands in their lap. Irked because those that don’t know about hunting will equate the trained and experienced hunter with the idiot that happened to have the bacon.

In any case, when developing software, well-versed software engineers and programmers should be trying to avoid writing something that can become an exploit.

If you have in-depth knowledge of the programming language you are using, you should already have familiarity with the known potential exploits that you can fall into. Unfortunately, not all programmers are versed in this, or they are so pressured to write the code quickly that they don’t think about the potential for exploits, or they are not aware of how the code will be compiled and run such that it creates an exploit that they would not otherwise have anticipated. And so on.

Let’s get back to the researchers and what they came up with.

They were trying to develop software that would find exploits in software. Indeed, there are various tools that you can use to find potential exploits. The hackers use these tools. Such tools can also be used by those that want to scan their own software and try to find exploits, hopefully doing so before they actually release their software. It would be handy to catch the exploits beforehand, rather than having them arise at a bad time, or allow someone nefarious to find them and use them.

To properly test a tool that seeks to find exploits, you need a test-bed of software that contains potential exploits, so that you can run your detective tool on that test-bed.

Presumably, the tool should be able to find the exploits, which you know are in the test-bed. This helps verify that the tool apparently works as hoped. If the tool cannot find exploits that you know are embedded in the test-bed, you’d need to take a closer look at the tool and try to figure out why it missed them. This cycle is repeated over and over, until you believe that the tool is catching all of the exploits that you purposely seeded into the test-bed.

So, you need to create a test-bed that has a lot of potential exploits. It can be laborious to think of and write a ton of such exploits. You can find many of them online and copy them, but it is still quite a labor-intensive process. Therefore, it would be handy to have a tool that could generate exploits or potential exploits, which you could then insert, or “inject,” into a software test-bed.
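As a rough illustration of what a seeded test-bed flaw might look like, here is a hypothetical C sketch in which the injected bug stays dormant on ordinary inputs and fires only when a specific trigger value appears, so a test harness can deterministically check whether a bug-finding tool spots it. The trigger-value design and all names are illustrative assumptions, not a description of the NYU tooling.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical seeded test-bed bug: it lies dormant on ordinary inputs and
 * fires only when the first four bytes match a magic trigger value, letting
 * the evaluator reliably reach the known flaw and score the detection tool. */
void testbed_handle(const uint8_t *buf, size_t n)
{
    char local[8];
    uint32_t trigger;

    if (n < 16) {
        return;  /* too short to carry the trigger and an overflowing payload */
    }
    memcpy(&trigger, buf, sizeof(trigger));

    if (trigger == 0x4C41564Au) {        /* magic value gates the seeded bug */
        memcpy(local, buf + 4, n - 4);   /* overflow of the 8-byte buffer */
    }
    (void)local;
}
```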

With me so far?

Here’s the final twist.

If you had a tool that could create potential exploits for purposes of seeding a test-bed of software, you could also consider using that same ability to generate decoys for use in real software.

Think of each of the potential exploits as akin to a strip of foil for the World War II chaff.

Explaining How Chaff Bugs Work

For the WWII chaff, you’d have lots and lots of the strips, so as to overwhelm the enemy radar.

Why not do the same for software by generating, say, hundreds or maybe even thousands of potential exploits, small snippets of code, and then embedding those snippets into the software that you are otherwise developing?

This could then serve to trick any hacker that is aiming to look into your code.

They would find tons of these potential exploits.

Now, you’d of course want to make sure that these seeded exploits are non-exploitable.

In other words, you’d be shooting yourself in the foot if you were generating true exploits. You want instead ones that look like the real thing, but that are in fact not exploitable.

The hacker would then be faced with finding a needle in a haystack, meaning that even if there is a real exploit in your code, presumably introduced unintentionally and not caught beforehand, the hacker now faces thousands of potential exploits and the odds of finding the true one are lessened. This raises the barrier to entry, so to speak, in that the hacker now has to spend an inordinate amount of time and effort to possibly find the true exploit, even if it exists, which it might not.
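For a flavor of what such a decoy could look like, here is a minimal, hypothetical C sketch in which an apparent unchecked copy spills only into deliberately unused padding that nothing else ever reads. This is one plausible way to make a bug non-exploitable; it is an illustrative guess, not necessarily the mechanism the NYU researchers used, and all identifiers are invented.

```c
#include <string.h>

/* Chaff-style decoy: to someone scanning the code, writing into rec->name
 * with an unchecked copy looks like a textbook overflow opportunity. In
 * fact, the struct deliberately reserves a large unused region right after
 * the name, and no pointer, length, or flag lives there, so "exploiting"
 * the overflow accomplishes nothing. */
struct record {
    char name[16];
    char spare_never_read[256];  /* absorbs any realistic overflow harmlessly */
};

void chaff_set_name(struct record *rec, const char *input)
{
    /* Looks like an unchecked, attacker-tempting copy... */
    strcpy(rec->name, input);
    /* ...but the spill lands in spare_never_read and is ignored. A real
     * injection tool would also have to ensure inputs reaching this path
     * cannot exceed the padding region. */
}
```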

I realize that the initial reaction is that it seems somewhat surprising, maybe ludicrous, for you to purposely put potential exploits into your code.

Even if they are truly and utterly non-exploitable, it still just seems like you are playing with fire. When you play with fire, you can get burned. That’s the concern for some, namely that if you inject your (hopefully) clean code with hundreds or thousands of potential non-exploitable exploits, it seems like something bad is bound to happen.

This might be similar to the famous line in the movie Ghostbusters when they were cautioned to not cross the streams of their ghostbuster energy guns, since it could cause an obliteration of the entire universe.

You could say that vaccines are dangerous and yet we use them on humans every day.

A vaccine is a weakened version of the real underlying virus, and you use it to get the human body to react and build up a defense. The defense then comes into play when a true wild attack of the virus occurs. This analogy isn’t quite apt for this matter, though, since the non-exploitable exploits aren’t bolstering the code of the software; instead, they are there simply as decoys.

These so-called chaff bugs might be an effective kind of decoy.

Would it scare off a hacker looking for exploits?

Maybe yes, maybe no.

On the one hand, if the hacker looked at the overall code and right away found a potential exploit, they might get pretty excited and think it is their lucky day. They might then expend a lot of attention on the found exploit, which, if it is truly non-exploitable, is a waste of their time.

Would they then give up, or would they look for another one?

If they give up, great, the decoy did its thing. If they look for more, they’ll certainly find more because we know that we’ve purposely put a bunch of them in there.

After finding numerous such potential exploits, and after discovering that they are non-exploitable, would the hacker then give up?

That seems likely, though it all depends on how important it is to the hacker to find a potential exploit. It also depends on how “good” the non-exploitable exploits are in terms of looking like a true exploit.

If the hacker can somehow readily figure out which are the injected exploits, they can readily opt to ignore them. In that sense, we’re back to the software essentially not having any of the decoys in it; if the decoys are all readily discoverable, it’s about the same as if you had none at all in your code.

Therefore, the decoys need to be “good” decoys in that they appear to be exploitable exploits and do not readily appear to be non-exploitable exploits.

This can be tricky to achieve. You can make an exploit non-exploitable by doing some relatively simple things to mute or block its exploitability, but then it becomes very easy to look at the exploit and know that it is most likely a planted decoy.

In terms of the planting of the non-exploitable exploits, that’s another factor to be considered.

If I put the decoys in very obvious places in the code, the hacker might realize the underlying pattern of where I am putting them. This allows the hacker either to ignore those seeming exploits or at least to realize, after some limited inspection, that anything coming from that area of the code is more likely to be a planted one. You’ve got to find a means to plant or inject the exploits so that their positioning in the code is not a giveaway.

You could consider randomly scattering the non-exploitable exploits throughout the software.

This might not be so good.

On the one hand, the randomness hopefully prevents a hacker from identifying a pattern to where the decoys were planted.

At the same time, it could be that you’ve put a decoy into a part of the code where it would not be advisable.

Inner Trickery Is Key

Allow me to explain the logic involved.

If the non-exploitable exploits are to be convincing as potentially exploitable exploits, they presumably need to actually do something and cannot be just stubs of code that are blocked off from execution.

A blocked-off exploit would, upon inspection, be an obvious decoy and thus not require much further effort to explore. Remember that we want the presumed hacker to consume a lot of time examining the decoys.
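Here is a hypothetical C sketch of a decoy that does execute on ordinary inputs rather than sitting behind an unreachable guard: a seemingly attacker-controlled length flows into a copy, but the value is quietly constrained so the copy can never exceed its destination. The constraint strategy and the names are my own illustrative assumptions, not taken from the paper.

```c
#include <stdint.h>
#include <string.h>

static uint8_t scratch[64];

/* Decoy that runs on real traffic: the "claimed_len" parameter appears to
 * be used, unchecked, as the size of a copy -- exactly what an exploit
 * hunter hopes to see. The mask below, however, quietly caps the length at
 * 63 bytes, so no input can overrun the 64-byte destination. */
void chaff_process_chunk(const uint8_t *data, uint32_t claimed_len)
{
    uint32_t len = claimed_len;  /* looks attacker-controlled... */

    len &= 0x3F;                 /* ...but is over-constrained to under 64 */

    /* Bounded in every execution (callers assumed to pass >= 64 readable bytes). */
    memcpy(scratch, data, len);
}
```

Of course, a mask this visible is exactly the kind of giveaway mentioned earlier; a more convincing decoy would bury the constraint several call levels away from the copy so it is not obvious at a glance.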

But if the non-exploitable exploits are indeed able to execute, then not only do we need to be sure they don’t do anything that a genuine exploit would do, we also need to be concerned about their execution time and consumption of computing cycles.

The decoy might be using up expensive computing cycles, doing so needlessly, other than to try and suggest that it is not a decoy. When I say expensive, I am referring to the notion that the computing cycles might be needed for other computational tasks, and so the decoy is robbing those tasks by chewing up cycles (in addition, you could say “expensive” depending upon what the cost is for your computing cycles).

The decoy might use up both computer cycles and also memory.

Memory could also be a limited resource, and the decoy is using it up solely for the purpose of trying to throw off a potential interloper. As such, however the non-exploitable exploit works, it must appear to be a true exploit, yet not do any harm, and also minimize its consumption of precious computational cycles and computer memory. This is a bit of a tall order: non-exploitable exploits that are alluring decoys at the minimal overall cost possible.

We could opt to have, let’s say, simple decoys and super decoys.

The simple decoys are not very convincing and are readily detectable, while the super decoys are complex and difficult to detect as a decoy.

Then, when seeding the source code, we might use a mixture of both the simple decoys and the super decoys.

As mentioned before, though, it is important to refrain from putting any of the more time consuming or memory consuming decoys into areas of the code that might be especially adversely impacted. If there’s a routine in the code that needs to run fast and tightly, putting a decoy into the middle of it would likely be unwise and detrimental.

AI Autonomous Cars And Chaff Bugs

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As part of that effort, we are also exploring ways to try and protect the AI software, particularly once it is on-board a self-driving car and could potentially be hacked by someone with untoward intentions.

I’ve previously discussed that there are many trying to steal the secrets of AI systems for self-driving cars, see my article: https://aitrends.com/selfdrivingcars/stealing-secrets-about-ai-self-driving-cars/

I’ve also discussed that there are numerous computer security concerns underlying the running of the AI of a self-driving car, see my article: https://aitrends.com/ai-insider/ai-deep-learning-backdoor-security-holes-self-driving-cars-detection-prevention/

One approach to help make the AI software harder to figure out for an interloper involves making use of code obfuscation. This is a method in which you purposely make the source code difficult to logically comprehend.

See my article about code obfuscation: https://aitrends.com/selfdrivingcars/code-obfuscation-for-ai-self-driving-cars/

Another means to try to undermine an interloper would be to consider using chaff bugs in the AI software.

This has advantages and disadvantages.

It has the potential to boost security via a security-by-deception approach and might discourage hackers that are trying to delve into the system. A significant disadvantage is the possibility that the decoys would undermine the system due to its real-time nature. The AI needs to work under tight time constraints, performing computations that ultimately control a moving car and for which the “decisions” made by the software are of a life-or-death nature.

A decoy placed in the wrong spot of the code, one that chews up on-board cycles, could put the AI and humans at risk.

Consider that these are the major tasks of the AI for a self-driving car:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action plan updating
  • Car controls command issuance

See my article about my framework for AI self-driving cars for more details about these tasks: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

Where would it be “safe” to put the decoys?

Safe in the sense that the execution of the decoy does not delay or interfere with the otherwise normal real-time operation of the system.
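One way to approach that question, sketched below in hypothetical C, is to adopt a convention for marking time-critical functions so that a chaff-injection step skips them entirely and only seeds decoys into slack, non-real-time paths. The NO_CHAFF marker, the Clang-style annotation mechanism, and the function names are all assumptions for illustration; they are not an existing tool.

```c
/* Hypothetical marker: a Clang-style annotation that an (assumed) chaff
 * injection tool would look for in order to skip a function entirely. */
#define NO_CHAFF __attribute__((annotate("no_chaff")))

/* Hard real-time path: issues steering and braking commands on a tight
 * deadline; no decoys should ever be injected here. */
NO_CHAFF void issue_control_commands(double steer_angle, double brake_level)
{
    /* ... time-critical actuation code ... */
    (void)steer_angle;
    (void)brake_level;
}

/* Non-critical path: diagnostics and logging have slack time, so decoys
 * placed here are far less likely to disturb the driving task. */
void write_diagnostic_log(const char *message)
{
    /* ... logging code where injected decoys would be tolerable ... */
    (void)message;
}
```

Deciding which of the major driving tasks listed above are too time-critical to touch would still be a judgment call for the AI developers.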

It’s dicey wherever you might be thinking to place the decoys.

We also need to consider the determination of the hacker.

If a hacker happens upon software of any kind, they might be curious to try to hack it, and give up if they aren’t sure whether the software does something significant enough to be worth the effort of continuing to crack it. In the case of the AI software for a self-driving car, there is a lot of incentive to crack into the code, so even if the decoys are present, and even if their presence becomes apparent to the hacker, and even if the hacker is somewhat discouraged, it seems less likely they’d give up, since the prize in the end has such high value.

Anyway, one of the members of our AI development team is taking a closer look at this potential use of chaff bugs. My view is that the team members are able to set aside a small percentage of their time toward innovation projects that might or might not lead to something of utility. It is becoming gradually somewhat popular as a technique among high-tech firms to allow their developers to have some “fun” time on projects of their own choosing. This boosts morale, gives them a break from their other duties, and might just land upon a goldmine.

Machine Learning And Chaff Bugs

One approach that we’re exploring is whether Machine Learning (ML) can be used to aid in figuring out how to generate the non-exploitable exploits and also to make those decoys as realistically appearing to be integral to the code as we can get.

By analyzing the style of the existing source code, the ML tries to take templates of non-exploitable exploits and “personalize” them to fit the source code.

This would make those decoys even more convincing.

For more about Machine Learning and AI self-driving cars, see my article: https://aitrends.com/ai-insider/machine-learning-benchmarks-and-ai-self-driving-cars/

At an industry conference I mentioned the chaff bugs work and was asked whether to hide the decoys or to make them more obvious in some respects.

The idea is that if you hide them well, the hacker might not realize they are faced with a situation of having to pore through purposely seeded non-exploitable exploits and so blindly just plow away and use up a lot of their effort needlessly.

On the other hand, if you make them more apparent, at least some of them, it might serve as a kind of warning to the hacker that they are faced with software whose makers have gone to the trouble of making it very hard to find true exploits. You might consider this equivalent to putting a sign outside your house that says the house is protected by burglar alarms. The sign alone might scare off a lot of potential intruders, whether or not you have actually put the protection in place (some people get a burglar alarm sign and put it on their house merely as a scare tactic).

For AI software that runs a self-driving car, I’d vote that we all ought to be making it as hard to crack into as we can.

Conclusion

The auto makers and tech firms aren’t paying as much attention to the security aspects as they perhaps should, since right now the AI self-driving cars are pretty much kept in their own hands as they do testing and trial runs. Once true AI self-driving cars are being sold openly, the chances for hackers to spend whatever amount of time they want cracking into the system go up.

We need to prepare for that eventuality. If AI self-driving cars become prevalent, and yet they get hacked, it’s going to be bad times for everyone: the auto makers, the tech firms, and the public at large.

Chaff bugs, and whatever other novel ideas arise, are something we’re going to keep looking at and kicking the tires on, to see if they’ll be viable as a means to protect the AI systems of self-driving cars.

Copyright 2019 Dr. Lance Eliot

This content is originally posted on AI Trends.


