Opinion

Artificial intelligence gets scarier and scarier

Reverse engineering of algorithms is the new danger, and it’s a real threat


Published: 22 Mar 2022 08:55 PM

It is impossible to overestimate the importance of artificial intelligence. As the U.S. National Security Commission on Artificial Intelligence explained, it is an enabling technology akin to electricity as Thomas Edison described it: “a field of fields … it holds the secrets which will reorganize the life of the world.”

The commission also noted that “No comfortable historical reference captures the impact of artificial intelligence (AI) on national security,” and it is rapidly becoming clear that the ramifications are far more extensive — and alarming — than experts had imagined. It is unlikely that our awareness of the dangers is keeping pace with the state of AI. Worse, there are no good answers to the threats it poses.

AI technologies are the most powerful tools that have been developed in generations — perhaps even in human history — for “expanding knowledge, increasing prosperity and enriching the human experience.” This is because AI helps us use other technologies more effectively and efficiently. AI is everywhere — in homes and businesses (and everywhere in between) — and is deeply integrated into the information technologies that we use, and that affect our lives, throughout the day.

The consulting company Accenture predicted in 2016 that AI “could double annual economic growth rates by 2035 by changing the nature of work and spawning a new relationship between man and machine” and boost labor productivity by 40%, all of which is accelerating the pace of integration. For this reason and others — the military applications in particular — world leaders recognize that AI is a strategic technology that may well determine national competitiveness.

That promise is not risk-free. It’s easy to imagine a range of scenarios, some irritating, some nightmarish, that demonstrate the dangers of AI. Georgetown’s Center for Security and Emerging Technology (CSET) has outlined a long list of stomach-churning examples, among them AI-driven blackouts, chemical controller failures at manufacturing plants, phantom missile launches or the tricking of missile targeting systems.

For just about any use of AI, it’s possible to conjure up some type of failure. Today, however, those systems aren’t yet functional or they remain subject to human supervision, so the possibility of catastrophic failure is small, but it’s only a matter of time.

For many researchers, the chief concern is corruption of the process by which AI is created — machine learning. AI is the ability of a computer system to use math and logic to mimic human cognitive functions such as learning and problem-solving. Machine learning is an application of AI. It’s the way that data enables a computer to learn without direct instruction, allowing the machine to continue improving on its own, based on experience. It’s how a computer develops its intelligence.
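To make that distinction concrete, here is a minimal sketch of machine learning in Python, with invented toy data (nothing here comes from the article): no rule for spotting spam is ever written down; the program infers one from labeled examples and then applies it to a message it has never seen.

```python
# Minimal sketch of machine learning (hypothetical toy data).
# No explicit rule for "spam" is programmed; the model learns one from examples.
from sklearn.linear_model import LogisticRegression

# Each example: [number of links, number of ALL-CAPS words]; label 1 = spam.
X_train = [[0, 0], [1, 0], [0, 1], [5, 8], [7, 3], [6, 9]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)      # "learning": the model fits itself to the data

# A new, unseen message with 4 links and 6 all-caps words.
print(model.predict([[4, 6]]))   # likely [1] -- flagged as spam
```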

Andrew Lohn, an AI researcher at CSET, identified three types of machine learning vulnerabilities: those that permit hackers to manipulate a machine learning system’s integrity (causing it to make mistakes), those that affect its confidentiality (causing it to leak information) and those that impact its availability (causing it to stop functioning).

Broadly speaking, there are three ways to corrupt AI. The first way is to compromise the tools — the instructions — used to make the machine learning model. Programmers often go to open-source libraries to get the code or instructions to build the AI “brain.”

For some of the most popular sources, daily downloads are in the tens of thousands; monthly downloads are in the millions. Badly written code can be included, or compromises introduced, and those flaws then spread around the world.

Closed source software isn’t necessarily less vulnerable, as the robust trade in “zero day exploits” should make clear.

A second danger is corruption of the data used to train the machine. In another report, Lohn noted that the most common datasets for developing machine learning are used “over and over by thousands of researchers.” Malicious actors can change labels on data — “data poisoning” — to get the AI to misread inputs. Alternatively, they create “noise” to disrupt the interpretation process. 
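What “data poisoning” can look like is easy to sketch. The toy example below uses synthetic data, not the shared research datasets Lohn describes: it silently flips the labels on a fraction of the training rows, and the model trained on them typically ends up less accurate than one trained on clean labels.

```python
# Toy sketch of data poisoning by label flipping (hypothetical, synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker silently flips the labels on 30% of the training rows.
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model typically scores worse; real attacks are more targeted.
print("accuracy, trained on clean labels:   ", clean.score(X_test, y_test))
print("accuracy, trained on poisoned labels:", poisoned.score(X_test, y_test))
```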

Such “evasion attacks,” as these noise-based manipulations are known, are minuscule modifications to photos, invisible to the naked eye, that render AI useless. Lohn notes one case in which tiny changes to pictures of planes got the computer to misclassify them as frogs. (Just because it doesn’t make sense to you doesn’t mean that the machine isn’t flummoxed; it reasons differently from you.)
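The mechanics can be shown with something far simpler than an image classifier. The hypothetical sketch below trains a small linear model on two clusters of points, then nudges a correctly classified point in the direction the model is most sensitive to, just enough to flip the prediction; in an image, the same nudge is spread imperceptibly across thousands of pixels.

```python
# Toy sketch of an evasion attack on a linear classifier (hypothetical data).
# Image attacks like the plane/frog case work on the same principle, but spread
# the perturbation imperceptibly across thousands of pixel values.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),    # class 0, around (0, 0)
               rng.normal(3.0, 1.0, size=(200, 2))])   # class 1, around (3, 3)
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)
w = model.coef_[0]                 # the direction the model is most sensitive to

x = np.array([2.4, 2.6])           # a point the model confidently labels class 1
print("before:", model.predict([x])[0])

# Fast-gradient-style nudge: step against the weights to lower the class-1 score.
eps = 1.2
x_adv = x - eps * np.sign(w)
print("after: ", model.predict([x_adv])[0])
```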

A third danger is that the algorithm of the AI, the “logic of the machine,” doesn’t work as planned — or works exactly as programmed. Think of it as bad teaching.

The data sets aren’t corrupt per se, but they incorporate pre-existing biases and prejudices. Advocates may claim that they provide “neutral and objective decision making,” but as Cathy O’Neil made clear in “Weapons of Math Destruction,” they’re anything but.

These are “new kinds of bugs,” argues one research team, “specific to modern data-driven applications.” For example, one study revealed that the online pricing algorithm used by Staples, a U.S. office supply store, adjusted online prices based on a shopper’s proximity to competitors’ stores; because lower-income people tended to live farther from those stores, the algorithm effectively charged them more. O’Neil shows how the proliferation of such systems amplifies injustice because they are scalable (easily expanded), so that they influence (and disadvantage) even more people.
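That kind of rule takes only a few lines to express. The sketch below is a deliberately simplified, hypothetical version, not the retailer’s actual code: income never appears in it, yet the outcome tracks income because distance to rival stores does.

```python
# Hypothetical, simplified proximity-based pricing rule (not the retailer's
# actual code). Income is never mentioned, yet the outcome correlates with it,
# because distance to competitors' stores correlates with income.
def quoted_price(base_price: float, km_to_nearest_competitor: float) -> float:
    """Discount the price only for shoppers close to a rival store."""
    if km_to_nearest_competitor <= 20:
        return round(base_price * 0.90, 2)   # competitive zone: 10% off
    return base_price                        # no rival nearby: full price

# Two shoppers viewing the same $15.79 item.
print(quoted_price(15.79, km_to_nearest_competitor=5))    # 14.21 -- near a rival
print(quoted_price(15.79, km_to_nearest_competitor=60))   # 15.79 -- far from any rival
```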

Computer scientists have discovered a new AI danger — reverse engineering machine learning — and that has created a whole host of worries. 

First, since algorithms are frequently proprietary information, the ability to expose them is effectively theft of intellectual property.

Second, if you can figure out how an AI reasons or what its parameters are — what it is looking for — then you can “beat” the system. In the simplest case, knowledge of the algorithm allows someone to “fit” a situation to manufacture the most favorable outcome. Gaming the system could produce bad, if not catastrophic, results.

For example, a lawyer could present a case or a client in ways that best fit a legal AI’s decision-making model. Judges haven’t abdicated decision-making to machines yet, but courts are increasingly relying on decision-predicting systems for some rulings. (Pick your profession and see what nightmares you can come up with.)
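A hedged sketch of what such “fitting” could look like: the model, features and weights below are entirely invented, but they show that once a scoring function is exposed, or recovered by reverse engineering, searching it for the most favorable framing is trivial.

```python
# Hypothetical sketch of gaming a known decision model. The model, features
# and weights are invented for illustration only.
from itertools import product

# A leaked or reverse-engineered linear scoring model: higher = more favorable.
WEIGHTS = {
    "cites_precedent_A": 1.4,
    "frames_as_contract_dispute": 0.9,
    "requests_jury_trial": -0.6,
    "files_in_district_2": 0.7,
}

def predicted_score(choices: dict) -> float:
    """Score a case presentation under the known model."""
    return sum(WEIGHTS[name] * flag for name, flag in choices.items())

# Enumerate every combination of presentation choices the filer controls
# and keep the one the model likes best.
best = max(
    (dict(zip(WEIGHTS, flags)) for flags in product([0, 1], repeat=len(WEIGHTS))),
    key=predicted_score,
)
print("most favorable framing:", best)
print("predicted score:", predicted_score(best))
```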

But for catastrophic outcomes, there is no topping the third danger: repurposing an algorithm designed to make something new and safe to achieve the exact opposite outcome.

A team associated with a U.S. pharmaceutical company developed an AI to find new drugs; among its features, the model penalized toxicity — after all, you don’t want your drugs to kill the patient. Asked by a conference organizer to explore the potential for misuse of their technologies, they discovered that tweaking their algorithm allowed them to design potential biochemical weapons — within six hours they had generated 40,000 molecules that met the threat parameters.

Some were well-known, such as VX, an especially deadly nerve agent, but the model also generated new molecules that were more toxic than any known biochemical weapons. Writing in Nature Machine Intelligence, a science journal, the team explained that “by inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.”

The team warned that this should be a wake-up call to the scientific community: “A nonhuman autonomous creator of a deadly chemical weapon is entirely feasible … This is not science fiction.” Since machine learning models can be easily reverse engineered, similar outcomes should be expected in other areas.

Sharp-eyed readers will see the dilemma. Algorithms that aren’t transparent risk being abused and perpetuating injustice; those that are, risk being exploited to produce new and even worse outcomes. Once again, readers can pick their own particular favorite and see what nightmare they can conjure up.

I warned you — scary stuff.


Brad Glosserman is deputy director of and visiting professor at the Center for Rule-Making Strategies at Tama University as well as senior adviser (nonresident) at Pacific Forum. He is the author of “Peak Japan: The End of Great Ambitions” (Georgetown University Press, 2019).

Source: The Japan Times