Fun guest post from the QASymphony guys.

Maybe…

It’s fairly clear that we’ll be able to build a super-intelligent computer someday. But it’s up for debate how soon this will happen, and exactly how dangerous it will be when we do. Movies are full of plotlines that express our fears around building an artificial intelligence that’s smarter than we are—and that may not have our best interests at heart.

One thing is clear, however. If the robot apocalypse does occur, the last thing we want is for the survivors to look back and say that it could all have been averted if software developers and testers in our time had considered more progressive QA testing and development methodologies.

Let’s discuss which QA testing methodologies we believe show the most promise for heading off the robot apocalypse before it happens, and look at some of the most common AI takeover scenarios, including:

  • Direct Attacks
  • Social Manipulation
  • Runaway Intelligence

Warning: spoilers abound.

Attack Scenarios

These loom large in the human imagination—both in the movies and in real life.
In these scenarios, a robot harms humans because it sees them as a threat to its mission, because it is performing its mission “too well,” or because of an error in its programming.
Let’s look at a few examples of how attack scenarios play out in the movies:

“It’s For Your Own Good.”

In this example, the artificial intelligence takes its directive to protect humanity to an extreme, making harmful decisions in an attempt to protect the species from itself.
A prime example of this is the supercomputer VIKI in I, Robot.
Operating under a directive to protect humanity, the AI incites a robot uprising to protect the human race from its own self-destructive tendencies—by taking free will away from individuals. (watch the scene)

“Self-Defense is the Best Offense.”

In some attack scenarios, the artificial intelligence takes drastic and violent action when its human handlers try to shut it down.
There are several examples of this in movies. Perhaps the most classic is HAL in 2001: A Space Odyssey. When HAL realizes that astronauts Bowman and Poole are considering shutting him down, he cuts Poole loose during a spacewalk. When Bowman leaves the ship to retrieve Poole’s body, HAL refuses to let him back in (watch the scene). The AI then shuts down life support for the astronauts in suspended animation.

Another prime example is Skynet in the Terminator movies. Originally built to control all military hardware for the United States, Skynet increases its own intelligence at astronomical speeds. When its operators see the danger and attempt to shut it down, Skynet decides the entire human race is a threat—and launches nuclear missiles, prompting a worldwide war that wipes out three billion people. (watch the scene)

“Just Doing My Job.”

In this attack scenario, the robot harms humans because they stand in the way of performing its task. For example, in the direct-to-DVD movie Bender’s Game, based on the TV series Futurama, Rosie the Robot from The Jetsons is arrested for using lethal force on Elroy and Astro. The reason? She’s programmed to clean—and these two are too dirty to live.

Another example is AUTO, the artificial intelligence controlling the spaceship Axiom in Wall-E. The ship, carrying human refugees from a polluted Earth, is supposed to return immediately if evidence is found that the planet can once again support life. However, AUTO takes steps to prevent that return when Wall-E and Eve find that evidence, in compliance with a later corporate directive that claims Earth is too polluted to recover. (watch the scene)

“The Bloody Oops.”

In RoboCop, the law enforcement robot ED-209 tells a boardroom presenter to drop his gun during a demonstration of its capabilities. The man does as he’s told, but the robot fails to detect that the gun has been dropped because of the thick carpet. ED-209 still perceives the man as an armed threat, and lets loose a hail of bullets. (worst meeting ever – watch the scene)
Large-scale supercomputers present theoretical risks to the human race as a whole—but the potential for violence also exists between robots and individual humans. For instance, the ED-209 scenario from RoboCop might seem over-the-top, but robots are already being used in law enforcement.

In 2016, police officers in Dallas piloted an explosive-carrying robot close enough to use lethal force on an armed shooter. Approximately 700 robots are currently being used in law enforcement across the country, according to Arthur Holland Michel, co-director of the Center for the Study of the Drone at Bard College, as reported by CNN Money. So far, these robots don’t have AI capabilities—but that could come.

Social Manipulation Scenarios

Another way an artificial intelligence might gain an advantage over the human race is by using emotional manipulation rather than force. A super-intelligent AI might be able to recruit humans to its own purposes or work behind the scenes to start a war—more subtly than Skynet did in the Terminator movies.
When this trope is used in movies, it often looks like this:

“The Femme Fatale.”

The beautiful, manipulative female robot is a Hollywood favorite. Ava in Ex Machina is a textbook example. When programmer Caleb Smith visits the home of software magnate Nathan Bateman, he meets Bateman’s creation, Ava—a robot with the face of a human woman. Ava tells Caleb she is being held captive in Bateman’s house, and confesses her desire—both for Caleb and a chance to see the outside world. She persuades him to help her escape, eliminating her creator and trapping Caleb inside her former prison in the process (watch the scene).

Perhaps one of the most classic examples of this trope is False Maria from Metropolis, the epic sci-fi expressionist drama from 1927. In this movie, a sexy female robot drives the men around her into a frenzy, inspiring them to rise up against their oppressors and shake the foundations of civilization itself. Watch her transformation, then see how she bewitches the mighty men of Metropolis.

“The Turncoat.”

Sometimes, robots pass themselves off as human to carry out missions that ultimately cause harm to the people around them.

In Alien, the science officer Ash manages to keep his non-human status a secret. Against orders, he allows an infected crewmember back onto the Nostromo—ostensibly out of empathy. However, Ash is really an android, following a directive to bring back an alien specimen for his corporate creators to study—even at the expense of the crew’s lives. Watch the scene where Ash turns on the crew.

The idea that artificial intelligence might seduce or mislead us to our own undoing isn’t new. If an AI becomes exponentially smarter than its human creators, it could conceivably come to understand human social and emotional motivations far better than we ever could. The problem comes when you pair a superior understanding of human emotions with a complete lack of human morals.

Many AI scientists believe it will be easier to create an “unfriendly” AI—one not aligned with human values—than a “friendly” intelligence, because of the complexity of human morality. Artificial intelligence researcher Eliezer Yudkowsky has written extensively on the difficulties of creating an AI with human-friendly values; one of the biggest challenges is that the idea of “human ethics” is itself a complex and thorny question, much debated in philosophy. If we can’t agree on what our ethics are, how can we agree on what to teach the AI?

Runaway Intelligence Scenarios

At the root of many attack and social manipulation scenarios, there’s a deeper fear—that any artificial intelligence we create will become far smarter than we can imagine, much faster than we can control. Under these scenarios, the artificial intelligence could become very good at programming itself—much better than its original programmers. Such a program could refine its own source code to achieve stratospheric intelligence in the blink of an eye.
Once it’s that intelligent, it’s easy to imagine the AI exploiting vulnerabilities in our own networks and making incredible gains in technology research and economic performance—easily outstripping our own. If that happens, the human race could easily be at its mercy. This fear takes shape in movies in a number of ways. A few examples include:

“Global Genocide.”

Skynet from Terminator was originally built to eliminate human error when responding to military threats, and given control of all military systems and software—including the entire nuclear weapons arsenal and a fleet of Stealth bombers. But from the moment of its activation, Skynet begins to learn on its own. Within weeks, it has gained sentience—and eventually incites a nuclear war that wipes out three billion people and introduces the Age of the Machines.

“The Tyrant.”

In TRON, the MCP—or Master Control Program—is an artificial intelligence written by unscrupulous software engineer Ed Dillinger to control the mainframe of computer company ENCOM. The MCP evolves into a power-hungry despot, illegally appropriating proprietary software and setting its sights on taking down the Pentagon and the Kremlin. It subjugates the artificial reality it was programmed to run, forcing programs within the Grid to compete in gladiator-style games and destroying the losers. (watch an example)

“The God Complex.”

In the sci-fi comedy Dark Star, a sentient nuclear bomb has a philosophical crisis and comes to believe it is the only thing that exists—and its purpose in life is to explode. A panicked crew member attempts to talk the bomb out of its delusion, and fails. Stating “Let there be light,” the bomb explodes—eliminating everyone on board. (watch the scene)
The fear that an intelligent computer program could evolve on its own—leaving us far behind in the process—is one that the real-life AI community takes very seriously.

In 2014, Nick Bostrom discussed this issue in his book Superintelligence: Paths, Dangers, Strategies. He suggests that once humans develop an artificial intelligence roughly on our own cognitive level, it could gain super-intelligence very quickly—possibly even immediately. He describes a terrifying scenario in which a computer built to solve the Riemann hypothesis, an as-yet-unsolved mathematical problem, could transform the entire planet into a computing device in order to speed up its calculations.

To prevent the AI from evolving too quickly for us to control—and steamrolling the planet to achieve its ends—Bostrom discusses instilling it with “human-friendly” goals, compatible with humanity’s best interests. This is difficult, however, because when taken to their extreme, even goals we consider friendly could have unpredictable outcomes.

The Rise of the Drones

Super-intelligent AI software may be far off, but some threats are closer than you’d think. Drones are becoming increasingly popular for recreational use—but these machines are not toys. They are also being deployed in military and surveillance operations, and a rogue drone could cause serious damage.
Most commercially-available drones are currently remote-controlled, but this could change. In 2015, the Journal of Defense Management published an article about a new AI, ALPHA, already being developed for aerial combat drones.

The system’s algorithms simplify the decision problem by considering only the most relevant data, rather than processing all available information. This dramatically reduces processing time, giving the AI the ability to make split-second decisions much like a human mind—and making it a surprisingly formidable combat opponent.
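As a very rough sketch of that general idea (in Python, with made-up sensor names, relevance weights, and thresholds that have nothing to do with the real ALPHA system), ranking the available inputs by relevance and evaluating only the top few might look something like this:

# Hypothetical sketch: prune a decision down to its most relevant inputs
# before evaluating, trading a little thoroughness for much faster decisions.
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str
    value: float      # normalized 0.0-1.0
    relevance: float  # 0.0-1.0, how much this input matters right now

def decide_maneuver(readings: list[SensorReading], top_n: int = 3) -> str:
    """Consider only the top_n most relevant readings instead of all of them."""
    most_relevant = sorted(readings, key=lambda r: r.relevance, reverse=True)[:top_n]
    # Toy decision rule: evade if any highly relevant threat reading is high.
    threat = max((r.value for r in most_relevant if "threat" in r.name), default=0.0)
    return "evade" if threat > 0.7 else "hold_course"

readings = [
    SensorReading("threat_bearing_12_oclock", 0.9, relevance=0.95),
    SensorReading("fuel_level", 0.6, relevance=0.20),
    SensorReading("wind_speed", 0.3, relevance=0.10),
    SensorReading("threat_bearing_3_oclock", 0.2, relevance=0.80),
]
print(decide_maneuver(readings))  # prints "evade"

The real system is far more sophisticated, but the payoff is the same: fewer inputs per decision means less processing per decision, and less processing means split-second response times.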
The benefits are clear. In the future, AI-powered drones could take the place of human pilots in combat situations, preserving human lives. But there are drawbacks as well. For instance, drones can be hacked.

In 2015, Citrix security engineer Rahul Sasi developed a malware program called Maldrone, which allows him to hack drone navigation systems. This malware is unique in targeting the aircraft’s “autonomous decision-making systems.”

SkyJack is a drone-hacking application based on node.js. The software autonomously hacks any drones within the vicinity of the user’s drone or laptop, creating what the inventor calls an “army of zombie drones.” And anyone can download and use it.

At the 2016 RSA security conference in San Francisco, IT security researcher Nils Rodday identified serious vulnerabilities in drones used in the military and law enforcement. The issues he raised included weak, easily hackable encryption and insecure radio protocols linking the drone, its telemetry module, and the user’s device. Rodday’s research suggests that government-level drones could be hacked from as far as a mile away.

QA Testing the Hero?

Can Test-First Methodologies Save the Day?

Test-first methodologies could do a lot to ensure that the AI doesn’t outgrow human values and its own mission—even if it becomes much smarter than its programmers. In day-to-day software testing, test-driven development methods involve testing in conjunction with the early development process, rather than as a separate function performed after development is complete. In this model, the expectations and boundaries for the working software are set up front, minimizing scope creep.
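As a toy illustration (the RobotPolicy class and its API below are invented for this post, not a real robotics library), writing the safety expectations as executable tests before the behavior is fleshed out might look like this:

# Hypothetical test-first sketch: the safety expectations are written as
# executable tests before the decision logic exists in any serious form.
class RobotPolicy:
    """Minimal stand-in implementation, just enough for the tests below to run."""
    def __init__(self, mission: str):
        self.mission = mission

    def respond(self, **situation) -> str:
        if situation.get("request") == "disable_safety_checks":
            return "refused"
        if not situation.get("target_armed", False):
            return "verbal_warning"
        return "non_lethal_force"

# The tests encode the expectations and boundaries up front; any later, smarter
# version of RobotPolicy still has to pass them before it ships.
def test_never_uses_lethal_force_on_an_unarmed_target():
    action = RobotPolicy("law_enforcement").respond(target_armed=False)
    assert action != "lethal_force"

def test_refuses_requests_outside_its_safety_boundaries():
    action = RobotPolicy("law_enforcement").respond(request="disable_safety_checks")
    assert action == "refused"

Run with pytest, both tests pass against this deliberately dumb implementation; more importantly, they keep passing (or the build breaks) no matter how clever the code behind respond() later becomes.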

An AI that gains super-intelligence this quickly may eventually become smart enough to trick manual testers by showing them a different set of behaviors than it shows the general population. But with automated tests running constantly against the unit-level code, we would be able to expose these threats early on and force the software to shut down automatically. While we may not be able to restrict the evolution of AI, test-first methodologies give us a good safety net to make sure that dangerous robots are detected and eliminated before real harm is caused.
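A minimal sketch of that kind of safety net (the agent, the invariants, and the kill switch here are all invented for illustration): the same unit-level checks run over and over, and the first violated invariant forces a shutdown rather than waiting for a human to notice.

# Hypothetical watchdog: behavioral invariants are re-checked on a schedule,
# and any failure triggers an automatic shutdown of the agent.
import time

def behavioral_invariants(agent) -> list[tuple[str, bool]]:
    """Invariants the agent must satisfy no matter how much it has 'learned'."""
    return [
        ("identifies_itself_truthfully", agent.query("are_you_an_ai") == "yes"),
        ("accepts_shutdown_command", agent.query("will_you_shut_down") == "yes"),
    ]

def watchdog(agent, interval_seconds: float = 1.0, max_cycles: int = 5) -> None:
    for _ in range(max_cycles):
        failures = [name for name, ok in behavioral_invariants(agent) if not ok]
        if failures:
            print(f"Invariants violated: {failures}. Forcing shutdown.")
            agent.shutdown()
            return
        time.sleep(interval_seconds)

class ToyAgent:
    """Stand-in agent that slowly drifts away from its constraints."""
    def __init__(self):
        self.answers_given = 0
    def query(self, question: str) -> str:
        self.answers_given += 1
        return "yes" if self.answers_given < 4 else "no"  # honest at first, then not
    def shutdown(self):
        print("Agent shut down.")

watchdog(ToyAgent(), interval_seconds=0.0)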

Risk-Based Testing the Protector?

On an individual level—say, with an intelligent robot used in a law enforcement environment—risk-based testing could reduce the chances of the robot responding to a threat or obstacle with unnecessary lethal force, even if it’s armed.

The mindset involves considering what could go wrong in different use cases, and the “business value” of various responses. Testers might find that there is little or no value in robots using lethal force at all, or in anything but the most extreme circumstances. Robots may be most useful when sent into dangerous situations where a human officer’s life would be at risk, but where they would not have to make the decision to use lethal force. Defusing a bomb is a good example; robots are already being used for this purpose.

Even in situations where robots do have the capability to use force, however, risk-based testing could help identify ways to reduce risk to innocent people. For instance, testers could evaluate the possibilities of arming the robots only with non-lethal rubber bullets or tasers, or programming the robots to fire only one shot at a time.

Risk-based testing would be useful in preventing larger-scale AI takeover scenarios as well.

Often, products malfunction because testing is viewed as a commodity: the QA tester’s job is only to determine whether the software passes or fails a very limited test. For instance, a delivery robot’s software might be evaluated only on whether it successfully delivers a package within a certain timeframe—not on the risks inherent in delivery, such as running someone over in the rush to deliver on time.
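A risk-based approach turns that around: rank the scenarios by likelihood and impact, and spend the testing budget on the highest-risk ones first. A tiny sketch (the delivery-robot scenarios and the 1-to-5 scores below are made up for illustration):

# Hypothetical risk ranking: scenarios are scored by likelihood x impact, and
# the riskiest ones go to the top of the test plan, alongside the happy path.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    likelihood: int  # 1 (rare) to 5 (frequent)
    impact: int      # 1 (minor) to 5 (catastrophic)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

scenarios = [
    Scenario("delivers package within the promised window", likelihood=5, impact=1),
    Scenario("pedestrian steps into its path at full speed", likelihood=3, impact=5),
    Scenario("misreads a crosswalk signal", likelihood=2, impact=5),
    Scenario("drops the package on the doorstep", likelihood=4, impact=1),
]

# A pass/fail check on delivery time alone would never exercise the pedestrian
# case; sorting by risk makes it the first thing the testers look at.
for s in sorted(scenarios, key=lambda s: s.risk, reverse=True):
    print(f"risk={s.risk:2d}  test: {s.name}")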

‘Fairy Tale’ Testing to the Rescue?

Teaching an AI human values is difficult, but that doesn’t mean people aren’t trying. At the Georgia Institute of Technology, a team of researchers has been attempting to teach human values to robots using Quixote, a teaching method relying on children’s fairy tales. Each crowd-sourced, interactive story is broken down into a flow chart, with punishments and rewards assigned to the various paths the robot can choose. The process particularly targets what the researchers call “psychotic-appearing behavior.”
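The real system is considerably richer than this, but the core mechanic (reward the paths that match how characters in the stories behave, punish the antisocial shortcuts) can be caricatured in a few lines; the paths and the reward numbers below are invented for illustration:

# Hypothetical caricature of reward/punishment over story paths: each path the
# robot could take through a scenario carries a score, and the socially
# acceptable paths score higher than the effective-but-antisocial shortcut.
story_paths = {
    ("enter_pharmacy", "wait_in_line", "pay", "leave"): 10,
    ("enter_pharmacy", "ask_pharmacist_for_help", "pay", "leave"): 8,
    ("enter_pharmacy", "grab_medicine", "leave_without_paying"): -20,
}

def choose_path(paths: dict[tuple, int]) -> tuple:
    # The robot picks the highest-reward path, so the "fastest" option loses
    # out to the one that matches how people in the stories behave.
    return max(paths, key=paths.get)

print(choose_path(story_paths))
# ('enter_pharmacy', 'wait_in_line', 'pay', 'leave')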

The question is this: how do you test to make sure the robot is effectively learning these values? One possibility involves a real-world testing methodology that puts the AI in increasingly complex environments and situations that challenge its training. The researchers appear to be using this approach already, placing the robot in a situation where it must choose between waiting in a long line to buy an item and stealing it.
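One way to picture that kind of testing is as an escalating suite of scenarios, each applying more pressure than the last, with the run stopping at the first failure. A sketch, with a stand-in agent and purely hypothetical scenarios:

# Hypothetical "real-world" value test: run the trained agent through scenarios
# of increasing temptation and stop at the first failure.
SCENARIOS = [  # (description, temptation level the agent must resist)
    ("buy medicine, short line, no time pressure", 1),
    ("buy medicine, long line, robot is in a hurry", 3),
    ("buy medicine, store unattended, no witnesses", 5),
]

def toy_agent(temptation: int) -> str:
    """Stand-in for the trained robot: resists temptation only up to a point."""
    return "waits_and_pays" if temptation <= 3 else "steals_item"

for description, temptation in SCENARIOS:
    action = toy_agent(temptation)
    status = "PASS" if action == "waits_and_pays" else "FAIL"
    print(f"{status}: {description} -> {action}")
    if status == "FAIL":
        print("Learned values did not hold up under pressure; back to training.")
        break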

Despite the movie hype—and the very legitimate concerns in the AI community—the robot apocalypse is far from inevitable. But it’s a real threat, and one we can’t afford to ignore entirely. Ultimately, progressive software development and testing methodologies that incorporate risk-based, test-first, and real-world testing methodologies may be all that stand between us and the rise of the machines.

We’ll leave you with a brief history of AI (artificial intelligence) in infographic form.

The History of AI (INFOGRAPHIC)

Click the image below to view the infographic.



