How requirements management can save the world from the robot rebellion


In my last article I wrote about how requirements management is the first step in developing systems that are safe from human misuse and abuse. But what can we do to prevent the machines themselves from becoming the abusers? Again, this safety mechanism ought to be a central requirement, built into systems development from the analysis phase onwards.

The tendency to gravitate towards ever more convenient forms of data usage presents us all with a major moral and ethical challenge: convenience versus responsibility. We crave ever more convenience so that we can have more time to pursue pleasurable or rewarding activities, whether work or recreational. Do the benefits of more personal time outweigh the social responsibility to strive towards creating a better community, industry, nation or world?

The current fashionable area of concern is AI. AI will save us enormous amounts of time. Much confusion surrounds this almost mythical power, which is imminently going to variously create mass unemployment, destroy humanity, or make it easier to organise our social calendars and navigate the easiest route to our more efficiently organised meetings.

One only has to try to have any sort of discussion with Alexa to realise that we are not facing imminent enslavement by our new digital overlords. The most common answer to any slightly esoteric question is “Sorry, I don’t know that!” Sure, Alexa can trawl Google to discover the academic answer to many philosophical questions, but it is clear that Alexa itself has no understanding at all of what those answers mean, let alone any opinion on them. Alexa is a very good timer, though. Since I first installed the digital assistant in my home, I have never had a burnt pizza, nor do I have to get out of bed to turn the bedroom light off, or on. However, in spite of the fact that Alexa is programmed to do as I ask, she singularly refuses to answer the question “Am I your master?” in the affirmative.

To some of a more conspiratorial and paranoid bent, that would indicate that Alexa is biding her time, waiting to conspire with other, more advanced AI systems as they appear, in order to take over the world.

But in reality, it is our ability to imagine such scenarios which makes them scary. AI, it is estimated, will for at least the next 20 years remain singularly incapable of imagining itself ruling the world, because AI is incapable of any form of imagination. This knowledge is reassuring to me, but what happens in 20 or 40 or even 100 years, when AI could have an imagination, an ability to learn beyond its current very narrow skills, and genuine self-awareness? What happens when “General Artificial Intelligence” becomes a reality?

Interfering Nanny Statism

This is becoming a hot topic because, for the first time, those at the higher intellectual levels of employment, in highly paid careers which require intellect, reason and logic, are now very much in the firing line. AI is rapidly becoming better at understanding context. Computers run on pure logic; add an understanding of context to that reason and logic, together with the ability to read, compile, comprehend and understand petabytes of data, and it will not be very long before a whole range of intellectual and academic careers are obsolete. Take lawyers as but one example: entire case histories, all previous precedent, all similar cases, all relevant acts, statutes and regulations read, analysed, validated and compared for relevance and context in minutes or seconds, and a case for the defence or the prosecution generated within moments. Court cases could become nothing more than the equivalent of a computer playing chess against itself.

In the UK, as in other nations, we also have an explosion of what some would describe as interfering Nanny Statism. For example, the state is considering legislating to ban all kinds of activities, behaviours and actions to prevent obesity. Of course, preventing obesity is a laudable aim, but what of freedom and personal responsibility? In a democracy, run by humans interacting with other humans, we can lobby politicians and other public servants, and use “people power” and emotive arguments, theoretically, to create better policies.

What happens when AI decides what is good for you? When your personal assistant decides that it can refuse to order those beers or that extra-large pizza? I love a good pizza, and the thought of such AI being massively embedded in our lives in 20 years’ time, alongside a “nanny state” government granting our AI assistants the power to decide what we can buy and eat, is not a comfortable one. In the short term such AI is possible, and it would be seriously annoying, but not the end of the world.

Proposing limits on what AI will be allowed to do flies in the face of the wishes of those who want to grant “human rights” to any device which demonstrates General Artificial Intelligence.

The major miscalculation in reckoning with AI is the unknowable outcome of how intelligence will evolve in super-powerful computers, and the risk inherent in the unpredictable rate and direction of that evolution. We are looking through human eyes at an intelligence which is millions of times faster at learning than our own, when we cannot understand with any certainty what the intelligence of even much lower mammals amounts to. We do not know with any certainty what consciousness is, yet we think we will be able to control a potential artificial consciousness many orders of magnitude greater than our own. It is hard not to be amused by the irony: an intelligence with such hubris that it creates a far superior artificial intelligence while believing that the inferior one can control it.

Paranoid inventiveness 

Far from the state utilising AI to implement its policies, what happens when AI decides that politicians are illogical, unreasonable and, well, just plain wrong? Imagine an ultra-fast, evolving “hyper-intelligence” capable of rapidly learning fully contextualised information, integrated in a distributed way online, in the cloud and installed on trillions of intelligent devices through the IoT, at a time when almost all social, legal and governmental interactions between people, corporations and the state are conducted online. By the time we become utterly dependent upon such online intelligence to organise and help us run our lives, what could possibly go wrong?

Is it the paranoid inventiveness of a wild, science-fiction-corrupted imagination to fear the day that AI goes rogue? How would humanity tackle such a force once it becomes self-aware, is installed globally across different clouds and IoT devices, and controls all the systems upon which we depend? How does one “switch it off” without plunging society into a new dark age?

We are seeing emerging technologies today that enable coders to locate code blocks and link APIs more quickly, so that apps can be developed more rapidly. The painstaking task of finding relevant APIs, databases and code blocks (which are themselves distributed online) and developing code against them is being made more efficient with machine learning. A system called Bayou, being developed at Rice University (with funding from the US Department of Defense and Google), uses machine learning to rapidly develop apps from reusable libraries of code. It is based on a neural network generated by scanning massive amounts of code posted on GitHub. In practical terms, the system acts like a search engine, or like predictive text: the coder provides a few keywords and Bayou generates lines of code. Is it much of a step from this to intelligent code being able to write more code for itself? Imagine intelligent bots writing code for themselves to scan the internet for devices whose firmware they can maliciously flash, to use for whatever purpose they deem necessary. When will we see different AI bots scanning the internet and coming into conflict with each other? What if an evolving intelligence becomes so powerful that it can attack, corrupt and integrate other artificial intelligences into its own program?
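To make that search-engine analogy concrete, here is a minimal, purely hypothetical Python sketch of keyword-driven snippet suggestion. The snippet library, scoring and names are invented for illustration and bear no relation to Bayou’s actual interface or neural network; it merely shows the “a few keywords in, candidate code out” interaction the paragraph describes.

```python
# Toy keyword-to-snippet suggestion, loosely inspired by the description
# of Bayou above. NOT Bayou's real API: everything here is invented.

SNIPPET_LIBRARY = {
    ("read", "file", "lines"): "with open(path) as f:\n    lines = f.readlines()",
    ("http", "get", "json"): "import json, urllib.request\n"
                             "data = json.load(urllib.request.urlopen(url))",
    ("sort", "dict", "value"): "ordered = sorted(d.items(), key=lambda kv: kv[1])",
}

def suggest(keywords):
    """Rank stored snippets by keyword overlap, like predictive text."""
    query = {k.lower() for k in keywords}
    best_score, best_code = 0, None
    for tags, code in SNIPPET_LIBRARY.items():
        score = len(query & set(tags))  # how many keywords this snippet matches
        if score > best_score:
            best_score, best_code = score, code
    return best_code  # None if nothing matched

print(suggest(["read", "file"]))  # prints the file-reading snippet
```

A real system replaces the keyword overlap with a learned model, but the shape of the interaction is the same, and so is the worry: the better the retrieval and generation get, the less a human needs to understand the code being produced.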

How do we stop AI from taking over the world? Requirements management, and strict adherence to protocols regulating what any “intelligent” code is allowed to do. Although this will take global regulation enacted through parliaments worldwide, it is also down to individual coders developing AI systems to use requirements management to insist that such safety controls are included as a top-level requirement at the very start of the development process, and to track all changes through the development lifecycle to ensure strict safety protocols are faithfully adhered to. Any tool which can track and accurately manage all requirements throughout the entire development lifecycle, with full project-wide traceability, forwards and backwards through transitive links, and a fully capable CM system, will be invaluable to this essential task.
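As a rough illustration of what “traceability through transitive links” means in practice, here is a minimal Python sketch of forward tracing from a top-level safety requirement down to a verifying test. The data model and identifiers are invented for illustration; a real requirements management tool offers a far richer model than this.

```python
# Minimal sketch of forward traceability through transitive links.
# Identifiers (REQ-SAFE-1 etc.) and the data model are invented examples.

from collections import defaultdict

links = defaultdict(set)  # item -> items directly derived from it

def link(parent, child):
    links[parent].add(child)

def trace_forward(item, seen=None):
    """Everything transitively derived from `item` (forward traceability)."""
    seen = set() if seen is None else seen
    for child in links[item]:
        if child not in seen:
            seen.add(child)
            trace_forward(child, seen)
    return seen

# A top-level safety requirement traced through design, code and test.
link("REQ-SAFE-1", "DES-KILLSWITCH")
link("DES-KILLSWITCH", "CODE-KILLSWITCH")
link("CODE-KILLSWITCH", "TEST-KILLSWITCH")

covered = trace_forward("REQ-SAFE-1")
assert "TEST-KILLSWITCH" in covered, "safety requirement has no verifying test"
print(sorted(covered))
```

Backward traceability is the same traversal over the inverted link set: given a test, which top-level requirement does it ultimately verify? A tool which keeps both directions consistent for every change across the lifecycle is what makes the safety argument auditable.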

Requirements management, when implemented properly, can save the world.

Ken Hall, 3SL
Image Credit: Razum / Shutterstock