Stephen Hawking, Elon Musk and thousands of computer scientists, developers and technologists have signed an open letter calling for research into artificial intelligence, to combat the potential dangers of the new technology.
The letter was created by The Future of Life Institute and calls on researchers to study the potential pitfalls of AI, work out how to avoid them, and learn how to reap the benefits of an autonomous brain.
Currently, AI is mostly confined to virtual assistants like Siri, Cortana and Google Now, alongside smart home appliances capable of learning the user's habits. However, in the next few years, AI could become extremely advanced, capable of controlling a full manufacturing floor or automating the whole household.
Startups like DeepMind, which Google acquired for $500 million (£331 million) last year, are also working on an AI brain, capable of thinking for itself, making decisions and learning from the past.
This sort of AI might excite some people about the future, but for Tesla Motors CEO Elon Musk, it sounds like a nightmare waiting to happen. Musk has gone on record saying: "I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it's probably that."
Several researchers from Google, DeepMind, Facebook and MIT have all signed the open letter. In the future, we may see these technology companies create standards for AI, to make sure no startup is capable of building destructive systems.
There is a worry that one bad apple could spoil the bunch, as AI could be deadly in the wrong hands. Military organisations, for example, might program AI to learn from previous battles and deploy robots against soldiers.