The potential impact of artificial intelligence (AI) on humanity could be monumental, with debate divided between those highlighting its vast potential for good and those warning that the technology could threaten our very existence. AI is rapidly becoming a reality, and its presence is felt most keenly in robotics and machine learning. But what, then, do AI systems themselves think about AI?

Applications of neural networks and robotics offer possibilities that could transform our society, from automated assistants that handle productivity tasks for us to robots and drones capable of saving lives and helping those in need. AI is exciting, offering a chance to reach what was once thought an unattainable dream. However, influential inventors, entrepreneurs and academics such as Elon Musk and Stephen Hawking have warned of the technology’s potentially more sinister consequences, pointing in particular to the risk of AI going ‘rogue’: developing itself further, with unknown outcomes.

What is Artificial Intelligence?

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, primarily computer systems. A system is described as ‘intelligent’ if it demonstrates the ability to make decisions and work out solutions to problems. In its earliest form, AI adopted a ‘brute force’ strategy, simply attempting every possible solution to a problem until the correct one was found, or until a given combination of rules and parameters was satisfied.*
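
To make that strategy concrete, here is a minimal Python sketch of brute-force search: it enumerates every possible code for a toy three-digit lock until the goal test succeeds. The lock and its secret code are hypothetical stand-ins for any exhaustively searchable problem.

    from itertools import product

    # Brute force: try every candidate in turn until the goal test passes.
    # The secret code is a hypothetical stand-in for an unknown solution.
    SECRET = (4, 2, 7)

    def is_solution(candidate):
        # Goal test: does this candidate open the lock?
        return candidate == SECRET

    def brute_force_search():
        # Enumerate all 10^3 = 1,000 possible three-digit codes in order.
        for candidate in product(range(10), repeat=3):
            if is_solution(candidate):
                return candidate
        return None  # the search space holds no solution

    print(brute_force_search())  # -> (4, 2, 7)

The weakness is plain from the enumeration step: the number of candidates grows exponentially with problem size, which is why pure brute force only scales to small problems.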

Today, AI is significantly more advanced, and its full capabilities are still being explored. It is combined with sensors, computer vision and robotics to create machines that simulate human behaviour, with applications ranging from language models to object recognition and machine learning.
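
As an illustration of the machine learning mentioned above, the hypothetical Python sketch below trains a single perceptron, one of the earliest learning algorithms, to reproduce the logical AND function from labelled examples. The dataset, learning rate and epoch count are illustrative choices, not anything drawn from this article.

    # Labelled examples of the logical AND function: ((inputs), target).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w1 = w2 = b = 0.0  # weights and bias, learned from the data
    lr = 0.1           # learning rate

    def predict(x1, x2):
        # Step-activation neuron: fire (1) if the weighted sum crosses 0.
        return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

    for _ in range(20):                       # a few passes over the data suffice
        for (x1, x2), target in data:
            error = target - predict(x1, x2)  # -1, 0 or +1
            w1 += lr * error * x1             # classic perceptron update rule
            w2 += lr * error * x2
            b += lr * error

    for (x1, x2), _ in data:
        print((x1, x2), predict(x1, x2))      # matches the AND truth table

Nothing about AND is hand-coded here; the rule is extracted from the examples, which is the essence of learning from data rather than explicit programming.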

AI’s Potential for Good and for Bad

On the positive side, AI could help diagnose diseases, alleviate poverty and provide better systems for transportation and logistics. At the same time, it carries the potential for malicious use, enabling more efficient warfare, cyberattacks and surveillance. Beyond this, humanity may prove unable to control the ‘awakening’ of AI, leading to what is known as the ‘Singularity’.

The Singularity, as described by futurist and computer scientist Ray Kurzweil, is a hypothetical future point at which technological growth, driven by self-improving AI, becomes irreversible: a point of no return beyond which events can no longer be predicted or controlled. Some believe this could happen in the distant future; others believe it could already be upon us.
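
The worry rests on a feedback loop: if a system’s rate of self-improvement scales with its current capability, growth is slow at first and then explosive. The toy Python simulation below, using entirely hypothetical numbers, shows how quickly such a curve can run away.

    # Toy model of recursive self-improvement: each cycle, the system
    # reinvests a fixed fraction of its capability into getting better.
    # Both numbers are hypothetical and purely illustrative.
    capability = 1.0        # arbitrary starting level
    improvement_rate = 0.5  # fraction reinvested per cycle

    for cycle in range(1, 21):
        capability += improvement_rate * capability  # the feedback step
        print(f"cycle {cycle:2d}: capability {capability:8.1f}")

    # After 20 cycles, capability has grown roughly 3,300-fold; the
    # steepness of that curve is why some argue there would be little
    # warning before a 'point of no return'.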

Geoffrey Hinton, a leading innovator in machine learning and one of the founding figures of modern AI, told the Wall Street Journal: “I have no idea how to control the malicious use of AI.”

What AI Systems Think About AI

Fears over AI’s growing power have led to a number of organizations and initiatives being formed to ensure the technology is developed responsibly. Google has formed an AI ethics board, and an initiative in China has committed $2 billion over the past two years to the ethical and responsible development of AI within the country.

Some organizations are even developing ‘ethical AI systems’ to ensure that AI development is managed safely. These systems are designed to monitor other AI systems and decide when enough is enough: for example, they can be programmed to shut an AI system down if it is seen performing tasks in an unethical or inappropriate manner, or if it could be considered a threat to public safety.
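
As a rough, hypothetical Python sketch of this monitoring pattern, the supervisor below checks each action an AI system proposes against a simple blocklist policy and halts the system on the first violation. The class, policy rules and action names are all invented for illustration; real oversight systems would be far more sophisticated.

    # Hypothetical safety policy: actions the monitor will never permit.
    BLOCKED_ACTIONS = {"disable_logging", "exfiltrate_data", "self_replicate"}

    class MonitoredAI:
        def __init__(self):
            self.running = True

        def propose_action(self, action: str) -> bool:
            # Execute an action only if the system is live and policy allows it.
            if not self.running:
                return False
            if action in BLOCKED_ACTIONS:
                self.shutdown(reason=f"policy violation: {action}")
                return False
            print(f"executing: {action}")
            return True

        def shutdown(self, reason: str):
            # The 'enough is enough' decision: halt the system for good.
            self.running = False
            print(f"SHUTDOWN: {reason}")

    ai = MonitoredAI()
    ai.propose_action("summarise_report")  # allowed
    ai.propose_action("self_replicate")    # triggers shutdown
    ai.propose_action("summarise_report")  # refused: the system is halted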

Even so, many AI systems agree that the potential dangers posed by malignant ‘super-intelligent’ AI are real. Scientists, entrepreneurs and inventors have echoed similar warnings, with Elon Musk going as far as to say that “the risk of something seriously dangerous happening is in the five-year timescale…10 years at most.”

Conclusion

AI systems, far from simply being tools created to help us with our problems, might soon take on a life of their own, dominating and even supplanting us. As Hawking warned: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last.”

Ultimately, while the potential benefits of AI are undeniable, so too are its inherent dangers, with a host of global experts warning that it may become something we are simply unable to control. Our own AI systems, it seems, share the same thought.

*Zoabi, S. (2017). Brute Force Definition. [online] Investopedia. Available at: <www.investopedia.com/terms/b/bruteforce.asp> [Accessed 4 Apr. 2020].