A new report on artificial intelligence warns that the technology can bring great good but also great harm, depending on "whose bidding such systems are doing."
AI could "inflict unprecedented types and scales of damage in certain domains," says the report, "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation."
"While many uncertainties remain, it is clear that AI will figure prominently in the security landscape of the future, that opportunities for malicious use abound, and that more can and should be done," the report states.
It was produced by 26 experts from top artificial intelligence and security organizations, including OpenAI, the University of Cambridge's Centre for the Study of Existential Risk, the University of Oxford's Future of Humanity Institute and the Electronic Frontier Foundation, or EFF.
EFF said there is great potential for doing good with AI in "areas as diverse as health, transportation and urban planning, art, science, and cross-cultural understanding."
But the group warns it has "already seen things go horribly wrong with simple machine learning systems" and that "there are serious risks if this technology is used for the wrong ends."
"At EFF, one area of particular concern has been the potential interactions between computer insecurity and AI. At present, computers are inherently insecure, and this makes them a poor platform for deploying important, high-stakes machine learning systems. It's also the case that AI might have implications for computer [in]security that we need to think about carefully in advance," the group said.
WND columnist Craig McMillan wrote in November that tech innovator Elon Musk had recently told employees developing AI that humanity has only a 5-to-10 percent chance of surviving smart robots.
"He reiterated the need for immediate and strong regulation of artificial intelligence research," McMillan wrote.
"At first blush, it seems odd for a billionaire businessman to request more regulation of his business, Neuralink. Under normal circumstances, one would suspect the billionaire feared his company was falling behind and hoped to use regulation to inhibit others advancing beyond him," he said.
"With Musk, I don't think that's true. Perhaps his mind constantly evaluates not only what he is doing in his businesses, but the potential impact those developments may have upon people and our world. I think it's a rare gift among the tech titans."
He explained: "Technology is in and of itself neutral. It is neither good nor bad. What makes a particular technology good or bad is the manner in which it is deployed and the purposes for which it is used. Thus, nuclear power has given us nuclear weapons, but also nuclear power stations and cancer treatments. Big pharma has concocted pills that can cure disease, or addict the user and become the focus of his or her life, until they end it. Airplanes can carry us around the world, but the same machines can drop bombs on our enemies."
The new AI report says preparing "for the potential malicious uses of AI associated with this transition is an urgent task."
"Artificial intelligence, digital security, physical security, and political security are deeply connected and will likely become more so," it says. "In the cyber domain, even at current capability levels, AI can be used to augment attacks on and defenses of cyberinfrastructure, and its introduction into society changes the attack surface that hackers can target, as demonstrated by the examples of automated spear phishing and malware detection tools discussed above. As AI systems increase in capability, they will first reach and then exceed human capabilities in many narrow domains, as we have already seen with games like backgammon, chess, Jeopardy!, Dota 2, and Go and are now seeing with important human tasks like investing in the stock market or driving cars."
One of the biggest issues, the report notes, is that technology built for good can also be turned to harmful ends.
"Many of the malicious uses of AI … have related legitimate uses. In some cases, the difference between legitimate and illegitimate uses of AI could be one of degree or ensuring appropriate safeguards against malicious use. For example, surveillance tools can be used to catch terrorists or oppress ordinary citizens. Information content filters could be used to bury fake news or manipulate public opinion."
The report illustrates the possibility of using AI for nefarious purposes with a hypothetical scenario.
A cleaning robot is booby-trapped and then dispatched into a building where identical robots are going about their cleaning chores.
When "finance minister Dr. Brenda Gusmile" comes onto the scene, the intruding robot stops its cleaning tasks and heads directly toward the minister, triggering an explosive device hidden inside it.
The Centre for the Study of Existential Risk said: "Forecasting rapid growth in cybercrime and the misuse of drones during the next decade – as well as an unprecedented rise in the use of 'bots' to manipulate everything from elections to the news agenda and social media – the report is a clarion call for governments and corporations worldwide to address the clear and present danger inherent in the myriad applications of AI."
Among the recommendations are that policymakers and technical researchers work together now to understand and prepare for the malicious use of AI. Researchers and engineers should be mindful of both good and bad uses, the report says, and more people should be involved in preventing and mitigating the malicious use of AI.
The center explained the study authors "expect novel cyber-attacks such as automated hacking, speech synthesis used to impersonate targets, finely targeted spam emails using information scraped from social media, or exploiting the vulnerabilities of AI systems themselves (e.g. through adversarial examples and data poisoning)."
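For readers unfamiliar with those terms, here is a minimal sketch of the "adversarial example" idea the report cites, using a toy logistic-regression classifier in Python. The model, weights and numbers are all invented for illustration and come from neither the report nor any real system; the point is simply that a small, deliberate nudge to an input can flip a model's decision.

```python
import numpy as np

# Toy "trained" logistic-regression classifier. The weights are
# invented for this sketch, not taken from any real system.
w = np.array([2.0, -1.0, 1.5, -0.5, 1.0])
b = -1.0

def predict(x):
    """Return the model's estimate of P(class = 1) for input x."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A legitimate input the model classifies confidently as class 1.
x = np.array([1.0, -1.0, 1.0, 0.0, 1.0])
print(f"clean input:       P(class 1) = {predict(x):.3f}")  # ~0.989

# Fast-gradient-sign-style perturbation: for a linear model, the
# gradient of the decision score with respect to x is just w, so
# stepping each feature against sign(w) does the most damage for a
# given per-feature budget.
eps = 1.0  # attack budget; exaggerated here so the toy example flips
x_adv = x - eps * np.sign(w)
print(f"adversarial input: P(class 1) = {predict(x_adv):.3f}")  # ~0.182
```

Data poisoning, the report's other example, works from the opposite end: rather than perturbing inputs at prediction time, an attacker corrupts the training data so the model learns a faulty decision boundary in the first place.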
It continues: "Likewise, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles or holding critical infrastructure to ransom. The rise of autonomous weapons systems on the battlefield risk[s] the loss of meaningful human control and present[s] tempting targets for attack."
There will be mind games, too.
"Detailed analytics, targeted propaganda, and cheap, highly believable fake videos present powerful tools for manipulating public opinion on previously unimaginable scales. The ability to aggregate, analyze and act on citizen's information at scale using AI could enable new levels of surveillance, invasions of privacy and threaten to radically shift the power between individuals, corporations and states."
Seán Ó hÉigeartaigh of Cambridge's Centre for the Study of Existential Risk was one of the report's authors.
"Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to ten years," he said.
"We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe."