Written by Vicky Desjardins on 4 August 2023

Oppenheimer’s lesson: The road to hell is paved with good intentions

Oppenheimer opened in theaters last week to significant success at the box office. The film follows Oppenheimer and a group of physicists, including Enrico Fermi, Richard Feynman (and his famous bongos), James B. Conant, and many more, as they work together to create the atomic bomb. The bomb was meant to be used against the Nazis but was ultimately dropped on Japan. Throughout the film, the scientists begin to doubt whether they should keep building the bomb once the Nazis have been defeated. Oppenheimer defended continuing the project, saying, "They won't fear it until they understand it. And they won't understand it until they've used it." Now that we have seen it, understood it, and feared it, what now?

Every invention can be used maliciously when individuals want to adapt it for their own ends. There is a reason the expression "the road to hell is paved with good intentions" is so common. In all types of research, we often wonder how far is too far. Oppenheimer's atomic bomb is an example of going too far and being unable to put the genie back in the bottle. Japan is still living with the consequences of that bomb, dropped over seventy years ago. The question remains: is AI following the same path as the atomic bomb? Sure, AI and AI-powered tools are still controllable and researchable, but what happens when they go into the wild?

AI opens the possibility of broadening research in medicine, physics, and other fields. Before the atomic bomb was the atomic bomb, it was a research project by university researchers. Today, most university researchers need ethics clearance for any research involving human subjects. A university ethics board would never have approved the atomic bomb; the impact on human lives would have been far too great. Ethics keeps scientists from going too far and risking human well-being. Some of the most extraordinary criminology research was conducted before ethics approval was required for experiments. Although ethics review is often viewed as a pain, we should never go back to traumatizing human lives for research. But AI is not human, so where do ethics fit in? Just because research can be done does not mean it should be. There has been significant progress in the use of artificial intelligence, but there are also malicious uses.

AI is now out in the wild. When ChatGPT came out, everyone was excited about the tool's potential. The honeymoon phase did not last very long, as more and more people became worried about their jobs. Screenwriters, marketers, programmers, and workers in other fields started to worry about what would happen to them if ChatGPT took over. Surely we are far from that reality. Or are we?

AI gives threat actors the upper hand, and defenders often do not have time to catch up. Threat actors have already entered the scene. Some cyberattacks crafted with ChatGPT evade the defenses in place; some phishing emails are so well written that security tools such as EDRs, like humans, fail to recognize them as phishing. ChatGPT-like tools such as "FraudGPT" and "WormGPT" have emerged to help threat actors launch cyberattacks even if they lack the technical skills to do it the old-fashioned way. The creators of these criminal versions of ChatGPT could adapt the tool quickly by removing its safeguards. AI-based tools are lowering the technical bar and enabling more potential threat actors to carry out illegal activities. We could soon face a new crop of criminals who can pull off serious crimes with little skill.

One of the most significant barriers to crime is the inability to commit it; for example, not knowing how to build a bomb. In an online setting, a lack of coding skills can stop many motivated people from committing online crimes. If the barrier that stopped motivated people no longer exists, the number of potential actors increases. Online crimes are arguably easier morally than offline crimes, as actors are far from their victims and cannot see the impact of their actions. They will use rationalizations such as "I only wrote a few lines of code" in defense of their actions. So what happens when any motivated actor can simply type into a malicious AI-powered tool, "write me code to bypass EDR X," then copy and paste the result for a successful attack? What happens when employees who are let go are furious and want revenge? How soon before we are facing a Gotham-like online city?

Was any of this intended when artificial intelligence was first conceived, worked on, and finally created? Of course not, but good intentions do not matter in the end.


Learn more about Cyber Threat Intelligence
