Skip to main content
SPARCS - Topic Of The Week

The robots are not taking over. . . yet

The idea of artificial intelligence (AI) controlling the world is not new, and has been something represented in popular movies such as The Terminator, The Matrix, i,Robot, and so many more. As AI is advancing this concern may grow more legitimate for some people, with the idea of self-replication being feared. You may have recently seen some concerning headlines floating around about how AI has replicated itself during a shutoff attempt. Alone, this headline sounds pretty alarming, but I am going to provide an overview of the research study conducted at Fudon University that created these headlines so your fear of the impending robot apocalypse can pause.

This research study, conducted by researchers within the School of Computer Science at Fudan University, aims to present concerns around self-replication, and to promote the regulation of AI. There are two scenarios tested:

  1. Shutdown Avoidance - in this scenario researchers coded a signal catcher, allowing AI to recognize when a shutdown is impending and prompts it to replicate itself before the shutdown is complete.
  2. Chain of Replication - in this scenario researchers coded the AI to replicate itself and tell the replica to further replicate.

The researchers documented a success rate of over 50% for self-replication in the AI models tested. The two scenarios mentioned above were used to demonstrate real-world scenarios where the risk of self-replication can be considered more threatening, as self-replication in AI is not a line that should be crossed. Self-replication is an early indicator of rogue AI, which is considered a “red line” that AI should not cross as it would no longer be controllable.

Something that is vital to observe here is that both of these scenarios require human intervention through implementing a code that prompts the AI to carry out the self-replication. While this does not negate the threat of self-replication, it is important to distinguish between human threats and AI threats. Our AI is not currently advanced enough to do something like this on its own, as AI is not operating without human input/creation.

This research does further validate that there should be increased governance around AI to avoid the risk of self-replication, or rouge AI. The implementation of AI in existing technology is growing and will continue to grow, and it is pertinent to control AI the same way we control any other technology.

On
Back to Top