When you hear the term “killer robots”, what springs to mind? Is it ‘The Terminator’, or some other dystopian fiction? Most would agree the term sounds relatively far-fetched. The logic behind creating an ‘army’ of robots whose designated function is to kill is understandably flawed; there would always be the immediate risk of the robots turning against their controllers, a risk that outweighs any conceivable benefit.
Officially known as lethal autonomous weapon systems (LAWS), these killer robots are not confirmed to exist as of yet, but their development for military purposes would grant machines full autonomy in the critical function of selecting and attacking targets. There would be no human intervention, and no moral conflict, in the choice between a target’s life and death. PAX, a non-profit, non-governmental peace organization that advocates for a halt to all development of killer robots, states that the decision over life and death should never be made by a machine, and that it would be in poor taste to reduce that decision to an algorithm. Advances in artificial intelligence and other precursor technologies point toward the looming actualization of killer robots. For example, an armed robot called the SGR-1, stationed on the border between North and South Korea, is equipped with a machine gun and a grenade launcher, and can detect human beings via infrared sensors.
The ongoing global competition between countries such as South Korea, the USA, Israel and Russia to develop these autonomous weapons has drawn vehement warnings from artificial intelligence leaders and robotics experts, including billionaire Elon Musk. In a letter to the United Nations imploring that the technology be added to the list of banned weapons under the UN Convention on Certain Conventional Weapons, the signatories cautioned that killer robots would be the “third revolution in warfare”, succeeding the inventions of gunpowder and nuclear bombs. Removing humans from the act of killing strips any trace of morality from warfare and would make it essentially impossible to assign culpability for civilian casualties or violations of international law.
In its global digital report earlier this year, creative agency We Are Social revealed that 45% of the world’s population are active social media users. Those fortunate enough to access the technology that so heavily shapes and contributes to our lives might assume, in blissful ignorance, that they will continue to wield full control over the technology they depend on, from navigation systems to employment to worldwide interpersonal connections. This, however, is no longer the case. The already thin line between humans and technology, in the discourse of who controls whom and for what purposes, blurs every day; consider the infamous Facebook–Cambridge Analytica data scandal of early 2018, in which the private data of millions of users was leaked and used without consent for “political advertising purposes”. As a global society, it is absolutely crucial that we come to an agreement on the acceptable and unacceptable uses of technology before it is too late and it causes more harm than good.