
Killer Robots and the Blame

The invention of new automated weapons and robots might be cause for celebration in the military, but it has raised many ethical and safety concerns for the public, along with questions about the differences between human-controlled and automated weapons. Although both types of weapons can be destructive, people seem to fear automated weapons more than human-controlled ones.

That fear may arise because the human factor, and human compassion, are subtracted from the equation. Many argue, however, that humans wielding weapons can be just as lethal as robots on the battlefield when they intend to harm someone. Regardless, people deem killer robots more dangerous than human-operated weapons because they operate autonomously. Michael Robillard, by contrast, argues in "The Killer Robots Are Us" that humans control the programming and the entire process and devise the strategy, so banning automated weapons would do nothing to create peace. He claims that such a ban shifts the blame onto the robots and away from the humans who created them and encoded their intentions in them. Although Robillard correctly draws attention to the programmers who create these robots and automated weapons, he treats the robots and the institutions behind them as functionally identical, and he assumes the robots will simply complete their tasks according to deontological principles. It is a fallacy, however, to equate the moral status of humans and robots, and to assume that killer robots will be used only for the military purposes their developers intended. Robillard further argues that because robots manifest the interests of institutions, it is the institutions, not the robots, that must be blamed for human rights violations.

Robillard rightly argues that humans develop these robots, and that the robots represent the plans and military strategies of the countries where they are made. Programmers encode the strategies and intentions of military institutions into the machines, so whatever the robots do represents the intentions of those institutions and of the people who programmed them. The robots themselves cannot be deemed ethical or unethical: they do what they are programmed to do, and they are more efficient than humans at fighting wars and saving the lives of army personnel. They have no agency to decide whether to fight, but once put to work they do their jobs without weighing consequences or intentions. Like a strict deontologist, a robot completes its task by following orders, unconcerned with intentions or outcomes and without fear of the consequences. Robillard's argument thus implies that robots complete tasks exactly as they are programmed to; they do not alter strategies or consider the consequences of their actions, because they are made that way, and the coded commands are the only instructions they follow. Yet people fear them, treating the robots as evil and as a path to the end of the world. In reality, the people who programmed them must bear the criticism, but the current push to ban robots and automated weapons dwells on the lethality of the machines while ignoring the human contribution to it.

The proposed ban focuses on the automated features of robots and weapons rather than on the programs that institutions have deliberately put in place to destroy their enemies. Robillard therefore argues that the entire debate over a ban is misguided: if it is wrong to use automated machines to inflict terror or kill the enemy, then killing the enemy with human-operated weapons must be equally wrong, since both methods aim at killing people. There is no reason why humans killing other humans with non-automated weapons should be justified while the same action by automated weapons is condemned as cruel and unethical. Consequently, humans must change their strategies of war, and everyone must contribute to that change, instead of banning robots that are a mere manifestation of human planning and war strategy.

Although the author rightly observes that humans are involved in the programming of robots, it is a fallacy to equate the potential and the ethics of the two. Automated machines follow only the commands written into their programs, regardless of whatever situation arises during a war or conflict. Weapons with human operators are easy to control; robots, by contrast, follow instructions that were written in advance by someone sitting in front of a computer analyzing earlier data. Automated machines and robots cannot account for the situation on the ground, and so they might harm people rather than serve them. Comparing human judgment with automated computer instructions is therefore a fallacy. At first glance, the actions of humans and automated machines in war might look similar, but that does not make them similar or identical, as the author assumes. Pre-written orders may not fit the actual situation of a war. Automated weapons might be efficient at hitting their targets, but that efficiency is itself the problem, because killing is not always required. Consider, for instance, a person who sets a timer to take a photo on her phone but sneezes at the last moment, ruining the picture: once the process begins, it cannot be stopped or corrected. Automated weapons pose the same problem, except that a ruined photo can be deleted and retaken, whereas lost human lives cannot be retrieved. Human-controlled weapons therefore do not carry the same ethical risks, since a person can stop after firing two or three bullets, but automated robots, once launched, are unstoppable. For this reason, automated weapons have different ethical implications than non-automated ones.

The second fallacy in his argument is the assumption that because programmers and military strategists created these weapons for use in war, they will be used only according to those plans. Automated weapons were made by strategists and the military for war, but in reality they can be turned against society and threaten people's security if they fall into the wrong hands. It is problematic to assume that automated weapons will be used only in warfare, and that people with bad intentions, or terrorists, will refrain from using them simply because the programmers made them for the military. Even though the weapons are intended for military use and the programmers have configured them accordingly, there is no guarantee that everyone will respect the programmers' intentions. Legitimizing automated weapons simply because humans made them is therefore not a valid argument for justifying these deadly killer robots.

Furthermore, the author's focus on the intentions of the programmers and the institutions overlooks the fact that it is the rights of humans, not of machines, that are at stake. Automated machines and killer robots might be useful in war, as the op-ed suggests, but they raise serious human rights issues. Wars are infamous for the human rights violations and atrocities they inflict on the people of the countries in conflict, and automated killer robots increase the intensity of those violations. The machines may lack the agency to decide, and humans may bear the blame for developing them, but such concessions do not reduce the level of human rights abuse that the technology causes. Refusing to blame the machines for human rights violations, and attributing them instead to human actions and intentions, does not erase the fact that people are killed and tortured. Moreover, robotic technology has intensified the violence by making it more efficient: once a target is selected, these robots do not make mistakes. They kill with accuracy, and that accuracy, combined with the removal of human intervention from the equation, is precisely what frightens people.

To conclude, Michael Robillard argues in his opinion article "The Killer Robots Are Us" that because killer robots and automated weapons are made by humans, they represent human and institutional intentions to kill; therefore, by banning automated weapons, people are trying to blame the weapons themselves for violence and human rights abuses that actually manifest the intentions of the developers. Although the author is right that the robots represent human intentions, namely the intention to use them in war, some people can use them outside of wartime. Moreover, automated weapons allow no human intervention once a target is selected, which increases the intensity of the violence. They intensify violence at a time when violence is already prominent, creating more problems than they solve. Hence, they should be banned.
