Academic Master

Killer Robots and the Blame

It might seem a time for the military to celebrate as new automated weapons and robots are invented, but these inventions have raised many ethical and safety concerns among the public, along with questions about the differences between human-controlled and automated weapons. Although both types of weapons can be destructive, people seem to fear automated weapons more than human-controlled ones, perhaps because the human factor and compassion are subtracted from the equation. However, many argue that humans using weapons can be as lethal as robots on the battlefield, or whenever they intend to harm someone. Regardless, people deem killer robots more dangerous than human-operated weapons because they operate automatically. Conversely, Michael Robillard argues in "The Killer Robots Are Us" that humans control the programming and the whole process and devise the strategy; therefore, banning automated weapons would not help create peace. Moreover, he claims that such bans put the blame on the robots and lift it from the humans who created them and put their intentions into them. Although Robillard correctly draws attention to the programmers who create these robots and automated weapons, he wrongly assumes that the robots and the institutions behind them are functionally identical, and that because robots complete their tasks on deontological principles, they raise no new moral questions. It is a fallacy to equate the moral agency of humans and robots, and to assume that people will use killer robots only for the purposes the developers intended. Moreover, he argues that the robots manifest the interests of the institutions; therefore, the institutions must be blamed for human rights violations, not the robots.

Robillard rightly argues that humans developed these robots and that they represent the plans and military strategies of the countries where they are made. Programmers encode the strategies and intentions of military institutions into the robots; therefore, whatever the robots do represents the intentions of those institutions and the strategies of the people who programmed them. The robots themselves cannot be deemed ethical or unethical. They do what they are programmed to do, and they are more efficient than humans at fighting wars while saving the lives of army personnel. The robots neither decide nor have the agency to decide whether to fight a war, but once put to work, they do their jobs without considering consequences or intentions. They complete their tasks as a person with a deontological approach would, following orders without concern for intentions or outcomes and without fear of the consequences. His argument thus implies that robots complete tasks exactly as programmed: they do not alter the strategies or weigh the consequences of their actions, because they are made that way. The automated commands in the program are the only orders they follow, yet people are scared of them and consider the robots evil, even a way to end the world. In reality, the people who programmed them must bear the criticism, but the current push to ban robots and automated weapons dwells on the lethality of the machines while ignoring the human contribution to it.

The proposed ban focuses on the automated features of robots and weapons, not on the programs that institutions have intentionally written to destroy their enemies. Robillard therefore argues that the whole debate over a ban is misguided: if it is wrong to use automated machines to inflict terror or kill the enemy, then the use of a weapon by a person to kill the enemy must also be wrong, since both techniques aim at killing people. There is no reason that one human killing another with a non-automated weapon should be justified while the same action by an automated weapon is condemned as cruel and unethical. Consequently, humans must change their strategies of war, and everyone must contribute to that change, instead of banning robots that are a mere manifestation of human planning and war strategy.

Although the author plausibly argues that humans are involved in the programming of the robots, it is a fallacy to equate their capacities and ethics. Automated machines follow only the commands written into their programs, regardless of the situation that might arise at the time of war or conflict. Weapons with human operators are easier to control than robots, which follow instructions written in front of a computer by someone analyzing earlier data. Because automated machines and robots cannot account for the situation on the ground, they may end up harming people rather than serving them. Consequently, comparing human judgment with automated computer skills is a fallacy. At first glance it might seem that the actions of humans and automated machines would be similar in a war situation, but that does not mean they are the same, as the author assumes. Pre-written orders might not fit the circumstances of a battle. Although robots might be efficient at hitting their targets, that very efficiency is the problem, because killing is not always required. For instance, if a person sets a timer to take a picture on her phone but sneezes at the last moment, the picture is ruined. Automated machines pose a similar problem: they cannot be stopped or redirected once they have begun. In the case of the camera, a person can delete and retake the photo, but lost human life cannot be retrieved. A human-controlled weapon does not raise the same ethical issue, because the operator can stop after firing two or three bullets, whereas an automated robot is unstoppable. For this reason, automated weapons carry different ethical implications than non-automated ones.

The second fallacy in his argument is the assumption that because programmers and military strategists intended to create a weapon for use in war, it will be used according to their plans. Automated weapons were made by strategists and the military for wartime use, but in reality they can be turned against society and threaten people's security if they fall into the wrong hands. It is problematic to assume that automated weapons will be used only in warfare and that people with bad intentions, or terrorists, will refrain from using them simply because the programmers built them for the military. Although automated weapons are intended for military use and programmed accordingly, there is no guarantee that everyone will respect the programmers' intentions. Therefore, legitimizing automated weapons because humans made them is not a valid argument to justify deadly killer robots.

Furthermore, the author's claim that the intentions of the programmers and the institution are what is wrong, and what violates human rights, rather than the automated machines themselves, has its own problems. Automated machines and killer robots might be useful in war, as the op-ed suggests, but they raise serious human rights concerns. Wars are infamous for the human rights violations and atrocities they inflict on the people of the countries in conflict, and automated killer robots increase the intensity of those violations. The machines may lack the agency to decide, and the humans who developed them may bear the blame, but such concessions do not reduce the level of human rights abuse that the technology causes. One may decline to blame the machines for the violations, attributing them instead to human actions and intentions, but that does not erase the fact that people are killed and tortured. Moreover, robot technology has made the violence more intense by making it efficient: once a target is selected, these robots do not make mistakes. They kill with accuracy, and this is precisely why the technology scares people; it is efficient, and human intervention is removed from the equation.

To conclude, Michael Robillard argues in his opinion article "The Killer Robots Are Us" that because killer robots and automated weapons are made by humans, they represent human and institutional intentions to kill. On his view, people who want to ban automated weapons are trying to blame the weapons for violence and human rights abuses that in fact manifest the intentions of the developers. The author is right that robots represent human intentions, namely the intention to use them in a state of war, but some people can use them in times of peace. Moreover, automated weapons do not allow humans to intervene once a target is selected, which increases the intensity of the violence. They intensify violence at a time when violence is already prominent, creating more problems than they solve. Hence, they should be banned.
