
Today, the actual application of force (launching a missile, firing at a target, deploying a mine) is carried out by human operators remotely controlling unmanned vehicle systems, whether terrestrial, aerial, or naval. But as robotic technology advances and makes truly autonomous robots possible, will we allow military robots to acquire targets and make offensive or defensive decisions on their own? Artificial intelligence is still too immature to make this a near-term possibility, and there are ethical and legal concerns that must be addressed before the practical and moral viability of fielding armed military robots in an autonomous role can be established. The risks include unintentional weapon discharge, biased or incorrect target selection, and the possibility of an adversary compromising the weapon system's controls.
Why Greater Autonomy for Military Robots Is Important (and Why We Need to Be Careful)
Pentagon representatives and researchers working on military projects have often asserted that using armed robots under remote human control succeeds on one level: it limits the loss of human life. It fails, however, to meet another objective that military robots should achieve: cutting costs. Not only is a costly piece of hardware deployed, but a human operator must also be paid to run it. Several concepts of operation, some modeled on the Rules of Engagement (RoE) that apply to human soldiers, have been suggested to allow autonomous armed military robots to operate on the battlefield alongside other manned and unmanned systems. One such concept is to develop autonomous armed military robots that automatically identify, target, and neutralize or destroy the weapons used by adversaries, but not the people using those weapons.
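As a rough illustration, this weapons-only concept amounts to a simple engagement filter. The Python sketch below is hypothetical: the type names and the upstream hostility classification are assumptions for illustration, not any fielded system's logic.

```python
from dataclasses import dataclass
from enum import Enum, auto

class TargetKind(Enum):
    WEAPON_SYSTEM = auto()   # gun emplacement, launcher, unmanned vehicle
    HUMAN = auto()

@dataclass
class Track:
    kind: TargetKind
    hostile: bool            # classified upstream by sensors / IFF

def may_engage(track: Track) -> bool:
    """Weapons-only rule: engage hostile hardware, never people."""
    return track.hostile and track.kind == TargetKind.WEAPON_SYSTEM
```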
This runs the risk of a situation in which a robot, by not targeting human opponents, allows damage to itself or to other military robots, and/or loss of life on its own side. Asimov's laws of robotics would appear unbroken only if the robot eliminated human threats to itself and to soldiers on its own side; the twist is that, to avoid harming humans through inaction, the robot would first have to deliberately break the First Law's command that a robot may not harm a human being. As a fall-back, researchers propose that in such a situation the robot be taken over by human operators, temporarily reducing it from an intelligent robot to a mere remotely operated fighting vehicle. Another proposal is to arm military robots with lethal weaponry for destroying structures and opposing unmanned systems, while also equipping them with non-lethal incapacitating weapons for use against human targets.
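The dual-armament proposal boils down to selecting an effector by target type. A minimal sketch, again with hypothetical names chosen purely for illustration:

```python
from enum import Enum, auto

class TargetKind(Enum):
    STRUCTURE = auto()
    UNMANNED_SYSTEM = auto()
    HUMAN = auto()

class Effector(Enum):
    LETHAL = auto()       # e.g. cannon or missile
    NONLETHAL = auto()    # e.g. incapacitating agent, net, dazzler
    NONE = auto()

# Lethal force is reserved for hardware; under this proposal,
# humans only ever face incapacitating weapons.
EFFECTOR_FOR_TARGET = {
    TargetKind.STRUCTURE: Effector.LETHAL,
    TargetKind.UNMANNED_SYSTEM: Effector.LETHAL,
    TargetKind.HUMAN: Effector.NONLETHAL,
}

def select_effector(kind: TargetKind) -> Effector:
    return EFFECTOR_FOR_TARGET.get(kind, Effector.NONE)
```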
The machines could be designed with various quickly programmable levels of autonomy so that they can switch among operational modes in accordance with the situation at hand.
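Combining those switchable autonomy levels with the human-takeover fall-back described above, one hypothetical arrangement is a small mode machine. The mode names and transition rules here are assumptions sketched for illustration, not a description of any real architecture.

```python
from enum import Enum, auto

class Mode(Enum):
    FULL_AUTONOMY = auto()    # robot selects and engages targets itself
    SUPERVISED = auto()       # robot proposes actions; operator confirms
    REMOTE_CONTROL = auto()   # operator drives; robot is a plain unmanned vehicle

class CombatRobot:
    def __init__(self) -> None:
        # Start conservatively, with a human in the loop.
        self.mode = Mode.SUPERVISED

    def operator_takeover(self) -> None:
        """The fall-back from the text: a human seizes control,
        temporarily demoting the robot to a remotely operated vehicle."""
        self.mode = Mode.REMOTE_CONTROL

    def release_to_autonomy(self, operator_confirms: bool) -> None:
        # Restoring autonomy is gated on an explicit human decision.
        if operator_confirms:
            self.mode = Mode.FULL_AUTONOMY
```

The design choice worth noting is the asymmetry: dropping to human control is unconditional, while granting autonomy back requires explicit operator confirmation.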