Friday, September 16, 2011

Armed UAS/UAV Targeting "AI Singularity" (Not the Sci-Fi Version)

In 2010, an autonomous rotorcraft UAS left its programmed course and flew into restricted airspace over Washington, DC, after the operator lost contact with it or the aircraft lost its GPS signal. A number of individuals, from conspiracy theorists to rational scientists, are concerned about AI singularity: informational, mechanical, and SMART systems becoming self-aware.
Though mechanically inclined, I am by no means an expert in the field of computers or mechanics. That being said, I could be completely wrong in my methodology and thinking on this subject. I will, however, convey my thoughts on why it is important to keep a human "in the loop" with any SMART system, Human Machine Interface (HMI), or Human Adaptive Mechatronics (HAM) system. In today's informational age, there are a number of conspiracy-theory and doomsday forums in which AI singularity is a hot topic. Ignorance plays a part in many, though not all, of the perspectives offered by people giving their "two cents." The rapid pace at which technology is progressing frightens people, many of whom are not well informed. However, there have been incidents whose outcomes could have been far worse, and those incidents lend a degree of plausibility to the concern over AI singularity.
I do not believe that informational/mechanical systems will become "self-aware" to the degree people have seen in Hollywood pictures or urban legends. I believe that AI/SMART systems may act or react in an adverse manner, but it would be the result of segments of informational code entered incorrectly into the system, retrieval of that incorrect code producing unwanted actions, or failure of the system to differentiate between groups of code: for example, the code that describes actual targets versus the code that describes friendly non-combatants. How will the system associate segments of hostile and friendly code to local nationals on fields of combat, or even to the American populace when training with such systems is conducted? This is exactly why keeping a human in the loop is required.

A human mind can discern aggressive body language. We think in a rational manner, taking risk mitigation and collateral damage into consideration before reacting to a hostile threat. Or, as Soldiers, we can simply change our posture at the first signs of perceived hostile intent; perhaps that more aggressive posture will be enough to deter an individual from escalating from suspected hostile intent to a hostile act. This works well face to face, but not always, and certainly not with an autonomous UAS. With a UAS it is one-sided. The upper hand in target discrimination belongs to the UAS operator, who is the "human in the loop." That being said, why would the decision of life and death, the "upper hand," be left to an autonomous computer system? An aggressive posture for the UAS is merely its presence. A combatant may choose to transition from hostile intent to an outright hostile act; after all, he knows the aircraft is unmanned and has limitations. Insurgents are smart enough to stay out of the line of sight of the UAS optics at low altitude, or to act like the local populace and "blend in." In Iraq, insurgents were smart enough to step inside a building when they heard the power plant of a UAS and wait until it completed its pass before continuing with their activities.
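To make the "human in the loop" argument concrete, here is a minimal sketch of how an engagement decision could be gated on operator confirmation. It is only an illustration: the Track class, the nomination threshold, and the confirm_engagement() prompt are my own assumptions, not any fielded system's design. The point is that the software may nominate a track as hostile, but only a human operator authorizes a release.

    from dataclasses import dataclass

    @dataclass
    class Track:
        track_id: int
        hostile_score: float   # classifier confidence, 0.0 to 1.0
        description: str

    NOMINATION_THRESHOLD = 0.8  # illustrative value, not a real system parameter

    def nominate_targets(tracks):
        # The autonomous system may only *nominate* tracks it believes are hostile.
        return [t for t in tracks if t.hostile_score >= NOMINATION_THRESHOLD]

    def confirm_engagement(track):
        # Human-in-the-loop gate: a trained operator reviews the sensor feed
        # and explicitly authorizes or denies each nomination.
        answer = input(f"Track {track.track_id} ({track.description}): engage? [y/N] ")
        return answer.strip().lower() == "y"

    def engagement_cycle(tracks):
        for track in nominate_targets(tracks):
            if confirm_engagement(track):
                print(f"Engagement authorized for track {track.track_id}.")
            else:
                print(f"Engagement withheld for track {track.track_id}.")

    if __name__ == "__main__":
        engagement_cycle([
            Track(1, 0.91, "adult male, raised elongated object"),
            Track(2, 0.55, "vehicle, normal traffic pattern"),
        ])

Notice that in this arrangement the autonomous part of the system never has the "upper hand": the highest-confidence nomination still waits on a human decision before anything happens.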
Autonomous systems only retrieve and associate informational code to a target, or to what "they" think is a target, based on a summation of information entered into the system prior to mission execution. For a UAS to be truly autonomous, that informational code must be entered into and associated with the UAS targeting system. Even if the system can "learn," there must be a starting point, a foundation of code entered into the system. In turn, that code is retrieved and associated by the system with a hostile target, and the system responds accordingly, which is yet another grouping of code for the correct response. The issue is: what if the system associates hostile code with the motion of a raised hand, or with an object in a raised hand? How will the system tell the difference between a man with a weapon and a man with a shovel who only intends to carry out his trade of farming? How will the system prevent the association of hostile code with US ground forces and their own weapon systems? To date, armed UAS aircraft are controlled by human pilots, but there is hope and effort for UAS to become completely autonomous. Programs can be loaded into a UAS, and the UAS will take off on its own and execute that mission. Autonomous systems that can learn do so by recording video, thermal, and topographical data from their surroundings, "learning" as they associate and categorize the information obtained. From what it learns, such a system can even predict an outcome based on the data it has collected and retrieves. Still, what if the system is wrong? That is what concerns many people.

Keeping a human in the loop will help prevent circumstances that are not hostile from being treated as hostile when utilizing autonomous UAS targeting systems. The human mind thinks not only with consideration of risk mitigation and collateral damage; a human can also reason, taking into account the social effects an action may have when acting or reacting to hostile intent or threat. As a result of our reasoning, we have a larger conceptual view of the possible effects of our actions, and we can adjust our response with measured force and lethality, killing only the enemy and not civilians, though in all conflicts there will be collateral damage in both property and life. Autonomous UAS are given a mission through a program and guided by satellites, but the question remains: what if the UAS, for whatever reason, flies off course and gets it wrong? It has occurred before, though thus far I have never heard of an incident in which people were harmed by an autonomous UAS. Since it is possible for a UAS to have "glitches" with its navigational GPS, could a UAS also experience a malfunction with the GPS used specifically for targeting? With human intellect, there are no codes to distinguish hostile from friendly except what we gather from our environment when we feel threatened. It is the instinctive "fight or flight" reaction. We simply know when we see or even sense an impending threat. We know our decisions must be addressed with a moral approach to what is right and wrong. Consideration is given to human dignity and human rights, to the entitlement to exist in peace. There can be no informational code for this. What we recognize as peaceful existence, a SMART system only sees as a different grouping of code, apart from the codes associated with hostile targets.
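The shovel-versus-rifle problem can be shown with an equally small sketch. The rules and feature names below are hypothetical; they simply illustrate how a system limited to the "code" loaded before the mission can associate the same hostile label with two very different men, because the observable features it was given do not separate them.

    # A toy rule-based association of pre-loaded "code" to observations.
    # The rule is intended to catch a shouldered rifle, but a shovel carried
    # over the shoulder produces exactly the same features.
    HOSTILE_RULES = {
        ("raised_object", "elongated"): "hostile",
    }

    def classify(observation):
        # Return whatever label the pre-loaded rules associate with the observation.
        key = (observation["posture"], observation["object_shape"])
        return HOSTILE_RULES.get(key, "unknown")

    rifleman = {"posture": "raised_object", "object_shape": "elongated"}
    farmer   = {"posture": "raised_object", "object_shape": "elongated"}  # shovel, not a rifle

    print(classify(rifleman))  # hostile
    print(classify(farmer))    # hostile -- same features, wrong conclusion

No amount of additional rules of this kind captures intent; that judgment has to come from the human in the loop.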
There is no way for the system to associate code with acts of peace and normal, routine existence. The system does not reason about why routine acts are important, or why they are key to good social order. The need to understand and preserve social order is something only the human mind can reason about and make the correct choices for in order to maintain a peaceful existence. Thus, utilizing armed UAS while keeping a "human in the loop" is pragmatic: a safe and predictable union of the human mind and our informational/mechanical devices, at least when compared with the theorized outcome should AI singularity become reality while armed UAS are not coupled with human awareness. Fielding lethal instruments of war without the human mind will not lead to any form of peaceful social order. Perhaps nothing would happen. But what if we are wrong? Giving the choice of life and death to a SMART system removes the human desire to prevent war, even if only in one incident. The only chance to prevent an unwanted act that only humans will regret is gone, and we no longer have control for a period of time, because the choice was made to field a completely autonomous armed UAS. Any such act must then be dealt with outside the autonomous system.
Like all other Soldiers, I am against war, but I also recognize that war is necessary to preserve peace, good social order, and the benefits and human qualities of life that are taken by terrorists or any other armed enemy. That being said, we should always keep our skills to wage war honed and quick to employ in order to prevent total social breakdown. This includes our instruments of war, but a human mind should always be coupled with any weapon system. Many of you may disagree with my thoughts on war, believing it should never happen, but the problem is that it does happen, and it is often unavoidable, because there are two sides to the human mind: that which is good, and that which takes pleasure in the demise of others. Pray to God that the instruments to wage war stay in the hands of those who protect the peaceful existence of our nation and those of our allies. At least pray that we always have the bigger stick. That includes technology.
D.A. Hickman
