Recently, many articles and references have been published about the role of artificial intelligence (AI) in military systems, most of them doing injustice to the truth. In this article, we will present some of the current reality, within the limits of classification.
AI appears in military applications in two main categories:
Utilities (helping tools) in technical and tactical work processes: OCR, retrieval, automatic translation, information extraction and structuring, speaker recognition, speech recognition and speech-to-text (STT) conversion, area scanning and change detection, big data processing, and more.
Utilities (helping tools) for improving decision-making processes, though not for actual decision-making.
Naturally, most of the interest is focused on the use of artificial intelligence in decision-making processes within the defense establishment and its daily work. These have developed greatly in recent years, with the high-performance computing (HPC) revolution and the ability to process "big data" quickly and efficiently.
The use of AI in IDF systems is accompanied by very careful operating methodologies and it is very important to maintain this approach.
The current orders and procedures state unequivocally that artificial intelligence applications will not be used in cases involving questions of life and death without human involvement.
In other words, a decision to attack is not made and will probably not be made in the near future solely by AI-based algorithms. The decision will be made only by competent human entities, including close legal support, just as is done, for example, in AI applications that involve human life in medicine.
Decision makers certainly use AI to extract the best information and recommendations, including presenting past precedents, in order to make the best informed decision, but while it sounds tempting, AI systems are not designed to replace defense and legal professionals.
For example, in the “Lavender” system, operated by the IDF, which became famous during “Iron Swords,” the AI layer does add certain considerations to the intelligence information filter, but does not make decisions.
Beyond scientific research, the use of AI tools has led to the development of broad evaluation methodologies to ensure product accuracy relative to a defined performance threshold, so that the systems can be used operationally. The process includes examining the stability of the algorithms over time and under different conditions, by creating random samples and developing many regulatory mechanisms and approvals, before the AI system is authorized to present recommendations for human use.
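The stability check described above can be sketched in a few lines. This is an illustrative toy only: the function name, the threshold values, and the even/odd "model" are invented for the example and do not represent any actual defense evaluation pipeline.

```python
import random

# Hypothetical sketch: verify that a classifier's accuracy stays above a
# defined performance threshold across many random test samples, as a crude
# proxy for "stability under different conditions."

def evaluate_stability(model, dataset, threshold, n_trials=100, sample_size=50, seed=0):
    """Return True only if accuracy meets `threshold` in every random sample."""
    rng = random.Random(seed)
    for _ in range(n_trials):
        sample = rng.sample(dataset, sample_size)
        correct = sum(1 for x, label in sample if model(x) == label)
        if correct / sample_size < threshold:
            return False  # failed at least one random sample
    return True

# Toy usage: a trivial "model" that labels numbers as even (0) or odd (1).
data = [(i, i % 2) for i in range(1000)]
print(evaluate_stability(lambda x: x % 2, data, threshold=0.95))  # → True
```

A real methodology would of course add held-out test sets, drift monitoring over time, and the layered approvals the article mentions; the point here is only the shape of the threshold-and-random-samples check.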
Verification and a decision that would result in an attack would in any case include at least two separate indications from independent sources and would be carried out only by a human element.
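The two-independent-sources rule is essentially a gate in front of any human decision. A minimal sketch, with entirely invented function and source names, might look like this:

```python
# Illustrative gate: refuse to forward a recommendation unless at least two
# indications come from distinct (independent) sources, and even then the
# final decision rests with a human. All identifiers here are hypothetical.

def gate_recommendation(indications, human_approved):
    """indications: list of (source_id, finding) tuples."""
    independent_sources = {source for source, _ in indications}
    if len(independent_sources) < 2:
        return "rejected: fewer than two independent sources"
    if not human_approved:
        return "pending: awaiting human decision"
    return "forwarded for human-executed action"

print(gate_recommendation([("source_a", "x"), ("source_a", "y")], True))
# → rejected: fewer than two independent sources
print(gate_recommendation([("source_a", "x"), ("source_b", "y")], False))
# → pending: awaiting human decision
```

Note that the machine can never return an "execute" outcome on its own: the `human_approved` flag is an input it does not control, mirroring the article's claim that execution is carried out only by a human element.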
An AI product that becomes accessible to the operational consumer also undergoes an "explainability" process to validate its use.
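One common way to make a scoring system explainable is to hand the reviewer not just a score but the ranked factors behind it. The sketch below is a generic illustration of that idea; the weights and factor names are invented and imply nothing about any real system.

```python
# Minimal explainability sketch: every recommendation carries the per-factor
# contributions that produced its score, so a human can see *why*.

def explainable_score(factors, weights):
    """Score an item and return (score, contributions ranked high to low)."""
    contributions = {name: factors.get(name, 0.0) * w for name, w in weights.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    return score, ranked

score, why = explainable_score(
    {"keyword_match": 0.9, "source_reliability": 0.6},
    {"keyword_match": 0.5, "source_reliability": 0.3, "recency": 0.2},
)
print(round(score, 2))  # → 0.63
print(why[0][0])        # → keyword_match
```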
AI products can also be divided according to their resolution of use. The first layer deals with a "single piece of information" (in very large quantities, which is one of AI's main enablers) in order to extract most of its information components for future retrieval. This kind of usage has led to various specializations by information type (audio, image, text, etc.).
The second layer deals with the “intelligence entity”, which combines many pieces of information, and the AI algorithms make it possible to generate aggregate insights from them.
The third (futuristic) layer deals with broad events that connect a huge number of “intelligence entities” in order to construct the capabilities to predict events in which the various entities may participate in the future.
The three-layer model is suitable for defense intelligence systems, but also for many civilian systems in medicine, science, transportation, agriculture and more.
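The three layers above can be pictured as a simple containment hierarchy: items feed entities, and entities feed events. The class names below are invented purely to make the structure concrete.

```python
# Schematic of the three resolution layers: single pieces of information,
# entities that aggregate many items, and events connecting many entities.
from dataclasses import dataclass, field

@dataclass
class InfoItem:                 # layer 1: a single piece of information
    kind: str                   # "audio", "image", "text", ...
    extracted: dict             # components extracted for future retrieval

@dataclass
class IntelEntity:              # layer 2: aggregates items into insights
    items: list = field(default_factory=list)
    def insight_keys(self):
        return {k for item in self.items for k in item.extracted}

@dataclass
class Event:                    # layer 3 (futuristic): connects entities
    entities: list = field(default_factory=list)

e = IntelEntity([InfoItem("text", {"name": "X"}), InfoItem("image", {"place": "Y"})])
print(sorted(e.insight_keys()))  # → ['name', 'place']
```

The same containment pattern (item → aggregate → cross-aggregate prediction) is what makes the model portable to the civilian domains the article lists.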
Implementation of AI
Implementing AI tools in military systems, certainly in an environment that affects human life, is extremely complex and difficult. This is why only a human is designated as the "content expert," and AI software algorithms do not determine anything on their own.
In decision-making systems, the "content experts" are the ones who actually make the decisions, and a long time passes before the AI system's recommendations gain enough trust to become part of the content experts' decision-making process.
Training operational users of machine learning tools requires special knowledge and a long time. In any case, it is important to emphasize again: in offensive military systems, which may affect human lives, there is no decision-making by AI systems. We are very far from that situation.
In civilian systems, one can find applications in which AI systems may decide matters of life and death, certainly indirectly. This is probably one of the main reasons why civilian autonomous systems, such as autonomous vehicles, are entering our lives at a much slower pace than predicted, while other kinds of AI systems are everywhere.
Brig. Gen. (res.) Prof. Jacob Nagel is a senior fellow at FDD and a professor at the Technion. He previously served as Netanyahu’s national security advisor and as head of the National Security Council (Acting). Adv. Jacob Bardugo is a media professional who deals with technology and has served as a senior commentator for many years in radio, television, and press.