The military research agency’s algorithms took down an Air Force pilot in a virtual dogfight last year. In February, the Pentagon’s “mad science” unit tested how they’d perform as a team.

The battle pitted two friendly F-16s against a single enemy aircraft. Each fighter jet was equipped with a gun for short-range engagements and a missile for more distant targets. Colonel Dan “Animal” Javorsek, a program manager in DARPA’s Strategic Technology Office, said testing multiple weapons and aircraft introduced new dynamics to the trials.

DARPA is also assessing how much pilots trust the systems. Javorsek said the agency has installed sensors in a jet to measure physiological responses, such as where a pilot’s head is pointing and where their eyes are moving.

The agency now plans to test the AI on real-world aircraft. To do this, DARPA is creating an aero-performance model of an L-39 jet trainer, which the algorithm will use to make predictions and maneuver decisions. Once the model is complete, the agency will begin modifying the aircraft so the algorithm can control it. The Pentagon plans to test the algorithms in live-fly dogfights in late 2023 and 2024.

Critics, however, have questioned the value of the trials. They note that the rules-based nature of air-to-air combat is a natural fit for algorithmic decision-making, and that the “perfect information” supplied by the simulators isn’t available in the field. And even if the AI worked equally well in reality, there’s been only one dogfight involving a US aircraft in the last 20 years.

A more pressing concern involves the rush to develop autonomous weapons. An AI arms race could encourage countries to cut corners on safety considerations and even trigger an accidental war.

HT – The Drive