A recent United Nations report about Libya’s 2020 civil war has revealed that a military-grade drone may have attacked soldiers without the help of a human.

The drone, per a report highlighted in recent pieces from NPR and the New York Times (among others), is known as a lethal autonomous weapons system (LAWS). It’s said to have been used by forces backed by the Tripoli-based government against opposing soldiers as they attempted to flee in March 2020.

“Once in retreat, they were subject to continual harassment from the unmanned combat aerial vehicles and lethal autonomous weapons systems, which were proving to be a highly effective combination in defeating the United Arab Emirates-delivered Pantsir S-1 surface-to-air missile systems,” said the report, compiled by a panel of experts on Libya.

Back in May, Zachary Kallenborn first reported on this development in a thorough piece published by the Bulletin of the Atomic Scientists, an independent nonprofit organization.

According to Kallenborn, if it were confirmed that anyone had been killed in such an autonomous attack, it would “likely represent an historic first”: the first known instance of AI-based autonomous weapons being used in this manner.

The report in question does not state that anyone was killed by the LAWS. It does note, however, that the device was programmed to attack targets without requiring data connectivity with an operator. “In effect, a true ‘fire, forget and find’ capability,” the report said.

The news coverage surrounding the United Nations report has also spurred some confusion, with several experts noting that the wording used in some sections may have drawn more attention to the report than was warranted.

Regardless, the larger question of autonomous vehicles of this variety being used in a military setting at all—as well as the broader topic of artificial intelligence—remains a central point of debate among those who continue to warn of a potentially bleak future.