Mätfokus

UAV and AI update

A couple of stories about unmanned air vehicles in the war in Ukraine, and a response to the recent open letter on advanced AI from the Future of Life Institute, which has gathered more than 200,000 signatures and urges a six-month moratorium to allow time to develop seemingly much-needed AI regulations.


The war in Ukraine

It has been reported that Ukrainian forces were operating the commercially available Chinese Mugin 5 UAV, presumably for surveillance of Russian forces inside Russian-occupied territory. The Mugin 5 sells commercially for $10,000 to $15,000 and is made by Mugin Limited, based in the port city of Xiamen on China's eastern coast. In a statement posted on the company's website on March 2, Mugin Limited said that it "condemns" the use of its products in warfare and that it stopped selling to both Russia and Ukraine at the start of the war. However, Russian forces claimed in January 2023 that they had shot down one of these Chinese-made UAVs being flown by Ukrainian forces over their territory.

Then, just this week, Ukrainian forces reportedly tracked a slow-moving air vehicle approaching at low level from Russian-occupied territory. After some time they intercepted the UAV, which carried a flashing navigation light, and brought it down from the ground with small-arms fire. The remains of the crashed UAV were found in a forest clearing; a single 44 lb bomb was removed from the wreckage and safely detonated by the Ukrainian team.

Weaponized Mugin 5 following crash in Ukraine forest. (Image: Screenshot from video from Kanal13 Youtube)

Somewhat worse for wear, the Mugin 5 UAV appears to have been held together in places by duct tape and other patches. Is it possible that, having shot down a Ukrainian surveillance UAV, the Russians recovered the remains, crudely restored the unit to flying and navigating condition, and then sent it back to its Ukrainian owners carrying a bomb? Anything is possible in this conflict.

Staying with this conflict and the use of UAVs by both sides, it seems that Australia has come up with a low-cost surveillance UAV that is virtually undetectable, and it is proving quite popular with the Ukrainians. Most defensive detection involves some form of radar scanning, which relies on radar returns bouncing off a flying target. The Australian company SYPAQ, based in Melbourne, has developed the Corvo Precision Payload Delivery System (PPDS): a wax-coated cardboard UAV, held together with elastic bands and glue, but carrying sophisticated guidance and control electronics.
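Why does a cardboard airframe matter for radar detection? The standard monostatic radar range equation shows that detection range scales only with the fourth root of the target's radar cross section (RCS), so a very low-RCS airframe shrinks detection range dramatically. The sketch below illustrates this scaling; the radar parameters and RCS values are illustrative assumptions, not data for any real radar or for the Corvo PPDS.

```python
import math

def max_detection_range(p_t, gain, wavelength, rcs, p_min):
    """Maximum detection range (m) from the radar range equation:
    R_max = (P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * P_min)) ** (1/4)
    """
    return (p_t * gain**2 * wavelength**2 * rcs /
            ((4 * math.pi)**3 * p_min)) ** 0.25

# Hypothetical radar: 1 kW peak power, antenna gain 1000,
# 3 cm wavelength, minimum detectable signal 1e-13 W.
radar = dict(p_t=1e3, gain=1e3, wavelength=0.03, p_min=1e-13)

metal = max_detection_range(rcs=1.0, **radar)       # ~1 m^2, metal airframe
cardboard = max_detection_range(rcs=0.01, **radar)  # ~0.01 m^2, low-RCS airframe

# Range scales with the fourth root of RCS, so a 100x smaller RCS
# only cuts detection range by a factor of 100**0.25 (about 3.16).
print(f"metal: {metal/1000:.1f} km, cardboard: {cardboard/1000:.1f} km")
```

The fourth-root scaling is the key point: detectability falls off slowly with RCS, which is why combining a low-RCS material with low, slow flight profiles is so effective against conventional radar scanning.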

(Image: Screenshot of video posted by 7 News Australia)

SYPAQ developed the Corvo UAV under an AU$1.1 million government contract with the objective of creating a low-cost, disposable UAV to deliver urgent supplies, such as medicines or small-arms ammunition, to the Australian military. Once launched, the Corvo flies autonomously using GNSS guidance, falling back to dead reckoning if the GNSS signal is lost or jammed. Apparently, hundreds of these disposable UAVs have already been shipped to Ukraine.
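The GNSS-with-dead-reckoning-fallback scheme described above can be sketched in a few lines: when no fix is available, the vehicle propagates its last known position from heading and airspeed. This is a minimal illustrative sketch; all function names and numbers are hypothetical, as the actual Corvo flight software is not public.

```python
import math

def dead_reckon(pos, heading_deg, airspeed, dt):
    """Advance a 2-D (east, north) position estimate in metres from the
    last known heading (degrees from north) and airspeed (m/s)."""
    east, north = pos
    heading = math.radians(heading_deg)
    return (east + airspeed * math.sin(heading) * dt,
            north + airspeed * math.cos(heading) * dt)

def navigate(gnss_fix, last_pos, heading_deg, airspeed, dt):
    """Prefer a GNSS fix; fall back to dead reckoning when the
    signal is lost or jammed (gnss_fix is None)."""
    if gnss_fix is not None:
        return gnss_fix
    return dead_reckon(last_pos, heading_deg, airspeed, dt)

# Flying due east at 20 m/s with GNSS jammed for a 10 s step:
pos = navigate(None, last_pos=(0.0, 0.0), heading_deg=90.0,
               airspeed=20.0, dt=10.0)
print(pos)  # estimate drifts about 200 m east of the last fix
```

Dead reckoning accumulates error over time (wind drift, sensor bias), so it is a bridging measure until GNSS reception is restored, not a replacement for it.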

While a surveillance role was originally envisaged in Ukraine, Ukraine's ambassador to Australia reports that "they have been very good at inflicting lots of damage on the enemy." So Corvo UAVs may well already have been weaponized.

Open Letter on AI development

Following the recent open letter, supported by Elon Musk and Steve Wozniak, that proposes a six-month halt to advanced AI development, I was approached on behalf of Professor Ioannis Pitas, director of the Artificial Intelligence and Information Analysis (AIIA) lab at the Aristotle University of Thessaloniki (AUTH) and management board chair of the AI Doctoral Academy (AIDA), who holds somewhat different views.

To further the ongoing discussion, I thought it would be appropriate to give some space to an alternative view on AI development. Here are some paraphrased comments approved by Pitas:

Could AI research be stopped even for a short time? It is doubtful. Further AI progress is necessary for us to transition from an information society to a knowledge society.

Maybe we have reached the limits of AI research carried out primarily by Big Tech, which appears to treat powerful AI systems as black boxes whose functionality may be poorly understood.

It seems that the open letter reflects welcome and genuine concerns on social and financial risk management. Are expensive lawsuits in an unregulated and unlegislated environment inevitable as a consequence of ill-advised AI pronouncements?

However, it is doubtful whether the proposal for a six-month ban on large-scale experiments is the solution. It’s impractical for competitive commercial and geopolitical reasons, with very few benefits.

Of course, AI research can and should become more open, democratic and scientific.

Here are a number of suggested options:

Well, this is a bit of a departure from our usual UAV/AI report, but there does seem to be a growing number of voices calling for some form of AI regulation, and wider discussion might well help this movement reach a conclusion. The U.S. administration appears to be listening: the U.S. Commerce Department has announced that it is seeking input from interested parties on methods to test the safety of AI systems, to ensure that they are "legal, effective, ethical, safe and otherwise trustworthy." To enforce such standards, the department is investigating whether audits and inspections to certify AI systems should be required before their release on an unsuspecting public.

The U.S. Commerce Department is apparently not alone in these concerns: China is also looking to ensure that systems such as Alibaba Cloud's Tongyi Qianwen, a competitor to OpenAI's ChatGPT, are socially beneficial. Meanwhile, following the release of ChatGPT and similar products from Microsoft and Google, awareness has grown of the capabilities of the latest AI tools, which generate human-like text and even new images and video. The UK Department for Science, Innovation and Technology and the Office for Artificial Intelligence, on the other hand, seem to be seeking an approach to regulation that will not restrict AI innovation.
