Ukraine develops AI-powered Air Defence

IMHO: I understand why people are against AI, but we should blame the companies that push AI where it is not needed (like Copilot in microslop) or the “creators” using AI to generate slop.

I see AI just as a tool, like a knife. You can use a knife to cook a salad or use it to carve shit into desks.

This news shows that AI can be useful when it is not used to make slop.


Your tool analogy is sound, but any tool you don’t have 100% control over will yield less-than-ideal results. The real question is whether it will still be worth it now that this precedent is being set (and I’m sure China and others are already setting it in ways we may not know about publicly).


“here is proof AI can be utilised for the military-industrial complex! so it’s not all that bad! AI can be good too. we can let it suck some of our natural resources!” how soulless do you have to be to make this post? haha, this article makes me want to blow my head off onto the carpet


I don’t understand why you’re so upset.

It is only natural that the USA (the world’s innovation leader) and Ukraine (a leader of innovation in Europe) would develop something like this.

Plus, it is air defense, so it is about protecting people, not killing them, unlike some inventions in China.


And you missed my point. I meant that generative AI makes slop and is not very useful. But AI (neural networks) in general can be used for various things like the military, security (Cloudflare uses it), etc.

I meant that we should not blame AI as a tool, but blame the ones who use it to generate slop. Something like that.

Because you’re trying to make some sort of point that AI (which has been in use within sectors like this since as early as the 1980s) is useful, which literally no one denies. You seem to think all AI is made of LLMs and that whatever these turrets are using is identical to ChatGPT.

AI is an umbrella term for many different technologies.

I’m “upset” because you chose, out of literally everything, a literal army project to support your case, the most immoral and unethical thing a human can invent (you can ask Oppenheimer!).

Either way, I have no clue what this post has to do with security or privacy.


What’s actually natural is for a nation mired in a war of attrition, or one busy policing other nations, to PR its weapons business.

Hopefully, this statement isn’t what I think it is, but…


Can’t have digital security without physical security :wink:


Agreed. There’s a very recent article on amateur mathematicians using AI to solve long-standing math problems. A UK-based not-for-profit physics organization also did a survey about the intersection between physicists and AI and found that AI could be beneficial to physics. Lots of things in science benefit from AI (though there’s something to be said about the scientific community developing a reliance on it), and I think those are more worth bringing up than war.

In cases like this, I think it’s better to educate them than to condemn them for their ignorance.

Yes. The main utility of AI is prediction, and its main drawback is hallucination (or confabulation). When an AI outputs something that we deem not grounded in the training data, that is a hallucination. It’s most prominently attributed to LLMs. For example, getting the output “December 26th” from the input “When is Christmas?” despite accurate training data.

Notice that groundless predictions are not the same as bad predictions. Say we have a dataset of multiple different sets of points representing circles. From what we know, the dataset is full of sets of points which accurately represent circles. And say we train an AI on this dataset. We ask the AI to draw a circle (i.e., we ask it to predict a set of points that represents a circle based on its training data), but it draws an oval shape instead of a circular one.

If the probability distribution of that output is flat, we know it is likely a hallucination. A flat probability distribution means every possibility is equally likely to be predicted, so the answer the AI gives is likely confabulated, or made up, insofar as it is choosing a “random” answer. However, if the probability distribution is not flat, and the oval output was found to be more predictable than the others, we know it is just a bad prediction based on training data we assumed to be accurate. Hallucinations reside in the realm of flat probability distributions, where the AI does not know what should come next and instead confabulates an output. This is why the output is not grounded in the training data, and why “hallucinations” as we know them (AKA confabulations, a more accurate term) are ungrounded predictions rather than bad predictions.
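To make the “flat distribution” intuition concrete, here’s a minimal sketch. The numbers, and the idea of scoring flatness with normalized entropy, are purely illustrative assumptions, not how any particular model detects hallucination:

```python
import numpy as np

def flatness(probs):
    """Normalized entropy: 1.0 means a perfectly flat distribution, 0.0 means total confidence."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return float(entropy / np.log(len(probs)))

# Hypothetical next-output distributions over four candidate answers to "When is Christmas?"
grounded    = [0.94, 0.03, 0.02, 0.01]   # peaked: the training data clearly favors one answer
confabulate = [0.26, 0.25, 0.25, 0.24]   # nearly flat: any answer is about as likely as another

print(flatness(grounded))     # close to 0
print(flatness(confabulate))  # close to 1
```

The second distribution is the situation described above: nothing in the training data clearly favors one output, so whatever gets sampled is effectively made up.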

Yes. In the context of AI being a tool, the problems that AI solves will be one of two types, closed and open.

When the problem space for an AI is sufficiently narrow, it’s less likely to hallucinate. These are called closed problem spaces: there are relatively few variables, clearly defined conditions for success and failure, and a limited set of possible outputs. Chess, for example, has a very specific set of pieces. Those pieces have limited ways in which they can move. And there are clearly defined conditions for winning and losing the match. Confabulation is very unlikely to occur in sufficiently closed problem spaces because the outputs that likely lead to winning, or to avoiding losses, are more easily calculable. I.e., flat probability distributions are unlikely because there are more possible states where success can be mathematically calculated rather than randomly guessed by the AI.

But when the problem space for an AI is very big, flat probability distributions are very likely to occur. These are open problem spaces. Here the variables might as well be considered near endless, the exact conditions for success are extremely vague, and the number of possible outputs can get enormous. Human conversations are very open problem spaces, which is why LLMs are the usual archetype accused of hallucinating. Given a chess engine’s input of pieces on a board in the middle of a game, there are very limited paths to winning (outputs), let alone to winning efficiently. Compare that to an LLM given the input “What do you think about digital privacy?”, which has a huge number of possible responses (outputs) and ways to respond “normally” like a human would.

Therefore, if

(a) AI’s utility is predictability
(b) Its drawbacks are confabulations
(c) Closed problem spaces are more predictable
(d) Open problem spaces are more prone to confabulations

then

(e) AI as a tool should be confined to closed problem spaces to maximize its utility
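A rough back-of-the-envelope way to see how much more room for flat distributions the open space above has compared to the closed one (the figures below are illustrative assumptions, not measurements):

```python
import math

def max_entropy_bits(num_outputs):
    """Upper bound on uncertainty at a single decision point: log2 of the number of possible outputs."""
    return math.log2(num_outputs)

# Illustrative, assumed figures only:
chess_moves    = 35        # rough average number of legal moves in a midgame position
llm_vocabulary = 50_000    # typical token vocabulary of a large language model
reply_length   = 100       # tokens in a short conversational reply

print(max_entropy_bits(chess_moves))                    # ~5 bits of uncertainty per move
print(max_entropy_bits(llm_vocabulary) * reply_length)  # ~1560 bits across a short reply
```

The closed space tops out at a handful of bits of uncertainty per decision; the open one has orders of magnitude more room for the model to be uncertain, which is where confabulation lives.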

As discussed above, there are lots of cases where AI is very useful outside of war. And as pointed out before, it has been useful for a very long time.

This stuff leads to nuclear arms races, btw. Nations nowadays hop on up-and-coming technologies and scientific developments because it’s been proven, with the creation of the atom bomb, that technology means power, power means control, and control means either (a) stability (peace) or (b) a monopoly over power (domination).

The funny thing is that without stability (rules, regulations, laws, etc.) the prisoner’s dilemma tends to take hold, but stability cannot take hold if there is a chance of domination.

In other words, without a unifying set of norms for a collective to follow, self-interest becomes the primary norm and the collective becomes only a set of individuals. This is why nations are hoping to be the first to successfully integrate AI into their power mechanisms in the first place, either through surveillance or military power or economic influence. Self-interest develops because there is a lack of international AI regulation/policy. But even if there were an agreed-upon policy (i.e., even if there were stability), nothing logically stops the individuals from still being self-interested (because domination is still on the table). Only when domination is off the table is there stability.
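For anyone who hasn’t seen the prisoner’s dilemma written out, here is a minimal sketch with made-up payoff numbers (the values are illustrative assumptions; only their ordering matters):

```python
# Payoff table: (A's choice, B's choice) -> (A's payoff, B's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # mutual restraint: best collective outcome
    ("cooperate", "defect"):    (0, 5),  # A restrains, B races ahead
    ("defect",    "cooperate"): (5, 0),  # A races ahead, B restrains
    ("defect",    "defect"):    (1, 1),  # arms race: both worse off than mutual restraint
}

# Without an enforceable norm, defecting pays off no matter what the other side does,
# so self-interested players end up stuck at (defect, defect).
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]
```

Swap “defect” for “race to weaponize AI” and the structure is the same: each nation’s dominant move is self-interest unless the possibility of domination is taken off the table.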

As a concrete example, take the nuclear arms race and its modern-day denuclearization. Prior to nuclear bombs, there was no nuclear disarmament treaty or policy: nothing to regulate nuclear bombs on a global, international level, nothing to prevent other nations from developing them, etc. This instability in norms led to a worsening prisoner’s dilemma, which then led to a nuclear arms race. Only after many nations attained the power of nuclear bombs did the means of domination for any individual nation stop. I.e., the only reason we can enforce denuclearization is because no one had the upper hand.

Since AI is new, no one has the upper hand yet, but the difference is that someone still could (i.e., domination is on the table), in the same way that, even though no one had the upper hand prior to nukes, America decided it could have the upper hand and thus created the Manhattan Project.

1 Like

Yeah, pretty much. We’ve been using AI in fields like photography & astrophysics for decade(s), no issue. The problem didn’t emerge until megacorps decided to wager a fifth of the world’s wealth (hyperbole) on rapidly establishing AI products that yield no profits and little net benefit. Economy’s gonna crash. Just my opinion.

Building weapons of war probably isn’t the best example to demonstrate AI’s potential value. But it is absolutely essential in processing the massive volume of astronomical data collected by orbital telescopes, which is my favorite example.
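As a flavor of what that looks like in practice, here is a toy sketch of automated triage over simulated light curves. The data, the features, and the use of IsolationForest are all assumptions for illustration; real survey pipelines use purpose-built, trained models:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy stand-in for telescope output: 10,000 simulated stellar brightness curves, 200 samples each.
light_curves = rng.normal(loc=1.0, scale=0.01, size=(10_000, 200))
light_curves[:5, 90:110] -= 0.02  # inject a few transit-like dips so there is something to find

# Summarize each curve with simple features, then let an anomaly detector do the triage
# that no human team could do by eye at survey scale.
features = np.column_stack([
    light_curves.mean(axis=1),
    light_curves.std(axis=1),
    light_curves.min(axis=1),
])
model = IsolationForest(contamination=0.001, random_state=0).fit(features)
candidates = np.flatnonzero(model.predict(features) == -1)

print(candidates)  # the injected dips (indices 0-4) should be among the flagged curves
```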


Ukraine is fighting for its very survival in the face of daily drone attacks. Using a machine with reasoning and advanced pattern-recognition capabilities to shoot down autonomous murder munitions isn’t immoral or unethical. It is the definition of national security. Drones also perform reconnaissance, which infringes on privacy. I’m sure the Ukrainians would rather stargaze than shelter from missiles, but that isn’t an option. They have the right to defend themselves.

Thank you! You’ve put into words what I struggled to come up with. This isn’t something to be celebrated; the companies behind these models could very easily use the profit made from selling anti-drone turrets to make turrets for whichever army pays them! This is capitalism we’re talking about!

Bit hyperbolic, eh? Ukraine is in no danger of non-existence. The country’s current war is drawn very much along ethnic lines; it might lose a significant portion of territory (depending on which of those territories’ civilians identify as “Russian”) and natural resources should it lose the war, but by no means is the Ukrainian state going to “die” in any sense in 10, 50, or even 100 years, if we’re honest.

Hmm, sure, one problem though.
Governments are simply enterprises of violence; the existence of a country is simply its monopoly on violence within set borders.

Might I remind you, dear sir/madam, that Ukraine has been repeatedly caught selling drones to ISIS and Al-Qaeda in Africa, which, by the way, has prompted several African countries to complain to the UN?

Might I remind you that Ukraine has used Syria’s civil war as a testing ground for its drone technology, sending hundreds of drones to an Islamist organisation that has, as of now, already killed 20,000+ people of Alawite, Kurdish, and Christian background in a series of Holocaust-style ethnic cleansing campaigns? (I have video proof if you want; I can send you so many videos of children’s corpses, with their houses turned into mass graves. It has all been televised through social media.) As of yesterday, that organisation quite literally “freed” a prison which held some 1,000 ex-ISIS officers. ISIS has now, for all intents and purposes, re-emerged as a fighting force in Syria (and likely soon Iraq) because many of its higher-ups were freed from prison and are reorganising; in fact, they already carried out their first terrorist attack this week, all because of that organisation’s actions. Ukraine is indirectly responsible for the rebirth of ISIS, all in the guise of “resisting Russia”.

So, sorry, but no, not a single country on this earth has the right to train AI models for any sort of warfare, defensive or offensive. This notion of “good sides being oppressed” and “bad sides being the oppressor” is nothing short of hilarious to me. You are very clearly not up to date on what modern war entails: wars are no longer fought over ideologies, they’re simply fought over money. The current Ukrainian state has not stopped fighting because the oligarchical Cabinet of Ministers of Ukraine is currently finding new and inventive ways of privatizing national industries, land, and everything else it can squeeze for money before eventually leaving the country to rot. Russia wants to invade Ukraine because the Donbass and Lugansk regions contain large amounts of oil and other natural resources that could make it hundreds of billions of rubles; if Russia cared about the whole civil-war schtick, it wouldn’t have pushed so hard for “unification” with the two breakaway republics that initially planned to be independent states.

Any sort of “national defense initiative” planned by a for-profit company is immoral and unethical.

Everyone has the right to train AI models. The rest is pontification.

So NASA paying private companies to put equipment into orbit is immoral because they save money and advance space exploration? We aren’t talking about Vault-Tec here.

Ukraine is one of the most corrupt developed nations. That doesn’t mean children need to die while we moralise over AI. Using machine learning to shoot down weaponised drones isn’t unethical and you can’t rage bait me over this issue. Show some empathy.

Every country has the right to defend itself. In any way. Not to strike, but exactly to defend. Launching nukes is immoral, but shooting down explosives is okay.

If you say so, why would you lock your door, set a password, and have security guard systems? Just stop fighting and let the bad actors in.

It is not about the country. It is about the purpose.

I feel defending against drones is ethical (these are not even piloted planes, so nobody will be injured). But if this technology ever evolves into autonomous killing systems that harm HUMANS (as in kill them), that is not normal.

And not because of ethics, but because such systems can be breached or used by terrorists.

But shooting down drones? Even if the system is hacked, then what? Will you die if your drone is shot down out of the blue? Unless people or animals are harmed, I am not against it.

Plus, let’s not discuss “why this is happening”. We are discussing the technology, its pros and cons. No political holy wars, please.

P.S.: My employee is from there, and I have the exact opposite photos and videos. But please, let’s keep the flames and holy wars for x.com or Facebook.


I am anti-AI, but to be precise, I am against GENERATIVE AI, as I feel it can only make slop. Using AI for science is good. Remember AlphaFold?
