An insightful conversation worth listening to about AI, tech, tech policy.
AI is maybe the worst thing humanity has ever invented
The real backbone of any innovation in human history is technology, and anything that can come from it will come from it, in time and inevitably. By this logic, technology is the worst thing humanity has ever invented. Not AI or anything else.
Technology is a category of many different things.
To say that, because many things in this category exist, ANYTHING in that category will INEVITABLY exist doesn’t logically follow.
I’m not sure I follow your comment/view and sentence structure. Please explain differently.
Ok, I’ll try to explain it differently:
Your argument:
Could be put in a syllogism that looks like this:
"""
Proposition) AI is part of the category technology.
Proposition) Humans have created many things that are part of the category technology.
Conclusion) AI is inevitable to exist.
"""
I don’t agree that the conclusion follows from the propositions as described above.
If I forgot a proposition or made logical errors, please correct me.
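The disputed inference can be sketched in predicate-logic notation (the predicate names Tech and Created are my own, added purely for illustration):

```latex
% P1: AI belongs to the category "technology".
% P2: Humans have created some things in the category "technology".
% C:  Therefore AI will (inevitably) be created.
\[
\frac{\mathrm{Tech}(\mathrm{AI}) \qquad \exists x\,\bigl(\mathrm{Tech}(x) \land \mathrm{Created}(x)\bigr)}
     {\mathrm{Created}(\mathrm{AI})}
\]
```

In this form the inference is invalid: a countermodel is $\mathrm{Tech} = \{\mathrm{AI}, \mathrm{wheel}\}$ and $\mathrm{Created} = \{\mathrm{wheel}\}$, which makes both premises true and the conclusion false.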
Why don’t you see the logic in my argument/thinking here? To me that’s clear. You have just said it yourself in the quotes. What am I not accounting for that makes you think my argument is not logical?
So your argument is correctly represented in the syllogism?
I think so.
Again, I ask you why you don’t think so. Perhaps that will shed light on the reason behind your disagreement with my first comment.
Simply put: why do you think I am wrong in my thinking?
Nice, I just wanted to check that I understood your position correctly before I address it.
That we create some things inside a category doesn’t mean that we need to create all things that are part of that category.
For example:
A society could build many different types of buildings, but that would not force them to build all possible types of buildings.
A company could sell some things that are inside the category of electronics, this doesn’t mean they need to sell all types of electronics.
So the point is: just because some things inside a category exist, you don’t need everything in that category.
I don’t agree.
Humans will try everything they can to see all the things that can be done. Even if something turns out to be useless, we will still make it at least once before learning it is useless. And once something turns out to be even marginally useful, humans will keep working on it, however small its usefulness, to try to improve it.
Following the logic, argument, and metaphor: since electronics and software were developed and invented along with the internet, which has proven immensely useful thus far, it would indeed be inevitable that we do not stop innovation in tech and software here, and that we go on to make thinking machines that can aid humanity in however small or large a manner (which has yet to be seen fully and completely).
And that’s why I still think my original comment above remains true.
But you could argue that the first real innovation of humans was the invention of language, and everything followed from that. Hence language is to blame. But I’m not sure it makes sense to go this far to find the root cause.
The problem is that you seem to want to blame a prior invention, because you argue as if later inventions would necessarily follow from that invention.
That A is a dependency of B, doesn’t mean that B is a dependency of A.
If you learn to write Java, this doesn’t mean that you have to write any program that can be written in Java.
You could learn Java and only write certain Java programs.
I don’t think that AI is an aid to humanity under our current circumstances.
Maybe it could be very useful under different circumstances.
I don’t think we’re going to see eye to eye here. We think differently and believe the other to have faulty thinking.
Let’s drop it.
I don’t believe that you have faulty thinking in general.
I suspect that either you or I have wrong assumptions and/or draw wrong conclusions from them.
That’s what thinking means. But alright.
LLMs are not bad. Humans can be good and bad. Many powerful technologies are used for bad and good things.
LLMs can do many positive things for society.
Less privileged people can get advice on complex topics they couldn’t afford an expert for. Students who couldn’t afford tutoring can get help from an infinitely patient tutor. People building privacy software can increase their productivity and the security of their products.
LLMs have many advantages.
And if they stayed at their current level, I would agree that they are a net benefit for humanity.
But we would have a big problem if they advance to a point where they can take the jobs of over 50% of the population and/or make it possible to build human-independent armies.
To thrive today, it would be ideal for parents to teach their children at home and outdoors. This brings numerous benefits, including challenging tasks that help build their reasoning skills, natural abilities, self-esteem, discipline, and so on.
From there, they develop a sense of privacy, self-confidence, and other qualities. Technology should be used appropriately for each situation.
AI will never replace this, much less improve on it.
I admit I’ve used AI to create songs, for example; I don’t deny it. But for fundamental things, no matter how much this technology claims to offer highly complex solutions, it will never replace reality itself.