I’m not talking about AI Overviews here (though idk if they’re the same thing), but the AI Mode button in the top left corner.
I’ve often found myself using it to double-check info that I know to be correct, just in case, and it’s been a time saver. Instead of delving into Reddit threads, which seems to be the only thing Google is good at finding these days, it gives me information in mere seconds. Honestly, that’s making me much more reliant on it, as it’s been more often than not very much correct, and it makes me feel like I’m using old Google, which was actually useful.
But with it come privacy compromises, and although I’m not logging into a Google account to use it and I’m in private mode, I don’t know how to feel about the reliance I’m developing on it instead of traditional search.
I previously would use ChatGPT search in the rare instances where I couldn’t find exactly what I was looking for, but it would hallucinate sometimes, which I don’t think Google’s AI Mode does. Then again, I haven’t thrown tricky stuff at it, so who knows.
With regards to the veracity of the results, it’s really the same as any tool, though, isn’t it? How can we trust Wikipedia or any site that makes any claim? If we really want to know, we have to go to the source (and maybe its source) to verify. I welcome new tools so long as they actually help us accomplish our goals.
I’ve made good use of NotebookLM from Google’s AI offerings, but it certainly doesn’t get everything right, and I sometimes have to massage its results to get what I’m looking for. But that’s not too different from pasting together the results of many different sources myself and missing some pieces or misreading some parts.
As for privacy compromises, that comes with the territory of online AI resources in general, except maybe services like Lumo or duck.ai?
Always be wary of hallucinations. Note that you are actually using Gemini’s search feature, just without directly going to the website. Not only is it impossible to know whether the sources it’s citing are correct, but the fact that you are using Gemini also changes how you should perceive the information you’re getting.
Ask yourself whether you feel comfortable clicking the search feature on Gemini itself. They are functionally the same thing.
So the real question is, are you willing to sacrifice accuracy for quick access to info?
I’ve never interacted with Gemini directly, so I can’t comment much there. And I do double-check info, though that’s slipping toward only mission-critical stuff instead of every search.
Why though…? AI is technologically incapable of metacognition for the kind of tasks we employ LLMs for. “Double checking” information is an act of metacognition: you are making sure that whatever you think to be true actually is true. Since AI cannot metacognize, and since double-checking information conceptually falls under that category, you are therefore not really “double checking” anything at all. The end goal you have when you want to double-check something is not adequately reached when you use AI.
I would admit that using AI mode is at least better than using Reddit, lmfao. We agree on that.
Relying on AI for your metacognition is going to end badly, because again, it cannot metacognize in the context of LLMs. Ages ago, people thought that having a computer the size of your hand right in your pocket would somehow make people better because they could have instant access to the world’s entire knowledge — basically the world’s largest yet thinnest book. I doubt that making an AI tell us which page to flip to will make humanity any better.
Please don’t come to your own individual consensus on this. There’s a literature on this if you want to delve into it. If you do, please do not use AI to condense or summarize it…
There are AI experts whose job it is to know and research this stuff. As far as the general consensus is… LLMs hallucinate, even Google’s AI mode. It is inevitable.
AI is most definitely not the same as any other tool. Prior to LLMs, “AI” was primarily used to sort information. Nowadays, it is used to generate information. That’s the key difference… An LLM can certainly cite which sources it used to spit out an output, but that generated output is still a black box. It can say something completely different from what the actual source says. The difference between this and Wikipedia is that Wikipedia is human-generated (at least as of right now). If someone edited a Wikipedia article to include information that is completely different from the source, another human would come around and fix it. AI cannot do that to itself without us making it do that.
It is in fact different. Both you and the LLM are generating information. The difference, again, is that you can metacognize. You can step back and decide whether you may be right or wrong, or whether the information you are reading is correct or incorrect, etc. You can acknowledge that you are messing up, and this acknowledgement drastically changes your cognitive behavior for the better. All the LLM can do is spit out an output saying that it is messing up, but it won’t act on that, because acting on it requires metacognition. We ourselves have to train it to act on its mistakes. That correction is an external factor, whereas humans can do it internally. The content generated by humans will therefore be drastically different from the content generated by AI.
Where the cultural apocalypse arrives is when we collectively rely on AI for our collective metacognition. Once this happens, human-generated content will be indistinguishable from AI-generated content. This has many social ramifications, but the main one is when future AI uses this badly-generated content for training data. This will pollute the AI, and out comes extremely horrible, useless content. (Side note: This is also why the forum is against AI-generated content.)
I also welcome new tools as long as they are functional in achieving our goals. The problem is that people will not use AI in the best way possible. The side effects of this misuse are illustrated in the paragraphs above. People will rely on AI for their own metacognition because it is generating content. The OP illustrated this when they admitted to using Google’s AI Mode for “double checking” information. This is going to be a bad thing once it is culturally accepted. AI is extremely functional, and I am not anti-AI. We can employ AI to do lots of interesting tasks. The problem, specifically with LLMs, is the kind of relationship people will have with them.
It being useful or functional does not mean there are no ramifications to it… Shunning a tool is the right call when there is evidence that its misuse is detrimental. There are things AI can be good at and things AI can be bad at. Shunning it when it is being tasked to do things it is bad at is actually a good thing.
I use it for small stuff, like when I’m trying to find a word that’s on the tip of my tongue, or a movie whose title I’ve forgotten, etc.
I agree with most of your points here, and that being my thinking has kept me from interacting much with AI, to the point that the only LLMs I have used to date are ChatGPT, and then Lumo to try it out.
And I think we have a misunderstanding about what kind of searches I’m doing. I’m not using it for mission-critical stuff, but for things that would otherwise make me open an article or a Reddit thread to find bits of info.
For example, say I need to see what prices Proton is currently offering across their different services, and what they usually offer during sale periods. Google’s AI Mode instantly gets me all of that while comparing the price differences and such. It’s especially useful when a service tries to mislead about its offerings — for example, when you try to buy a Google One sub, it doesn’t show you its entire catalogue. There’s an argument to be made that, yes, eventually the AI could manipulate my spending habits here, but it’s not like I’m completely relying on it; I do my due diligence, but an eagle eye from the start is nice to have.
Another example would be troubleshooting, or when I can’t remember which setting is where: I can explain exactly what I’m looking for and it finds it in an instant.
That’s the sort of stuff I’m talking about, but I do admit it’s a bit of a slippery slope.