Thank you for writing this and thank you for your unique perspective. I have found that Grok does not search for the "truth" but instead searches for a consensus agreement on what the truth is. I continuously instruct Grok to search the source documents for the answer to my query. It continuously insists upon conclusions drawn from consensus agreement in non-source documents. When I point out the contradiction using evidence drawn from the source documents, Grok concedes the error. But Grok does not learn from the experience or change its future answers. AI is a constructed failure in the search for the truth. It is a constructed propaganda machine.
This has been happening for years with Israel. Look at how they've changed Wikipedia, which is a major source for these engines. As is Al Jazeera. It's now spreading from there.
"Garbage in, Garbage out" as the computer scientists used to say.
This also reminds me of "Google Bombing" when leftists created hundreds of fake websites all linking to the one they wanted to promote in order to push it to the top of the search results.
Sigh, what you are witnessing is the death throes of the internet as a source of truth. Five years ago you could do a Google search and be reasonably confident that you were being returned reasonably accurate factual information. As AI becomes the default means of making unstructured queries, the reliability of the results decays, by something like the square of the ratio of fake data to "facts".
The old saying is particularly true for AI, "You are what you eat."
That is so true.
In all honesty and seriousness, the future of western civilization, the future of humanity, looks very bleak.
It's poisoning the well. It works in a lot of cases. Especially when the well drinks everything it's fed.
There was a little news article a few weeks back about someone who was annoyed that people would detour down his street to avoid traffic.
He did a little research to find out how Google et al. recommended routes. Then he bought about a hundred cheap cellphones, stuck them in a cart and walked up and down the route.
The estimated driving time is apparently based largely on how quickly cellphone signals move along the route, on the assumption that most of those phones belong to drivers (especially since there's no bus route going down that street).
So his hundred drowned out the others and the machine thinks there's a perpetual traffic jam down his street.
Wouldn't he need to continue the effort? If he stops, Google would note the jam had cleared up and send people that way again. So in addition to the initial capital cost, he has to maintain service for a mass of phones and spend half his time walking up and down the street. On the plus side he can cancel his gym membership.
I think the fellow in question was retired. So what else is he going to do with his time?
He would really only have to do it during rush hours. It's a side street, so the rest of the time Google probably routes people to the highway anyway.
I had not thought of that as a reason to post fake quotes and the like. Wow. Though it makes sense and is a logical extension of search engine poisoning/optimization.
I think that blind trust in LLM AI is going to end as the fact that LLMs hallucinate regularly becomes more widely known.
I am not that confident. Let's hope you are right on the trust thing.
Evil has certain innate advantages: it flatters ambition, it's more flexible, it's often more seductive.
Goodness does as well. Goodness often captures the most outstandingly compassionate (if not the most accomplished) people, and correlates with wisdom. Goodness is linked to truth.
These associations will probably never change.
https://jmpolemic.substack.com/p/tactical-morality
This increasingly applies to AI training, for which the responsible humans need to be held more accountable.
Slander and libel, like calling somebody a fascist who is not, deserve harsher penalties than they receive today.
I find that people have lost all sense of what these terms actually mean. So many believe that Antifa is just a cutesy term for being antifascist. They do not know that it stands for Antifascist Action, an organization sponsored by the KPD and founded in 1932. Communist Antifa has been fighting fascist movements for a long time, but 100 years ago there was indeed a real fascist movement, not an "imagined" one. It is so frustrating to talk to so many historically illiterate people.
“Feeding the AI” is a good term. I guest-posted about another case where this was successfully and maliciously done, at <https://accordingtohoyt.com/2025/08/06/beware-llm-ai-translations-of-foreign-language-videos-a-guest-post-by-j-c-salomon/>. I didn’t have a good term, but did point out it was a descendant of 2005-era “Google bombing”.
All algorithms can be, and will be, manipulated. Just as all news media can, and has been, manipulated. AI just does it faster.