
Blog Post
March 14, 2025
By András Baneth
SEO was the last battleground—now, AI dataset manipulation is the real threat. As language models shape how we understand social issues, subtle biases and unseen influences could redefine public perception without anyone realizing it.

The next frontier is not search engine optimization but AI dataset manipulation.
This seems far more dangerous because it is even harder to detect (if detection is possible at all), even for those who run these large language models, let alone the users who broadly trust their output.
Especially when the queries are not about facts and figures but about broad social issues (e.g., "Why are human rights important?"), polluted or manipulated systems can very subtly influence the reader through their wording.
*This was originally posted on András Baneth's LinkedIn account.*