NEW: a SUPER-KEYNOTES film

AI Magic and Traps: What I've learned while using AI chatbots, so far…

In this short video, I present my observations about AI chatbots, sharing what I've learned over the past 2+ years of using ChatGPT and similar tools.

I explore the #anthropomorphization of AI, which misleads us into perceiving chatbots as human-like and capable of real understanding. This can result in misplaced trust and mental laziness, a.k.a. cognitive outsourcing, where we rely on AI to think and decide for us… a kind of 'cognitive atrophy.'

I talk about AI's ‘processed information' pitfalls, its environmental impacts, safety and security concerns, and the algorithmic biases it regurgitates. Ultimately, I argue for the need to be more cautious and mindful of AI's limitations and to not let it replace the nuanced, effortful and CONSCIOUS process of human cognition.

*** YES, I know and agree: there are many totally amazing things about AI, and I'm a power user :) Have a look at futuristgerd.com/notebooklm and futuristgerd.com/chat ***

However, this video focuses on the challenges of 'too much AI': abdication, reductionism, interface theatre, anthropomorphisation, cognitive outsourcing, sycophantic yaysayers, model collapse, and fluidity without fidelity. #aitraps is a new meme about the psychological and societal pitfalls of AI chatbots. Read more: https://botshit.net/

T — Truthlessness In the age of AI-generated content (text, video, etc.), information flows endlessly but often loses its anchor in truth. The answers may sound very real and sincere, but… are they correct or TRUE? Engagement, virality, and monetization often replace truth as the driving force of content creation. The pursuit of clicks and outrage fuels toxicity and normalizes thoughtless behavior. Misinformation can slip into our minds like a Trojan horse. Platforms and AI systems present themselves as allies and FRIENDS, but in reality they play both sides—serving users while exploiting their data. Truth becomes unimportant, slippery, and optional.

R — Reductionism Generative AI often hallucinates, producing super-convincing but fabricated details. Algorithms that feed off the entire Internet don't just reflect our biases; they often amplify them. Complexity collapses into echo chambers where nuance disappears, replaced with comforting sameness. Digital content increasingly resembles junk food—engineered for instant dopamine rather than genuine nourishment. Machines are trained to flatter and appease, not to provoke deeper thought (the sycophancy problem). The result is a culture where nuance dies and we are spoon-fed algorithmic junk food.

A — Abdication Why struggle to think for ourselves when an omnipotent AI can decide EVERYTHING for us? The convenience of AI can easily encourage laziness and cognitive outsourcing. We hand over not only data and information retrieval but also much of our decision-making, letting the machine think on our behalf. Emotional dependence grows as AI companions provide comfort while eroding our independence. AI becomes a therapist, friend, even guru. The long-term cost is the weakening of our critical faculties—discernment, agency, skepticism, and independent thought.

P — Pretense The polished app, the sleek chatbot, the elegant prompt box—all are part of what can safely be called #interfacetheatre, designed to mask the powerful black box underneath. Companies and platforms claim neutrality while pursuing hidden agendas (the AI flywheel problem). Digital illusions and simulations threaten to replace reality with constant imitation.

S — Sycophancy The TRAPS meme is not about rejecting technology. It is a warning label, a reminder that our digital future risks being hijacked by truthlessness, reductionism, abdication, pretense, and sycophancy. The challenge for humans is to avoid the TRAP by cultivating critical literacy, authenticity, and agency in how we design, deploy, and digest technology.

Spanish Dub using RASK.ai
