the age of artificial illiteracy is here. i don't mean that people can't read anymore; obviously everyone can read.
what you and i have lost is the ability to process anything that takes longer than 30 seconds, anything that demands deeper, complex, critical thinking. we're all getting dumber but we can't help it.
we have never had access to more information or more sophisticated tools for learning, and yet we're worse at actually learning than ever, especially at getting started.
your brain is plastic and you're moulding it wrong
your capacity for sustained attention is shrinking, and shorter, summarized articles that 'save you time' so you can do 'more important things' like scroll youtube shorts ARE NOT helping.
when was the last time you opened a youtube video purely for the content, assuming you might actually learn something from it, and didn't immediately scroll down to the comment section "to have something to read" while the video plays?
have you developed the habit of reading big paragraphs from the bottom because you've subconsciously gotten used to having a little TL;DR summary at the end that chatgpt usually provides?
have you, in the last week or so, copied a giant article into an llm like chatgpt and asked it to "explain it like i'm five", following it up with a "shorter" because your brain was overwhelmed by a slightly big block of letters?
great, you've become artificially illiterate. your reliance on these amazing tools that do make you smarter in many ways has cooked your brain. your brain's neuroplasticity is working against you: you're triggering the same instant-gratification circuitry that hundreds of other social media and miscellaneous apps use to grab your attention.
we're being fundamentally rewired whether we like it or not
oh hold on now, i know you want to scroll away and away and off into your numb little space again where you don't have to think as much, but think with me.
this isn't some moral failing or you being lazy, your brain is doing exactly what it's supposed to do - adapting to survive in this environment. it's optimizing for a world where information comes in quick hits, where answers are instant, and where being fast matters more than being thorough. your brain is literally rewiring itself to thrive in this new landscape we've created, where depth is optional and skimming is a survival skill.
but just because it feels so natural and good doesn't mean it's good for us. just because we can adapt doesn't mean we should. our brains are incredible at adjusting to whatever environment we put them in, but that plasticity is a double-edged sword. we're training ourselves to be really good at something that might be making us collectively dumber as a species in the long run.
accelerating to... what exactly?
[image: eacc-logo.webp]
LLMs 'supercharging' us is probably something you've felt if you use AI tools on the daily, but at what cost? these tools are so good at giving us instant answers that we've completely stopped developing the patience for slow understanding, or, more recently, for doing our own research.
the convenience is undeniable; AI tools help us process information super-efficiently and handle routine cognitive tasks. but when you can ask ChatGPT to summarize any document, explain any concept, or distill any argument (and the reasoning behind it) into digestible chunks, why would you ever engage with the original source material?
when we consistently choose the summary over the source, the instant answer over the gradual click, we're training ourselves to be truly, deeply uncomfortable with ambiguity, complexity, and the slow burn of genuine learning. these LLMs, acting as 'global problem solving calculators', are destroying your critical thinking, your attention, and your ability to sit with uncertainty while you work through complex ideas.
THE OUROBOROS (we're eating our own tail)
[image: ouroboros.jpg]
the very researchers responsible for the breakthroughs behind these tools built their expertise on exactly the cognitive abilities we're losing. they spent years reading super dense papers, wrestling with abstract mathematical concepts, and doing the extremely slow, methodical work that the core of their expertise is built upon.
but if AI makes that deep engagement feel pointless, or rather makes it impossible for most of us from here on out, where do future breakthroughs come from? innovation born of surface-level understanding and a chat with an llm is not a real thing, and will not lead to real-world results... yet.
no novel connections will be made if we continue to use these tools to get a summarized version of everything. the only people who will make them are those who have developed the cognitive stamina to sit with difficult problems for months or years, those who have read broadly and deeply enough to connect distant ideas, and those who have trained themselves to think in ways that can't be easily replicated by AI.
the next generation is doomed to consume this knowledge instead of creating novel ideas of its own. this is absolutely fucking terrifying from an innovation standpoint.
the vanishing of the human spark
[image: human-crop.gif]
deep reading has always been a form of empathy training - when you spend hours inside someone's mind through their writing, following their thoughts and arguments across hundreds of pages, you develop an incredible patience for perspectives that don't immediately click with your worldview.
when we lose the ability to sit with challenging material, we also lose our ability to engage with challenging people and difficult conversations. the same impatience that makes us scroll past anything longer than a tweet will probably also make us tune out the moment discussions become complex or uncomfortable.
we want the TL;DR version of everything now - including other people's lived experiences and nuanced social issues that deserve so much more than a quick skim. we're training ourselves to expect simple answers to questions that fundamentally don't have simple answers.
we're collectively deciding that the human experience can be compressed into concise bullet points and quick takes. but some things, the most important things, can't be summarized without losing their essence. and that's what truly terrifies me about this shift: we're not just losing our ability to read deeply, we're losing our ability to feel deeply.
in this rush to optimize and accelerate everything, we're slowly becoming less human. that spark of self-awareness, that ability to sit with our own thoughts and truly understand ourselves and others; we're trading the messy, alive complexity of human consciousness for the clean, efficient emptiness of letting a machine summarize it for you. and the scariest part? you know it's happening, but you're okay with it because it makes you better, faster, more efficient. so where do you think you'll draw the line, when the time inevitably comes?
to be or not to be (productive)
[image: cursor-ss.webp]
part of the problem lies in how we've framed the relationship between humans and AI. the dominant narrative focuses on productivity, efficiency, and optimization - AI is valuable because it helps us do more, faster. but this framing misses something crucial about what makes human thinking valuable in the first place.
the goal of human cognition isn't always to reach the right answer as quickly as possible. sometimes the value lies in the process itself - in the slow work of understanding, in the patience required to grapple with complexity, in the satisfaction that comes from working through difficult problems without external assistance. when we optimize away these experiences in favor of AI-powered tools, we may be optimizing away some of what makes us most human. you don't have to agree with that; i think we can find a balance that truly gives us more time while still allowing us to think deeply and feel deeply.
2 steps forward, 4 steps back
[image: irony.webp]
(a little irony never hurt anyone)
intention beats bad habits.
fortunately i don't think this will be as difficult as quitting cigarettes. then again, in some ways it might be easier to quit cigarettes than it is to quit AI, given how much of it surrounds most of us on the daily. AI was absolutely used while i wrote this markdown file as a blog in cursor, because i was too lazy to look up how to embed images properly, and it made me a little ai (artificially illiterate). i'll work on that, i promise.
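for the record, the thing i was too lazy to look up is a one-liner. here's a minimal sketch of embedding an image in markdown (the path and alt text are made up for illustration):

```markdown
![a windmill at sunset](/images/windmills.jpeg "an optional hover title")
```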
but my point isn't about rejecting AI or returning to some imagined golden age of human attention. these tools are powerful and beneficial when used thoughtfully. the question is: how do we harness their benefits while preserving our cognition, without rotting our brains?
this requires intentional effort. just as we exercise our bodies to maintain physical fitness in an increasingly sedentary world, we need to exercise our attention spans to maintain our cognition in an increasingly distracting world. this might mean regularly reading long-form articles about the things you want to learn about, watching full lectures without seeking out summaries through transcriptions, or tackling challenging books that require sustained focus.
by being a little more selective about when we use AI, and by choosing to engage with primary sources when the goal is learning rather than just information gathering, we can recognize that some forms of understanding can't be shortcut: real insights only emerge through the slow work of deep engagement.
the path forward is about finding ways to use these tools that enhance rather than replace our natural capabilities. we need to be mindful of when we're using AI as a crutch versus when we're using it as a tool to augment our own understanding. a little discipline and a little faith go a long way.
the windmills of your mind
[image: windmills.jpeg]
welcome to the crossroads:
we're at this weird inflection point where the same tech that could free up human intelligence for higher-level work also makes us cognitively dependent and shallow.
the tools that promise to augment our thinking could end up replacing it entirely if we're not careful. and that's a paradox that i'm glad i have the honour to sit with.
artificial illiteracy isn't inevitable, but preventing it requires conscious effort. we need to value depth alongside speed. we need to recognize that in an age of artificial intelligence, one of the most radical acts might be the simple choice to read something long, think something through slowly, or sit with uncertainty until understanding emerges naturally. you remember how it felt when something clicked in your head; it's time to take pleasure in that feeling again.
the future of human intelligence may well depend on our ability to remain literate, capable of the sustained attention, complex thinking, and patient understanding that no AI can replicate.
because if we don't maintain these capabilities, we're just going to be really good at asking AI to do our thinking for us. and that's not intelligence, just a very, very sad autocomplete tool.
thanks for reading!