Adithyan Ravikumar
Thoughts · 5 min read

New Dependency

A drone strike took out an AWS data center. Claude went offline. And I realized just how deep the dependency goes.

Claude went down this morning. Not a graceful degradation, not a slow timeout. Just gone. I sat there staring at my screen the way you stare at a light switch during a power cut, flipping it on and off like the third try might work.

The cause turned out to be wild. Iranian drone strikes hit three AWS data centers in the UAE and Bahrain. First time in history that military action physically took down a major cloud provider. Fire suppression systems flooded server rooms. Power was cut. And thousands of kilometers away, sitting in my apartment, I couldn’t draft an email.

That’s when the question hit me: when did I become this dependent?

The new power outage

We all know what a day without electricity feels like. The Meta outage a few years ago had a similar flavor. No Instagram, no WhatsApp, and for a few hours, millions of people rediscovered that phones could make calls.

But Claude going down felt different. It wasn’t like losing a convenience. It felt like losing a capability. (Now, some of you might be wondering: why didn’t I use another LLM? Answer: once you use Claude Opus 4.6, there is no going back to other models for heavy tasks.) I didn’t seamlessly switch to doing it myself. I hesitated.

That hesitation is the thing worth examining.

Reflecting on this, I realized it’s not just me, or even just tech-savvy people. The older generation and those less familiar with technology have already started using LLMs too.

ChatGPT has 800 million weekly active users now. India’s user base doubled in a single month after OpenAI launched a cheaper tier. In India, about 22% of users already interact in Hindi and regional languages. Voice assistants powered by LLMs are shipping in every phone, every browser, every app.

And unlike Instagram or TikTok, LLMs don’t just entertain. They think for you. That’s a fundamentally different kind of dependency.

Your brain on ChatGPT

I don’t want to overload this with studies, but one finding genuinely shook me. MIT’s Media Lab ran an experiment where they put EEG headsets on students while they wrote essays over four months. The students who relied on ChatGPT showed measurably weaker brain connectivity compared to those who used search engines or just their own brains. Not self-reported feelings. Actual neural measurements.

They called it “cognitive debt.” I think that’s the right term because debt compounds.

The people I worry about most are the ones whose brains are still forming. The prefrontal cortex keeps developing into your mid-twenties. If a 19-year-old outsources their thinking during those years, they might never properly build that cognitive foundation. And with 85% of young people aged 18 to 24 in some countries already using generative AI daily, this isn’t a future problem. It’s happening now.

The internet is getting gentrified

I keep coming back to an analogy that feels right. I think LLMs are doing to the internet what gentrification did to cities.

When everyone uses the same LLMs to generate their content, even simple things like a Reddit comment or an Instagram caption, the output converges. There was a study that used Italy’s ChatGPT ban as a natural experiment. When restaurants lost access to the tool, their Instagram posts became 15% more lexically diverse. And engagement went up by 3.5%.
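If you’re wondering what “lexically diverse” actually measures: one common proxy is the type-token ratio, unique words divided by total words. The study’s exact metric isn’t specified here, and the captions below are made up, but a minimal sketch shows the idea:

```python
# Hypothetical illustration of lexical diversity via type-token ratio (TTR).
# TTR = unique words / total words; repetitive text scores lower.
# This is a toy proxy, not the metric the cited study necessarily used.

def type_token_ratio(text: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    return len(set(words)) / len(words)

# Invented example captions, purely for illustration:
ai_style = "delicious fresh pasta at our restaurant delicious pasta made daily at our restaurant"
human_style = "nonna rolled these trofie this morning and drowned them in basil pesto come hungry"

print(type_token_ratio(ai_style))     # repeated phrasing lowers the ratio
print(type_token_ratio(human_style))  # varied wording raises it
```

A template-driven caption reuses the same words, so its ratio drops; a one-off human caption tends to score closer to 1.0.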

People responded better to the messier, more human version. Let that sink in.

Similarly designed websites, similarly worded ad copy, similarly structured blog posts, similarly captioned reels. The internet starts to feel like a single author wrote it. And if that sounds dramatic, researchers have already found that AI image generators drift toward blandness purely from repeated use, no retraining needed.

I should probably mention this

Full disclosure: I used Claude to help write this article. Most of the recent posts on this blog have had AI involvement in some form. Not to generate the ideas or fake the voice, but to do what I can’t do efficiently on my own: search the web, find credible studies, check whether my gut feelings have data behind them, fix typos and grammar, etc. But the thoughts here are always mine.

I think that’s the honest version of how most people should be using these tools. And I think the distinction matters. There’s a difference between using AI as an assistant and using it as a replacement for your own thinking. One keeps the muscles working. The other lets them atrophy.

What I’m doing about it

We can’t stop using Claude or ChatGPT. They’re genuinely useful, and the productivity gains are real.

But I’ve made one rule for myself: think first, then use the tool. Draft the rough version yourself before you ask the AI to refine it. Use it as a sparring partner, not a ghostwriter. Keep the cognitive muscles active even when the shortcut exists.

And maybe more importantly, I’ve started thinking about what I want to preserve. Our writing voice. Our capacity to sit with a question and not immediately reach for the chat window. These aren’t nostalgic luxuries. They’re the things that make us useful when the tool goes down, or when it gives us something polished but wrong, or when the situation calls for judgment no model can provide.

A drone hit a data center in the UAE, and I couldn’t write an email. That’s a sentence that would’ve sounded absurd five years ago. It doesn’t sound absurd anymore. And the fact that it doesn’t is, I think, the most important thing to sit with.

Tagged in

ai llm technology culture
