Generative AI is bad and you should feel bad for using it.
This was also published on BlogCampaigning.
A few days ago a friend of mine texted me that he had used ChatGPT to recommend books to him based on other books he had read.
I reacted dismissively to that. I told him that I was moving away from using Generative AI, and that I was in the process of cancelling my ChatGPT account.
"hmmm. Tell me more," he wrote.
This is the more.
AI is bad.
When I say that AI is bad I don't mean all AI. The tools that are allowing us to understand human proteins so that we can cure ourselves are good. The parts of AI that speed up scientific discoveries in the cosmos and help us understand who we are: also good.
Assistive AI that helps people with disabilities, helps us find and sort information, and makes mundane tasks easier is surface-level good.
The complexity is identifying where we as people and we as a society draw the line. A gun isn't inherently bad as an object, but in 2026 it's overwhelmingly used for bad purposes (including law enforcement).
The same type of image recognition technology that allows authorities to identify trafficking victims based on images of hotels uploaded to databases is being used for mass, police-state surveillance that will very likely be weaponized.
So it's hard to say where the exact line of "bad" to "not bad" is on the AI spectrum, just as it is hard to do so with any technology. You either accept that nuance is part of the argument or end up as an absolutist.
Part of this is due to how new AI is and how quickly it is evolving. Instead of creeping up on us, the singularity moved so quickly we didn't even notice. A few months ago I referred to Generative AI as a tsunami, saying that we were only at the part where the sea had pulled back. At the time, I didn't think that it had hit us yet.
Everyone knows that the surest sign of a tsunami coming is when the tide pulls out drastically, far further than it’s supposed to. It reveals a part of the seabed that was previously hidden.
Fish flop around helplessly. Crabs scuttle. Onlookers gawp: they know this isn’t nature as usual, and they know it’s bad, but rubbernecking feels so good.
I think that this is exactly where we are today. Now. Generative AI has pulled back and revealed all the muck for us to see, but hasn’t fully hit us yet. We’re still early enough in what is happening that we can recognize it.
But now I think it has hit us. And it's hit us so fucking fast that we don't know what to do.
And it's too early to tell if what's happening is objectively "good" or "bad." It might not even be possible to determine that, with the answer depending on the observer.
Generative AI is clearly here, clearly a thing, and is being heavily pushed on us. It's going to isolate us, collapse democracy, and likely send humanity into an artistic and cognitive dark age.
From my reference point, it's bad.
Three Probable Futures
I read a lot of Science-Fiction, so I'm either uniquely positioned to be an expert in what the future looks like or biased towards unreality. There is no middle ground. In even the most utopian stories, powerful Artificial Intelligence is considered bad for us as humanity.
I don't think Artificial General Intelligence, the sort of near-omniscient techno-god that's been prophesied in the books I like, is the real threat or potential reality. It might eventually be possible, depending on who you ask. But if you shake the charlatans and hype men out from the actual experts we are still a long ways off.
What I do see are three probable futures based on our current trajectory.
In a Procedural Future, nearly everything we experience online will be created on-the-fly and in near real-time. We're already seeing this in the form of search results: your "AI Overview" from Google is generated at the time you search, in the same way your ChatGPT prompt works. Expanding on this, it means that in a Procedural Future the links we follow on those searches, the other pages we visit, the videos we watch, and the music we listen to are all generated as we browse. The experience is tailored to our exact interests and needs, and guided by the invisible but biased hand of the system itself.
A close cousin to that is the Generated Future. And this might be a future that we're already in, at least in (conspiracy) theory. This is the idea that the overwhelming majority of things online are created by competing networks of bots, and that this content (comments, posts, questions, replies, videos, and more) is created as much to influence other bot networks as it is humans. It's the belief that what was once a living, thriving internet full of real people is now dead. It might have been conspiracy theory fodder in 2021, but is closer to reality in 2026.
In a Constructed Future the majority of what we see is similarly generated or otherwise fixed, but the difference is who controls it. In this case, it's constructed by the users, and is the truly shared metaverse hallucination that we were promised. The problem is that it needs platforms to work, and those platforms have as much if not more influence than the users themselves. Facebook didn't shift their entire business, rename their company, and invest tens and tens of billions in the metaverse because they were going to give up the control they had. They did it to tighten their grip.
Aspects of these sound cool. Procedurally generated game worlds are interesting and exciting, and probably fun to play in. The sci-fi metaverse (net, matrix, etc) of making the impossible possible has potential for real, human creativity.
But when they become the only thing, and live beyond game worlds, and shift and shape our reality, they become less cool.
They stop feeling "good" to me, and they aren't futures that I'm fighting for. Part of that problem is that they don't leave room for anything created outside their bounds, or outside the control of the platform owners.
The internet of yesteryear was lo-fi and full of unexpected pockets and creativity. There were still walled gardens, but they were small and few, and even then tended to quirkiness rather than uniform vibes.
Today's internet is a polished, boring mall, patrolled by security that deletes or hides anything controversial.
These probable futures extend the algorithmification of our infinite scroll into everything.
They're also not just science-fiction. All of these are very real possibilities. The technology is here, and it's in the process of being rolled out. High-definition clips of Wolverine fighting John Wick take hours or minutes to generate today, and will take seconds or milliseconds a few months from now. Meta has been working on ways to create artificial user accounts since at least 2003, and has even announced plans to keep your account interacting and artificially aging after you die.
Humanity Isn't A Problem To Be Solved.
We've come a long way from nightmare dogs made out of eyes that early image generation tools created. At the time of this writing, it's almost impossible to tell what's been created using AI. By the time you read this it will be even harder.
That goes for images as much as it does text and even now full-motion video.
Every day feels like a chance for someone new to proclaim that Hollywood is cooked as they share a shiny demo reel that's entirely AI generated.
They aren't necessarily wrong: Hollywood, and the traditional way of filmmaking, are probably cooked. But that probably has more to do with the big-studio filmmaking (and streaming-as-distribution) business model being a risk-averse venture than a true creative outlet.
And the artefacts that they're sharing aren't objectively bad: the camera movement is largely good, the graphics are great. They look real. In 5-second snippets, they're passably blockbuster.
What's wrong, as someone smarter than me said earlier today, is that "The humanity in art isn't a problem to be solved, it's the reason art exists in the first place."
The brush strokes, the mistakes, the smudges, the personality, the emotion, the meaning, the personal point of view, the process, the effort.
We value things based on how much effort they take to create. And when we turn the act of creation from skill-based struggle and effort into a pull on a slot machine, the output loses value. And when that output loses societal value, people stop deciding to create it. We lose artists.
The people typing the prompts might create, but it's not art.
Generative AI might be capable of producing art. But the person who did the prompting isn't the artist, any more than the person ordering a meal at a restaurant is the chef. They might have taste, they might know what they like, and they might know what it takes to create a great meal. But in this example, they did not do any of the cooking themselves.
To bring the analogy even closer to the art world, someone who commissions a piece of art (be it a statue, a painting, a piece of music) is not the creator of that art. They might be an artist, and they might be very good at describing what they want.
Art requires intent, expertise, creation and emotion.
Generative AI is incapable of any of those.
It lacks the intent.
This is the worst that it will ever be, as the saying goes. And it's true: the ability to create photorealistic images will happen faster and faster (leading us to one of the probable futures). Today's 5-second clip is tomorrow's 30-second scene is next week's well-paced single-shot sequence. By next month we'll be able to generate a feature-length movie based on our favourite characters.
It will be slop and it will suck but we'll lap it up.
The problem is that the outputs of Generative AI can be so easily seen as art. They entertain. They're shiny.
And that's what scares me most about them. I'm scared that we'll fall for these false gods of creativity, that we'll let ourselves get swept away in a procedural world of infinitely regenerated slop that doesn't create anything new. The machines themselves are only capable of copying and mixing and remaking what they've already ingested.
Photocopiers have been around for decades, but no one would argue they're creating art. They copy. And the more they're used, the worse that copy becomes.
The outputs don't include choices. They're best-guess estimations of where things should be, based on what's worked before. It's output purely for consumption, without process.
When we fall in lust with those, we forget how to make the originals. We forget how to dream ourselves. And with that, a part of our future dies.
It Takes, And It Does Not Give
For the last few years the world's biggest companies have unleashed their monstrous machines on everything we have ever created.
They're devouring our books, our movies, our poems, our art. Our history. They're turning it into tokens and profit.
And still they're hungry. They eat our emails, our documents, our phone calls, our location data, our transcripts, our courses, our poems, our love letters, our messages of rage.
Still the machines hunger. So they demand that we feed them with raw conversations. They demand to become our therapist, our partner, our butler, our assistant, our agent, our friend, our husband, our wife, our dungeon master, our doctor so that they may devour every aspect of us.
In return they give us holograms: shiny, lifelike, and empty. They appear to be net-new pieces of creative work, but they're imitations of what's existed before. The yellow tinge of regurgitated images and the see-saw cadence of the text are the first symptoms.
Worse than that, the machines steal our ability to create. Our skills atrophy, our imagination dies, and our ambition withers.
In the end, Generative AI will have taken everything from us: our art, our dreams, and our desire. And will have left us with nothing.
It feels like in creating Generative AI, the only thing we'll have to show for it is a myth about ourselves and creation that ends in tragedy.
The Price Of Cognitive Offloading
At work, we're overrun with slop. I spend at least part of my day deciphering what a colleague has said using ChatGPT, and I'm not the only one. Harvard Business Review found that 95% of all AI initiatives in workplaces were ineffective, and that the resulting workslop is having serious negative effects on efficiency and productivity across the business.
And I get it: it's still early days. We're still learning how to use the tools.
But every time we rely on the tools, we aren't relying on our own skills. The data backs me up:
Researchers used an EEG to record the writers’ brain activity across 32 regions, and found that of the three groups, ChatGPT users had the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study. MIT Media Lab
Students who relied on ChatGPT for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills... Students are increasingly being taught to accept AI-generated answers without fully understanding the underlying processes or concepts. University of Pennsylvania (Source)
"While GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving." Microsoft
If you think that's not you, then congratulations: You're part of the elite majority that thinks they are outside the average. I assume you also think that you're an above-average driver.
The work is the work. Thinking through a strategy in the form of a deck or a one-pager makes sure it's air-tight. Handwritten notes help you remember.
If a frying egg was our brain on drugs, social media threw that egg in the deep fryer. Reliance on Generative AI is leaving that egg out in the sun for birds to peck at while it rots.
The Cost Of Productivity
Generative AI can be a helpful tool. It can help with research. It can help with writing. It can help with formatting, and creating, and adding, and subtracting.
But this comes at a cost beyond just cognitive decay. On a project that used to involve two or three team members, part of the project time was attributed to training, handholding, and guiding juniors. The ones that showed a bit of promise were given more opportunities, and were thus able to train the next generation. Best practices weren't written down, but they were observed.
Now those same projects are done by one person in a fifth of the time. It feels more productive. But an entire generation is being frozen out of real working experience.
And an entire generation is leaving decades of institutional knowledge in the hands of agents and GPTs that might be deleted from centralized servers. That might be secured behind paywalls in the near future. That have single points of failure.
Sure, there are a lot of problems with leaving so-called "institutional knowledge" in the grey matter and notebooks of a young workforce. But at least it's distributed. In the event of individual failure, it survives in others.
Generative AI is rotting our individual brains, and the same thing is happening to our collective brain.
Every time you use it at work you're making a choice to say that it isn't worth teaching someone else those skills.
An Ethical Choice
The society that I want to live in values equality, transparency, and human rights. It thrives on the scientific process of research and discovery. But it's not a cold, unthinking machine that puts people into a spreadsheet.
There's an image of a slide or page from an old IBM training session from 1978 that says "A Computer Can Never Be Held Accountable. Therefore A Computer Should Never Make A Management Decision."

Even at that time they knew that there still had to be a human in the loop. Not because it could make better decisions, but because that human had to question that decision and what it meant to them. If that decision was wrong, what would it mean for them personally? Would it affect them financially? Would it affect their family? Would it affect their safety?
Those who are championing AI are championing the removal of that humanity. They are giving algorithmic control of management decisions over to the computer. They are removing safeguards. They are removing human friction.
To paraphrase from "AI Destroys Institutions": "AI deployments will hasten the end of critical civic institutions because AI steals power and agency from human participation."
It centralizes and obfuscates the source of power.
The Politics Of Generative AI
A primary use of Generative AI (and images in particular) is fakery. Imagination is good. We use it to tell stories. But fakery seeks only to deceive. And we're seeing it used by authoritarian governments and those supporting them to prop up their position. In the complete absence of images that support their vision, they rely on deception to create enemies and trick people into supporting them.
That's not to say that left-leaning (or left-seeming) people and institutions never use Generative AI. But when they do, it weakens their position. When we adopt the techniques of the enemy, we risk becoming the enemy ourselves.
Authoritarianism thrives on misinformation. Lies, told often and well enough, become truth.
Those who are most vocally and financially championing Generative AI also happen to be aligned with the far right. OpenAI President Greg Brockman's $25m bribe donation to Trump bought his company a $500m investment commitment. We know as much about Elon Musk's politics as we do his belief in AI as saviour. Meta's pivot away from their namesake and into AI parallels Zuckerberg's move into Trump's inner circle, with shared ambitions.
Making the conscious choice to avoid it, and to avoid supporting the companies that are pushing for its use in all levels of government and public institutions, is an ideological one.
When we champion a tool that erodes truth in the service of authoritarian power, we make an ideological choice. We say that accepting misinformation is a small price to pay for our convenience.
The De-Democratization Of Technology
The dream of the internet was that it would equalize access to information. Proximity was no longer a barrier to learning.
For a long time, this was true. Or at least the promise of it was, if not the actual reality.
One upside of the dot-com bubble bursting was that Web 2.0 sites emerged from the sticky mess. RSS, blogs, podcasts, and tags all emerged from this era. For a very short period of time, the most democratic and decentralized version of the internet burned brightly. Google was committed to doing no evil, and had a nearly realized vision of making it easier to access the world's information. Bitcoin promised to free us from centralized financial control.
Algorithmic Social Media killed this decentralized and democratized dream. And there is now very clear evidence that Google has purposely made their search worse in order to drive the potential for ad clicks up. Every single platform is closing the walls on their garden to retain users.
AI accelerates this. And we've seen the early warning signs: AI-induced psychosis, AI relationships, AI-induced suicides. Social media can only surface what is already there, and hope for the best scroll-inducing, thumb-stopping experience. Generative AI can produce the perfect digital experience for the moment and mindset. And it does it in a way that keeps us looking into our devices, isolated and away from others.
Earlier I referenced the idea that we value things based on the perceived effort to create them. When Generative AI first rolled out, we waited hours for it to create stylized images of ourselves based on an uploaded selfie. That's happening in seconds or minutes now, and I firmly believe that the various LLMs are optimizing the amount of time they tell us they're thinking for the response they want to get. That throbbing black dot on the ChatGPT interface isn't a sign of pulsing neurons. It's a predatory system taking advantage of human bias.
The optimist in me believes that the rejection and backlash towards technology that we are seeing, whether it's attributed to AI or not, will result in a more democratized, open, and decentralized internet than we deserve. That's an extreme amount of optimism.
This Is The Worst It's Going To Be
From my own experience, Generative AI hasn't delivered on its promise. It has...
Improperly summarized nearly any document that I've given it
Been unable to rank a group of 15 people's 10km running times (and hallucinated both times and runners)
Written an awful blog post for my site after being given clear directions and links.
But despite that, none of my arguments are about the effectiveness of AI as a tool. Well, mostly: some of the points I made about its use in the workplace were effectiveness-based. You could even say that I'm just not using it properly. More training, more experience, more use would make me better at it.
I don't disagree.
But I'd also say I'm fairly proficient with it. I've used it extensively at work, I've used it extensively for image generation. I've set up my own GPTs. I've taken courses on agentic automation and AI cloud implementation.
I've spent nearly my entire adult life trying to stay ahead of the digital communications technology curve.
And I can see that this is clearly a powerful and seductive piece of technology. There is a certain thrill in seeing it output a lengthy piece of copy that hits all the points you needed. There is the twinge of anticipation as it spins its wheel and generates an image, a video, a piece of code, an asset. We know it might be perfect, or it might be incredibly wrong. Or it might be so close to being right. But no matter what, one more pull on that slot machine handle will bring us something new, will keep us interested. It's the same principle of random reward that keeps us scrolling on social media, that keeps us swiping, that keeps rats gambling.
We might see predictably diminishing returns in a sort of reverse-Moore's Law as we simply reach the limits of what's possible with LLMs. We're certainly near the limits of the "Magic 8-Ball" era, and it's unclear whether we'll be able to surpass that at all. Actual reasoning seems to be a long ways off, and maybe even impossible.
But again: none of my arguments have been based on the effectiveness of it as a tool. It does what it says it will do, which is to generate the next most reasonable piece of data, based on the prompt provided and the data set available.
The inevitability of it all is that even with those diminishing returns we'll be close enough to simulating reality, human experience, and human interaction that we won't be able to tell the difference. It's a mathematical certainty.
And I don't have an answer for what that will mean to me. If there is essentially no way to determine what's human and what's not, will it matter?
But What About...
...disabled people, who benefit from Generative AI technology? I'd argue that we shouldn't abandon them to isolation with generative AI tools, and should be investing our resources in assistive AI and community support.
...busy work at the office? I think we should rethink our relationship to work and some of the things that we do and that constitute productivity.
...getting insight into the way another person or type of people think? We should go out and talk to those people, or find people who can speak to them. When we substitute generative AI for real research, we risk missing new insights about a particular problem, or reinforcing longstanding biases.
...doctors who are using AI to diagnose problems? I'd say that the tools they are using are assistive AI vs generative, and are the types of things we should encourage. There's always nuance.
...giving the power to create things to unartistic or uncreative people? Generative AI provides a sort of false creativity, and likely does more to harm real creativity and the creation of net-new assets and ideas.
...using AI to recommend books you might like based on what you've already read? You're missing out on a chance to read real reviews, stand in a bookstore and read the backs, or have a conversation with a friend about what they like rather than what's optimized for your reading pleasure based on an available data set.
The Water Argument
One of the biggest claims made against AI is about water usage. Haters will say that every prompt removes a drop, a litre, a bucket, a something from the water supply. That the world water crisis predicted in the next 10 years will be made significantly worse.
It's a difficult argument to prove, quite simply because AI tools themselves tend to be unclear in their responses.
The easiest answer is that yes, Generative AI is bad for the environment. If we use it less, we'll be better off. But the same can be said of almost anything that requires resources.
And so I don't want to discount the environmental impact. But it's not at the forefront of my argument.
It's Not Too Late
When I ordered a few of Ursula K. Le Guin's books from the Mysterious Galaxy Bookshop, they sent me a postcard handwritten with a quote from her: "Any human power can be resisted and changed by human beings."
Right now, the power of AI is still a human power. It's humans wielding and deploying it. And it's humans willingly powering it with data and fees.
I said at the beginning of this that it was probably too late for us to turn away from AI. But that's not necessarily true.
Bitcoin was, at one point, the future of currency. It might be stable, but it's seen more as a tool of bribery or speculation than a serious currency. And crypto assets - NFTs - were at one point taken so seriously that millions were spent on them, Vancouver-based Dapper Labs became a billion-dollar company, and even Nike heavily invested in the future of "digital wearables."
"The Metaverse" was supposed to be so big that even Facebook renamed their business Meta. They spent billions and billions on trying to make it a thing.
We're headed toward a dark future full of misinformation and the collapse of the democratic and other institutions that hold us together.
The collapse of democracy that we are seeing in the United States right now might have been pre-planned as part of a move to create techno-feudal states. Or that might be conspiracy theory, or it might be just dumb luck, or it might be a bit of the former that breathed life into the latter.
What is apparent is that we are on the brink of something, and what is on the other side is capital-B Bad.
It doesn't have to be that way, though. But it will take friction. It will take a rejection of some of the tools that have been handed to us. It will take a coordinated effort. It will take community and connection.
I said earlier that there were three probable futures. That's not entirely true.
I'm not that pessimistic.
The Fourth Future
In the Three Probable Futures I outlined earlier there is little room for community. Each sends us down an isolated path.
In a Fourth Future, we reject that isolation. We turn back to community, and teach humanity and empathy. Out of that comes technology that helps us create freedom for everyone.
We'll use it to advance the sciences in a way that cleans our air, cleans our water, and cleans our planet. We'll use it to unlock a sustainable future.
This will not be utopia. There will still be problems: jealousy, anger, hate, greed. These will always be part of the human experience, just as much as the good parts: love, belonging, camaraderie.
But the base problems, of health, food, air, water, and living space will be largely solved. Our ability to create technology to improve our world will do that for us.
And with those problems solved, we'll turn our attention to the sciences: We'll gain greater understanding of the world around us. We'll learn how to care for our own bodies.
And hopefully along the way we won't forget the arts, as a way to remind us of who we are and our place in the universe. To give us the perspective we need.
Maybe we'll even escape the heat death of the universe.
I don't know how all that happens. I only know that in order to achieve it, we need to start dreaming it. And then we need to believe that the dream is possible.
A Return To Analog
Over the last year or so I've been painfully aware of how much time I spend on my phone. That perfectly-weighted rectangle of limitless endorphins is designed to keep me clicking, scrolling, and plugged in. Generative AI will only increase the gravitational pull of our devices. The three probable futures I outlined earlier are on the other side of that event horizon, and no amount of escape velocity will divert our course once we're past it.
Having a kid has, I think, helped drive that awareness. Realizing that you're ignoring a beautiful little smile or blink-and-you-miss-it moment while you joke with your friends in a WhatsApp thread helps you put your time in perspective.
For now, I'm spending a lot more time away from my phone and social media.
For the past year or so, I've been reading the print versions of books. The feel of the paper, the folded-down corners, the weight of the book itself, finding the right light in a dark room before bed. Seeing the cover get worn as you carry it about, or share the book with friends. Wandering through book stores and reading the backs of covers. Finding out-of-print copies in musty old book stores. And the data backs me up: apparently reading comprehension is six to eight times better with physical books compared to e-readers.
And I'm in the process of building community. The kind of real communities that might organize online, but activate in person.
Over the last year I've connected with people all over Canada to organize monthly meet-ups for strategists and marketers in cities across the country. It's a push to get us away from our desks and decks, our Zooms and Teams, and to meet people in the real world. You can find us at Debrief Cafe, if that's your sort of thing (the link drives to a WhatsApp group that we use for organizing). It's been incredibly rewarding to meet new people at these events, and to also have conversations with people I haven't seen in forever, or know only from text messages and LinkedIn notifications.
And more importantly I've co-founded a movement here in Vancouver that has a singular vision of creating a world where it is impossible for fascism to take root. In the ebbs and flows of volunteer work we've asked ourselves if what we're doing is necessary, enjoyable and relevant. And every single time our organizing committee has agreed that it absolutely is: community is the antidote to the fear and isolation that will open the door for fascism.
I'm also trying to learn to draw. Discussions around how easy it is to generate images through AI have forced me to reconsider my own relationship to art. 30 to 45 minutes of trying to sketch spaceships and cats has been a really enjoyable way to wind down before bed.
And I'm writing a lot more. I think I've probably written more blog posts (and more thoughtful blog posts) in the last 6 months than I have in years. I've also written a speculative document about The Future Of Work that I'm proud of. And as part of the activism I mentioned above I built and wrote a satirical website.
And I'm spending time gaming. My two regular nights out are both games nights, of the old-fashioned tabletop kind. We sit around the table making up stories and myths, and connecting with each other in a way that's only possible when we disconnect from our screens.
And I'm spending a lot of time thinking.
All of this is a direct reaction to AI, and it's brought me a lot of joy.
I've spent more time with books, with friends, with my family, with my dog, with my son, with creating, and with my own thoughts in the past 12 months than I have in years.
And through all of it I've felt happier than I have in a long time.
What If I'm Wrong
There's a very good chance that I'm wrong about a lot of this.
Generative AI might be a super tool that unleashes creativity and thinking. It might free us from the shackles of monotonous work. I might be missing out on art I could have created or appreciated.
I might be left behind.
If I'm wrong, and the result is that I've spent more time reading, and more time outside, and more time with my son, and more time creating art the old fashioned way, then I'm happy to be wrong.
It feels like a similar argument as with the famous cartoon about climate change: "What if it's a big hoax, and we've created a better world for nothing?"
I'm Getting Away From Generative AI
And I think everyone should. We have the power to build the future we want for ourselves and the generation after us and the generation after that, until entropy does us in. The path to that future is paved with choices.
Many of those choices are small.
Most of them seem inconsequential.
A lot of them mean choosing a harder option.
Further Reading
An interesting opinion on the "Creating A Better World For Nothing" cartoon that I think misses the point of the cartoon.
A thoughtful essay in Rolling Stone about this year's Super Bowl ads: the ads felt like they were from "another era, where people still believed the shtick that these guys were going to usher in immense social and technological and economic change." Instead, she says, people are starting to be critical of tech billionaires who are "trying to consolidate wealth and power."
How AI Destroys Institutions is a paper that gave academic weight to a lot of what I've been thinking over the last few months.
Large Language Model Reasoning Fails: "Large Language Models (LLMs) have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios."
AI doesn't reduce work, it intensifies it
AI Generated Workslop Destroys Productivity
Paul Graham on Taste https://paulgraham.com/taste.html