A few years back, I wrote an essay about how remote controls were staging a hostile takeover of my life. Now, looking at AI, I recognize a similar pattern but with a bigger, more sinister plot.
But I’m ready and fearless, because somewhere between TV rabbit ears and the remote-control uprising, Gen X learned a simple truth: change is coming whether you like it or not, so you might as well figure out how to make it work for you. Or at least how to laugh while it tries to ruin your day.
Which is exactly how I’ve decided to approach AI.
Not cautiously. Not skeptically. Ok, maybe selectively skeptical, like when AI confidently told me George Washington invented the burrito to keep his hands warm, citing two sources and delivering it with the unearned swagger of a guy who just discovered podcasts. But mostly I did what any self-respecting Gen Xer would do. I downloaded all of them. Every app, every platform, every “this will change your life” tool that promised to organize, optimize, summarize, generate, translate, illustrate, or otherwise insert itself into the chaos of my daily existence. My phone now looks like a digital junk drawer curated by a caffeinated futurist.
Why? Because if the 90s taught me anything, it’s that the first version of anything cutting edge is going to be a little janky, slightly confusing, and somehow still irresistible.
AI, it turns out, is the remote control of this decade. And just like the remotes, none of them are labeled in a way that makes immediate sense. You open one and it’s brilliant. You open another and it’s like talking to a very confident intern who has read half a Wikipedia page and is ready to lead a meeting about it. Some require prompts so specific they feel like you’re programming a VCR in 1987. Others just nod enthusiastically and hallucinate their way through your request like a dinner guest who refuses to admit they don’t know what you’re talking about.
Naturally, I am beginning to kind of love it.
Because here’s the thing about Gen X: we didn’t grow up fearing change. We grew up troubleshooting it. We were the beta testers of modern life. We went from rotary phones to smartphones, from encyclopedias to Wikipedia, from maps you had to fold like origami to GPS voices calmly telling us to “recalculate” our life choices in real time. We learned early that the manual was either missing or useless, and the only way forward was to start pushing buttons and see what happened.
So that’s what I’m doing with AI. Pushing buttons.
I use it to draft things faster, then argue with it like a co-worker who needs “just a few tweaks.” I use it to run what-if scenarios about my potential retirement spending. I use it to plan trips I may or may not take, organize ideas I may or may not follow through on, and generate images that make me laugh harder than they probably should. It’s part assistant, part experiment, part entertainment. Some days it makes my life easier. Some days it makes it weirder. Occasionally it does both at the same time, which feels like a peak technological achievement.
My Gen Z kids, of course, already interact with AI like it’s always been there. They don’t fear it. They don’t marvel at it. They don’t question it. They just use it. The same way they use streaming services without ever once wondering what it felt like to wait a week for the next episode of anything.
Meanwhile, I’m approaching it like a teenager in the house: I’m giving it some chores and a little bit of trust, but I’m still checking its browser history and waiting for the moment it tries to lie to my face.
Do I trust it completely? Absolutely not. But I didn’t trust remotes either, back when they were chunky little rebels with five buttons and a tendency to disappear into couch cushions like they were being hunted. Then they evolved. More buttons, more power, more control over more things I didn’t even know I needed to control. And somewhere along the way, resistance turned into reliance… and then into delight.
Now the remote isn’t even a remote. It’s an app on my phone. The same device I once used strictly for texting and occasionally calling people I didn’t want to talk to is now running my entire house like a slightly overachieving intern. One tap and the lights dim just right, the music kicks in, the hot tub starts heating like I’ve got my own low-budget resort on standby. It’s ridiculous. It’s unnecessary. It’s also undeniably better.
That’s the pattern. Confusion, resistance, overuse, dependence. Rinse and repeat.
The difference is, this time, I’m leaning in earlier. Not because I think AI is perfect or inevitable in some grand, polished way. But because I know how this story goes. The people who figure it out first aren’t the ones who understand it completely. They’re the ones willing to play with it, break it, laugh at it, and keep going until they can do something new and unique with it that others simply can’t.
The AI train is leaving. Are there risks? Absolutely. Should smarter people than me be working on guardrails? Yesterday. But waiting for perfect safety before boarding isn’t caution — it just increases the certainty of getting left behind.
Don’t get me wrong. I recognize it’s not all roses and rainbows. Some of the people already on the train are definitely up to no good in the back car. AI in the wrong hands isn’t just inefficient. It’s genuinely dangerous. Deepfakes. Scams so polished they’d fool a forensic accountant. Propaganda that writes itself, personalizes itself, and scales itself while you’re still deciding whether to fact-check it. Eyes open is part of leaning in.
You can be enthusiastic and not naive. In fact, that’s basically the Gen X brand.
Gen X has never needed things to be perfect to make them useful. We just need them to work well enough to improve the moment. So here’s the thing: if AI can help me write faster, think differently, create more, and maybe even find the metaphorical remote control for the rest of my life?
I’m in.
And yes, I know what the room is thinking: “But what about the erosion of jobs?”
Fair. Real question. Worthy of more than a dismissive wave and a pivot to hot tub logistics.
Here’s the honest answer: AI is going to take some jobs. It already is. The same way the personal computer quietly escorted the typing pool out of the building and handed their desks to software. Somewhere in the mid-eighties, a whole profession of people whose value was speed, accuracy, and the ability to format a memo without pulling their hair out—gone. Not because they weren’t good. Because a machine got faster, cheaper, and tragically better at not needing a lunch break.
The secretaries didn’t disappear though. They became executive assistants, office managers, project coordinators, operations leads. The job evolved. The title evolved. The paycheck, unfortunately, took its sweet time catching up, but that’s a different essay for someone angrier than me at the moment.
The point is: every time a machine ate a job, it also quietly spawned three new ones that didn’t exist before. Most of which required someone who could actually think alongside the machine instead of just feeding it paper. The computer didn’t eliminate the need for humans in offices. It raised the stakes for what humans were expected to do in offices. Which was stressful for everyone at first, and then it just became Tuesday.
AI is doing the same thing. Faster, louder, and with considerably more drama.
The fear is real. I’m not going to dress it up in a bow. If your job is primarily about producing the first draft of something—reports, code, dashboards, emails, images, summaries, legal boilerplate—AI is already in your lane, driving with one hand on the wheel and zero anxiety about your mortgage or how you’ll pay for healthcare. That’s not nothing. That’s a genuine disruption for real people and it deserves more than a shrug from those who will probably land okay.
But here’s where I get cautiously optimistic, not in a toxic-positivity “everything happens for a reason” kind of way, but in a “I’ve watched this movie before” kind of way:
The jobs that are emerging? Weird jobs. Good jobs. Jobs that didn’t have names two years ago. Prompt engineers. AI trainers. Ethics reviewers. Workflow architects. People who specialize in teaching AI what it doesn’t know yet—which is considerable and occasionally humbling to witness. There’s an entire economy being built around the gap between what AI can do and what it should do, and right now that gap is approximately the size of a Costco parking lot.
And someone has to manage the chaos. Someone has to catch the hallucinations before they end up in a legal brief. Someone has to ask the question the AI didn’t think to ask. Someone has to bring the actual lived human experience to the output and say, no, that’s not how people talk, try again.
That someone is still a person.
The train is already moving, no seatbelts, no printed schedule, and the conductor is making some of it up as we go. But it’s going somewhere interesting, somewhere new, and I’d rather be on the train than standing on the platform, left behind.
So, for now at least, I’ll keep pushing buttons, feeding prompts, and daydreaming about how AI can work for me.
Stay educated. Think critically. We’ll figure it out on the way.