Back in the '80s and '90s, life was simple. Kids roamed the neighborhood. Parents didn't track them with GPS. Commercials interrupted your TV shows, and you watched them.
If you wanted to know if Ross and Rachel got back together, you waited – patiently – for the next episode. Research wasn’t a quick Google search. It was a card catalog, a library book and the satisfying sound of flipping pages.
Today, technology has commandeered every aspect of our lives and replaced the experiences of yesteryear. Kids still in diapers are nose-deep in screens. Shows are streamed and binge-worthy, making it possible to watch a full season in one lazy weekend. Need to research? Ask AI, and critical thinking becomes optional.
It makes you wonder: Are we blindly outsourcing what makes us human in our attempts to make life easier? Are humans fading dangerously into the background? What will happen to Gen Beta, the generation born between 2025 and 2039? They have no recollection of simpler times in a tech-free world. They are being born into a life of self-driving cars, AI agents, algorithm-fed media and socialization in the metaverse. Fortnite is a fitting example of the latter. Will Gen Beta have the discernment to question an output or experience when using emerging technologies?
And the pace of all this technology? Blistering.
If you remember, ChatGPT was released in November 2022. Less than three years later, it already feels like forever and a day ago. In that time, we have witnessed a frenzied acceleration of technology that has spurred business disruption, spawned competition and saturated our intake of information, foot pressed to the metaphorical gas pedal. Our senses are overloaded.
Everything is AI. But not just AI in general terms – we’re talking the likes of agentic AI and quantum AI. Google search is being altered by AI agents that fetch what they deem relevant. And somewhere in the future, Artificial General Intelligence (AGI) is percolating: a technology capable of performing any intellectual task a human can, with real cognitive abilities. Just typing that makes me shudder. Are regulations keeping up? Barely. It’s globally complicated.
Microsoft just scrambled to resolve a vulnerability in an AI agent designed to act autonomously, drawing attention to the prospect of hackers gaining control of AI agents without a trace. Per Microsoft, the issue was swiftly fixed, and no customers were affected.
What will it mean to be human in this machine-altered world?
Innovative technology will become more ethically consequential. There’s no denying that. Much like the experiment that is social media. In the early 2000s, almost no one foresaw the damaging consequences of social media: the toll on mental health, the global political meddling and the misinformation that crept into our communication channels. Deepfakes and algorithm-fed content that satisfies our own echo chambers. Manipulation by psychologically savvy algorithms is as nefarious as the “hey kid, do you want candy?” of our past. Stranger danger, they used to say.
But social media is a good example to learn from. All emerging technology will have a similar impact, ushering in a lot of good alongside a lot of unintended bad. It’s time to prepare for autonomous AI.
The need for humans
Turning control over to a machine can be both exhilarating and nerve-wracking. I recently rode in a self-driving car, and it nearly plowed into a gate. The human driver stepped on the brake. I suppose the machine got more training data, and I got a spiked heart rate from the experience.
As we venture into autonomous AI, governance and human oversight remain critical. Humans can’t step out of the loop: AI is not perfect, and it certainly can’t be trusted alone in high-stakes scenarios like health care, where human life is on the line, or where AI agents gain access to proprietary or classified information without safeguards.
Are LLMs useful liars?
I recently listened to an episode of the Pondering AI podcast called LLMs are useful liars, in which Andriy Burkov explained how large language models can be unapologetic liars. They depend on their training data, and with bad data, they are prone to error. Knowing this, will language models make good AI agents?
Emerging technology must be supervised to ensure safety and to reflect human values, ethical standards and legal frameworks as AI acts on our behalf. Our future depends on guardrails, responsible innovation and a human element that no machine can replace – at least right now.
As for Gen Beta? AI literacy pedagogy will need to reflect all aspects of AI, including ethical use and critical thinking. Let’s hope this budding generation finds a balance between technology and what makes us human. Only time will tell, but I have hope – and fear.
I’m just being human.
Follow Pondering AI – a podcast that covers the spectrum of society and technology with a diverse group of innovators, advocates and data scientists eager to explore the impact and implications of AI – for better or for worse.