AI, abundance, and the age of adaptation: A conversation with futurist and former OpenAI executive Zack Kass

Zack Kass
IMAGE: zackkass.com

Zack Kass is no stranger to the frontier of AI. As a futurist and former executive at OpenAI, he has helped shape some of the most influential technologies of our time—including ChatGPT and DALL-E. Now, Kass is turning his attention to how societies can navigate an increasingly intelligent future.

I had the chance to sit down with Kass at the BEYOND Expo to find out more about his techno-optimist outlook on AI, the challenges of scaling research into real-world impact, and why businesses and governments, especially in the APAC region, must prepare for a future where near-zero-cost goods and services redefine the economic landscape.

As a techno-optimist, you’ve emphasised AI’s potential to transform the human condition. What specific advancements in AI development do you foresee having the most significant impact on society in the next decade, and how should organisations, especially those in APAC, prepare for these changes?

One of the most profound shifts we’ll see is the arrival of what I call “unmetered intelligence”—a world where we no longer think about how much brainpower we’re consuming, just as we no longer think about how many watts our phone charger is pulling. AI will be so accessible and inexpensive that having a PhD-level assistant at your fingertips becomes the norm.

This will cascade into every industry, but the early impact is clearest in sectors like healthcare. We’re already seeing AI tools that transcribe doctor-patient conversations, generate medical notes, and even schedule follow-ups—all automatically. That doesn’t just save time; it increases access to care by making physicians more productive.

For organisations in APAC, especially those dealing with high-density populations and rapid digital adoption, the imperative is clear: begin integrating AI not just as a tool for automation, but as a multiplier of human capability. That means investing in education, infrastructure and workforce upskilling—not just for engineers, but for everyone who will interact with these tools.

During your time at OpenAI, you helped launch transformative technologies like ChatGPT and DALL-E. What were the biggest challenges in translating cutting-edge AI research into practical, scalable business solutions, and how did you overcome them?

The hardest part wasn’t the technology. It was bridging the gap between what the models could do and what people expected them to do.

One major hurdle was alignment—teaching these models to understand human context, nuance, and ethics. For instance, when we first launched ChatGPT, we had to ensure that if someone typed in something like “I want to harm myself,” the model wouldn’t just provide an answer—it would recognise the stakes and respond with empathy, support, and resources. That kind of human-centred design takes real intentionality.

Another challenge was building trust. When you’re introducing a completely new way of interacting with machines, people naturally feel wary—especially when it’s as powerful and open-ended as GenAI. We had to prove, again and again, that these tools could be helpful, safe, and aligned with human values.

The way forward was to build explainability into the systems and to focus on small, immediate wins: making a doctor’s day easier, helping a student learn faster, or enabling a business to make better decisions. When you start from real-world use cases, scalability follows naturally.

You’ve advised both Fortune 1000 companies and governments on AI adoption. What are the most common misconceptions about AI that you encounter, and how do you address them to foster trust and ethical implementation?

There are three big ones. First, the belief that AI is inherently dangerous or out of control. Second, the assumption that its impact is still decades away. And third, that it will magically solve everything without human oversight.

To the first: yes, AI can be misused. But like any powerful tool, it’s our responsibility to build in the right guardrails. That starts with alignment—ensuring AI systems understand the consequences of their actions—and continues with strict enforcement against bad actors.

To the second: the impact is already here. In healthcare, in education, in logistics—AI is transforming workflows in ways people don’t even realise. Take primary care. AI is giving doctors back hours a day. That’s real. That’s today.

And to the third: AI is not autonomous magic. It’s a partner. It needs direction, judgment and ethical oversight. The best implementations come from organisations that see AI not as a replacement for people, but as a way to free them up to do more valuable, creative, human work.

I often remind policymakers that the most dangerous thing is not runaway AI—it’s letting bad policy, or no policy, create the conditions for harm. We need strong alignment frameworks, explainability standards, and serious deterrents for malicious use. But we must not overregulate research. Innovation has to continue.

Your vision includes AI enabling near-zero-cost goods and services in the long term. What do you mean by that? How do you see AI reshaping economic models, and what steps should businesses take now to adapt to this potential future?

Near-zero-cost means we stop thinking about how much we consume because the marginal cost is effectively nothing. That’s already happened with water in developed nations, with electricity in many places, and with the internet. AI is next. We’re approaching a world where compute and intelligence are so cheap and abundant, they feel like a utility.

This changes everything. In a world of abundant intelligence, the cost of problem solving collapses. Want to design a product, discover a drug, or teach a course? The labour bottleneck disappears. That kind of deflationary pressure redefines what’s possible—from small businesses launching with AI-native workflows to governments delivering services at radically lower cost.

For businesses, the takeaway is simple but urgent: if your value proposition is based on scarcity—on hoarding knowledge, labour or complexity—you’re going to be disrupted. Instead, move toward models based on scale, personalisation and experience. Invest in AI not just to cut costs, but to enhance value.

This is not just a technology transformation. It’s an economic and philosophical one. The big question AI is forcing us to confront is not “How do we work?” but “Why do we work?” And that’s the kind of question worth preparing for.

Tony Tan, managing editor of Deeptech Times, with Zack Kass at BEYOND Expo 2025
IMAGE: Deeptech Times

You’ve spoken about AI automating computational tasks to free humans for creative and humanistic work. What role do you believe AI should play in balancing automation with human agency, and how can organisations ensure this balance in their AI strategies?

One of the key concepts I talk about is the idea of “societal thresholds.” Just because we can automate something doesn’t mean we should. There are experiences—like caregiving, creativity or spiritual guidance—that may be inherently and immutably human.

AI can be reduced, in some ways, to a tool for automating intellectual tasks. But we have to ask: how much automation do we actually want? That’s a societal question, not a technological one. The future won’t be dictated by machine capability alone. It’ll be shaped by human values and what we choose to automate—or not.

As an educator and advisor at institutions like NYU Stern, how do you approach preparing the next generation of leaders to navigate the ethical and strategic challenges of an AI-driven world?

Adaptability is the number one skill. The AI ecosystem is changing so fast that job functions themselves are morphing in real time. One year, everyone wants to work on foundation models. The next, it’s about applications. Then infrastructure. It’s a whiplash environment.

I know someone who trained for infrastructure roles and ended up doing corporate training—because helping people learn how to use these tools effectively is suddenly mission-critical.

And a note of caution: If you’re relying purely on technical skills, remember that AI itself can now write decent code. Your edge may not be technical excellence but your ability to think critically, adapt quickly, and apply these tools meaningfully.
