AI, Creativity and the Human Element with Tan Siok Siok

Tan Siok Siok shares her perspectives on the future of AI and its impact on humanity, based on her new book "AI for Humanity".

Fresh out of the studio, our host Bernard Leong sat down with Tan Siok Siok, co-author of AI for Humanity, for an insightful discussion on the evolving role of AI in our lives. During the conversation, Siok Siok explored how AI makes us more human by emphasizing the importance of authenticity over perfection. They discussed the creative process, with Siok Siok viewing AI as a tool to enhance rather than replace human creativity. The interview also touched on the broader implications of AI, including job displacement and the geopolitical nuances of AI development. Tan Siok Siok shared her thoughts on the need for humans to guide and nurture AI responsibly, dismissing the notion of a doomsday scenario driven by AI. The conversation offered a nuanced perspective on how AI can be a partner in human progress rather than a threat, encouraging listeners to engage actively with AI’s potential.


"Well, I think AI makes us, makes me more human in terms of understanding that nirvana or the ultimate achievement is not to be perfect. The ultimate achievement is to be authentic, present, and yourself. I guess it exists in this human dimension. One of the things you realize is that when something is too perfect, whether it's your selfie or something you've written, people start to distrust it. Right now, if I generate or design something, or if I write something and it sounds too fluent and flawless—without accent, pauses, or mistakes—people might suspect that it's not Siok, but actually the digital twin of Siok. One question that people ask me and ask themselves is: how has AI made me more aware of what it means to be human?" - Tan Siok Siok

Introduction: Tan Siok Siok, co-author of "AI for Humanity" with Andeed Ma and James Ong. You can find Siok Siok on LinkedIn and X (formerly known as Twitter).

Here is the refined transcript of the conversation between Bernard Leong and Siok Siok:

Introduction

Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology, and media in Asia. I'm Bernard Leong. How do we navigate in the age of AI? With me today is Tan Siok Siok, author of "AI for Humanity: Building a Sustainable AI for the Future", to talk about where we are in AI today and how humans will navigate AI into the future. Welcome back, Tan Siok Siok. I think it's been a very long time since we last spoke.

Siok Siok: It feels like a century since we last spoke, given everything that’s happened—a once-in-a-century pandemic and a few wars breaking out. By the way, I remember telling you that I received a cold email from someone in the U.S. who thought I had great potential as a podcast guest and offered to represent me after hearing our interview from long ago. I hope I can live up to that potential in our conversation today.

Bernard Leong: I remember when we last spoke, you were working on a documentary called Twittamentary. Of course, now we refer to Twitter as X, or whatever it’s called today. It's been nearly a decade since that conversation, and just a few days ago, I celebrated 10 years of Analyse Asia. It’s interesting—we often overestimate what we can achieve in a year, but vastly underestimate what we can accomplish in a decade. So, what have you been up to over these past ten years, especially with everything that's happened—the pandemic, wars?

Siok Siok: Congratulations on the 10th anniversary of Analyse Asia! Podcasting is tough, so I admire your persistence. Over the past decade since we last spoke, I was published in China for my photography—some people might know that I take a lot of black-and-white, nostalgic iPhone photos of the Beijing Hutongs. When I started, I didn’t see myself as a photographer; I was just experimenting with mobile photography and Chinese social media. Surprisingly, I gained a following and was even signed by an agent, which is quite remarkable for an iPhone photographer. I published two books, one of which, a collaboration with a novelist, won the prestigious Lu Xun Literary Prize, one of the highest literary awards in China. Recently, that book, Beijing City In Time, was translated into Arabic and featured at a major book fair in the UAE. Another significant development was my move back to Singapore after more than a decade in Beijing. It wasn’t planned—I happened to be in Singapore when the pandemic broke out, and as life unfolded, I just ended up staying.

Career Lessons from Siok Siok

Bernard Leong: I was about to have you on the podcast earlier, but you asked me to wait because you were working on a book about AI, which was a pleasant surprise. Before we dive into today’s main topic, I’d like to ask: What life lessons have you gained over the past decade? You've had incredible experiences—from your time in China to your work on projects like Boomtown Beijing and Twittamentary, and your photography book. It seems like you've accomplished so much more than I have in the last 10 years. What insights can you share with my audience?

Siok Siok: I think the key lesson is to make uncertainty your friend. Growing up in Singapore, we often feel the need to be responsible for everything and to control every aspect of our lives. But what the pandemic and the past few years have taught me is that life has different seasons, and many things are beyond our control. During the pandemic, when I unexpectedly ended up back in Singapore, I finally allowed myself to relax, realizing that I couldn’t control everything. Sometimes, you just need to go with the flow and take things as they come.

Another important lesson is to flow with your community and contribute to it. Instead of focusing on fully formed plans, it’s more valuable to explore, bond with friends, listen, and offer help. You don't need to know everything or see far into the future; just be a friend to everyone and help them succeed.

Lastly, I've learned that whatever you think is unimaginable can become reality. I never imagined I’d return to Singapore or become a co-author of an AI book. And yet, here we are, having this conversation a decade later, with Analyse Asia still going strong. So, you can imagine it, and you can reimagine it. That’s a hopeful message, I think.

AI and the Creative Process

Bernard Leong: It's time to dive into the main topic of the day—your new book on AI. I've been working in AI for nearly 30 years, starting from my days as a theoretical physicist. The book, AI for Humanity: Building a Sustainable AI for the Future, has you as one of the co-authors, along with Andeed Ma and James Ong, whom I know. To get started, it might be helpful for my audience if we establish some basics. How would you define AI, machine learning, and generative AI, particularly for a layman today?

Siok Siok: Here’s a secret—I’m a layperson when it comes to AI. So when I saw your question, I panicked a bit, thinking, 'Oh no, Bernard’s giving me a test!' As Asians, we often think about prepping for tests. Full disclosure, I’m not an AI expert. I didn’t become one just because I co-authored a book about AI. I always say I’m not an AI expert—I only play one on Zoom, and here we are on Zoom!

My understanding of AI is that it’s not just one thing; it’s really a collection of technologies. It’s the effort of technologists, scientists, and entrepreneurs to mimic human intelligence using machines. It’s a very complex field because different people have different theories about what constitutes human intelligence. One key takeaway from researching and writing the book is not to get too caught up in the current buzzwords—they’ll come and go.

There are different hypotheses about what human intelligence is and how we can best replicate or surpass it with machines. Various schools of thought have gained prominence, and machine learning and generative AI fall under the connectionist approach, which focuses on learning from data rather than encoding intelligence with explicit rules. This approach has been on the rise over the past decade, but that doesn’t mean it will always dominate. Even with generative AI being a buzzword recently, we need to be cautious because there’s so much we still don’t know about AI—and about what it means to be human, intelligent, and wise.

Bernard Leong: So, what inspired you to write the book, and how did you go about putting it together? I assume your co-authors are also well-versed in AI, right?

Siok Siok: One very important thing to know is that we started writing the book before the launch of OpenAI's ChatGPT, actually earlier in 2022. It wasn’t inspired by ChatGPT or generative AI—it was much more innocent than that. We didn’t anticipate that ChatGPT would go viral, creating this 'iPhone moment' where everyone suddenly realized, 'Oh, AI is something we need to pay attention to.' The beginning was quite innocent. One of my co-authors [Andeed Ma] is an AI governance expert, and James Ong has a PhD in AI from the University of Texas at Austin. They were collaborating on the book because they recognized the risks and potential of AI and wanted to share their expertise.

I got involved for a very mundane reason—after the lockdown during the pandemic ended, I had lunch with James, a longtime friend, and innocently asked him what he was working on. He mentioned the book, and I asked how I could help. I had published two books before, but I failed to remind him they were photography books in Chinese, not about AI, not in English, and not nonfiction. The whole quest was very simple: to explain the risks and potential of AI to a general audience. We approached it from an interdisciplinary perspective, believing that bringing together different viewpoints would provide a more holistic picture of what AI means to different people.

But then, ChatGPT launched early in our writing process, and suddenly everything was evolving so rapidly. AI was in the headlines daily, and we were trying to capture something like lightning in a bottle. It became a challenge to keep up because there was always new research, breaking news, panic, and exuberance. That’s the origin of the book.

Explaining AI to a Non-Technical Audience

Bernard Leong: It’s even harder now because I’m teaching AI at the National University of Singapore, both in the NUS Business School and the Institute of Systems Science, while also trying to keep up with the latest developments. On top of that, I’m working on my own AI startup, which is pretty overwhelming. I imagine when you started writing the book before ChatGPT, things were relatively quiet, and then suddenly, everything just started coming together all at once. Who is the intended audience of the book?

Siok Siok: I think the caveat here is—Bernard, I have a question for you. Since you’re teaching AI to both technical and non-technical audiences, how do you explain what’s happening with AI? You’re facing the same challenge we did, right?

Bernard Leong: The hardest part is probably explaining the difference from traditional methods in computer science or digital applications. Typically, you have an input, give it instructions, and then it generates an output. But with machine learning, you start from data, and the model tries to identify the patterns and rules within it. You then feed new data into the model to see if it produces the output that was originally intended. This process naturally leads to probabilistic outputs. For example, is it a cat or not a cat? That’s easy to define now, but if a completely new breed of cat appears, the AI might struggle to identify it. How do you help AI improve in such situations? This is where most laypeople struggle. I’m probably one of the few who sees hallucination in AI as a feature, not a bug. But people often misunderstand this concept.
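
To make that probabilistic-output idea concrete, here is a minimal editorial sketch (not from the episode) of a toy "cat or not cat" classifier; the feature names and numbers are invented purely for illustration:

```python
# A toy "cat or not cat" classifier (illustrative only).
# Unlike a traditional program with hand-written rules, the model learns
# patterns from labelled examples and returns a probability, not a verdict.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per image: [ear_pointiness, whisker_length]
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])  # 1 = cat, 0 = not cat

model = LogisticRegression().fit(X, y)

# An unfamiliar example (say, a new breed) yields a probability that can
# hover near 0.5 rather than a confident yes or no.
print(model.predict_proba(np.array([[0.5, 0.6]])))
```

The output type is the point: a distribution over answers rather than a fixed rule, which is why hallucination is built into the approach rather than bolted on.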

Siok Siok: I agree.

Bernard Leong: Exactly. The real challenge is understanding hallucination from the perspective that, when implementing generative AI, the key is to constrain it as much as possible.

Siok Siok: I agree with you—hallucination is a feature of probabilistic calculations in AI. Because it’s calculating probabilities, you end up with a range of outcomes, some of which align with expectations and others that fall on the outer edges of probability. The interesting tension here is that when you make AI safer and more obedient, it can also become less interesting. Part of what makes AI fascinating is how it surprises you, prompting you to think in unexpected ways.

This unpredictability is challenging because we’re used to programming computers to follow explicit instructions and execute tasks accordingly. When AI behaves unpredictably or offers a range of outcomes, we often struggle with how to handle it. However, as a creative person, I find this aspect of AI the most captivating. I’m always intrigued when AI presents seven key points, and I focus on the one or two that surprise me, asking, 'Can you expand on point three and point five?' I want to understand how it arrived at those insights. This reflects a different expectation and philosophy when it comes to interacting with AI.
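
As a rough sketch of that "range of outcomes" (an editorial illustration with a made-up four-token vocabulary and scores, not anything from the book), temperature-scaled sampling shows the trade-off Siok Siok describes: turn the temperature down and you get the safe, obedient answer almost every time; turn it up and the surprising options come through.

```python
# Temperature-scaled sampling over hypothetical next-token scores.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["expected", "plausible", "odd", "wild"]
logits = np.array([3.0, 1.0, 0.5, 0.1])  # invented scores; highest = safest

def sample(temperature: float) -> str:
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()  # softmax over the rescaled scores
    return rng.choice(tokens, p=probs)

print([sample(0.2) for _ in range(5)])  # almost always "expected"
print([sample(1.5) for _ in range(5)])  # a wider, more surprising mix
```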

Bernard Leong: One of the biggest changes in AI today is that it used to be communicated primarily through mathematical language. The beauty of generative AI, like ChatGPT, is its ability to use natural language to communicate with a black box that generates content that sounds very human-like. However, it’s important to remember that this output is just a reflection of the data it’s been trained on, which is all human-generated data from before 2021—what we might call the 'age of the human-made internet.' Going forward, we’re entering an era where AI will be trained on a mix of synthetic and human data.

Siok Siok: You're absolutely right, and as someone who’s not great at math but excels with languages, this shift is a huge advantage for me. It allows me to be precise in how I prompt AI and adjust the outcomes to my needs. One thing I’ve realized is that, as my friends often say, the hottest programming language today—thanks to generative AI—is English, Chinese, or whatever language you speak, whether it’s Bahasa Indonesia or any other. This means that anyone can become a technologist if they can clearly express what they want the AI to do or collaborate on. It’s great news for me because, while I may not be strong in math, I’m very good at communicating my needs to both people and machines.

AI's Role in Enhancing Human Abilities

Bernard Leong: That brings us to another point. Now that you can interact with a system that can generate what you want to see, how do you view this as a creative? Do you find that it expands your scope of work, or do you lean more towards the purist approach, where some creators reject anything generated by AI because it’s essentially remixing the work of thousands of artists into one form? As an artist, how do you perceive the generative capabilities of AI?

Siok Siok: That's a great question. I don’t think I’m entirely representative of creatives and artists as a whole. Many of my friends and peers in content creation, filmmaking, design, and marketing are quite fearful that AI technology will displace them. We're already seeing early signs of this, with people losing jobs to AI models, and agencies, marketing firms, and production companies losing contracts and projects. It’s a very sobering moment.

However, I’m quite open and excited about AI. I don’t focus on using AI for automation or efficiency; instead, I see AI as a sparring partner or player-coach. If you imagine competing in the Olympics, it’s like playing against an AI model or partner—it helps you quickly elevate your game. For me, the exciting aspect isn’t the automation or generation but the augmentation and collaboration it enables.

Bernard Leong: A lot of people are quite curious about AI, especially in the private sector and government agencies, where I teach a wide range of individuals. They’re genuinely interested in how AI can help them, but the question that frequently arises is whether AI will replace their jobs. I think it’s too early to make that call. A great study by Rest of World highlights this issue. You may know that many image designers, masters of Photoshop, in countries like Bangladesh, India, and the Philippines, work on platforms like Fiverr and Upwork. When generative AI tools like Midjourney emerged, many designers saw their jobs disappear and lost their livelihoods. However, the study found that after a few months, these designers returned to the platforms, rebranding themselves as 'image prompt engineers.' They now generate AI prompts and touch up the resulting images, a skill still in demand.

So, my question is about this so-called displacement and augmentation of jobs. How do you and your co-authors view it? I know you discussed whether we’re ready to mitigate the risks of human-level AI. Why don’t we start by exploring the displacement of jobs from that angle?

Siok Siok: Job displacement is very real, and I’ve been surprised by just how extensive it is, especially as I’ve been promoting the book and talking to people. However, your observation is spot on—there’s a cycle to this. Initially, people fear they’re being replaced, but then they come to realize that humans are still essential in the workflow.

In the book, we advocate for a shift from 'human versus AI' to 'human with AI.' Most of the stories you hear in the media focus on how AI outperforms humans in tasks like chess and Go, or even scoring higher on exams like the bar. But what’s more productive is to think about how humans can work with AI.

Take your example of image prompt engineering—where people who once did this work manually now realize they can do it better, faster, and perhaps command a higher price if they know how to effectively use AI tools.

I agree with you—we're still at the beginning of this cycle, and it's important to remember that we're also in a period of global economic downturn. This is why we're seeing cost-cutting measures and job displacement. However, as the economy improves, I believe people will increasingly realize that they can create greater value by working with AI, rather than being replaced by it.

We're likely to see new job titles emerge, like 'image prompt engineer,' along with many other possibilities. We need to give time for growth and allow things to play out. This is why we advocate for human-AI symbiotic intelligence. Instead of viewing automation as machines doing everything, we should think about how to break down workflows into human-AI collaboration. In this model, humans direct, curate, command, guide, and nurture the AI, ensuring it works effectively alongside us.

A simple example is what you mentioned about prompt engineering for images, but there are many other examples, such as drug discovery or weather prediction, where humans act more like engineers, using plain language rather than needing to know complex math or programming languages.

Bernard Leong: One thing I’ve noticed when discussing AI’s ability to beat humans is that it’s elevating our performance levels. Take, for example, the fourth game between Lee Sedol and AlphaGo. Lee did manage to beat the AI with what’s known as a 'God move.' For those unfamiliar, a God move is an extremely rare, game-changing move that can completely alter the course of the game, turning a losing situation into a winning one. In the history of Go, such moves are incredibly rare. Despite being defeated in the first three games, Lee’s shame from those losses pushed him to achieve this extraordinary move, and since then, he has never lost to any player.

Similarly, even the European player who helped train the AI significantly improved after playing against it. This was highlighted in the documentary. So, rather than fearing that AI will take over our jobs, isn’t it possible that AI can help us reach new levels of achievement?

Siok Siok: I think you’re right. In fact, chapter 8 of our book opens with a story about a Go player who managed to beat an AI model—with the help of coaching from another AI. It’s a case of humans with AI versus AI, which I find both interesting and exciting. That’s why I use the player-coach analogy; it perfectly captures this dynamic.

If I were competing in the Olympics and playing against a former Olympic champion, I’d improve much more quickly. Now, imagine if it weren’t just one coach, but hundreds, thousands, or even millions. That’s the kind of opportunity AI presents. However, the short-term challenge, especially for us in Singapore—a relatively small market—is that this rapid advancement can be overwhelming.

In a small market, the challenge is that it’s often shallow, with most people operating in the lower to middle tier in terms of budget, expectations, and skill. This is where the risk of automation becomes significant. However, if I were in a place like Hollywood, where projects can range from $5,000 to $500,000 or even $50 million, there would be much more room to explore and innovate. I could quickly scale from $500,000 to $5 million by leveraging AI in my work. While there may be short-term challenges, the key is to reimagine what it means to compete and succeed on a global scale with the help of AI, especially for people in smaller markets and developing economies.

Bernard Leong: Yes, but living in Singapore, we have no choice but to engage with the global market. My calls at 4 a.m. aren't with local developers who are working on API solutions—they're with friends at DeepMind and OpenAI in the U.S. While our local community focuses on developing with existing tools, globalization has at least made the sharing of information much more accessible.

Information flows quite freely now, allowing anyone to be as skilled as the best in the world—it’s just a matter of upskilling. One thing that’s not fully appreciated yet is how effectively AI can help people enhance their skills, even more than they might expect.

We often focus on the replacement of jobs, but I’d like to shift the conversation to the risk of a doomsday AI scenario that many people have brought up. When it comes to artificial general intelligence (AGI), people often ask me when we’ll know if we’ve achieved it. My response is that we’ll know it when we see it. However, as of today, I don’t believe AGI is anywhere close.

Different Definitions of AI and AGI (Artificial General Intelligence)

Siok Siok: How do you define AGI for yourself?

Bernard Leong: For me, AGI must have the ability to reason, synthesize information, and create new knowledge. We haven't reached that point yet. Many people wonder why so many theoretical physicists, like myself, have transitioned into working in AI. What usually happens is that much of higher-dimensional physics involves the use of tensors, which are mathematical objects. You've likely heard of terms like TensorFlow—these concepts originate from physics. These mathematical frameworks are designed to generate a space of all possible outcomes.

Some of these possibilities are hallucinations, as you’ve mentioned, but others could be new insights we haven’t seen before. However, they still require human or experimental validation. The same principle applies in physics—we can generate numerous insights with string theory, but we still need to seek out experimental evidence to determine whether they’re real.

For AGI, the key point for me is that it must be able to reason and make decisions independently, not just based on what it observes from humans. That’s when we’ll truly see AGI. There are different definitions of AGI, but I lean more towards Google's definition, as it offers a more complex and nuanced perspective compared to OpenAI's approach. In fact, OpenAI's definition of AGI is essentially what Google considers level two AI—agents that can perform functions and reasoning. I don't think we've reached that point yet. I believe much of what we’re seeing is still just a reflection of ourselves.

Geopolitics and AI

Siok Siok: That's a great point, and it's something we discuss in Chapter Two, 'The AI Trap.' We're often unaware of how much we're projecting ourselves into AI models, creating an infinite mirror of our own traits and different versions of ourselves within these systems. For me, this is especially relevant given the many years I spent in China.

For me, one of the challenges with artificial general intelligence is that we're overlooking geopolitics. We're not considering the fact that different cultures and regions have varying interpretations of reality, truth, and what constitutes general intelligence. As a result, we’re unlikely to see a single, universal model of general intelligence that applies globally. With 7 billion people on Earth, we're likely to see different models of reality and intelligence emerge, rather than a single, unified model. This is a significant obstacle we’re not fully considering—all the human factors like politics, economics, and sociology will inevitably come into play.

One of the things I find particularly interesting is that if you prompt two different AI models—one based in China and the other U.S.-centric—about the same world event, whether it’s a war, a pandemic, or anything else, you’ll get two different sets of answers. Truth, reality, and intelligence are far more complex than we often consider. It's not just about calculation or accuracy.

Bernard Leong: Yes, but it's a physical abstraction of something you can represent in reality. For example, justice is an abstract concept, but the law isn’t—it reflects certain aspects of justice. Law is a representation in reality that can capture most, but not all, of what justice means.

From that perspective, it's similar with AI. AI is an abstract concept, but what we can create in reality today is through machine learning, which relies on data. You spend time with many of the top technologists in Beijing, and I believe my audience may not realize that if they spoke with you more, you could likely connect them with some of the most powerful founders of Chinese tech companies.

One question that really struck me is related to Kai-Fu Lee's discussion in AI Superpowers. The surprise was that China didn’t invent ChatGPT—the Americans did. I’ve pieced together part of the story, and my sense is that China didn’t miss out; it’s more that their culture and focus on resource constraints played a role. Similarly, Google didn’t miss it either—they deemed it too dangerous to pursue. What are your thoughts on this?

If we have different large language models (LLMs) across regions, we’ll end up with very different perceptions of truth. I’ve personally had difficulty accessing LLMs in China. I've tested every LLM I can find, but to try any of the Chinese LLMs, I had to borrow a number from my Chinese students in class. They showed me how Baidu's LLM works, as well as Alibaba's model and Tencent's Hunyuan, just to see how a Chinese LLM would approach the same question from a very different perspective.

Siok Siok: When discussing AI models, we often overlook the complexity of truth. Truth is intertwined with morality, philosophy, history, permission structures, and social and cultural factors. The idea of a single AGI or general intelligence model that can apply universally is, in my opinion, premature. Perhaps it could be realized someday, but the reality is that all these human factors will inevitably come into play. Everyone will want to define their own sense of what is factual and what constitutes reality.

Take fact-checking as a simple example. A pure technologist might focus on achieving the highest level of accuracy, but ideology, judgment, and individual interpretations of reality also influence fact-checking. For instance, consider the recent U.S. presidential debate—people don't agree on what is factual, which highlights the challenge of defining truth when we talk about developing a general intelligence model.

Bernard Leong: But then, when it comes to factuality, is it because we're now in a postmodern world where everything is relative? I don’t subscribe to that view. I'm more aligned with Immanuel Kant's idea that there is a categorical imperative—some things are inherently true.

The postmodernist view suggests that truth is relative, but you can’t embrace relativity entirely. There must be a common baseline we can agree on. For example, in the U.S. presidential debates, despite differing opinions, the truth—the factuality of the matter—still exists.

While I agree that pinning down what AGI truly is may be challenging, I believe there are aspects of it that can be captured in a pragmatic and implementable way. But bringing it back to another question: Do you think we are facing a doomsday scenario with AI?

Doomsday Scenarios and AI Responsibility

Siok Siok: I don’t think so at all. If there is a doomsday scenario, we can’t simply blame AI. We often assume that AI is an evil force, but that’s problematic. When we hand over our human agency to machines, we’re not taking responsibility.

The central idea of AI for Humanity and the concept of responsible AI is that we need to reclaim our sense of agency.

We must recognize that this isn’t something happening to us—it’s something we’re directing, curating, guiding, and nurturing.

The doomsday scenario is often framed as an external event, like a natural disaster, but we must remember that many natural disasters have human causes, stemming from our lack of awareness and accountability.

The real threat isn’t AI itself; it’s our lack of awareness, and our inability to communicate, negotiate, collaborate, and understand each other. We need to maintain perspective—it’s far easier for humans to destroy one another than for AI to become sentient and take over the world.

AI as a Tool for Self-Improvement

Bernard Leong: What is the one thing that you now know about AI that very few people do?

Siok Siok: One thing I know about AI that few people realize is what you mentioned—it’s a tool for attaining higher levels of awareness, intelligence, and wisdom. I see AI as a sparring partner, a player-coach, someone you can compete against to elevate your skills.

You mentioned Go as an example—how players improve by playing against AI. The remarkable thing about AI is its ability to simulate hundreds, thousands, even millions of games in rapid succession. What I know about AI is that it has the potential to make me better. However, it’s crucial to have the wisdom to guide and nurture it. I need to be clear in directing AI towards what I truly desire and want to achieve. This is something I emphasize to my creative friends who are worried about AI taking their jobs.

One thing AI does is strip us bare. In the past, technical barriers in editing, transcription, and content creation provided an excuse for not focusing on originality and insightful analysis. But now, with those barriers removed by AI, our true abilities—good or bad—are immediately exposed.

I think that’s quite shocking for many people. If you’re a poor writer, AI can make you worse; if you’re a good writer, AI can elevate you to greatness. Essentially, we need to improve—not just in our skills, but also in our ability to guide, nurture, and constrain AI effectively.

Making AI More Human

Bernard Leong: So what is the one question that you wish people would ask you about AI that very few do? After so many book tours and answering countless questions, is there one question you wish people would ask you, but they never do? One that makes you wonder, 'Why has no one asked me this?'

Siok Siok: The one question I wish people would ask me about AI is, 'How does AI make us more human?'

Bernard Leong: So now I'm going to ask you—how does AI make you more aware and more human?

Siok Siok: I believe AI makes me more human by helping me understand that the ultimate goal isn't perfection but authenticity. It's about being present, being yourself, and existing on a human level. One of the things I've realized is that when something is too perfect—whether it's a selfie or a piece of writing—people tend to distrust it.

Right now, if I create or write something that sounds too fluent and flawless, without any accent, pauses, or mistakes, people might suspect it’s not really me—it’s my digital twin. That’s one way AI makes me reflect on what it means to be human. A question I often hear, and one I ask myself, is: How has AI made me more aware of my own humanity?

Bernard Leong: You sound like my editor. You know why? Because 90 percent of the time, I tell him to go through the script and delete all the 'ums' and 'ahs.' But he always comes back and says it makes me sound less human. He tells me that my listeners expect those 20 to 30 minutes of unfiltered conversation—it’s what makes it feel real. It’s all about maintaining that human element.

Siok Siok: I believe it’s the pauses, the filler words, the accents—like our lingering Singlish accent as Singaporeans—that make us human. It’s the flaws, imperfections, and moments of vulnerability that truly define our humanity.

We're starting to see this in marketing and content creation—people are becoming more suspicious of anything that appears too perfect or generic. Having spent a lot of time in China, I’ve observed that they’ve perfected the art of AI-driven image editing.

They can take any photo and make your complexion flawless, erase wrinkles, and create perfect symmetry. But what we’re finding is that the definitions of beauty, intelligence, and originality are shifting. Imperfection is becoming a defining feature of beauty. The hesitations, the vulnerability, and the sense of uncertainty are becoming hallmarks of wisdom. That’s what we need to embrace. We’ve always celebrated intelligence as a bigger brain, the ability to think faster, analyze more clearly, and calculate numbers—but now, we’re recognizing that there’s more to it than that.

That’s because we’ve traditionally compared ourselves to other animals, positioning ourselves as the apex of the animal kingdom. But now, as we measure ourselves against AI and machine intelligence, other qualities are becoming more valued—wisdom, uncertainty, vulnerability, empathy, and emotional intelligence. These traits, often dismissed as 'soft skills' over the years, are gaining importance.

We’ve long associated these skills with women—listening, collaborating—but I believe we’ll see a shift in what we celebrate. Instead of focusing solely on measurable attributes, we’ll start valuing these intangible qualities that make someone likable, trustworthy, or worth collaborating with.

Bernard Leong: That’s an excellent point, which brings me to my traditional closing question: What does success look like for you if the book achieves its intended outcome?

Siok Siok: For me, success would be if anyone who reads the book, attends our events, or listens to this podcast feels empowered to do something about AI. They don’t need to be experts or know how to program; it’s about understanding the essence of AI and realizing they have the ability to shape and define the future.

What’s happened with the extreme reactions to AI—ranging from fear to exuberance—is that people often feel powerless. Those who are fearful want to avoid it, while those who are overly enthusiastic project all their hopes and aspirations onto AI.

For me, success would be if everyone felt they could understand AI. Even if they don’t know much about it yet, they’d feel confident they can learn more and actively participate in the conversation about how to make AI work for the benefit of humanity.

Closing

Bernard Leong: I encourage everyone to check out your book, AI for Humanity: Building a Sustainable AI for the Future. Siok Siok, thank you so much for being on the show. Before we wrap up, I have two quick questions. First, are there any recent recommendations that have inspired you?

Siok Siok: Yes, I’d recommend two books—one fiction and one non-fiction. The first is Klara and the Sun by Kazuo Ishiguro, a Nobel Prize-winning author. It’s a poignant novel that explores the human-robot relationship, focusing not on AI taking over the world, but on themes of loneliness, alienation, and how we come to terms with ourselves while coexisting with robots and machines.

The other book is a memoir by Dr. Fei-Fei Li, often referred to as the godmother of AI. While it’s a bestselling and widely celebrated book, I found it particularly interesting when reading it with my book club, which includes women from different cultures, both Asian and Western. The memoir sparked surprising debates—some felt she wasn’t critical enough of AI, while others, like myself, were more empathetic to her perspective, understanding her sense of history and her reluctance to be overtly critical or contentious. So, I’d recommend both Klara and the Sun and Dr. Fei-Fei Li’s memoir.

Bernard Leong: It's called "The Worlds I See". So, how can my audience find you?

Siok Siok: Thankfully, my name is easy to remember—Siok Siok. If you search 'Siok Siok' on any social media platform—LinkedIn, Twitter, Instagram—you’ll find me. My name means 'to cherish,' so I hope to connect with you online.

Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive". The episode is mixed & edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn).
