Learnovate, AI and EduTech with Joon Nak Choi

Fresh out of the studio, Bernard Leong sits down with Joon Nak Choi, an Adjunct Associate Professor at Hong Kong University of Science and Technology and founder of Learnovate, to dive deep into the future of education and AI. JC shares his career journey and the backstory of why he started Learnovate, which focuses on AI-assisted grading, and explains what it is meant to solve for its users. Last but not least, JC shares his perspectives on how the future of education will evolve with generative AI and what success will look like in the next few years.


"The humans are going to be empowered to become superheroes like Tony Stark, and because you have your loyal A.I. assistant, Jarvis, doing all this stuff in the background, that's the example I always use when I give lectures on this topic. What ends up happening is that you need to make sure you can use A.I. correctly. If you offload too much, offload inappropriately, or become too dependent on AI for tasks where you shouldn't be dependent, then suddenly, you're no longer Tony Stark. You're one of those fat human descendants in Wall-E. The ones who can't even get back in their own chairs because they've forgotten how to walk, they've forgotten how to think, they're being fed a steady diet of soda pop from AI." - Joon Nak Choi

Joon Nak Choi, Adjunct Associate Professor, Hong Kong University of Science and Technology and Founder, Learnovate Technologies (LinkedIn, HKUST)

Here is the transcript of our conversation:

Bernard Leong: Welcome to Analyse Asia, the premier podcast dedicated to dissecting the pulse of business, technology and media in Asia. I'm Bernard Leong, and we've heard about how AI will impact education, from assessing student assignments to personalized learning. What does the future of education look like with AI? With me today is Joon Nak Choi, an adjunct associate professor from the Hong Kong University of Science and Technology and founder of Learnovate, who will help us decipher the future of AI in education. JC, welcome to the show.

Joon Nak Choi: Thank you very much.

Bernard Leong: I believe we got acquainted through the Undivided Ventures team, where we both served as advisors on various aspects of the AI initiative. During my visit to Hong Kong in June, we engaged in an extensive conversation about education and AI, particularly at the exciting intersection of edutech that large language models are now supercharging. To kick off our discussion, let's delve into your origin story. How did you begin your career?

Joon Nak Choi: My career has been quite long and convoluted. Feel free to call me JC if that's easier. I was born in Korea, hold American nationality, and currently reside in Hong Kong, which leaves me feeling a bit uncertain about my origins. I spent most of my formative years in the U.S. Like many undergraduates, I initially aimed to become a management consultant, a role I held for a couple of years before returning to pursue a PhD in social network analysis. This field involves the quantitative analysis of how people are linked to one another and has always been influential in technology. For instance, the Google PageRank algorithm originated from social network analysis concepts dating back to the 1970s, though many people don't recognize that field as a source of technological advancement. My PhD is in sociology, and like many sociologists, I eventually transitioned to a business school after completing my degree. Interestingly, the research I conducted on social connections using quantitative algorithms was reclassified as machine learning while I was asleep one night. A few years later, during my visiting professorship at Stanford, it shifted again to AI. So, in about a decade, I transitioned from being a sociologist to a management professor and finally to an AI expert, even though the core of my work has remained consistent throughout.

Bernard Leong: So, you have this very interesting balance of academia and startup life. Can you talk about the different endeavors that, how AI has been factored into your current journey?

Joon Nak Choi: The first time I encountered neural networks was in 2004 during one of my early PhD classes. Some psychologists were discussing it as the next big breakthrough, and while I thought it was fascinating, I viewed it as something distant that wouldn’t affect me for a long time. It actually took me about 15 years to fully engage with the concept. Interestingly, in the past five years, it has significantly influenced my career. I first began working with large language models around 2018-2019 while involved with a startup called Zectr, where we applied this technology to survey data. At that time, we were using earlier models, like BERT. I remember thinking, 'This is really cool, but the results aren't quite there yet.' A couple of years later, I founded Learnovate, which focuses more on the educational technology space.

What happened next was that everything I had been working on exploded in November 2022 when ChatGPT was released. Suddenly, everyone was amazed, despite the fact that the underlying technology had been around for quite some time. The primary difference between GPT-3 and ChatGPT was simply an improved user interface.

However, it made a significant impact and captured the public's imagination. As I had been advocating for online proctoring through visual processing algorithms, people in Hong Kong began to recognize me as an expert in AI and education.

Bernard Leong: That's the intriguing part, isn't it? You began working on the early iterations of what we now refer to as large language models. It wasn't until someone decided to use the entire internet as a dataset that we discovered the scaling laws and the emergent behaviours of ChatGPT.

Bernard Leong: Let's take a moment to reflect on your fascinating journey up to this point. What key lessons would you like to share with my audience?

Joon Nak Choi: Wow, there are so many key lessons. Could you provide a bit more guidance on what might be useful for your audience?

Bernard Leong: Anything related to life lessons or career advice; I think that's what they usually seek.

Joon Nak Choi: In terms of life lessons and career advice, I’ve had a rather unusual journey. Since my undergraduate days, I’ve resisted specialization and considered myself a jack of all trades. I like to think of myself as a master of some, though others might argue I'm a master of none. It can be challenging to succeed as a generalist, but it seems to be working for me now. I believe there are significant opportunities for generalists, but early in your career, it might be wise to take people’s advice and focus on becoming a specialist.

What’s another piece of career advice? There’s often a tendency to get swept up in the hype surrounding new trends, such as AI or blockchain, leading people to go all in without proper consideration. My advice is to ensure you are fully committed to something before changing your personal branding. Acting without full commitment can be extremely risky. I've observed many who were once fully invested in blockchain are no longer as engaged today.

I've had the privilege of working with many talented students who are achieving remarkable things at a young age. While that’s inspiring, it’s also essential to recognize that you can produce your best work in your forties and even fifties. There’s no need to rush or feel pressured to accomplish something significant early in your career, especially in the startup world. Research indicates that the forties and fifties are often peak productive years for founders, leading to the creation of successful startups. I hope these insights prove useful to your readers and viewers.

Bernard Leong: That’s incredibly useful to me. At least I know I won’t be irrelevant, especially as a startup founder approaching 50 and just beginning this journey. However, I’d like to shift our focus to the main topic of the day: EduTech and AI, as well as the work you've been doing with Learnovate.

To start, how do you see generative AI transforming education, particularly in terms of content creation and personalization for learners?

Joon Nak Choi: Before diving into the substantial potential that AI holds for education, I’d like to mention the various roles I occupy. I am currently a professor at HKUST and, while I do a lot of teaching, I serve as an adjunct because Hong Kong law prohibits someone at a public university from being both full-time and involved in a startup. That said, I maintain a quasi-full-time role at the university, where I teach extensively and have been invited to participate in several university-wide initiatives focused on AI and educational technology.

In addition to my academic role, I also run a startup, which is an HKUST spin-off partially funded by the university. I am truly grateful for the university's support in this endeavour. My unique position allows me to view developments from multiple perspectives, likely more than most. I’m also involved with organizations like the Digital Education Council, which will hold its annual meeting in Singapore in early November. If anyone is interested in meeting me while I’m there, I’ll be around that first week, so please feel free to reach out.

Bernard Leong: We have lunch scheduled for that week, right?

Joon Nak Choi: Yes, we do. So, where should I begin? AI, in general, is a topic that has generated a great deal of hype, along with skepticism about what it can actually achieve. The hype can obscure the reasons why AI shouldn't be applied universally, and education is no exception to this trend.

What can AI do in education? On one hand, it's crucial not to get swept up in the hype. Many people are advocating for an immediate shift to personalized learning models or suggesting a complete overhaul of educational systems. However, I believe they are buying into the hype a bit too much. The challenges we face are not solely technological; there are significant organizational and societal limitations as well. It’s one thing to consider what you can do, but it's another to understand what you are permitted to do, and these limitations must be acknowledged.

Conversely, we can look at what many corporations are doing. This is the approach I advocate for in education, especially in the short term.

How can we improve existing processes? Bernard, I believe you coined the term that distinguishes between doing better things and doing existing things better. I firmly believe we can enhance many current practices using AI in the short term while society catches up.

In some educational circles, there’s discussion about more substantial changes expected in the next 5 to 10 years. However, in the immediate future—specifically, over the next five years, which is when many of us will be building our careers and striving to make an impact—we need to focus on accomplishing tangible results. I foresee a trend toward incremental changes during this period, refining current processes, organizations, and mindsets through AI.

We will likely see more significant transformations in five to ten years when society is better prepared for them.

Bernard Leong: One of the first things I do in my class is present the written assignment I've given to my students. I then demonstrate ChatGPT in front of them, making it clear that as an AI practitioner, I do not encourage them to use it to write their essays. However, I emphasize that they can certainly outline their essay's logic flow and use GPT to enhance their writing, as long as they don't rely on it to complete the essay for them.

As educators, I believe you and I both consider the implications of this approach, especially regarding grading. What are some examples where you see generative AI making a tangible difference in classrooms or online education platforms?

Joon Nak Choi: To revisit some of the points I’ve emphasized, this is similar to what I discussed at the World Economic Forum event about a month ago and at a recent Turnitin conference in Hong Kong. Before we delve into how education will change and the impact AI can have, we must first consider the fundamental purpose of education. Many people overlook this background.

Historically, higher education was focused on cultivating values, character, and critical judgment in decision-makers. Over time, it shifted towards teaching practical skills, a model rooted in the German educational system from about 150 years ago. In the last 50 to 100 years, education evolved into a framework centered around measurement—what can be measured and how. Unfortunately, this has led to a situation where measuring has become the tail that wags the dog. Consequently, universities often aim for higher rankings, adopting specific practices to achieve those rankings, and everything else follows suit. I believe AI presents an opportunity to change this.

When we focus on measurable outcomes, we often fall back on memorization and standardized exams, which devalue certain types of knowledge. This shift can enable us to return to a more value- and skills-oriented educational model. It's not just about what you know; it's about how you apply that knowledge. AI can serve as a tool to assess and expand your understanding.

I'm sure we share this perspective. With that said, how does this affect day-to-day classroom dynamics? As you know, I am an advocate for integrating AI into teaching and learning, with learning being more important than teaching itself.

One critical aspect of this integration is the essay. Essays are far superior to multiple-choice or short-answer questions for assessing specific forms of knowledge. However, essays are also susceptible to manipulation. What happens when students use tools like ChatGPT or Gemini to write their essays? Various approaches have emerged to address this issue. For example, I met with representatives from Turnitin, who have developed AI essay detection tools. While these tools have limitations, Turnitin is promoting them as one source of input to identify AI usage.

However, I come from a different perspective. It’s not just about maintaining academic integrity; in-class handwritten essays provide a way to prevent AI cheating. A more intriguing use case, in my opinion, is teaching students how to use AI to write their essays collaboratively.

In discussions with many corporations here in Hong Kong, as you do in Singapore, there is a clear emphasis on AI readiness. Companies are looking for individuals who can effectively use AI to enhance productivity and produce high-quality work, rather than misuse it. As educators, we have a responsibility to integrate this into our curriculum. About a year and a half ago, I publicly advocated for requiring students to use AI in their essay writing as a form of co-production, which was somewhat controversial at the time. However, many others have since embraced this mindset.

Currently, I co-teach a new core class at HKUST with Sean McMinn, the Director of Educational Innovation, who is a leader in AI and education both regionally and globally. If you haven’t already, I encourage you to look into his work. In our course, we explore AI, examining its capabilities and limitations while also considering human intelligence—what humans excel at and where they may struggle. We discuss how to effectively combine human and AI strengths, not just at an individual level, but also within teams and organizations.

A good analogy to illustrate our approach comes from an old American anti-drug commercial. In the ad, they hold up an egg and say, "This is your brain," then crack it and fry it in a skillet, concluding with, "This is your brain on drugs." I like to adapt this metaphor: "This is your brain," followed by, "This is your brain on AI." The next step is, "Let’s make an omelette." This encapsulates our teaching approach, where we explicitly discuss how to use AI in relation to our cognitive strengths and limitations.

In our class, we have a final essay and quizzes. The quizzes consist of handwritten essays conducted in class to ensure students think independently. The final essay, however, is a group project where students document their reflections on how they utilized AI throughout the process. Ultimately, they will present not only the final product, which is an AI-assisted essay, but also their experiences using AI. They must be mindful of their thinking processes, engaging in what we call metacognition—reflecting on how their thought patterns evolve.

Bernard Leong: Can I interject for a moment? In the context of essays, generative AI can produce various tonalities and expressions. Much like literature, there isn't a single source of truth in what you write or how you think about it from an abstract perspective.

On the other hand, when it comes to coding—a significant application of generative AI in education—some schools in the U.S. and U.K. permit students to use tools like GitHub Copilot or CodeWhisperer, while others do not. This presents two distinct schools of thought.

I believe the key issue here is that when learning to code, students should develop the skills to see and audit their code to ensure it functions correctly. Over-reliance on AI can lead to mistakes, as it’s tempting to let the AI handle everything. How would you address the balance in this coding context, especially since many educators are currently grappling with this challenge?

Joon Nak Choi: It's interesting that you mention coding because many people overlook the fact that coding is simply another language, and large language models excel at processing languages. There are two extremes we should avoid in this context. On one hand, it's essential to leverage AI; if you don't, there’s truth to the notion that while AI may not replace you, your competitors who utilize it certainly will. For instance, in Vietnam, I've observed that a relatively small number of programmers are embracing AI, yet those who do are achieving two to three times greater productivity.

Conversely, some people believe AI can handle everything for them. This perception is misleading. AI does not think, reason, or plan in the way humans do. While some newer models, like GPT-4, may offer some capabilities in these areas, they remain slow and inefficient, and you won't achieve optimal results by relying solely on them.

The ideal approach is to have a mix of roles: humans as supervisors and AI as junior-level assistants. This is the model that many seem to be gravitating toward today.

Bernard Leong: I can build on your point about Vietnam. In my last corporate role, I worked with a team of 12 Vietnamese engineers on a 15-month coding project. We structured the project to evaluate productivity by splitting the team: 50% used GitHub Copilot while the other 50% did not. We found that productivity improved by 50%. After the first month, when we provided Copilot to everyone, the project's timeline was reduced from 15 months to just nine—this was before ChatGPT was available.

Additionally, when I attempted to write code myself—albeit not at a production level, but more as a proof of concept—I was able to replicate the entire project in three weeks using ChatGPT. This underscores the significant impact of using AI effectively, especially when you understand how to leverage it.

Joon Nak Choi: I believe that familiarity with AI tools improves over time. As users become more acquainted with what they can accomplish and what they should avoid, they become more efficient. This trend is common with most AI tools; not only do they reduce the time required for tasks, but they also broaden the range of individuals capable of performing certain activities.

People excel at planning and articulating technical tasks, figuring out objectives, and breaking them down into manageable components. If you can prompt AI to handle these tasks in a clear and concise manner—while ensuring there's no room for misinterpretation—you'll achieve outstanding results. In fact, AI can deliver fine-level execution that may surpass human capabilities.

Let me illustrate how this is transforming the nature of work. Do you have any questions before I transition to the next topic?

Bernard Leong: I have a different question, but it relates to a point you just made. There are issues of bias and, at times, data privacy in AI-driven educational tools, right? We can view this from two perspectives: that of educational institutions and the outcomes we want to achieve for students. I want to keep those separate. How do you approach ensuring that AI solutions are ethical and do not perpetuate bias or inequality in outcomes, especially in contexts like coding or essay writing?

Joon Nak Choi: That's a great question, Bernard, but I need to pause here; if I dive into it, I could go on for hours. Let me shift back to the changing nature of the workplace. One of my key points in advising the Master's program in Business Analytics at HKUST is that the technical skills of traditional computer science-oriented data scientists and modern business analysts are rapidly converging. The traditional advantage of data scientists has been their engineering expertise and backend execution capabilities, which many business analysts may not possess.

Typically, data scientists and computer engineers excel at the intricacies of coding, while business analysts understand the fundamentals but may take longer to solve problems due to not knowing every detail of Python. In the past year and a half, I've observed that business analytics students are increasingly using ChatGPT to tackle those intricate coding challenges. They no longer have to determine what the API interface for a model should look like; they can input that directly into ChatGPT, which drafts it for them. This dramatically boosts their productivity, making business analysts four times more efficient while maintaining their competitive edge.

Business analysts are better equipped to understand business processes and problems than most data scientists. While the best data scientists do grasp these concepts, the number who truly understand processes and strategic objectives is relatively small. Those who do tend to get promoted quickly. I see an emerging role for greater hiring from business analytics programs because these graduates possess both business acumen and technical skills, enabling them to function almost like "MBA-lites" while also being capable of leading technical teams.

This shift is often overlooked but is quite significant. It's diminishing the emphasis on the nitty-gritty knowledge of languages like Python, as tools like ChatGPT and Copilot can handle much of that work. At the same time, it highlights a critical skill that many C-level executives tell me is in short supply among their technical staff: a solid understanding of the business side of operations.

Bernard Leong: If we rewind and consider the future of work, how would you respond to concerns about privacy and bias in AI within educational tools?

Joon Nak Choi: Generally speaking, the future of work is likely to resemble The Avengers, where humans become empowered to be superheroes, akin to Tony Stark, with AI assistants like Jarvis handling tasks in the background. This analogy is one I often use in my lectures. It’s crucial to use AI correctly; if you become overly dependent on it, you risk losing your capabilities. If you offload too much responsibility, you might find yourself akin to the sedentary characters in Wall-E, who can’t even stand up because they’ve forgotten how to walk or think, relying on AI for everything.

At the start of each semester, I ask my students if they would consider cheating on an essay using AI. Most of them are honest with me, and about a quarter raise their hands. I then show them a relevant movie clip, explaining that if they misuse AI—such as having it write their essay without engaging in the thinking process—they risk becoming one of those people. This question often leads them to reconsider their approach.

Bernard Leong: Just one question before we discuss Learnovate. What is one thing you know about the intersection of AI and educational technology that few others do?

Joon Nak Choi: I believe the short-term potential of AI might be more limited than many assume, but its long-term potential is much greater than people realize. When evaluating technology, it's essential to consider the societal context. There are legitimate concerns regarding AI ethics, privacy, and security, which we should take seriously, especially as educational institutions are inherently cautious organizations.

Given their connections to government and social missions, schools are slow to adopt new technologies. Any significant change requires navigating multiple committees and obtaining numerous approvals, which can be frustratingly slow. However, we are on the brink of major changes within the university system.

In the next five years, AI in education will primarily focus on enhancing existing processes and making them more efficient. However, we are beginning to see significant changes on the horizon. For instance, many universities rely heavily on international students, particularly from mainland China, who pay higher tuition. Changes in regulations can severely impact universities; in Australia, for example, the higher education sector is currently in a state of panic over anticipated revenue shortfalls.

Simultaneously, there is growing skepticism towards universities, particularly in the U.S., where one major political party is increasingly anti-higher education. These pressures will lead to a greater push for innovation. In the next five to ten years, we will see more willingness to build new models that replace outdated ones. In the short term, however, institutions will focus on refining their current structures.

I believe that stronger universities, like HKUST, are well-positioned to adapt to these changes. While you might not see immediate, sweeping changes driven by AI in the next five years, significant transformations could emerge in the following five to ten years. Many people may feel disappointed during this period, thinking nothing is happening, only to witness substantial shifts in a few years due to resource pressures. As we've learned in sociology, people are resistant to change unless compelled, and resource pressures will ultimately necessitate a transformation of the business model. A notable example is the University of Adelaide, which recently announced plans to eliminate in-person lectures in favor of personalized online education. While there has been considerable pushback against this approach, we can expect to see more experimentation like this as universities face increasing pressures.

Bernard Leong: Let’s discuss Learnovate. What inspired you to start the company, and what is its overarching vision?

Joon Nak Choi: My journey began from my perspective as an educator. I was actively working on Zectr while also teaching at HKUST, though I was in a part-time role at the time. Eventually, I transitioned out of Zectr to focus more on university teaching. This shift coincided with the onset of COVID-19. I realized that some of my responsibilities, particularly addressing student cheating during online exams, were quite exhausting. However, my true passion lay in finding ways to make essay grading more manageable.

I don't believe multiple-choice assessments are suitable for university-level education. While they can serve a purpose, they fundamentally limit students' ability to think critically and make judgments—skills they will need as AI continues to automate lower-level tasks. I assigned numerous essays across four sections, totaling around 270 students. This workload meant I was spending approximately 12 hours a week grading, which was a significant commitment.

To tackle this issue, I approached senior management at HKUST and was connected with Huamin Qu, a senior computer science professor. Together, we began developing solutions, securing university funding, and eventually spinning off into our own company. While Huamin is less involved now, I continue to lead the initiative.

Our focus has evolved since the launch of ChatGPT, shifting from online proctoring to enhancing essay grading, feedback, and commenting. Along the way, I became more integrated into the educational innovation ecosystem, thanks in part to my colleague Sean McMinn, who has invited me to numerous conferences and organizations. This engagement highlighted the need for assessments that prioritize learning rather than just measuring student performance.

The goal is to provide timely and detailed feedback with a human touch, as I don't fully trust AI yet. Our approach involves using AI as a grading assistant. In institutions like Stanford or Harvard, they have the resources to employ grading assistants, but most universities lack this capacity, meaning instructors or TAs handle the grading themselves. Typically, they receive a rubric and are tasked with grading a large stack of papers, which often takes longer than expected.

In contrast, our AI grading assistant can process these assignments quickly—returning results in minutes, rather than weeks. This efficiency allows instructors to provide feedback in three days instead of three weeks. The AI can generate detailed feedback that would be too repetitive and time-consuming for humans to write, enabling us to return that feedback to students promptly while the material is still fresh in their minds.

I believe this aspect of AI in essay grading is crucial. It's not merely about saving instructors time; it's about shortening the feedback loop for students so they can learn from their assignments. This approach works particularly well at the undergraduate level, though I feel that MBA students may still require more human interaction for feedback.

Bernard Leong: Learnovate appears to leverage technologies like natural language processing and large language models to enhance the grading experience. It seems your focus is less on simply minimizing grading time and more on accelerating the feedback cycle. That’s a significant distinction. How do you envision applying these technologies in other aspects of educational assessments or pedagogies?

Joon Nak Choi: I find this topic fascinating. There's a lot of truth in the adage that sometimes you learn more by doing than by thinking. Initially, my goal was to save instructors time, but through discussions with senior professionals in education and AI, I've realized how crucial it is to provide students with rapid feedback.

As a teacher, I had an intuitive understanding of this need, but I hadn't articulated it clearly until now. I believe there is a significant role for technology in shortening feedback cycles. There are many ways this technology can be utilized, but moving towards personalized education cannot happen in one big leap. Societal and organizational resistance is likely, so it's important to implement changes gradually. We need to assess what is feasible and take measured steps to ensure people feel comfortable with these innovations.

The reality is that progress will be slow for these reasons. People often feel uneasy about change. You might see faster adoption in corporate training than in education. However, I believe there's an immediate need to better assess educational outcomes. We need to understand what students are learning to provide them with timely and effective feedback.

When dealing with a large class of 270 students, it can be overwhelming to sift through comments and gain insights right away. Thus, we need a robust analytics platform—not just to track grades but also to analyze subcomponents of those grades along with feedback comments. Currently, we are prototyping a tool designed to deliver fine-grained assessments, allowing educators to identify which groups of students excel in specific areas and which do not.

This capability enables teachers to set up specialized review sessions for those who need them, ensuring targeted support. Additionally, the traditional method of evaluating teaching effectiveness through student evaluations is inadequate. For instance, a teacher's humor may lead to high ratings without truly reflecting student learning.

By integrating this technology with feedback mechanisms, course coordinators can gain a clearer understanding of overall performance across multiple sections of the same course. This can even help senior administrators identify outstanding teachers for recognition.

I believe that combining this technology with traditional dashboarding and analytics will yield transformative results in education. The conventional dashboard approach feels outdated; we should leverage technology to provide insights alongside data visualization. AI is evolving rapidly, and I anticipate that significant changes will occur in the near future.

Bernard Leong: I have two more questions. The first is, what is one question you wish people would ask you more about AI and EdTech?

Joon Nak Choi: I wish more people would ask, "Why are we doing this?"

Bernard Leong: I'll ask you that. Why are you doing this?

Joon Nak Choi: Many are driven by FOMO—fear of missing out—motivated by hype and trends. They push boundaries not out of genuine interest but to appear innovative for career advancement. This raises an important question: What does this mean for the students? When was the last time you reflected on the actual purpose of education?

Focusing solely on FOMO can lead to disastrous outcomes. Instead, take the time to consider your goals and how AI can genuinely help you achieve them. That’s where you’ll find the answers.

Bernard Leong: My traditional closing question is: What does great look like for Learnovate Technologies, or for you in enhancing education with AI?

Joon Nak Choi: For Learnovate, success initially looks like developing a tool to help teachers enhance their teaching using traditional processes, thereby improving the learning experience for students. That’s our short-term goal.

In the long term, we have even bigger aspirations. We aim to connect our efforts to corporate needs, addressing a significant pain point in recruitment: the lack of granularity in student grades. Corporations often rely on standardized exams for incoming recruits to assess basic skills before deciding whether to interview them.

What if we could provide corporations with more detailed information—on a voluntary basis, of course, to respect student privacy—about students' strengths and weaknesses at a rubric level? Based on conversations with senior HR leaders, I believe they would value this information. As universities face increasing pressure over the next five years to demonstrate effective teaching, this could serve as a valuable addition to their offerings.

However, let's not focus solely on the long term for now; we need to ensure we're effectively grading essays first. There was a lot of hype around AI essay grading earlier this year, with many innovative secondary school teachers eager to implement it. Yet, many proceeded without a fundamental understanding of the basics. Some of the brightest and most forward-thinking individuals made critical mistakes, such as assuming that running a greater number of essays through a tool would automatically improve grading quality.

That’s not how it works. Specific training parameters are required, and even then, the process is more complex than many realize. As a result, I’ve seen many of these initiatives falter. Some teachers, especially at the secondary level, express frustration, claiming AI essay grading doesn’t work. My response is that it fails because it isn’t being implemented correctly.

Achieving effective AI essay grading is more challenging than it may seem, and it took us longer than anticipated to refine our approach. However, I believe we’re now at a stage where it’s effective for a variety of tasks. If we can get this right and surround it with the appropriate analytics to benefit teachers in the short term, I would consider that a significant win. It would make an impact, allowing us to focus on achieving even greater results.

Bernard Leong: JC, thank you for joining the show. In closing, I have two quick questions. Any recent recommendations that have inspired you?

Joon Nak Choi: I really enjoyed our conversation, and I want to thank you for speaking at my events in Hong Kong; I truly appreciate it. Our lunch together was refreshing, as it's rare to meet someone who understands both the business and technology sides of this field. Most of the time, I’m either discussing with business professionals or engineers focused solely on technology, so it was inspiring to connect with you.

Another source of inspiration for me is the students. They are eager to embrace change. The new core course that Sean McGinn and I are teaching at HKUST has become a fantastic program. It fulfils one of the university's critical thinking requirements, and we are blending advanced technology with cutting-edge teaching pedagogies. Half the time, students work in small groups to explain complex concepts like backpropagation to an 11-year-old using digital tools. What they produce is often funny, entertaining, and heartwarming, clearly demonstrating their understanding of these concepts without relying on complicated mathematical jargon.

I'm witnessing first-year students at HKUST explain how AI models are trained in simple language that anyone, even a child, could understand. This realization is tremendously inspiring and makes me question whether we are overcomplicating these concepts. If we can elevate everyone's AI literacy to that level, I believe our chances of getting this right—rather than mismanaging it—will increase exponentially.

Bernard Leong: How can my audience find you?

Joon Nak Choi: Hopefully not through lengthy explanations! I enjoy talking, but I appreciate the opportunity to share my thoughts, and I hope what I’ve said has been informative.

Bernard Leong: I’ll point them to your LinkedIn, Learnovate, and other platforms.

Joon Nak Choi: Thank you very much! I hope to see you when I’m in Singapore for the Digital Education Council's annual meeting in the first week of November.

Bernard Leong: I should be in town. To everyone listening, you can find us on YouTube and our main site. I’ll be posting the transcript, which I’ve spent some time editing, but I believe I've found an AI workflow to speed up the process. I look forward to seeing you in Singapore, JC, and continuing our conversation.

Joon Nak Choi: Fantastic! Thank you, Bernard.

Podcast Information: Bernard Leong (@bernardleong, LinkedIn) hosts and produces the show. Proper credits for the intro and end music: "Energetic Sports Drive". The episode is mixed & edited in both video and audio format by G. Thomas Craig (@gthomascraig, LinkedIn). Here are the links to watch or listen to our podcast.