The AI revolution: Myths, risks, and opportunities (published at Harvard Business School)

Artificial intelligence isn't coming for your job—unless you ignore it. In this compelling conversation with the HBS Institute for Business in Global Society, Vercept co-founder Oren Etzioni dismantles the biggest myths about AI, explores its transformative power, and outlines why those who master AI will have a competitive edge.

Key takeaways

– AI is a tool, not a sentient force—separate Hollywood fiction from reality.
– The AI race has enormous stakes, spanning economic and national security concerns.
– AI literacy is essential—leaders must understand and embrace it to stay competitive.
– Transparency and ethical AI governance are non-negotiable for responsible deployment.
– AI won't replace humans—but those who leverage it effectively will outperform those who don't.

Timestamps

00:00 – The biggest myth about AI
02:15 – AI as a power tool: Use it or fall behind
05:42 – The race for AI supremacy: Why it matters
08:30 – Can AI be trusted? The truth about bias and transparency
12:10 – AI and the future of jobs: Who will thrive?
15:20 – AI's role in national security and corporate strategy
18:45 – The biggest risks: Deepfakes, misinformation, and cybersecurity
21:30 – What business leaders must do now to stay ahead

This interview was filmed on campus after Etzioni's research presentation to Harvard Business School faculty.

Harvard Business School's Institute for the Study of Business in Global Society (BiGS) invited Oren Etzioni to speak about governance of artificial intelligence (AI) during BiGS's AI in Society seminar series. Etzioni, the first student at Harvard to major in computer science, is a University of Washington professor emeritus of computer science. He is the founding CEO of the Allen Institute for Artificial Intelligence (Ai2) and most recently founded TrueMedia.org, a nonprofit backed by Uber co-founder Garrett Camp that last year helped fight deepfakes and misinformation in elections around the world. (TrueMedia.org ceased operations on January 14, 2025.)

I spoke with Etzioni about the risks and benefits of AI for business and society, the guardrails and regulations we need around this technology, and how the industry can combat deepfake misinformation and fraud. This interview has been edited for length, clarity, and style. –Barbara DeLollis, head of communications, BiGS

What are the biggest myths about AI?

The biggest myth is the one promulgated by movies like The Terminator: that AI is a being — a monster — that's out to get you. It's a power tool, so you could get hurt, but it's not a being, and it's not out to get us. I think it's much more likely that AI will save us from climate change, pandemics, or superbugs.

How powerful is today's AI relative to where it could go?

I would give it a 7.5. A lot of people have an overblown notion of it. It's much better than it was even three years ago. As far as where it can go, the sky's the limit, but it will take time. AI does understand context, and we can see examples of that every day when we give nuanced queries to ChatGPT. But it has what's called a jagged frontier, meaning sometimes it understands things really well, and then on the next query it doesn't understand things at all. You have to be very careful to verify the output and make sure what you're getting is real and trustworthy.
AI is the new electricity, [renowned AI researcher] Andrew Ng has said. And in every domain — from human sexuality to education to HR to finance and high-frequency trading — people are using AI. One of my favorite examples is people practicing their negotiation skills with AI as an adversary. They're not just using it as a fancy search engine — they're using it for role play.

There are so many things that humans do badly, like driving, which causes 40,000 deaths on our highways every year, and hospital work, where the third-leading cause of death [in the U.S.] is physician error. By augmenting humans with AI systems, we can save lives and drive those statistics down. And writers, artists, and many of us are using it to become more productive and creative.

Are you optimistic or pessimistic about AI deepening inequality?

I view AI as having tremendous potential to help some of the most marginalized communities in the world, though when it comes to income inequality, of course, we have some challenges. In education, particularly in the developing world, we don't have the resources to provide the quality of education that people need, and AI can help address that. AI can also serve as assistive intelligence for people who are disabled, whether they use wheelchairs or have trouble seeing, hearing, or something else.

Climate change is another huge problem. One really important approach is carbon sequestration, which requires tremendous innovation. The technologies we have today won't solve the problem fast enough. We need to invent new technologies and new science, and AI is fantastic at helping our leading scientists do that.

When it comes to regulation, what are reasonable guardrails to place on AI?

We need to identify the most dangerous outcomes, like using AI to construct bioweapons, and ensure queries along those lines are not answered. And AI should be labeled: if you pick up your phone and hear a voice on the other end, you don't know whether that's a person or a machine unless the AI identifies itself.

Corporations benefit from a rational and unified framework — not too much regulation, but not too little, and not a patchwork of local, state, and city regulations. I think we ought to aim for better and more unified regulation. A lot of the regulations we've seen have been shaped by Big Tech to their benefit and to the detriment of startups. For that reason, I'm quite skeptical about some of the regulatory efforts we're seeing.

It's hard to predict what we might expect under the Trump administration. I expect Elon Musk's companies will do very well, but in a less cynical sense, I do see some well-intentioned and talented folks from Silicon Valley and the tech industry moving into government and trying to have a positive impact.

Can you tell us about your work with TrueMedia and the issues with deepfakes?

We're very pleased that in Indonesia, India, Europe, and numerous other countries, news organizations and fact-checkers used our tool to identify thousands of fake items. It was used to take down Russian disinformation sites and used in our own election in a variety of ways. We're open-sourcing our models, as well as our data and resources, to make them more broadly available. We need tools that are available to news organizations that don't necessarily have deep pockets.

Deepfakes can be far from obvious, and there's a bunch of research showing that people's ability to detect whether something is real or fake is no better than a coin toss. Corporations are starting to understand the importance, but I think it's going to get worse before it gets better.
We've had cases of money being wired out of a corporation due to a fake call, and I think we're going to have more problems like that before people are fully on board with the need to determine whether a communication is fake or real. [Internet] platforms are very well positioned to fight the last war: if there's a single video or image having a disproportionately bad impact, they're well positioned to take it down or [make it less prominent]. But I don't think we're ready for a concerted attack of thousands of deepfakes propagated by hundreds of thousands of fake accounts.

As companies look to address their own use of AI, who should lead the charge?

Absolutely the CEO, because there's so much potential here for lower costs, higher revenue, and faster time to market. They can talk to experts, but if you're an executive and you're not spending quality time with ChatGPT, Claude, or what have you, trying to figure out how to optimize aspects of your business and what it can and can't do, you're missing an opportunity.