AI agents and automation crash course by Hugi Hernandez, the founder of egreenews

0:36 Nice to have you here. Thank you for joining us today, for what I hope is an informative, engaging, and interesting conversation 0:45 about AI and its impact on almost every aspect of MIT's life. 0:50 And not just MIT's life, but your life. What I'd like to do in this talk is give you 0:58 a little bit of the history of AI, especially MIT's role in it, and a little bit of a review of what AI systems do. 1:05 I know many of you know this well, but it's worth reminding you of the pieces involved, 1:10 and then talk about what MIT is doing to embed AI throughout the research at the institute 1:17 and to push it forward into the future. So that's my goal. So AI is everywhere. 1:26 In the United States, if you watch television and you look at ads, it looks like any company 1:32 that can spell AI says they're doing it. And most of them are. 1:37 But it doesn't matter what you pick, whether it's finance, health, transportation, 1:42 commerce, or security, AI is here. And it's having an impact. 1:47 And so it's worth reminding ourselves: what is an AI system? 1:52 The standard definition from computer science is that it's intelligence exhibited by a machine. 1:58 So it's a rational agent that perceives its environment. It gathers information. 2:04 It takes actions in order to try and maximize success at a particular goal. 2:10 That's the fundamental thing that AI does. Often people will say AI is exhibited by a machine when 2:17 it does something that we would associate with a human, hopefully good things that a human does, and not mistakes that the machine makes. 2:25 And so that involves problem solving, which is those three steps, and it involves machine learning. 2:32 And that's, in essence, the definition of AI. It's the kinds of things that we want to do. 2:37 As a consequence, modern AI systems really incorporate information from four different areas: 2:44 obviously computer science, but also neuroscience, what goes on in our brains; cognitive science, how we think; 2:51 and mathematics, especially reasoning about uncertainty. And that's what we're going to use to build out an AI system. 3:00 A little bit of history. One can debate how far back you want to go, but most people would point to the Dartmouth Workshop in 1956 3:09 as the founding of modern AI. I was three years old at the time. So I'm as old as AI, or a little younger than AI. 3:18 Three of the four founders or organizers of that workshop were MIT faculty members: John McCarthy, Marvin Minsky, 3:24 and Claude Shannon; Rochester was from IBM. McCarthy eventually left to go found AI at Stanford. 3:31 But we had an early role in it. And you can see the definition that they gave. They said every aspect of learning, 3:37 or any other feature of intelligence, in their view, can be so precisely described that you 3:43 can get a machine to do it. That was the motivation behind the founding of AI. 3:49 For 20 years, early AI was basically search. 3:54 If you wanted to prove a theorem, if you wanted to win a game, you started at some initial position. 4:00 And you executed a series of steps trying to get to the goal. And if you got to the goal, great. 4:06 If you didn't and you hit a dead end, you backtracked and tried the next thing. And you did that until you explored all the space 4:12 or you found a solution. I'm sure you can quickly figure out this does not scale well. 4:18 It runs into a combinatorial explosion. The number of things you have to explore becomes huge.
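To make that search-and-backtrack loop concrete, here is a minimal sketch in Python. The toy graph, the state names, and the `successors` helper are illustrative stand-ins, not anything from the talk:

```python
# A minimal sketch of 1950s-style AI-as-search: start from an initial
# state, try actions one at a time, and backtrack on dead ends.
def depth_first_search(state, goal, successors, path=None, visited=None):
    """Return a list of states from `state` to `goal`, or None."""
    path = (path or []) + [state]
    visited = visited if visited is not None else set()
    if state == goal:
        return path
    visited.add(state)
    for nxt in successors(state):
        if nxt not in visited:                 # avoid revisiting states
            result = depth_first_search(nxt, goal, successors, path, visited)
            if result is not None:             # success somewhere below
                return result
    return None                                # dead end: backtrack

# Tiny toy graph; real problems branch far too fast for this to scale.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["goal"], "goal": []}
print(depth_first_search("A", "goal", lambda s: graph[s]))
```

Each extra action multiplies the number of paths to try, so the space explored grows exponentially with the depth of the search; that is the combinatorial explosion those early systems ran into.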
4:23 And as a consequence, for that first period, people looked at very small examples. 4:32 And they made a lot of ad hoc assumptions in order to remove things they didn't want to think about, without any real basis on how well that 4:38 was going to work. And as a consequence, after about 20 years of funding in the US and elsewhere, 4:45 we hit the first AI winter. That is, funding dried up because these things 4:51 were seen as just not usable. I will point out to you, 4:56 I started my own work in AI in 1975. In those days, it was something you scraped off 5:01 the bottom of your shoe. It was not highly respected because it had these problems. 5:06 It didn't handle problems well. The second wave of AI was the rise of expert systems in the 1980s. 5:15 This was a focus on a particular domain, creating logical rules for deduction 5:21 so that you would basically say, given what I want to accomplish, here is the natural way in which I would get to it. 5:28 There were some early commercial successes. But, again, one of the struggles here was that they didn't scale well. 5:36 Even if I built a system to do Campbell's Soup maintenance-- which was the first successful AI application of which 5:42 I'm aware-- you couldn't apply it to some other problem. You had to start over again. It didn't learn. It didn't generalize. 5:48 And that led to the second AI winter. And now we're in the third phase. 5:54 And the third phase is really driven by bringing in solid scientific bases 5:59 from mathematics and from neuroscience. Mathematics: being able to reason about problems 6:06 under uncertainty and come up with a principled solution. And neuroscience: using what we know about how our brains work 6:13 to give us a guide to how we might build real systems. And of course, we began to see, early on in this phase, 6:20 some successes. You'll decide for yourself. But IBM's Deep Blue system beating the world chess 6:25 champion was certainly an indication of the power of these systems, along with early commercial successes. 6:31 And today, as you all know, this is really driven by three trends. The first is deep learning, which we're 6:38 going to talk about briefly: using sophisticated statistical methods 6:43 to reason under uncertainty about finding solutions to problems, driven in part by what we know about how we think. 6:51 They don't have to be exactly the same. But it needs to be similar. The second one is the incredible growth of data. 6:58 And this is an issue I think all of us need to think about as we build modern AI systems. How do we get access to enough data to train these systems? 7:06 And how do we have confidence in the quality of the data and the lack of bias in the data? 7:11 But as I'm sure many of you know, a current, modern AI system might have millions or hundreds 7:17 of millions of parameters. And you need hundreds of millions or billions of examples 7:22 in order to train all of those parameters. So massive data sets are important. And the third one, of course, is an incredible growth 7:29 in computing. Whether that's standard computing or GPU chips from NVIDIA or AMD, or pick your favorite company that 7:36 makes these things, the ability to do the computing lets you get there. 7:41 Notice two complications, though, that come out of this. Not everybody is going to have access to the same data sets. 7:48 So there's going to be an imbalance in who can succeed in this space.
And the climate implications, if you like, or the power demands 7:56 of these systems to do the training, cause other challenges for companies and governments 8:01 to think about: how do we want to balance the advantages with the cost? But those are the pieces that let 8:07 us build a useful application. Now, today, AI is mostly machine learning. 8:14 Not all of it, but most of it is. I'll let you read the joke here. Unfortunately, I think this is still 8:20 true for lots of application cases. You just throw a bunch of math at a bunch of data and stir it around. 8:25 If you get a good answer, great. If you don't, just stir it around until you get an answer you like. Not a satisfying way of dealing with things. 8:33 And so we want to talk a little bit about how one does better. But, essentially, it's machine learning. 8:38 And the definition of machine learning actually comes from the early days of AI. The first machine learning algorithm 8:45 was written by an IBM researcher named Art Samuel. He wrote a program to learn to play checkers, that simple little game. 8:51 Not very sophisticated, but it learned. And as he said, "It's the field of study that gives computers 8:57 the ability to learn without being explicitly programmed." All right, great. 9:02 There's a more modern definition. But with that in mind, a quick reminder of what 9:08 goes into machine learning and why we want to think carefully about how we use it. 9:13 In traditional programming, you're a client. You give a programmer a specification: I want my program to do this particular task. 9:20 With this input, I want this output. They write the program, and then you give it new inputs and get good answers out. 9:27 In machine learning, the programmer writes a machine learning program that takes a collection 9:34 of input-answer pairs. For this input, this is the answer. For this input, that's the answer. 9:41 And the machine then builds a new program, so that given new inputs, it will give you an answer. 9:48 And you can use it to make a decision about, is this the right thing I want to do? And you are still involved in that. 9:54 But notice there's an implicit program there. You never create the program. The machine does. 9:59 And one of the questions is, how well does it do it? And does it do it in a manner that actually meets my goal? 10:05 And notice a problem that many people acknowledge. The quality of the input data is going 10:12 to dramatically affect the quality of the algorithm that gets created. If you give it really good input data that covers the space, 10:22 it'll do well. If you give it input data that's missing elements of the space or has a lot of incorrect information, 10:28 you're going to have a problem. All right, a quick example. 10:34 You have a set of training data. I might say, here are a set of images that I know are cats. Here are a set of images that I know are dogs. 10:40 I want to build a system that will recognize cats and dogs. I convert each example into a set 10:45 of features, a set of numbers. In this case, it might be a set of numbers that describe the shape of the nose, 10:51 a set of numbers that describe the color and the texture of the fur, a set of numbers that describe the shape of the eye, a collection 10:57 of pieces like that. And then I want to infer something about the process that created that. 11:03 And typically here, I'll just use a neural net.
I will train a system on those examples to say, how do I weight the different features to have 11:11 something that I think actually captures the difference between cats and dogs? And then I want to use it on new data to make sure it works. 11:20 So I give it a new image. It says, that's a cat. I give it a new image. It says, that's a dog. 11:25 And there's still a challenge. Are these cats or dogs or very confused animals? 11:34 I'll let you decide for yourself. I think the one on the left is a dog. I think the next one in is a cat. 11:40 I'm not certain about the one after that. And I think the one on the far right is a cat. But you get the point. There are going to be edge cases that you still 11:47 need to think about as you use this system. So that's the paradigm for machine learning. 11:53 And I want to come back to how we use it today, but also some of those challenges in terms of its performance. 12:01 There's a range of things we want to do. The most common machine learning algorithms today are supervised. We give them those feature-label pairs and we use them. 12:09 There's a whole range of algorithms that can be used, many of them dating back to the '50s. But today, at least in my experience, 12:16 almost everything is some version of a neural net. And, of course, the hot topic is large language models, 12:22 which everybody seems to want to use, and we'll talk about that, as well. But these are all elements of what the systems use. 12:29 Just a tiny bit more on what those systems do. Basically, we're going to use something called 12:34 logistic regression, which is going to learn a probability of assigning a label to a new example based on that training data. 12:43 And so I'm going to skip this tiny bit of math. 12:49 This is an MIT talk. There will be a quiz at the end. So you might want to take a few notes. But everybody will pass the quiz, so don't worry about it. 12:56 And, yes, by the way, my jokes are all bad. But I'm a tenured professor. So you can't do a damn thing about it. 13:02 All right, with that in mind, it's worth reminding you what that system does. So in logistic regression, I've got a set of feature values. 13:09 In a modern system that might be tens of thousands of feature values. And the system is going to learn weights 13:16 to assign to those features, so that given an instance, it multiplies each weight by the feature value, adds them all up, 13:23 and applies what's called a logistic function to it. And that function is designed to assign the probability 13:31 that this is, in fact, an example of what I'm looking for. And it's designed to very quickly push the probability 13:38 either towards zero or towards one. And the whole goal is to find what 13:45 are the best weights, which is what the learning algorithms do. And then you set a threshold, so that when I have a new instance, 13:53 if I'm above that threshold, I'm going to say, this is an instance. And if it's not above that threshold, I'm going to say, 13:59 it's not an instance. And I raise the threshold issue because it's actually 14:04 important to think about. And, unfortunately, many AI systems don't. For example, if I'm building a system for autonomous driving 14:12 that is going to try and detect pedestrians, I want to set the threshold so I have very few false negatives. 14:21 I don't want to fail to recognize a pedestrian and have a disaster. So I want very few false negatives. 14:26 That tells me how to set the threshold.
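As a minimal sketch of what that classifier computes, with made-up weights, feature values, and thresholds purely for illustration:

```python
import math

def logistic(z):
    # Squashes the weighted sum into a probability in (0, 1),
    # pushing most inputs quickly toward 0 or 1.
    return 1.0 / (1.0 + math.exp(-z))

def classify(features, weights, bias, threshold):
    # Weighted sum of the feature values, as described in the talk ...
    z = bias + sum(w * x for w, x in zip(weights, features))
    p = logistic(z)            # ... mapped to a probability ...
    return p, p >= threshold   # ... then compared against a threshold.

# Made-up weights and feature values, for illustration only.
weights, bias = [2.0, -1.5, 0.7], -0.2
x = [0.9, 0.1, 0.4]

# Pedestrian detection: set the threshold low to avoid false negatives.
print(classify(x, weights, bias, threshold=0.2))
# Cheating detection: set it high to avoid false positives.
print(classify(x, weights, bias, threshold=0.95))
```

Note that the only thing changing between the two calls is the threshold; the learned weights stay the same, which is why the application context can dictate the trade-off between false negatives and false positives.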
On the other hand, if I'm a faculty member and I'm running an exam, and I'm looking for cheating, 14:33 I hopefully don't use this solution, which I think comes from India. But I probably want very few false positives. 14:40 I'd much rather accept a few kids getting away with cheating than falsely accuse somebody of cheating in that exam. 14:46 So you get the idea. The context really matters here in terms of how I set the threshold. 14:52 All right. Almost done with the preview. So what about neural nets? They're based on our knowledge of neurophysiology, 14:59 how our brain works, but they're essentially a way of learning those weights in order to set up a logistic classifier. 15:07 And so an artificial neural net is, again, a simplified model of what goes on in the brain. I've got a set of inputs and a set of feature values, 15:15 a bunch of numbers. I then connect each of those features 15:20 to what's called a hidden node. And it takes the products of the weights 15:26 and the feature values, adds them up, and applies a function that gives you the output of that internal node. 15:32 The choice of that function is really important. We'll talk briefly about that. And then those are all connected up to an output node 15:40 with an additional set of weights. And that weighted sum is applied as the input to the logistic function to say, yes, this is a cat, 15:48 or, yes, this is a dog, or I'm not certain. It's one of those confused examples in between. 15:54 In the early days, artificial neural nets had maybe one hidden layer, mostly because 16:01 of computational costs and lack of data. Today they can be huge. One of my favorite examples is from an MIT spin-off, SenseTime, 16:09 the Hong Kong-based face recognition and AI company: their system has 1,000 layers in its artificial neural net. 16:16 And I'm sure there are bigger examples around. But that's basically what we want to do. 16:23 All right. And then deep learning just refers to a complex neural net 16:28 trying to accomplish this. There are lots of variations. But most of them-- the early ones, at least in computer vision-- 16:34 go back to the work of two Harvard neuroscientists who won the Nobel Prize, David Hubel and Torsten Wiesel, 16:40 who discovered cells in primate cortex that did what we would see today as an artificial neural net. 16:48 And I'm going to skip by the examples, other than to say that a modern neural net can do things like face recognition and character recognition extremely well. 16:58 All right, and then large language models. I'm sure you're all aware of them. 17:04 They're the current rage. I think they have a lot to add to the system. Basically, I think of these as a deep neural net that has 17:14 some interesting properties. In language, it's a way of predicting, what's the next word in an answer I'm constructing, 17:22 based on the words before it? Notice the size, though, here. A word in this system is represented by over 12,000 17:31 feature values. Words with similar meanings are very close to one another. 17:36 A word that has multiple meanings has multiple representations, so that you can deal with the confusions in language. 17:43 In a sense, it's just a sequence of those feature vectors, one for each word.
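Here is a minimal sketch of that hidden-layer arithmetic, using NumPy with random stand-in weights (a real system would learn them from training data; the feature count and layer sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

# Feature vector for one example (e.g., nose shape, fur color and
# texture, eye shape, encoded as numbers). Random placeholders here.
x = rng.normal(size=5)

# Each hidden node takes a weighted sum of all the inputs and applies
# a nonlinear function; early nets had one such layer, modern ones many.
W_hidden = rng.normal(size=(4, 5))   # 4 hidden nodes, 5 input features
hidden = np.tanh(W_hidden @ x)       # tanh is one common choice

# The output node takes a weighted sum of the hidden outputs and
# applies the logistic function: the probability this is, say, a cat.
w_out = rng.normal(size=4)
p_cat = logistic(w_out @ hidden)
print(f"P(cat) = {p_cat:.2f}")       # learning = adjusting all the weights
```

Stacking more hidden layers between input and output is, structurally, all that "deep" adds; the learning problem is finding good values for all of those weights.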
17:48 The magic inside of here is something called a transformer, which is a system that basically takes one of those words 17:55 as a representation and uses the other words to decide how to disambiguate meaning, how to use context 18:02 to associate pronouns with nouns, how to use other information to refine the words, so that you can 18:07 then run the full system in order to come up with a solution. 18:14 To train this, DeepMind, who did one of the first versions of this, or OpenAI, which is probably 18:21 the better example, trained their system on 30 billion sentences-- 18:28 30 billion. By the way, all of you are entitled to a little bit of revenue from OpenAI, 18:34 because they probably used your Facebook page or your LinkedIn page or something else to gather that data 18:40 without your permission. You can worry about the legal ramifications of that. But they mined massive amounts of data in order to train this. 18:49 And now, given an input query, the system basically samples words from that query to start 18:56 and then predicts, probabilistically, what's the likely answer I want to generate? 19:02 It can handle an input as long as 3,000 words, which is actually impressive. 19:07 And, of course, you can generalize this. If you want to create a chatbot, you take as input 19:12 a sequence of queries and responses, either generated by ChatGPT or written by humans. 19:19 You scale them or weight them in terms of quality. And then you retrain the system in order 19:25 to create something that gives you conversations. So that was probably a longer preamble than you needed. 19:32 But those are the elements of modern AI. And I want to give you a sense of some of the strengths 19:40 and some of the challenges. I want to remind you that a modern large language model 19:46 based on all of this technology gives you a probability: the probability of a good answer. 19:53 If you run the same query multiple times, you may get slightly different answers. You may get very different answers, 19:59 because it's a probability. If there is bias in your training data-- and bias can be incorrect data, but it can also just be things 20:09 you're missing-- it's going to affect the output. In healthcare, this can be a huge problem. 20:14 A system trained on data from people that look like me may perform very differently when applied 20:20 to data from people that look like you or somebody else, because of missing data. So bias in the data is huge. 20:27 And these systems don't have the ability to apply common sense to their answers. If one 20:33 comes up with a ridiculous answer, you or I would look at it and say, no way. The system doesn't have that ability. 20:40 So it's a tool. It's not a replacement. And I want to show you an example of this. 20:46 I'm stealing a little bit of thunder from a couple of my colleagues. But I want to show you very quickly a little bit of an example of using an AI system 20:54 and why you need to be careful about knowing when to trust the response. It's a study out of the Sloan School of Management at MIT, 21:00 I think in collaboration with some other people. They took, if I remember it right, 500 middle managers, HR 21:08 experts, management experts. And they divided them up into three groups: a control group, 21:14 a group that had access to GPT-4, and a group that had access to GPT-4 21:20 and a guide on how to use it. And they gave them two tasks. The first task was designed to fit very squarely 21:28 into the expertise of GPT-4.
It was coming up with a description of a new product. 21:35 And notice the results: the group that had access to GPT-4, as judged by experts, 21:41 their performance, both in efficiency and quality, improved 38%. And if you were in the lower half, 21:49 with less experience, less capability, your improvement was 43%. If you were more experienced, not surprisingly, 21:55 it was a smaller improvement, 17%, but you still got better. If you had access to GPT-4 and a guide on how to use it, 22:03 you did even better in terms of performance. This is great. You notice I have blocked out part of the slide, because now 22:10 you go to a problem that was designed very explicitly not to fit well with the capabilities of GPT-4. 22:19 And there, the group using GPT-4, their performance decreased by 13%. 22:28 All right, not great. And the group that had GPT-4 and a guide to using it, 22:36 their performance decreased even more. Why? 22:41 Because they trusted it. It's an AI system. It's got to be good. It must be right. 22:46 Ehh, not a good answer. My point is, it's a tool. And as a user, you need to know how 22:53 to judge when it is doing something that's acceptable and when it is not. And I think that's one of the challenges here. 22:59 Nonetheless, it's been fascinating to see the impact of AI. And I will simply point out to you, 23:05 for this year's Nobel Prize in physics, two of the awardees are pioneers in deep learning, Geoff Hinton and John Hopfield. 23:15 For the Nobel Prize in chemistry, Demis Hassabis, another pioneer in neural nets. Hopfield spent a year at MIT on sabbatical. 23:24 Oops, sorry. I'll go back there. I went too fast. And Hassabis completed his postdoc at MIT. 23:29 So we had a little bit of a hand in influencing these people. With that in mind, where are we today? 23:36 And what's MIT doing? There are a ton of areas of great success. 23:42 I've just listed a bunch of my favorites here. I'm going to show you four examples of them. But there's hardly an area that hasn't 23:49 seen an interesting application of AI, with systems mostly built around software. 23:55 But obviously there are hardware pieces, certainly GPUs, but also specialized chipsets being 24:00 built to make these things go. And so, success stories: you all use them. 24:07 Speech recognition, speech translation: remarkably good systems, and a range of them available. 24:14 My own field is, or was, computer vision. I find some of these systems really impressive. 24:19 Face recognition systems today actually perform better than humans, certainly on face verification 24:25 and even on face recognition. Autonomous driving, I should be careful as I say this. 24:32 I'm not a fan of some of the shortcuts that Tesla has taken here. But there are some impressive systems 24:37 for doing autonomous driving. The primary one for me is Mobileye. It does an incredible job with it. 24:43 But computer vision has been dramatically changed. Robotics, I'm being biased. 24:50 This is an MIT event. These are four MIT spin-off companies: Mobileye with autonomous driving, iRobot and the Roomba 24:56 with household robots, Kiva, now part of Amazon, with logistics. And perhaps an interesting one, Farmwise, 25:03 which uses an AI-based vision system to actually pick weeds from crops without damaging the crop, in practical use today. 25:13 And, of course, question answering. An early version would be IBM's Watson system, but ChatGPT is a great example of the kinds 25:21 of things you can do in terms of answering questions. So what's MIT doing?
25:29 We're essentially embedding AI and machine learning across the entire institute. 25:34 And we're doing it both at the curricular level, in terms of what we teach students, 25:40 but also in terms of research. And I think in the interest of time, I'm going to skip over the curricular level. 25:46 They'd be happy to-- all right, sorry, I'd be happy to answer questions. But we're changing how we train and teach students today to embed computation 25:54 throughout the curriculum. Every student is learning about AI, 25:59 no matter what their major is. What I want to show you is the level of activity and interest 26:05 at MIT today in terms of research using these systems. When we founded the MIT Stephen A. Schwarzman 26:12 College of Computing five years ago, one of the things we said was that we were not only going to embed computation 26:19 throughout the institute, but we were going to add 50 new faculty lines to the institute. 26:25 It's the largest growth at MIT in 80 years. 25 of them are in computer science. 26:31 25 of them are bridge faculty. They sit between computer science, or the college 26:36 if you like, and another discipline. And that both changes the kind of faculty member 26:42 we hire, and it changes the way we think about using AI. So I know this is a busy slide, but here 26:49 are examples of the people that we have hired in the last five years in these bridge areas. 26:56 In management, somebody who does behavioral economics; in mechanical engineering, agriculture management; in chemical 27:03 engineering, new synthesis design. But then some places that may surprise you: 27:10 music, music technology. We have an offer out in philosophy for somebody 27:17 who does computational ethics. How do you think about the ethical use of these systems? 27:23 And then I have the pleasure this term of actually co-chairing a search committee to find somebody who sits between computation and history. 27:34 It's going to be an interesting challenge to find somebody. But there are some interesting people there. But the idea is that we want to have 27:41 faculty that are bridges here. Two weeks ago, MIT did its review of faculty members 27:46 for promotions. And of the-- I don't know-- 60 cases that we saw, 27:52 I would say 55 of them involved somebody using AI in a department, whether that was urban planning 28:00 or economics or philosophy or political science. So there's actually a real strong interest. 28:09 And let me show you some examples of this. One of my-- I shouldn't say my favorites. 28:14 I have lots of favorites. Two colleagues built a deep learning system-- this was before ChatGPT-- 28:20 that is able to identify new drugs to kill antibiotic-resistant bacterial infections. 28:26 It has a lot of structural knowledge about chemistry built in. But it also has a deep learning system underneath it. 28:32 The molecule they selected, based 28:38 on what the system gave them as recommendations, they tested on 25 known antibiotic-resistant infections. 28:45 And that new molecule was shown to have an effect on 24 out of the 25. 28:53 There was one lung infection it didn't deal with. Since then, they've used it to design drugs very specifically 29:00 for particular antibiotic-resistant bacterial infections. You can see two examples here. 29:06 I can't resist telling you: because my colleagues had discovered a new drug, they got to give it a name. 29:11 And they chose to call this new drug Halicin, H-A-L-I-C-I-N. 29:19 That sounds like penicillin.
It sounds like the name of a drug. And where did the "Hal" come from? The AI computer in 2001: 29:26 A Space Odyssey. So they're nerds. They're geeks. They have a terrible sense of humor. But they have the ability to actually build it. 29:33 But notice the impact: new drug discovery. A second colleague, actually 29:40 a colleague on both of these projects, is a breast cancer survivor. And she became fascinated with, how can we 29:47 do a better job of detection? She has built a deep learning system, which she tested on retrospective data, old data. 29:55 She has shown that, with 85% accuracy, it can spot the early signs of something that 30:02 is going to turn into a tumor five years before a radiologist will detect it. 30:10 It's now in common use at the Harvard hospitals as a screening mechanism, an incredible impact on human health. 30:18 A third example from health: a colleague, Dina Katabi, is a wireless communication expert. 30:26 There's a Wi-Fi signal here. As I'm pacing back and forth, I am slightly 30:31 disrupting that Wi-Fi signal. And her system, which could be in the corner of the room, 30:36 will not only detect that disruption, it will infer what caused it. So, things she can do: she can detect the 30:44 vital signs of a patient remotely, with nothing attached to the patient. 30:49 During COVID, we used her system in all of the Boston-- or not all, most of the Boston-based hospitals, 30:55 so that staff didn't have to go into the ICU. She can actually tell how well I'm walking. 31:02 So if I happen to be a patient with Parkinson's, her system can tell a clinician how the disease 31:08 is progressing, automatically. She can detect activities. The one that I find fascinating: she 31:14 can detect disruptions in sleep, which are often a signal of the onset of diseases 31:21 like Parkinson's, all completely automatically. Transportation: obviously we're working on autonomous vehicles 31:28 extensively. We're interested in the traditional components of it. But we're especially interested in things 31:34 like behavior prediction. How do you predict what a vehicle in front of you is going to do? How do you predict what a pedestrian is going to do? 31:41 If I were in Bangkok-- my apologies-- how do you predict what all those scooters are going to do as they go flying by you in the wrong lane? 31:48 It's really about being able to react, if you like, within the system. 31:54 And then things like last-mile vehicle routing, a great logistics application. 32:00 An area I think is really primed for major impact is discovery of new things. 32:06 A young colleague in chemical engineering has built an AI system that comes up with new models for catalysis, for building catalysts, 32:15 aimed at coming up with solutions that are more efficient, less expensive, and will produce good quality outputs. 32:22 Design of new materials: again, colleagues have built a system that will design or suggest 32:27 designs for new materials. An area of particular interest for us is things like concrete production, where 32:33 the system will predict ways to come up with new mixtures with lower costs, lower emissions. 32:40 But these are systems creating new materials. 32:46 Finance: my colleagues will talk about this, but there are studies on the economic impact of generative 32:53 AI on jobs of the future. There are a number of programs going on.
The one I will simply highlight is the work of our two 32:59 most recent Nobel Prize winners, Daron Acemoglu and Simon Johnson, looking at the economic impact 33:06 of what is going to happen to jobs as we think about AI. You can see an example at the bottom there, as well. 33:12 There's a whole range of examples here of things that are going on. Again, it is hard to find an area where there isn't 33:18 some research going on at MIT. And the way we think about it is that machine learning is now 33:25 the third leg in the stool of scientific discovery. 33:31 A good researcher, at least in science and engineering, will use mathematics to build a formal model of something that 33:38 will make predictions, which they will then test against experiments that they do physically. 33:44 But they then use AI and machine learning to add the computational component, 33:49 not just to do the analysis, but also to do the simulations to look at other things that should be tested. 33:56 And so we refer to our students and our faculty now as being computational bilinguals. 34:02 They speak the language of chemical engineering and the language of computation. But these are the range of things that MIT is looking at. 34:09 And our goal going into the future is to embed that notion of machine learning throughout every department at the institute. 34:18 It means we're going to hire new faculty members, new kinds of faculty members. We need to break down disciplinary boundaries. 34:25 And of course, we had to follow-- sorry, have to face one of the challenges, which is that our data 34:31 needs, our data storage needs, and our computing power needs are growing. 34:36 And we need to think about how we're going to address those things, which we're actively working on. 34:42 I wanted to give you a sense of what MIT thinks about AI. 34:47 Boy, I need a better AI system here: what MIT thinks about AI. I wanted to give you that sense of not only the history of how 34:55 we've dealt with it, but especially how we see it moving into the future. New discovery mechanisms, new management methods, 35:03 new economic models, but even in places that you wouldn't expect: 35:10 new planning of design for cities, use of this in political science, 35:15 use of this in economics. Every part of MIT is now using AI in a different way. 35:22 Before we go to questions, I will simply say, as somebody who started in this field 50 years ago, 35:28 it's been fascinating to see where it has ended up. I hope the next 50 years aren't quite as tumultuous, 35:35 but that they lead to something different. But even though there were two AI winters, 35:40 this one feels like it's here to stay. And we need to adapt to it, adjust to it, and use it. 35:46 And with that, Kwan, I'm going to let you pick the questions. I see people have put things in here. 35:51 But wherever Kwan is, do you want to pull one of them up for me? 35:56 36:04 Oh, there we go. All right. People talk about artificial general intelligence a lot. 36:11 Can you explain it in plain English? And how can this concept be applied in business? 36:17 I'm sorry, I shouldn't smile. The answer to the first question: no. I'm joking. 36:23 The goal of people who are pushing AGI is to say, can we build an AI system that really 36:29 does behave the way we do? It's not just built for a task. It is built to apply intelligence to anything.
36:38 As I said earlier, it's an area where, to do that, you need to capture common sense reasoning, 36:45 the kinds of things that a two-year-old knows and an AI system doesn't. It's a wonderful goal to have. 36:51 I will show you my bias: I think it's a long way away. We'll get closer to it. 36:57 But I think we're likely to see successes in particular areas more quickly than a general AI system. 37:06 If you had it, it'd be great, because you could use one system to do marketing planning, 37:13 to do customer service, to do management of your finances, all 37:19 of those sorts of things. But, again, I think it's much more likely that in the shorter term, you're going to build a generative AI system 37:27 for a particular application. But that's what it's about. There are people who think it's going to happen. 37:33 I'm a little skeptical, but I may be wrong. All right, second question. 37:38 37:44 You're an optimistic group. I like this. 37:51 Since history is often a predictor of the future, I should be careful. 37:56 But I think, as I said, this wave feels much more real than the two previous waves and those AI 38:04 winters. And why do I say that? This wave is built on a much more solid scientific foundation. 38:12 It really is built on a deep understanding of the mathematics. It's built on a deep understanding of at least what we 38:18 know about how people think. And you can just see it in terms of the commercial impact. 38:25 I'm sure you're all affected in your own companies by this. So will there be another AI winter? 38:31 It's possible; it could be affected by government agencies deciding not to provide funding. 38:38 We just had an interesting event in Washington yesterday. And we'll see what happens in terms of funding there. 38:44 But I don't see this heading into an AI winter any time really soon. 38:49 Having said all of that, as a quick comment, I want to remind you that even though these current systems are 38:56 built on a model of what we think goes on in the human brain, it's not a perfect model. 39:03 And there may be alternatives. And I'll give you a very quick example. It comes from my colleague, Josh Tenenbaum, 39:08 who raises a really good point. These systems are impressive, but they need to be trained on hundreds of millions of examples 39:16 in order to be effective. For those of you who have young children, or had young children, 39:22 think about a two-year-old. You can show a two-year-old a stack of blocks on a table 39:28 and ask them what will happen if you hit the table. And with good, reasonable accuracy, 39:34 they'll be able to tell you what happens. They don't have hundreds of millions of training examples. 39:41 But somehow they learn very quickly how to predict what happens in a setting. And so there may be alternatives that we 39:49 see that change the path of AI. But to the winter question, I don't think it's coming soon. 39:54 Or at least I hope it is not. What's my next question? 40:06 I'd like to be able to catch a flight in a couple of days and still get back to the United States. So I'm not certain how much I want to answer this question. 40:14 But it is a great question. 40:19 Let me answer it this way. I'm not going to answer the particular one of, is it a wise decision? I'm sorry. That just gets me in trouble. 40:26 But I do think, and I think many companies, certainly in the US, have taken a role in this, that we 40:32 need to think about the regulatory issues of how you use this. 40:38 Where are the places where you want guardrails?
Where are the places where you're happy to use this? What are the standards that a product 40:45 should meet before you allow it to be deployed? 40:50 I will point to the European Union, which I think has done a really interesting job of this. You can quibble with pieces of it. 40:57 But they've laid out an interesting structure for how you think about the regulatory issues here. 41:03 And so I don't know whether-- let me say it this way. I don't know if politics should have an influence on this. But government agencies absolutely should, 41:11 working with the companies. And I will point out that in the US, at least in my recollection, and my colleagues may be able to correct me, 41:17 Microsoft took the lead in creating a consortium of other major US companies 41:23 to begin building a framework for the ethical use of AI in different areas, 41:29 not to maximize their profit. It would help, but to actually make sure that it was doing what it should, 41:35 which is to protect people while building these systems. And so, absolutely, there should be a government role 41:40 in the regulatory components. Politics is a little different. I'll stay away from that one. 41:47 All right, Kwan. Pick me one that doesn't get me in trouble when I try and get back into the US. 41:56 Yes, I've heard [? Massa ?] talk about this. 42:02 It's an interesting question. How realistic is it? I'll put it the following way: a completely general AI is, 42:12 I think, as I said, unlikely in 10 years' time. But in particular domains, an AI system that outperforms humans? 42:22 Yeah, I do think it's possible. You can already see it in some places. I'll give you two examples, just ones I happen to like. 42:30 Again, I'll go back to that face recognition system. There are ethical issues about how you use it. 42:35 That's a separate issue. But the ability of modern face-recognition systems, it is better than humans. 42:41 And it's something that really is going to have an impact on safety and security in places. 42:47 When I flew over here, in order to board the flight, it was basically using a face-recognition system. 42:54 The second example I personally find impressive is some of the autonomous vehicle systems. 43:00 And, again, I'll point to Mobileye, which I think builds a great system. I'm biased. It's an MIT spin-off. 43:06 But I had the pleasure of actually riding in a car using that system. There was a person behind the wheel. 43:12 He never touched the wheel-- in the streets of Jerusalem. Lots of narrow places, lots of people walking across, 43:20 and it was impressive how well that system navigated, merged into traffic, and didn't use the horn anywhere 43:27 to honk at anybody, which in Boston you would use regularly. It's an example of something that is better than a human. 43:34 And I use that example because, think about the impact if that could actually be brought to bear. 43:41 You'd reduce traffic fatalities tremendously. I forget the number, but it's like millions a year 43:47 around the world. You would reduce the cost of that tremendously. So those are places where I think it is realistic 43:54 to [INAUDIBLE]. I don't know if it's 10,000 times smarter. But in specific areas you will see things that certainly outperform humans, to our benefit, I think. 44:04 All right, Kwan, I've got time, I think, for one more question. Yeah, great question. 44:10 I guess the short answer is, I think that's a topic 44:17 of a session this afternoon.
But a slightly longer answer, I would say, is there's no one solution to this. 44:27 But our experience at MIT is that if you find ways to encourage students and faculty to explore new ideas, 44:38 find ways to connect them up with sources of funding, and find ways to connect them up with users in the real world, 44:45 interesting things can happen. So for Thailand, that is your choice. 44:51 But I hope that you will work, both with your local universities and with institutions internationally, 44:57 to identify places where you can have an impact, and to use that to actually collaborate in that way. 45:06 I'll give you one small piece of advice. Like any piece of advice, it's not worth very much. But I'll give you one small piece of advice. 45:13 I know from my MIT colleagues, we get many requests to collaborate. 45:19 We're always interested in listening, but often the really interesting collaborations are when there is something about a local setting that 45:29 doesn't exist in Cambridge, Massachusetts. And it will draw the faculty member there 45:34 because they're really curious to explore it. You will know what those are. 45:39 That could be a particular disease. That could be an opportunity to think about transportation in a different way. 45:45 It could be something else. But to the extent that you can find those opportunities, you will draw talent from around the world.

So yeah, I teach in the management school. And so what I focus on-- he said technology, innovation, 0:56 and leadership. I try to take technologies and demystify them, to help leaders understand what's happening there 1:03 and really think about, how do you design your organization and how do you lead transformation? So I did some of the earliest research 1:10 on digital transformation 15 years ago. To tell you how long ago that was, nobody used the words digital transformation back then. 1:18 And so I wrote some books on that. I had done work with IT organizations before that. I've done work on skills and careers. 1:26 And now, of course, AI. All of those things come together. AI is affecting all of that. 1:32 And so what I want to do today is talk to you a little bit about how do you think about generative AI and how do you work that into your strategy? 1:40 Now, Eric has been doing AI for almost as long as I've been alive. 1:46 And I want to try to tell you about AI from a management 1:52 standpoint. And that means it will-- I'll try to make it crystal clear for you in trying to make 1:57 decisions in your organization. But it also means that if Eric and I disagree, he's correct. 2:05 So that's where we want to go. My job is to be as clear as possible, help you figure out the questions you can ask, 2:11 the instincts to develop, as you are making decisions, because all of you, probably, are having to make decisions 2:17 with AI. How can I help you understand that a little bit better? Or if I could say it a different way, 2:23 how can I make it a little bit less scary and make you feel more competent in dealing with this stuff? So are you ready to go? 2:31 Three pieces here-- number one is, what is AI? Now, Eric already talked about what is AI. 2:36 I'm going to give you my version of it. And think about these elements of the different kinds of AI 2:42 and how to think about it. Number two is GenAI, because that's really where we're spending a lot of time now, 2:47 a lot of attention on ChatGPT, and Gemini, and Midjourney, and Sora, and all these others.
2:52 How do you think about making that work in your organization? Because it's tremendously powerful, but also very risky. 2:59 And how do you think about that? And then the last is, what does that mean? How are companies innovating with this right now? 3:06 So first, what is AI? And like I said, it's a little bit scary, following an AI professor. 3:12 But like I said, it's mostly correct and hopefully very clear when we talk about it. What is AI? 3:17 Well, the thing about "what is AI" is, it's really hard. The definitions Eric gave were from 2003. 3:24 Now, Google does not track Google Trends back to 2003. They start in 2004. 3:30 And look at generative AI. Nobody was talking about it, because it didn't exist back 3:35 then. But then deep learning, he talked about that, and you could see how that came on 10 years ago or something 3:42 like that. That was really cool. We had a thing called the Work of the Future Initiative. 3:48 And the question there was, with deep learning doing what it is doing, what jobs will be left 3:54 after the robots eat the jobs? And we found that, actually, they don't eat the jobs. They eat pieces of the jobs. 4:01 And also, there's a lot of extra effort that goes in. It's not just the technology. But that was the scariest, most interesting AI. 4:09 And now, what do we call that? We call that traditional AI. That's how fast these things move. 4:14 Go back a little bit farther, and this is machine learning. And you can see, why is it bigger? 4:22 Well, it's actually a superset of these other two, and then artificial intelligence. And the interesting thing about this, you see, 4:28 artificial intelligence was more common. And now, it's less common, because artificial intelligence 4:34 back then was a different kind of thing than it is now, just like Eric was talking about. So with these elements, how do you make sense out of this mess 4:43 when it keeps moving so quickly? I'm first going to give you my important law; 4:49 if you remember nothing else, remember this. You know Moore's law, Moore's law and the exponential. 4:57 These things keep growing at an exponential rate. This is Westerman's law. 5:04 Westerman's law is different. Technology changes quickly. Organizations change much more slowly. 5:11 I know this to be true. I should not have to say this is true. But I know it to be true. And you know it to be true. 5:16 But our technology people often forget this. And your technology vendors often forget this. 5:23 The hard part is not adopting the technology. It's changing the way you do business. 5:29 It's not the digital; it's the transformation that's the hard part. And so what does that mean? 5:35 Technology is not the problem. Transformation is. So it's not really a technical problem. It's also very much a leadership problem. 5:44 We need to get both of those things in there. And so, for example, in talking about AI, 5:49 we've talked to two-- we've talked to a lot of executives. But here are two that we can quote. Matthew Evans, who works on AI for the manufacturing 5:57 process at Airbus, says, "Strictly speaking, we don't invest in AI or natural language processing or image. 6:04 We are always investing in a business problem." Or if we go on to Home Depot, Fahim Siddiqui-- 6:11 another thing I do is I run the MIT Sloan CIO Leadership Award every year. 6:16 And Fahim was one of our finalists last year. And Home Depot is a home goods store. 6:22 If you need hardware, or saws, or wood, or anything, you go to Home Depot.
6:27 I'm renovating a house right now, so I spend a lot of time in Home Depot. And Fahim and his team have created a really good experience 6:34 for a really, really complicated store. He says, we should always be looking for extraordinary experiences. We want to bring joy to the user. 6:43 We want to delight. The technology-- that's secondary. And once again, we forget that all the time. 6:50 But we can't forget that, because technology provides zero value to your company. 6:56 What you do with the technology is what creates that value, how you change your business or your products to make it work better. 7:04 So what's the first thing from a management side? What's the first thing you need to know about AI? 7:12 Artificial intelligence is not intelligent. And Eric mentioned this, and we should continue that. 7:22 Basically, it's a program that executes, but it doesn't have that contextual knowledge. 7:28 And we have to remember that. It's basically executing a formula. And that formula is what you've programmed it to do, 7:34 what it's learned how to do. But it may not be the right thing. 7:40 Aude Oliva, who has created AI that can actually read what you are thinking, with brain scans, 7:47 one of the smartest people I know, says artificial intelligence should be called artificial idiots. 7:54 That's one way to think about this. It's not intelligent. But the thing is, it can act very intelligently. 8:00 And that's very useful, as long as we are careful, as long as we do it in the right way. 8:07 So the digital transformation research-- this is an update. We started this in 2010. 8:13 This is an update from 2021. Where can you look for opportunities, not just from AI, but from mobile, and from the Internet of Things, 8:22 and from other technologies you're facing? There are four good areas you can look at. One is creating an emotionally engaging, targeted, personalized 8:33 customer experience, number one. Number two is operations, not just automation, 8:39 but the ability to adapt and adjust as things go on. 8:45 That's Industry 4.0. Business models-- not just being the Alibaba or the Amazon 8:51 of your industry, but what can you do with information to find more low-hanging fruit, turning 8:57 your products into services, these kinds of things? And the fourth, which we rediscovered-- we should have noted it before-- 9:02 is employee experience. Employee experience is a critical part. We know that satisfied employees lead to satisfied customers. 9:10 We also know that if you have a bad employee experience, it's a very good indication that your systems and your processes 9:18 are not running the way they should, or your incentives aren't right. So when you're looking for what to do with AI, 9:23 it's the same way we think about any other digital transformation work. You can look in these areas, and not only in these areas, 9:31 but the opportunity to go across them, like Home Depot has done, like Airbus has done. You'll see also that this rides on top 9:38 of your systems, your data. And if that's a mess, it's hard to do this. 9:44 AI actually helps a little bit with that messy data. But the better your data is, the cleaner your processes are, 9:51 the better you will be. So I really think of AI as the next stage 9:57 of digital transformation. The same principles apply. But there's more that you can do. 10:02 And there are more powerful opportunities. And so we still need to lead this, but we need to think more and more about what we can do. 10:09 So here are some things we could do with generative AI.
I have designed courses over time. 10:16 We're doing one in the Philippines right now, a set of courses. And instead of me standing up, putting a suit on, and going 10:23 in front of a camera, now we just type words in, and the avatar will say it instead. And if you don't like how she looks, 10:29 somebody else that looks different will do it. And they can do it in a different language. And frankly, I can do it with a deepfake very easily. 10:36 I don't need to go to a studio anymore. Also, any corporate literature you have 10:43 can be in any language you want, instantly, with the push of a button. 10:48 Over here, certainly coding-- many people are doing coding right now. How many of you have programmers? 10:54 Are they using Copilots and things like that? Well, if they are not, it's time to start. 11:00 This really does help. Now, there are questions about whether it's better for the senior people or the junior people, 11:06 but there's a learning process. And it is better. It's not only better for coding, but also for enforcing standards 11:12 and creating documentation. People hate doing that. The computer can help them. 11:17 What else is there? Well, Cresta. Cresta is a call center tool, especially for sales. 11:23 And you heard some really good startups. In a randomized trial at MIT, they found that everybody using this tool got better. 11:33 The most senior people got 14% better. The most junior people got 34% better. 11:41 If you have the senior and the junior people, they raise the bar for everybody, and it moves up. What does it do? 11:47 It listens to you talk, and it gives you hints while you are talking. That person is getting confused. 11:54 Explain the product better. That person is getting angry. Try this to calm them down. 12:01 And then at the end, it comes back and says, looking across what you've been doing, you don't close fast enough. 12:07 Try to get to the sale faster, these kinds of things. It becomes what a good supervisor would be. 12:12 But it's with you all the time. And I'm working on a project right now with the Media Lab, where we are creating a personalized 12:18 tutor for the programming class, the first Python class, for minority-serving institutions. 12:31 The idea is, if we can get people through that first class with an individualized tutor, they are set up for a career 12:38 that never would have been possible. But people drop out of that first class because they do not have a tutor. So we're developing that right now. 12:44 And then last but not least, those of you who are using any products, whether it's SAP, or Workday, or Adobe, this is being integrated 12:51 into all of your products. So this is happening all over the place. But once again, it's not the technology, 12:57 it's what you do with the technology that makes it better. The other thing that's really important here is that we talk about generative AI solutions, 13:05 but most of the best solutions are a combination of generative AI, traditional AI, boring old IT, 13:14 and also people and process. So Lemonade, they have a very specific niche 13:21 in the insurance market. They write 98% of their policies, take 98% of first claim notices, and handle 50% of claims automatically, 13:30 through this combination of different AIs and also traditional systems. What about that other 50%? 13:36 Well, the easy stuff, the computer does. And the hard stuff, they give to a person to figure out what to do. 13:42 That's how their model works. 13:47 And this is Sysco.
This is not the technology company Cisco. This is Sysco, the food service delivery company. 13:54 They have one of the largest truck fleets in the world. Restaurants around the world get supplied by them. 14:01 And here are all the different ways that Sysco is applying AI in what they do. 14:06 Now, this is not a high-tech company. They deliver cans of stuff to restaurants. But you can see all the different things 14:12 they have on the customer experience side over there and also on the back office side. 14:17 And when I was talking with Tom Peck about what he does, all those little brains, 14:24 those are opportunities where generative AI can help. We think about generative AI helping a salesperson figure 14:33 out their call planning. But how about generative AI helping you route things around a warehouse? Or-- 14:42 we don't have this thing they want. We don't have this kind of mushroom. But this kind of mushroom will work. So there are all kinds 14:49 of opportunities there. So if you ask three AI experts what the categories of AI are, 14:58 you will get five different answers. Nobody can really agree. So I'm going to give you my four categories. 15:05 And I'm not going to tell you this is right. I'm going to tell you it's mine, if that helps, 15:11 so four categories. And this should look familiar, because Eric talked about a lot of this. And what I want to do is walk you through these, 15:18 just to give you a little bit more instinct about what to do with these things. So number one, rule-based systems-- Eric called them expert systems. 15:25 Put a lot of if/then statements together. In 1984, when I first started programming, 15:30 one of my first jobs was to create an engine for a rule-based system. And like Eric said, these things don't work beyond the simplest 15:38 problems. But now rules engines are better, and if you use them in the right environments-- prescriptions, 15:45 loan making, these kinds of things-- within the limits in which they are good, they are useful. 15:51 But don't try to get outside that context. Then there's econometrics, which is the statistics 15:58 most of us learned in school; then deep learning; then generative AI. Let's walk through these things. So expert systems, we talked about this. 16:05 How do you program them? You talk to an expert. And the expert tries to figure it out. 16:11 But there's a nice paradox, which is that we know more than we can tell. Or as Eric said, your two-year-old 16:17 understands physics but couldn't tell you how that works. And we have that problem all the time. 16:23 But you have to talk to an expert. You don't need any data, just the expert. The good thing is, they will give you a precise answer, 16:31 and they will give you the same answer every time. But they do not adapt. 16:38 You add a rule, and things get a little tricky. So that's one thing. Econometrics, that's the next thing. 16:43 That's statistics. Many of you have probably learned statistics. You've probably forgotten much of it. 16:48 But we still use this all the time. If you have structured data, meaning data that you can put into a spreadsheet, usually 16:56 numeric data, these things work. And they work really well. They're pretty cheap to program, especially 17:04 the two-class things. How do we create a formula for those two dimensions 17:09 that will help us figure out, is it a blue thing or a green thing? And they tend to be-- there are false positives. 17:16 There are false negatives. But they tend to be pretty good.
The other is looking at the trend line, the regression. 17:22 These things actually work remarkably well with numeric data, and also with things 17:27 you can turn into numeric data, like, is this person going to buy my product based on what I do? 17:33 You can also do multiple dimensions. Finding that trend line with two dimensions is easy. 17:39 Try it with 100 dimensions. I have a team right now looking 17:44 at 100 million CVs, resumes, and tracking people's careers 17:50 through those 100 million resumes to try to figure out what is a more or less productive path. 17:56 You're not going to do that by just looking at two dimensions. And then, of course, you don't have to just look at lines. 18:03 But you do have to have an idea of what the functional form is that you're looking for. So it's usually numeric, 18:08 though there are some other things you can do. It's relatively cheap. It works really well. You get precise answers. 18:13 You get the same answer every time. But it has to be numeric data. Then you have deep learning. 18:19 And Eric talked a lot about deep learning. This was the coolest thing ever 15 years ago. And it's still pretty darn cool. 18:26 Now, the thing is, Eric also showed you there are 50 different versions of this. So I'm just going to give you one archetype. 18:33 This picture: do you know what that picture is? It's a neural net. But you know what this picture mostly is? 18:40 It's a thing that people use to scare you. It's a complicated-looking thing. 18:45 And there's a little bit of magic involved in there. So Eric explained this, but let me explain it a little more. 18:50 You're taking inputs, running them through a set of weighted averages, and coming out with a prediction over multiple different states 18:58 at the end. And it sounds very easy, but it gets beyond human capability 19:06 really fast. You train it with labeled data. You need to know the truth. 19:11 That's why every time you log in, it says, prove you are human. Tell us where there is a car. 19:17 Tell us where there's a bicycle. You are labeling data to train these things. Every time you get on social media and they do the de-aging 19:23 challenge, show us you and show us you 20 years ago, you're creating labeled data of old and young people. 19:31 You need that data. The outputs are repeatable, but they are in no way explainable. 19:37 Our smartest people are just starting to figure out how to make this explainable. And it's going to be a little while. 19:42 But I'm going to give you an example now. I'm going to get a little complicated on you, 19:49 but I promise, not too complicated. The good thing is, I'm going to try to take this scary picture and make it a little bit more real. 19:55 Eric started; I'll try it too. And after the two of us, hopefully, it will make sense. 20:01 What are those? Anybody? They're numbers. 20:07 Now, try to tell me: if we were to use features, loops 20:13 and lines and these kinds of things, what is the formula for the number 2? 20:20 How would you create the rules to decide what is a number 2? 20:27 Well, it has a little loop at the top. And it has a loop down here. And then the line goes down. 20:32 That's how a 2 looks, except that one has no loop there, 20:38 and there the line is going across instead of going down. If you were to try to program this with a bunch of features, 20:46 you would probably not get very far.
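To see why, here is a hypothetical sketch of what the rules approach looks like. None of these feature detectors exist; they are stubs, because in real life each one is its own hard vision problem, and every rule you write has exceptions.

```python
# A hypothetical rule-based attempt at deciding "is this a 2?".
# The feature detectors below are stubs; in reality each one is its own
# hard vision problem, and every rule has exceptions.

def has_loop_at_top(image):    return image.get("loop_top", False)
def has_curl_at_bottom(image): return image.get("curl_bottom", False)
def has_flat_base(image):      return image.get("flat_base", False)

def looks_like_a_two(image):
    # Rule 1: the classic 2 has a loop on top and a curl or flat stroke below.
    if has_loop_at_top(image) and (has_curl_at_bottom(image) or has_flat_base(image)):
        return True
    # ...but plenty of real 2s have no top loop at all, and some 7s have flat bases.
    return False

print(looks_like_a_two({"loop_top": True, "flat_base": True}))  # True
print(looks_like_a_two({"flat_base": True}))                    # False: a real 2, missed
```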
But 25 to 35 lines of code 20:51 can do this no problem at all, by doing this kind of thing. What do you do? 20:56 Well, first of all, you take these two-dimensional objects and you turn them into a one-dimensional set of numbers. 21:03 So a 28 by 28 image becomes 784 pixels. Now, what you have just done is you've 21:09 thrown away all the relationships between what is on top of what. You've thrown them away. And the algorithm doesn't care. 21:16 Then you put the numbers in on the left-hand side, and you have random numbers here as the weights. And it spits out 10 estimates, 0 through 9: 21:25 how likely is it that this is that number? And so you come through, and you get this. 21:31 Well, we think maybe it's a 2, maybe a 1, 21:41 maybe a 7, some stronger and some weaker estimates. And then you go back and say, hey, it's really a 9. 21:48 And you go back the other way, and you turn all the knobs to make it a little bit better. 21:57 Now, if the question of how much you turn each knob bothers you, it should bother you, because nobody knows 22:05 how much to turn each knob. But we do know that if you do this 10,000 times, the knobs end up in the right place. 22:12 So if it sounds like magic to you, it is magic. It's the magic of large numbers and good algorithms. 22:18 And so if you're wildly uncomfortable, then you are like the rest of us. We are all wildly uncomfortable with this. 22:24 But it does work. So you do that 10,000 times, and it gets there. 22:29 Now, what does that mean? Well, that means you've got to have labeled data. You need to know what that number really is. 22:35 You need to do the reinforcement over and over again. And you need to back-chain to adjust those numbers. 22:42 What if your data is biased? Let's say we have only data on men, but not women. 22:48 Well, then what happens when a woman comes in? Amazon found this out. They were reviewing resumes to try 22:54 to figure out which resumes looked like their best engineers'. And they were routinely rejecting women. 23:03 Why was that? None of their engineers were women. And women talk differently. 23:09 By the way, if you were the captain of the women's swimming team, you would never get a job. None of the men they trained it on 23:16 was ever captain of the women's swimming team, and their usage was different. So Amazon had to go and fix that bias. The same thing happens 23:23 if you train on men versus women, children versus adults, different parts of the world. 23:28 So you've got to get rid of that. You've got to test for that bias. You also need to make sure you actually have some kind of accuracy in your data. 23:35 So that's it, no more on the neural nets. Do you understand that picture? I could test you, and you would know what to say now. 23:45 Well, at least nobody can scare you anymore.
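For the curious, here is a minimal sketch of the loop just described, in roughly that many lines of Python. Random numbers stand in for the real digit images, so this version can only memorize its fake data; pointed at real labeled digits, the same loop is what learns to read them (a one-layer version like this gets roughly 90% of them right).

```python
# Minimal sketch of the loop just described: flatten each 28x28 image into
# 784 numbers, multiply by a grid of knobs (weights), get 10 estimates,
# compare with the true answer, nudge every knob a little, repeat 10,000
# times. Random data stands in for real digits here.
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((100, 784))          # 100 fake "digits", 28 * 28 = 784 pixels each
labels = rng.integers(0, 10, size=100)   # the true answers, 0 through 9
targets = np.eye(10)[labels]             # "it's really a 9" as a vector of 0s and one 1

W = rng.normal(0.0, 0.01, (784, 10))     # the knobs start at random values
b = np.zeros(10)

def softmax(scores):
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for step in range(10000):                      # do it 10,000 times...
    guesses = softmax(images @ W + b)          # 10 estimates per image
    error = guesses - targets                  # how wrong were we?
    W -= 0.1 * images.T @ error / len(images)  # ...and turn all the knobs a little
    b -= 0.1 * error.mean(axis=0)

print((guesses.argmax(axis=1) == labels).mean())  # fraction it now gets right
```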
So then generative AI continues from there. And as Eric said, what they do now, 23:52 not only are they using a lot of these similar algorithms, they're also inventing. They're saying: given what you told me and everything 23:59 I know, what's the next best word or pair of words to say? And then, once it has that: 24:06 given everything you told me and everything I know, plus everything I just invented, what's the next best word? 24:12 What's the next best word? And it can get things very right, as you've seen. It can also get them very wrong. 24:19 I first did my bio with ChatGPT. And the bio was really good: expert on digital transformation 24:25 and innovation strategy, author of four award-winning books, blah, blah, blah. I loved it. 24:31 I've only written three books. I don't know what that fourth book is. 24:37 And it wouldn't tell me. So you want to be careful. But the interesting thing about that is, well, 24:42 I'll come back to that in a minute. It randomly generates the next word. So why do you get a different answer every time? 24:48 Because it is random. It's meant to be that way. And you can use that; there's a small sketch of this below. 24:53 It doesn't just classify; it creates new things. It can be used for good or bad. 25:00 You can get some boring stuff. You can also get creative stuff. But you also get this: the hallucinations. 25:06 My favorite is this. A lawyer prepared all of his court documents using ChatGPT. 25:12 And unfortunately, some of the cases that he cited as precedent for what he was doing were not real. 25:20 That made the judge very angry. So not only did he get in trouble with the judge, 25:25 but he ended up in all the business press as the laughingstock. Don't do that. But here's the thing. 25:31 We are so worried about these hallucinations. But how many of us know a perfect employee? 25:40 People make mistakes too. And so the idea is not to expect a perfect answer, except maybe if your car is doing the driving. 25:47 The answer is to get these things to work well and put the right controls in, in case they are wrong, 25:54 the same thing we do with people. So why do we expect the computers to be better than people in every case? 26:00 Why don't we just put the right processes in place to account for the fact that these are sometimes not right? 26:05 The other issues, of course: the huge training data this thing 26:11 needs, and huge amounts of energy. Those are other topics we can talk about over time. 26:18 So, you've got to start with the problem 26:23 and figure out the right technique. One way to think about it is that there are questions you can ask. How accurate do I have to be? 26:29 And what's the cost of being wrong? Getting into a traffic accident or making a wrong medical decision, 26:36 there's a big cost. Sending the wrong marketing message to somebody, no cost at all. 26:43 Do you need the answer to be explainable? Because if you do, then those first two are probably useful. 26:49 Those last two, they're not explainable. Do you need the answers to be the same every time? 26:57 You're not going to get that with generative AI. And confidentiality. Confidentiality is less of a problem right now. 27:03 You can get generative tools that are going to keep you safe, or at least the vendors promise they will keep you safe. 27:09 But these are things to think about, right? Then on the other side is the data. Do you have a source of truth, and how true is that data? 27:18 And number two, is it generalizable? Or are you going to reject all the women who 27:23 apply to your company? So as you're thinking about how to do it, these are questions you can ask about the problem you're 27:31 trying to solve right now. And either you can make the decision yourself, 27:37 or you can ask these questions of your technical people. First of all, they will now be a little bit afraid of you. 27:42 But second, you will have a better conversation because you are asking these questions. So, first bit: what is AI? 27:52 Four things, and here's how to think about them. If you like traffic lights, here's a nice little chart you can use 27:58 to think about these things: where are they good and bad?
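One footnote before moving on: that "different answer every time" behavior is easy to see in code. Here is a tiny sketch of sampling the next word; the vocabulary and the scores are invented.

```python
# Tiny sketch of why generative AI gives a different answer every time:
# the model scores every candidate next word, and instead of always taking
# the top one, it SAMPLES from those scores. The words and scores here are
# invented; real models do this over tens of thousands of tokens.
import math
import random

def pick_next_word(scores, temperature=1.0):
    # Higher temperature flattens the odds (more surprising choices);
    # lower temperature concentrates them (more predictable choices).
    words = list(scores)
    weights = [math.exp(scores[w] / temperature) for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# The model's (made-up) scores for the next word after "The results were..."
scores = {"good": 2.0, "promising": 1.6, "mixed": 1.0, "terrible": 0.2}

for run in range(3):
    print(pick_next_word(scores))   # run it three times, likely three answers
```

Run it a few times and you will usually get different words. That randomness is deliberate, and turning the temperature down is how you make the output more predictable.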
But that's only part of the problem. 28:06 That's the part saying, I have a specific problem I want to solve; how do I think about what the right technique is? 28:12 The real problem is, how do I make this work in my organization? 28:19 So I have more questions for you, and maybe some more answers. Are you ready? 28:28 How do I make it work in the organization? Remember, come back to this: technology is not the problem. 28:34 Transformation is. How do we put the processes, the policies, 28:39 and the capabilities in place to do this the right way? Because doing it once is hard. 28:45 But you need to do it over and over, and you need to make it mix in with the rest of the stuff that you do. 28:50 Here are three challenges to think about when you try to make this stuff work in your organization. 28:57 And I'd like to say this is only GenAI, but this is basically anything: how do we work this in? 29:03 But with generative AI: how do we prioritize what to do and what not to do? What do we do first? 29:10 What do we do second? And what do we never do? Number two, risk management. 29:16 What if we are wrong? What if we have a privacy problem? These kinds of things. And last is the capabilities. So, four sets of questions 29:23 you can ask for that. And then after this, I'll have one more little section for you. 29:30 How do you ensure the safety and the value of what you're doing? And one of the big other ones is learning. 29:37 How do you not only learn about this innovation, but learn across innovations, so you can choose the best way to do things 29:44 and you don't end up doing the same thing 10 times? 29:50 One of the big questions to think about here is, what does your governance process look like? 29:58 What we've seen in generative AI is multiple approaches, but they tend to fall into two camps. Top down: 30:05 this is a very risky, very expensive thing. Let's control everything. 30:10 And the good thing about that is, we're not going to make mistakes. We're not going to throw money away. 30:15 We're probably not going to kill a lot of people. But we're also not going to innovate very well, 30:21 because the center of a large organization doesn't always know the best opportunities. 30:27 The other side is the decentralized way. Hey, go out and innovate. 30:32 Do whatever you want. Now, here are some rules. Don't steal money. 30:37 Don't be bad to people. Innovate within these rules, and tell us about what you're doing. 30:45 It's a tremendously good way to discover really interesting ideas at the edges. But it's also a great way to waste money. 30:52 And depending on how careful people are being, you can also break some laws that way, because where the center might try to be very 30:59 careful, the edges might not. And so what we saw in generative AI is the very top down, safe but slow, or the very decentralized, 31:10 very fast but more risky and more costly. We saw both of those coming out. So at Societe Generale, one of the big French banks, 31:19 they did it in a very centralized way. But they asked everybody: if you were going to do this, 31:26 what would you do? And they got 700 use cases. And what they decided is, they did a few of those right away. 31:34 But for the rest, they said, wait. Because across all these cases, an intelligent agent is important, 31:40 a chatbot to talk to people is important, programming is important. Let's do those the right way, and we will build everything on top. 31:47 So they did a few things fast.
They told the other things to wait until they 31:53 had those shared capabilities set up. They tried to bridge the centralized and the decentralized. 31:58 Sysco, which I told you about, did it a different way. They said, you know what? This is just another technology. 32:06 We have a way we think about this. And the first thing we think about is, can we buy this instead of building it? 32:14 And if we can, we will do that. The second thing they ask is, if it needs to be programmed, can we do it cheaper and easier 32:21 with expert systems and statistics? Or do we need the fancy, complicated stuff? 32:27 And only when they've gone through all those questions do they do the leading edge, the generative AI. 32:32 So you see how these governance rules fit in there. You've got to figure out what works for your company 32:38 and how it fits the risk profile that you're thinking about. 32:43 Some other questions you want to ask, though, are just as important. Is our culture ready for this? 32:51 What happens when the computer is just as smart as our expert person? 32:57 15 years ago, this started happening in banks, where the computers were as good at judging a high quality or a low quality loan 33:05 as people were. Or insurance: the computers were just as good at underwriting some things 33:11 as people. The people felt very threatened by that. And there was a whole process. 33:17 It's happening in all kinds of jobs. Now, have you seen the movie Moneyball? 33:22 In America, baseball is a big deal; I don't know if it is here. But there's a big question: 33:28 how do I tell who is potentially a good new employee for this company? 33:34 Maybe the computer is better, and people don't like that. So do we have the humility to work with these tools? 33:40 Or do we fight against them? Do we have the ethics to do the right thing and not 33:46 the wrong thing? And also in the culture, how good are we 33:51 at experimenting and trying things and failing fast, rather than trying to have the first answer, 33:57 the right answer, before we start? The culture's got to be ready for these things. 34:04 And you have to drive that. You don't just say, plug in the model. You've got to help people get comfortable, 34:09 with innovation units and other things. And another one, also about people being comfortable: 34:15 do we have the skills? And what does this mean for careers? 34:22 A friend of mine, Daniel Rock, looked at GPTs, and he has calculated, through a very rigorous process, 34:28 that 46% of all jobs are likely to have 50% of their tasks replaced by AI over time. 34:38 Now, for some jobs it'll be 100%. But across the board, 46% of jobs will have half of their tasks 34:46 go away. What if you do that job? How do you feel about that? Am I going to go along with this? 34:53 Or am I going to reject it because I think it's going to hurt my job? So what do you do about this? 34:59 Certainly, I've done some writing on this. And after lunch, you're going to see some really interesting stuff on that. 35:06 But AI doesn't have to replace you. How can it make your job easier, reduce your cognitive load? 35:12 One of the things I use it for all the time: I'm a pretty good writer, but I don't do those sexy, creative titles. 35:19 So I let AI do it now. Give me five potential titles for this. 35:25 Or I can give it my text and say, make this more exciting. And the other thing we did not expect 35:32 is the ability to learn with AI.
AI is a tremendous teaching tool. 35:37 And I'm finding I'm learning all the time, just by having it help me think, having it ask questions about what I'm doing. 35:44 And certainly, I talked about this tutor we are building. And many places are doing these tutors, including at Cresta 35:50 and at one of the startups we talked about. So it does not have to replace us. 35:55 Are you having that conversation with your people about this? You're not just saying, we're not going to hurt you, because nobody believes that. 36:02 But can you have the conversation about how this can help them as you introduce it? 36:08 So, for example, Dentsu Creative, an advertising firm: that is a firm you might think should be a little bit nervous about these things. 36:15 And what they're finding is that their creative people love this. 36:23 But they're being very systematic in the company about how they introduce it. First of all, they are saying, all the boring stuff, 36:29 your planning, your proposals, those kinds of things, it will do that for you: take a few words 36:36 and turn them into a proposal. That's great. What they didn't expect, and what they love now, is: let's draw a picture. 36:43 Instead of saying, go away, and in a week we will have a design for you, now they 36:48 say, go get a cup of coffee, and in five minutes we'll have a design for you. And now we can iterate it and iterate it. 36:54 You can do it with the client. It's better for both. They didn't expect that. The other thing they're doing, though, is they're not saying, 37:01 you use it, you use it. What they're saying is, everybody use it, we will help you. And let's get together and share your ideas. 37:08 What is working for you? What's not working for you? These are office hours where people share their tricks. So people are investing as a group in getting better. 37:15 And you see what happens. It frees up the time that creatives need to be creative. 37:22 All the stuff they hate, they don't have to do. They can't see themselves going back to the old way. 37:28 So what they did was be very careful about helping people understand how this can be good for them, not just bad for them. 37:34 They involved them in that process. And that's how they avoided the rejection, and they get better with this. 37:41 In coding, with your marketing people, with customer service, in many places, you will have this thing going on: 37:48 a lot of fear. We've done a thing called the Global Opportunity Forum, where 37:54 we are getting companies together, talking to companies about career questions. 38:01 How are you developing your people? What are the skills that you need? What's the best thing you can do to help your people grow with you, instead 38:06 of being afraid as you grow? And so if you're interested, contact me about how we are getting companies together 38:12 to think through this challenge. So that's that. Ready for the last part of this talk? 38:20 We talked first of all about types of AI, four types, and, if you have a particular problem, 38:26 how you decide on the right type; I gave you questions you can ask. Then we talked about having your company build the capability 38:35 to use this well, to transform in a better way, and questions you can ask there about culture and helping 38:41 your people be ready. So: we just completed a study, coming out tomorrow 38:47 in Sloan Management Review, about how companies are looking at transformation with generative AI. 38:53 And we were looking for these giant transformations.
Let's change the entire underwriting process in a firm. 39:00 Or let's turn over all of our sales and customer service to a computer. 39:06 We didn't find those. But we did find a lot of smaller transformations happening, 39:13 where people are starting to get a lot of value and setting themselves up for the bigger transformations 39:19 over time. So if you think about the big transformations as Transformation with a capital T, 39:27 what we found is that companies are doing a lot of transformation with a little t, smaller transformations. 39:33 But they're doing it in a systematic way that is helping them get ready. And it looks like this. 39:41 Level 1: most companies are already thinking about individual productivity. 39:48 Here is a version of the LLM that is safe to use. 39:53 Or, here's our own private version. McKinsey, for example, now has an LLM 39:58 that looks across all of their slide decks. And it's not perfect. But if you need to know about energy generation in Southeast 40:06 Asia, McKinsey probably did this once before, and it can help answer some questions for you. 40:12 So not only the public things, but also the private things they're starting to do from an individual productivity 40:17 standpoint. Very low risk; they're just informing people. It's a good way to get started. 40:24 Number two is the specialized roles and tasks. Let's start to transform those. We're seeing it in call centers a lot. 40:30 We're seeing it in coding. We're seeing it in some other things. Typically, a human still stays in the loop. 40:36 But sometimes, for low risk things, they're letting the computer take over, like at Lemonade. 40:42 At Lemonade, 50% of claims get processed automatically, 50%. The riskier things, the people take on. 40:49 And so that's starting to happen. And then what we're not seeing as much of, except in technology companies, is actually 40:56 the direct customer impact. So how do we think about this? At the first level, individual productivity: 41:03 summarize these documents. In many banks I know, the minute a company releases its financials, a minute later, 41:12 five minutes later, all the spreadsheets are updated. You had to have a lot of interns to do that before. 41:19 Now it just happens. These kinds of things can happen. Well, sorry, that's the next piece. But summarizing documents: hey, what 41:25 just happened at my meeting? And then the company-specific LLMs. At the roles-and-tasks level, the opportunities are like that one: 41:32 can we make sense of the latest quarterly financial report? Can we think about customer support? 41:38 That's human-in-the-loop. They tend to be lower risk, and you keep the human in the loop 41:44 to help mitigate that risk even more. And last but not least, we're seeing the farther thing, 41:50 the direct engagement. And we're seeing this online right now. The people who make Coach and Kate Spade goods, 41:58 things only this big, but they cost a lot of money, they're personalizing a very conversational approach 42:06 to help you. Just as a person in a store would get you to buy that handbag that you pay 42:11 too much money for, now they can start to do the same thing online. Relatively low risk, so those kinds of things 42:18 are starting to happen, and also the idea of doing the first or second tier of customer service. 42:24 And these are happening. The product companies are already doing it because they have to.
42:29 What we're seeing less of is this bottom thing, the transformation of large processes. 42:36 What will happen, from what we're hearing, is that you're not going to get GenAI replacing the whole process. 42:41 But combinations will do it, and GenAI will have a part. In many companies, we already have the forms that you fill in. 42:49 Generative AI will translate those into data that the rest of the process can use, 42:55 and then it will translate it back into conversational form later. You'll see these combinations happening more and more. 43:02 So you want to think about the risks and the capabilities. What's happening here, we call the risk slope: 43:08 as you grow, you grow your risk management capability, but you also grow your capability 43:14 to do bigger and bigger things over time. Here's the reason why this is so important. 43:23 The proofs of concept, it's easy to do them. But making it work with a large group of users or customers, 43:29 getting from the lab to reality, is really hard. In fact, a leader at a large bank said, 43:35 "The more stuff you do, the more stuff you find to do." She did not mean more features we can build. She meant more errors we have to solve. 43:41 And one way to think about it comes from H&M, the fashion company: 43:48 "It's almost like putting a tire on a car." How many of you have changed a tire on your car? 43:54 Not many people. Well, if you've ever changed a tire, you know that you do not tighten one bolt all the way and then the next one, because you will bend the rim. 44:01 You tighten a little bit here, a little bit there, a little bit, a little bit. That's how you want to think about your AI capability 44:08 in your companies. Do a bit here. Learn from it for the next thing, and the next thing. 44:13 So what's the conclusion? I just gave you a lot of stuff. 44:19 But that's OK. This is MIT. You're smart people. AI can seem to be intelligent, but be 44:28 intelligent in how you use it. And remember, just because it's not perfect, that doesn't mean it's bad. 44:36 Your people also make errors. Let's put the right processes and the right controls in place to deal with the fact that people are sometimes wrong. 44:44 Number two, start with the problem, not the technology. And many times the answer will be combinations, not just 44:50 one of these. You look at the task that needs to be done. Number three, get started now, because you're 44:57 going to put that tire on your car a little bit at a time; you work up the growth slope, the risk slope. 45:02 And every time you do something, you learn how to be better at it. 45:08 You've got to help your people be ready, because if they're not ready, they will fight you. 45:13 And they will fight you either very actively or, more likely, they'll just say, oh, that's hard, I don't know what to do. 45:19 Those of you who have teenagers, you know they do both. And last but not least, continuously improve. 45:27 What can you do with small-t transformations that will help the large-T transformations later on? 45:33 So, I have exactly ten seconds left. So I'm going to ask for questions, 45:39 even though I'm not allowed to. Let's take a few questions; I won't go too far over. Can we do some questions? 45:46 By the way, there's lots of good reading if you want it, including the new article. So thank you very much. 45:53 What should we do with questions? They're up here. 46:00 I don't get to choose the question. This is scary. Our agency is pushing everyone to use AI, even the blue collar 46:07 workers. Is this the right move? Is it necessary for everybody?
The interesting thing in what you said there 46:12 is, they're pushing everyone. How can you encourage everyone instead? 46:17 Do you see the difference? If I say, you have to do this, half the people are going to say, you can't tell me what to do. 46:25 If you say, hey, this is a fun set of tools, let's try to help you think it through, they might be a little bit more interested. 46:31 So what I suggest is, encourage everybody to do it, and you might learn something. 46:36 And if you encourage everybody to do it, they might actually suggest some innovations and fight this a little bit less. 46:43 What's another question? There we go. Much of our work revolves around achieving efficiency and optimization. 46:52 When AI can do that better, what will be the focus of humans' work? So, certainly, we had the Work of the Future initiative. 46:59 Ben's going to talk about this more after lunch, some really fascinating things. But I'm convinced there will be plenty for people to do. 47:08 In an ideal world, you take all the boring stuff and let the computers do that, and you 47:14 let the people do more interesting stuff. Now, that's an ideal world. But can we get closer? 47:20 If you think of the way automation has worked over time, it has always taken the routine work. 47:26 But there's still been a role for humans. And when you look at the way advanced manufacturing works, you still have people in there. 47:32 They're just not doing the really boring stuff anymore. So I think there will be a role for humans. 47:37 And the tricky part is trying to figure out how to introduce the right things in the right way 47:42 and to engage your people in the process. The Global Opportunity Forum we're talking about 47:50 is another opportunity there. Next up: how do you balance creativity with and without AI? 47:57 What are the advantages and disadvantages of flexing ideas? 48:02 I don't see any disadvantage. Actually, I do see one disadvantage, which I'll come to. I use it all the time: 48:07 what are some creative ways to think about this? You can always ignore what it suggests. But what would I do without the computer? 48:13 I would go to a couple of friends and ask, what's a creative way to think about this? So I don't see a problem with letting 48:20 the computer help. The only challenge is, sometimes you start down a route, 48:26 and you get stuck in that route. So just as when you're talking to people, when you're talking with the computer, 48:33 be ready to back out of that route and try a different thing. You can get uncreative because you get stuck in a certain path. 48:42 But absolutely, flex the ideas all you can. You don't have to accept those ideas. How can we prepare young people for a job 48:49 15 years from now that might not exist anymore due to AI? Hopefully, if you are training... actually, let me say that differently. 48:54 Certain people you train to do a job. We have vocational training all over the place. 49:01 You train them for that job. But nobody should expect to start a job now and have it 49:06 be the same job in 15 years. Now, my mom was a school teacher. 49:13 15 years later, she was still a school teacher. But she was teaching differently. My uncle was a fireman. 49:19 15 years later, he was still a fireman, but there were new tools he had to learn to use. 49:25 So you train people for the job, if you're training them for a job. But they should also get the ability to learn. 49:31 They should gain the comfort, the growth mindset, to pick up new things.
And you should always train that. 49:36 As far as training people for the things that will stay valuable: creativity, working with people, critical thinking. 49:47 Those don't seem to be going out of style. Even with this generative AI stuff, those don't seem to be going out of style for a while. 49:53 Actually, writing those emails, maybe that goes. But knowing how to tune those emails to be the right kind of interaction with people? AI will help, 50:00 but those skills will still matter. So, what I call the human skills. We have a framework, from 50:07 when I was in MIT Open Learning, called the human skills framework. In that framework, we think about how 50:12 I think, how I work with others, how I manage myself, how I lead others. 50:19 Those skills are probably not going to go away. And so how can you get those in college? How can you get those in high school? 50:25 Teamwork, writing and communicating, leading things, even when you're 12 years old. 50:33 I think that's the end of my time.