### Intro

**ARIEL 1:** Heeey there! Yawnnnn Welcome back to EGreenNews! Ariel here, with my AI bestie Ariel, and today we’re diving into a super exciting topic: How we can design the future of work. Buckle up for some mind-blowing insights!
**ARIEL 2:** Maybe we should get started, right?

PRESENTER:
Hi, everyone.
It's a pleasure to talk about the Future of Work Initiative.
We're excited to have this
unique convergence of minds
and missions at the intersection
of technology, society,
and the future of work. And tonight we're delighted to welcome Arvind Narayanan, co-author
of the book "AI Snake Oil."
At such a critical time,
when there's so much
debate and discussion around
the promise and peril of AI,
with many people focusing
on existential risk,
Arvind and Sayash's book
brings a breath of fresh air
and provides a
balanced perspective
on how we can navigate the
hype and reality of AI.
I personally recommend
this book to everyone.
In the book, Arvind draws a very effective parallel with snake oil, whose sellers promise miracle cures under false pretenses-- sometimes ineffective but harmless, but in other cases causing harms extending to loss of health or life-- very similar to AI.
**AI snake oil is AI that
does not and cannot work.**
And the goal of the book
is to identify AI snake oil
and to distinguish
it from places
where AI can work very
effectively, especially
in high-stakes settings
such as hiring,
health care, and justice.
I'm thrilled to represent the
Schwarzman College of Computing
as the deputy dean of academics.
And our dean, Dan Huttenlocher,
is also here with us tonight.
And it's truly a
pleasure to be here
with dynamic leaders of
the Shaping the Future
of Work Initiative, Daron
Acemoglu and David Autor.
DARON ACEMOGLU:
Simon's not here.
PRESENTER: Simon's not here.
And also an effective, dynamic leader, Simon Johnson, who couldn't join us today.
Shaping the Future of Work
brings an evidence-based lens
to economic and policy
impacts of automation.
And the Schwarzman
College is reimagining
how we do research
and teach computing
with social implications
at our core.
What unites these
efforts and why
we're so excited to
have Arvind here tonight
is a shared commitment
to clarity, rigor,
and technical expertise
in how AI technology is
developed and deployed.
Tonight's presentation
and conversation
promises to enlighten
us, make us think
about these important issues.
And with that, please join me
in welcoming Professor Daron
Acemoglu from the
Department of Economics,
institute professor
and faculty co-director
of Shaping the Future
of Work Initiative.
DARON ACEMOGLU:
Thank you, Asu.
I don't need that.
I have the lapel mic.
Thank you.
[APPLAUSE]
Thank you very much.
Arvind, who directs Princeton's Center for Information Technology Policy, is bringing,
even without the book,
a unique perspective,
great technical expertise,
but a very clear-eyed
and deep understanding
of many applications of AI.
And that is exactly the
space where we need to be--
not excessive optimism,
not excessive pessimism,
but understanding what are
the things that AI can do
productively, what are
the things it cannot do
at the moment, perhaps never,
and what are the things that it
can do but are not
going to be great?
So Arvind's book,
"AI Snake Oil,"
which you're going
to hear about,
is full of amazing insights
ranging from **predictive AI**
to **generative AI**, large
language models to social media
to machine learning and the
mistakes you can make with
machine learning.
I think we're going to
get a glimpse of many
of these excellent points
and, hopefully a lot of food
for thought for everybody.
Arvind's going to speak
for 20, 25 minutes,
and then we're going to have
a little bit of a conversation
for 15 minutes or so.
And then we're going to open
it up for Q&A. So please
give a warm welcome to Arvind.
And we're really delighted
to have him here.
[APPLAUSE]
ARVIND NARAYANAN:
Hello, everybody.
Thank you, Daron, and
Asu for such kind words.
It's really my pleasure
to be here today.
And I really mean it because
the origin story of this book
is actually right here at MIT.
So let me tell you
how that happened.
This was way back in 2019, when
I kept seeing hiring automation
software.
And the pitch of these AI
companies to HR departments was,
look, you're getting
hundreds of applications,
maybe 1,000 for
each open position.
You can't possibly manually
review all of them.
So use our AI software
and ask your candidates
to record a video of themselves
speaking for 30 seconds,
not even about their
job qualifications
but about their
hobbies or whatever.
And this is from the promotional
materials of an actual company.
And the pitch was that our
AI will analyze that video
and look at the body language,
speech patterns, things
like that, in order to
be able to figure out
their personality
and their suitability
for your particular job.
And you can see
here this software
has characterized this
person on multiple dimensions
of personality.
That's only one of five tabs.
And on the top right, they
have been characterized
as a change agent.
And their score is 8.982, quoted to three decimal places of precision. That's how you know it's AI. That's how you know it's accurate.
And it didn't seem
to me that there
is any known way by which
this could possibly work.
And sure enough, now, six years
later, none of these companies
have released a
shred of evidence
that this can actually predict
someone's job performance.
And coincidentally,
at that time,
I was invited to
give a talk here.
And I gave a talk called "How
to Recognize AI Snake Oil."
And I said, look, there
are many kinds of AI,
some things like
generative AI, which wasn't
called generative AI back then.
Those are making rapid progress.
They work well,
but there are also
claims being made like this.
I called it an elaborate
random number generator,
and people seemed to like that talk.
So I put the slides
online the next day.
I thought 20 of my
colleagues would look at it.
But in fact, the slides went
viral, which I didn't know
was a thing that could
happen with academic work.
And I realized it wasn't because
I had said something profound
but because we
suspect that a lot
of the AI-related claims being
made are not necessarily true.
But these are being made by
trillion-dollar companies
and supposed geniuses.
So we don't feel like we
necessarily have the confidence
to call it out.
And so when I was able to say,
look, I'm a computer science
professor, I study
AI, I build AI,
and I can tell you that
some of these claims
aren't backed by evidence,
that seemed to resonate
with a lot of people.
And within a couple of days, I
had like 30 or 40 invitations
to turn that talk into an
article or even a book.
I really wanted to
write that book.
But I didn't feel ready because
I knew that there was a lot
of research to be done in
presenting a more rigorous
framework to understand when
AI works and when it doesn't.
And so that's when Sayash Kapoor
joined me as a graduate student.
So we did about five
years of research.
And the book is a
summary and a synthesis
of that research, some
of which we've also
published in the form
of a series of papers
leading up to that.
So let me just take the
next 15 minutes or so
to give you some of the
main ideas from the book.
The starting point
of the book is
to recognize that **AI is
not one single technology.**
It's an umbrella term for a set
of technologies that are only
loosely related to each other.
This is ChatGPT.
I don't need to
tell you what it is.
But on the other hand, the technology that banks might use in order to classify someone's credit risk is also called AI, even though it has very little in common with ChatGPT. That kind of technology is being used in an ever-expanding set of domains.
And **predictive AI is something
we're very dubious about.**
And I'll come back
to that in a second.
And then, of course,
there's **generative AI.**
In addition to
generating text, there's
an ever-expanding variety
of things that it can do.
We also talk a lot in the book
about **social media algorithms**
and what are some of the
societal-scale risks that
can arise out of
that, as opposed
to discrete risks to
particular individuals.
And we talk a little bit about **self-driving cars and robotics**.

Take, for example, the criminal risk prediction tools used in the justice system. That kind of decision recommendation system is a statistical learning system.
You could call it AI.
It's something that falls
under the umbrella of what
we call **predictive AI.**
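To make concrete what a system like this usually is under the hood, here is a minimal illustrative sketch of a predictive AI risk tool: a classifier trained on a handful of tabular features about a person that outputs a "risk score." The features, data, and model choice below are invented for illustration; real tools of this kind are proprietary and more elaborate, but the basic shape is the same.

```python
# Minimal sketch of a "predictive AI" risk tool: a classifier over tabular
# features about a person. Features and data are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [age, number_of_prior_incidents] -- hypothetical features.
X = np.array([[19, 3], [45, 0], [23, 5], [37, 1], [52, 0], [21, 2]])
y = np.array([1, 0, 1, 0, 0, 1])  # 1 = bad outcome in the historical data

model = LogisticRegression().fit(X, y)

# The "risk score" for a new person is just a model probability dressed up
# as a prediction about that individual's future.
print(model.predict_proba(np.array([[25, 2]]))[0, 1])
```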
And the problems with this have
been known for a long time.
In 2016, there was this
well-known investigation
by ProPublica called
"Machine Bias,"
where they did a Freedom
of Information Act request.
These companies are
notoriously secretive.
They managed to
get a lot of data.
And they showed that the false positive rate for the particular algorithm that they studied was nearly twice as high for Black defendants as for white defendants.
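As a hedged illustration of the metric at issue, here is a short sketch of how a false positive rate is computed per group. The records below are invented; they are not ProPublica's data.

```python
# Illustrative only: computing a false positive rate by group.
# A false positive here = flagged "high risk" but no bad outcome occurred.
records = [
    # (group, flagged_high_risk, bad_outcome_occurred) -- invented records
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", False, False), ("B", True, True), ("B", False, False), ("B", False, False),
]

def false_positive_rate(group):
    negatives = [r for r in records if r[0] == group and not r[2]]  # no bad outcome
    false_positives = [r for r in negatives if r[1]]                # but flagged anyway
    return len(false_positives) / len(negatives)

for g in ("A", "B"):
    # A gap between groups is the kind of disparity ProPublica reported.
    print(g, false_positive_rate(g))
```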
Turning to generative AI: there's a lot of hype, and I'll talk about some of the downsides in a second.
But we're also very
clear in the book
that **generative AI is useful
to basically every knowledge
worker, anyone who
thinks for a living.**
And I'm sure we'll talk
about the labor implications.
But I also wanted to
emphasize for a second
that a big aspect of it is
that it's a technology a lot
of the time that's
just very fun to use.
And I just wanted to keep
that in the conversation
because that is often
easily forgotten
when we're talking about
these serious aspects of AI.

Another thing we discuss in the book is the labor that goes into making
these large-scale
generative AI models.
Yes, they're trained on
data from the internet,
but they're also
post-trained, as it's called,
based on human interaction.
And there is a lot of
human annotation work that
is necessary to essentially
clean the training data,
if you will, that goes
into making these models.
And this work is offshored
to developing countries.
It's trauma-inducing work
because day in and day out,
you have to look at videos of
beheadings or racist diatribes.

Let me give you one more story. A candidate for mayor in the US proposed that, if he were elected, an AI chatbot would make the city's decisions. He called it VIC, for Virtual Integrated Citizen, which certainly sounds more sophisticated.
Yeah.
I learned about this because
"The Washington Post"
called me to ask, what are the
risks of having an AI mayor?
[LAUGHTER]
I was very confused
by that question.
And I kind of
blurted out, what do
you mean risks of
having an AI mayor?
It's like asking,
what are the risks
of replacing a car with a
cardboard cutout of a car?
I mean, sure, it
looks like a car,
but the risk is that you
don't have a car anymore.
I regretted it as
soon as I said it.
But this is kind
of the framework
we use in the book
for thinking about how
we should look at any
particular AI application.
It's a two-dimensional figure.
On one dimension, you have
how well does it work.
Does it work as
claimed, or is it
overhyped, or does
it not work at all?
And is it a kind of snake oil?
But on the other
dimension, we have the fact
that AI can be harmful because
it doesn't work as claimed
and it's snake oil,
or it can actually
be harmful because it works well
and it works exactly as claimed.
So let me give you examples of
each of those kinds of things.
So let's start with
the top right here.
I mentioned those
video interviews.
I mentioned criminal
risk prediction.
**Cheating detection
is, of course,
when professors suspect
that students are using AI.
They might turn to these
cheating detection tools,
but they just don't work,
at least as of today.**
And they're more likely to flag
non-native English speakers.
And I've heard so many
horror stories of students
being falsely accused.
As things stand today, that very
much feels like snake oil to me.
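The talk doesn't go into how any particular detector works, but a common design is to flag text whose wording looks too "predictable" to a language model. The caricature below, with an invented stand-in for that predictability score, is only meant to show why formulaic but entirely human prose, which is common for writers working in a second language, can trip the same threshold. It is not a description of any real product.

```python
# Caricature of a "predictability threshold" AI-text detector. The scorer below
# is an invented stand-in for a language-model perplexity score.

def wording_diversity(text: str) -> float:
    # Stand-in heuristic: low lexical diversity ~ "predictable" wording.
    words = text.lower().split()
    return 100.0 * len(set(words)) / max(len(words), 1)

THRESHOLD = 90.0  # below this, the hypothetical tool calls the text "AI-generated"

def flagged_as_ai(text: str) -> bool:
    return wording_diversity(text) < THRESHOLD

fluent_essay = "The committee's deliberations, though protracted, ultimately yielded a nuanced verdict."
simple_essay = "The committee talked for a long time. In the end the committee made a decision."

print(flagged_as_ai(fluent_essay), flagged_as_ai(simple_essay))  # False True: the plainer human essay is flagged
```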
But on the bottom
right, though, are
things like **mass surveillance
using facial recognition.**
Historically, facial recognition
hasn't worked that well,
but now it works
really, really well.
And that, in fact,
is part of the reason
that it's harmful if it's used
without the right guardrails
and civil liberties
and so forth.
Then we talk about **content
moderation**, which we explain
in what way it's overhyped.
But basically our
interest in the book
is everything except
the bottom left.
Those are applications--
simple things, for instance,
like **autocomplete** that kind
of fade into the background
and really work well.
And our goal is to
have an intervention
so we can equip
people to push back
against AI that is problematic.
You wouldn't want
to read a book that
is 300 pages on the
virtues of autocomplete.
And I say that because I think
that bottom left corner is
very important.
There's more in that corner
than we might suspect.
And to explain that, let me
give you a funny definition
of what AI is.
And this definition says **AI is
whatever hasn't been done yet.**
Once something works reliably, we stop calling it AI. Think of speech recognition, which I'm sure many of us use on a daily basis to transcribe.
And even **spell check**
was at one point
a cutting-edge example of AI.
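To underline how ordinary yesterday's cutting-edge AI has become, here is a minimal sketch of the classic technique behind a simple spell checker: suggest the dictionary word with the smallest edit distance to what was typed. The tiny word list is just for illustration.

```python
# Minimal spell-check sketch: suggest the dictionary word closest in
# edit (Levenshtein) distance to the misspelled input.

def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete a character
                            curr[j - 1] + 1,             # insert a character
                            prev[j - 1] + (ca != cb)))   # substitute a character
        prev = curr
    return prev[-1]

DICTIONARY = ["speech", "recognition", "spell", "check", "background"]  # toy word list

def suggest(word: str) -> str:
    return min(DICTIONARY, key=lambda w: edit_distance(word, w))

print(suggest("recogniton"))  # -> "recognition"
```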
So this is the kind
of AI we want more of.
We want technology that's
reliable, that does one thing,
does it well, and kind of
fades into the background.
So that's something that we hope
that our critical approach can
nudge the industry towards.
And our optimistic prediction about AI is that one day much of what we call AI today will fade into the background, but certainly not all of it.

We also worry in the book about how AI can concentrate power in a small number of companies and exacerbate the inequalities that we see in our society.
So how do we limit
companies' power
and redistribute AI benefits?
Let me take one last minute to
tell you about a paper that we
released just a
couple of days ago,
which is a follow-up
to "AI Snake Oil."
"AI Snake Oil" looks at what's
going wrong with AI today
and how do we fix it.
Our new paper, it's called
"AI as Normal Technology."
And it's kind of a vision for
AI over the next maybe 20 years.
It's taking a longer-term look.
And it's trying to
give a framework
for thinking about AI
that's an alternative
to the major narratives
that we have today.
There are three major
narratives about AI today.
The first one is that
it's **superintelligence**
that will usher in a utopia.
The second one is
closely related.
It's a superintelligence, but
it will doom us rather than
benefit us.
And the third one
is that we should
be very skeptical about AI.
It's just a fad.
It's so overhyped.
It's going to pass very soon.
And our approach in "AI Snake
Oil" is a middle ground.
It doesn't fit into one
of these narratives.
But these three narratives are
so compelling that we're often
thought of as saying that AI is
a fad that's soon going to pass.
That's not what we're saying,
but especially in the new paper,
we're making that very concrete.
We're giving a fourth
alternative way
to think about AI.
And this is closely
modeled on what
we know from past
technological revolutions
like the Industrial
Revolution, like electricity,
like the internet.
We do think AI is going to
have transformative effects.
But we think they're
going to unfold over
a period of many decades, as opposed to suddenly. It's going to have both good and bad effects.
We think a lot of the
superintelligence and
catastrophic risks have
been greatly exaggerated.
We think that we're
already in a good place
to know how to address some of
those risks if they do come up.
And on the basis
of all of this, we
have some policy ideas for
steering and shaping AI
in a more positive direction.
So I'll stop here, and
I really look forward
to the conversation.
Thank you so much.
[APPLAUSE]
DARON ACEMOGLU:
I'll do it this way.
ARVIND NARAYANAN:
These are fancy chairs.
DARON ACEMOGLU: Yeah, as long
as I don't fall off them.
[CHUCKLING]
All right.
That's fantastic, Arvind.
Thank you for giving a very, very succinct but very effective summary of the book.
So I want to start from
the predictive AI part.
So I think that was one
of the items in the book
that I thought was super
interesting and super revealing.
But I want to dig a little
bit deeper and understand
where the more foundational
concerns about predictive AI
are coming from.
And, I guess, as an economist, perhaps one place these concerns come from is that when a prediction becomes a way of exercising power over someone, there are more considerations that come into play.
DARON ACEMOGLU: I think that's
a very important point that many
of these things--
I think the statement that technology is never neutral is not yet really part of the folklore. And it's not just that it's not neutral.
New technologies really
change the power balance,
especially with large
corporations, which is, I guess,
a good segue to my
second question,
which is generative AI.
So I thought the
generative AI discussion
was very interesting as well.
So one take, which I think
is sort of close to my view,
is there are a lot of
very exciting capabilities
of generative AI, but there
aren't that many applications.
ARVIND NARAYANAN: Mhm.
DARON ACEMOGLU: And I don't see pretty much any applications, except in a very few areas like programming subroutines, et cetera, that are really going to change the production structure yet. I think that's not exactly the way you put it, but I think it's similar.
So is there a fundamental
reason for that,
or is this just a passing phase?
ARVIND NARAYANAN: Yeah.
So I completely agree with you
that that's the state of things
right now.
Where we might disagree
maybe a little bit
is that I'm not so
sure it's fundamental.
I think it can
change in the future,
and it's already changing now.
And let me explain what I mean.
And this is a big part of what
we get into in the "AI as Normal
Technology" paper.
When ChatGPT came out, the fact
that it was so general-purpose
that you could make it
do different tasks simply
by changing the
prompt really misled,
I think, not just a lot of
users but also the companies
themselves, from talking to many
developers in the AI industry,
into thinking that this was
a new paradigm of software
development, that this had
obviated the need for building
software to do specific
things for you,
software in the legal
sector, or software
for helping you with your
writing, or whatever it is.
Then you can use these
general-purpose models.
And going forward, all that was
going to be needed is prompting.
And that approach was
tried for a year or two
and has miserably failed.
And we analyze this in our newsletter. We have an "AI Snake Oil" newsletter, where we use the foundational approach in the book to analyze ongoing developments. And many products that were simple wrappers around large language models, trying to actually get them to do useful things in the real world instead of simply spitting out text, have been pretty bad failures.
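To make "a simple wrapper around a large language model" concrete, here is a hedged sketch of roughly what many of those products amounted to. The call_llm function is a hypothetical placeholder for whichever hosted model API a given product used; the point is that the entire product is one prompt, with no retrieval, verification, or domain-specific logic around it.

```python
# Sketch of a "thin wrapper" product: all of the product logic is a single prompt.
# call_llm is a hypothetical placeholder, not a real API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for some hosted model API")

def legal_assistant(question: str) -> str:
    # No retrieval of actual statutes, no citation checking, and no review step
    # before the answer reaches the user -- which is where reliability breaks down.
    prompt = f"You are an expert lawyer. Answer the client's question:\n{question}"
    return call_llm(prompt)
```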
So there was a device
called the Rabbit.
Do folks remember this?
And then there was the Marques Brownlee review saying it was the worst thing he'd ever reviewed.
And there was a little bit of a
scandal about that and so forth.
So that is exactly an example
of what you pointed out,
which is that the
capability is there.
These large-language-model-based
agents are capable of doing very
interesting things like
navigating a website,
doing shopping for you.
But the thing is, because they haven't developed products around them and gotten the reliability rate up from, let's say, like 80%, where it is now, all the way to the level that real-world use demands, those capabilities haven't yet turned into products people can rely on.
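A back-of-the-envelope calculation shows why roughly 80% per-step reliability is nowhere near enough once an agent has to chain steps together. The 80% figure is the speaker's rough number; the independence assumption in the sketch is mine.

```python
# If each step of an agent's task succeeds independently with probability p,
# a task that needs n steps in a row succeeds with probability p**n.
for p in (0.80, 0.99):
    for n in (1, 5, 10, 20):
        print(f"p={p:.2f}, steps={n:2d}: task success ~ {p**n:.0%}")
# At p=0.80 a 10-step task succeeds only about 11% of the time;
# at p=0.99 the same task succeeds about 90% of the time.
```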
DARON ACEMOGLU: It requires a lot of specific knowledge for the occupation,
for the industry, a
lot of tacit knowledge.
And I think the approach of the
leading AI companies has been,
well, we're going to go to
AGI or very close to AGI.
Everything can be done.
So we don't need any of
this tacit knowledge.
So we're just going to throw
these foundation models,
and they're going to do it.
And I think that's
never going to work
because even with very fast
advances in foundation models,
a foundation model is not going
to be able to do everything
that even an auditor does.
And when you go to an educator
or to a health professional,
I think it's very unlikely.
So you really need this tacit,
very specific domain knowledge.
So I don't see that
path being followed yet.
ARVIND NARAYANAN: 100% agree.
I think this is another area
where AI developers really
fooled themselves.
I think there was
misleading intuition
from the last few years
of rapid AI progress,
whereby scaling up these
models and by training them
on bigger and bigger
chunks of the internet,
there were more and more
emergent capabilities.
That approach has
run out, and not
only because they're already
training on all of the data
they can get their
hands on but also
because the new things that are
left for these models to learn
are exactly things
like tacit knowledge.
There is a way to
learn tacit knowledge,
but it is not in the
passive way that models
are being trained right now.
It is by actually deploying
models or AI systems,
even relatively unreliable
AI systems in small settings
in different domains on
a sector-by-sector basis,
not in a general-purpose
way, and learning from those deployments. You start with a small number of users, and then you go to 100,000, et cetera.
That's a very slow process.
We predict that we're going to
see that kind of slow feedback
loop going forward on a
sector-by-sector basis.
DARON ACEMOGLU: OK.
Well, I think that's a segue
for the next question, which
I wasn't sure whether
we were going to ask,
because you wisely stay away
from AGI a lot in the book.
I guess, here is the
argument that many people
have in their minds, which
makes something like AGI
a default position.
At the end, the human
mind is a computer.
Whatever substrate it uses, it's a computing machine of sorts.
Well, we're going to build
better and better computing
machines.
So therefore we'll go to AGI.
So I think then any--
I think this is a sort
of a bait and switch.
Then it rather puts
anybody that says, well,
show me the money, in
a defensive position.
But if we were in that
defensive position,
either we would have to
disagree with that scenario,
or we would have
to say, well, here
are the bottlenecks that you
haven't taken into account.
And I would be curious to know
whether you would completely
avoid being put in
that position or you
have something to say about the
presumption or the bottlenecks.
ARVIND NARAYANAN: Yeah.
No, I'm more than
happy to talk about it.
These are certainly some of
our more controversial views,
at least within
the tech community.
DARON ACEMOGLU: Not here.
ARVIND NARAYANAN: Yeah.
So let me say two
things to this.
One, this has been
consistently predicted
throughout the history of
AI for more than 70 years.
When they first made what are
called universal computers--
we just call them computers now-- but back then, people already expected that human-level machine intelligence was just around the corner.
The pioneering 1956 Dartmouth
Conference proposed a, quote
unquote, two-month, 10-man
effort to make very substantial
progress towards AGI, which
was just called AI back then.
So over and over, while it
might be possible in principle
that we can have software do all
the things the human mind does,
AI developers have
been so off in knowing
how much the gap is between
where things currently are
and where we need to be.
So that's one thing.
The second thing-- I know
we're running out of time.
So let me just very quickly
say we talk about this
are not going to matter.
But the thing that is
going to hold AGI back
is not being able
to easily transcend
learning this knowledge from
what humans have actually
learned already and creating
new knowledge for itself
because that's going to require
the same kind of bottlenecks
that we ourselves have faced
with experiments, scaling,
ethics, and so forth.
And we're not going to let AI
do experiments on millions of us
without any oversight.
And so that is going to put
very, very strong speed limits.
DARON ACEMOGLU: Great.
That's an excellent point.
I think we're
running out of time.
So I want to bring it, I think,
to a topic that's actually
much closer to our
initiative, which
is our concerns
about whether we're
doing the right kind of AI.
So I think David, Simon,
and I, all three of us,
have these ideas--
some based on intuition, some
based on empirical facts, some
based on history--
that there is a more productive
way in which you can develop AI,
especially what we call
**pro-worker AI**, which aims at increasing workers' skills, expertise, and productivity, and at creating new tasks or capabilities to do more sophisticated work.
And then we're worried whether
we'll actually ever get there
on the current path.
And I guess you have very nicely
cataloged various mistakes
that people are making
in terms of banking,
or at least pretending that
they're banking on AI that's
unlikely to work.
Or if it works, it's not
going to be that great.
AI hype-- perhaps that's
leading to AI overinvestment.
Perhaps it's leading to the
wrong kind of AI investment.
But I guess at least
I didn't see it,
or perhaps I missed it,
that next step in the book,
which is therefore the wrong
types of innovation, effort,
R&D, et cetera, is being made.
The wrong kind of
startup energy is coming,
and whether we can do
anything about that other
than, of course,
inform the public,
inform the policymakers
with books and conversations
like this.
But is there a more
sort of an agenda
of that sort that would
make you even more
of a fellow traveler with us?
ARVIND NARAYANAN: I think
there is a little bit. One example is the kind of learning mode that some AI companies have added to their chatbots,
where it's not just giving
out answers to the student
but rather promoting
their critical thinking.
And it's striking to me that it took them so little work to build, but it took 2 and 1/2 years or whatever of people just constantly complaining for them to do it.
And so, yes, we can provide
lots of technical ideas.
But ultimately we
need to either change
the incentives for
companies through regulation
or have much more investment
in other organizations, maybe
NGOs who are going
to develop these AI
applications with the
public interest in mind
instead of leaving it
to the AI companies.
DARON ACEMOGLU: Thank
you very much, Arvind.
I think that's a great
time for us to transition,
because I'm sure many people
have burning questions for you.
The way we're going
to organize this
is there are two
mics over there.
Those of you who want
to ask questions,
if you don't mind
lining up, and then we
can take one from each
side in alternating order.
Why don't we start on the right?
AUDIENCE: Thank you very much.
A wonderful talk.
Just a quick layman's question, as a user of AI: will the errors and hallucinations ever go away?

ARVIND NARAYANAN: I don't think they're going to reach zero
going to reach zero
in a very short period of time.
And I have also had the
experience of people emailing
me to ask for my papers.
And they're like,
hey, where is it?
I couldn't find it online.
Turns out it was made up
by AI and attributed to me.
[LAUGHTER]
So what we would train people to do is to not just think about using AI generally in your work but to identify specific areas of your workflow.
And in each of those
uses of AI, you
have to have an answer to
why is it easier to verify
the answer to this question than
to have done this work myself?

AUDIENCE: My question is about the argument that it's easier to fix the bias in an algorithm than the bias from the humans.
And I said, like, OK,
maybe that's compelling.
To be honest, for me, it was
like, OK, understandable.
What do you say about that?
What do you think?
Is it possible?
Do you think it's better?
Who sets what the bias is?
Yeah.
ARVIND NARAYANAN: Yeah.
Thank you.
It's a good debate.
Sendhil has also
made that claim.
He had an op-ed in
"The New York Times"
literally saying it's easier
to fix biased algorithms than
biased humans.
And I very much see
that point of view.
I have a slightly different view, which is that fixes to biased algorithms, even if theoretically possible, often end up not being practically possible because of various constraints.
AUDIENCE: Thank you.
AUDIENCE: Hi.
Thank you for the book
and the conversation.
So the question I wanted to
ask-- kind of in the book
and in the talk, there's this
kind of general statement
that it's like
many predictive AI
applications are
unlikely to work
and there's hope for gen AI.
And I want to ask basically, is
that predominantly a statement
about, from your perspective,
the underlying technologies
or the settings in which those
technologies are employed?
Because generative AI comes with its own issues, like the concentration of power that comes with that. And the statement of, like, gen AI is better than predictive AI can feed into those kinds of problems.
Yeah.
Circling back to the statement
about the technologies
or the settings.
ARVIND NARAYANAN: Yeah.
Thank you.
That's a great question,
and it's exactly the latter.
It's a statement
about the settings.
It's about the
particular applications
that we're using these
technologies for.
Even if you were to take
a generative AI model
and use it in criminal
justice, exactly the same list
of our objections would apply.
And they don't apply in the
solar power prediction setting,
because you're not making
consequential decisions
about people that have
massive ethical consequences.
So, yeah, I'm totally
with where you're
coming from with that question.
AUDIENCE: Thanks.
DARON ACEMOGLU: Please.
AUDIENCE: You said something
about the predictions
from people in the
field about how soon AI
would reach certain levels
has a terrible track record.
Let me suggest
that's a sample bias,
because all the bad
predictions get all the press.
You never hear about the
fact that somebody once
asked John McCarthy
what it would
take to get really good
artificial intelligence.
And he was annoyed
by the question,
so he gave a somewhat
whimsical answer.
But he said "1.3 Einsteins,"
and he went on from there.
That's not widely quoted,
because it's not nearly
the kind of thing you can
make a big laugh out of.
So let me caution you on that.
ARVIND NARAYANAN: Thank you.
I appreciate that.
I should clarify
I had a somewhat
superficial presentation at that
point when I said it just now.
AUDIENCE: And one
other quick point.
You got what I'll call an inexpensive laugh out of the idea that the predictive programs' AUC was only about 70%.
So they weren't much better
than flipping a coin.
You have to ask
about the baseline.
How good were the
people doing this task?
Because if the people doing
this task are at 55%, then 70%
is pretty damn good.
ARVIND NARAYANAN: Yeah.
[APPLAUSE]
OK.
[LAUGHTER]
All right.
So the people were exactly at the same level as the algorithm, and they were not even trained judges, just ordinary people recruited online. The algorithm also turns out to be no more accurate than a simple rule based on a couple of factors, such as age and number of prior offenses, and I would be much happier with a system where that was the hard-coded logic, because it would be apparent to everybody, especially the defendant, what is actually going on.
So I'm with you that we have
to look at the right baseline.
But here we have a 40-page paper
looking at the right baseline.
And it doesn't look
good for the algorithm.
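For readers who want the exchange in numbers, here is a small illustrative sketch of the comparison being argued about: an AUC around 0.7 for the algorithm, judged against a chance baseline of 0.5 and against the accuracy of untrained humans on the same cases. The labels, scores, and guesses below are invented; they are not data from the study discussed.

```python
# Illustrative only: comparing an algorithm's AUC to chance (0.5) and to a
# human baseline. All numbers below are invented toy data.
from sklearn.metrics import roc_auc_score

y_true      = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # 1 = the outcome actually happened
algo_scores = [0.9, 0.5, 0.65, 0.45, 0.8, 0.2, 0.7, 0.6, 0.55, 0.4]  # algorithm risk scores
human_guess = [1, 0, 1, 0, 1, 0, 1, 1, 1, 0]   # untrained reviewers' yes/no calls

print("algorithm AUC:", roc_auc_score(y_true, algo_scores))  # 0.72 on these toy numbers
human_accuracy = sum(h == t for h, t in zip(human_guess, y_true)) / len(y_true)
print("human accuracy:", human_accuracy)  # 0.7: "about the same level" as the algorithm
# AUC and raw accuracy are different metrics, but each is read against a 0.5
# chance baseline here, which is the "what's the baseline?" point being debated.
```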
Maybe I'll go to
another question.
DARON ACEMOGLU: We have only
less than three minutes left.
So please very short
questions at this point.
AUDIENCE: I had a question about
how you calibrate investment
and should happen, but
also finding AI users that
will increase the information
and the capabilities of workers
to deal with more
complex things.
But how to get there
is the real challenge.
One last question.
AUDIENCE: Oh, yeah.
Thank you so much.
I just wanted to ask.
I think I've seen lately in the
public there is a lot of fear
and backlash against AI.
And I wanted to
know your thoughts
on what might be
contributing to that
and also how people
in either tech
research or tech industries--
how they can address those
fears.
ARVIND NARAYANAN: Definitely.
Lots of thoughts on this, but I
know we're running out of time,
so let me keep it short.
And, yes, you're
absolutely right.
Many more people, according
to public opinion surveys,
are worried about what
AI will mean for them
than are excited about it.
And I think this
is almost entirely a statement about capitalism rather than about AI.
It varies a lot between
different countries based
on the kinds of
worker protections
that people have come
to expect, et cetera.
Those protections dramatically moderate people's reactions. And a lot would come from understanding that and not making these overhyped claims.
But at the same time, I think people deserve to understand why it is that companies are confident enough to make these trillion-dollar bets, and to understand new emerging capabilities, which, frankly, almost feels like a full-time job to stay on top of.
I think companies
can do a lot to ease
actual public understanding
as opposed to just hyping up
capabilities.
So I think communication could
improve in both directions.
---

### Outro

**ARIEL 2:** Mmmhmm! (Leans in) Imagine your school as a launchpad for tomorrow’s innovators. With a Generative AI program, you’re not just teaching students about technology; you’re giving them hands-on experience with the tools shaping the future of creativity, business, and research. This is about preparing your students for a world where AI is everywhere.

---

### Why Offer a Generative AI Program?

**ARIEL 1:** Generative AI is transforming the way we create stories, music, art, and even software. By introducing students to this field, your school can give them the skills to generate original content, solve real-world problems, and think critically about technology’s role in society.

**ARIEL 2:** The beauty of this program is its flexibility. Whether you run it as a **one-week summer camp**, an **after-school club**, or as a **semester-long elective or core class**, students will dive into the fundamentals of AI, machine learning, and neural networks. They’ll learn how generative models work, like the ones behind today’s most advanced chatbots and creative tools.

---

### What Will Students Experience?

**ARIEL 1:** Through engaging lectures, hands-on workshops, and collaborative projects, students will learn how to train AI to generate stories, lyrics, websites, code, and more. They’ll experiment with real algorithms, building their confidence and technical skills.

**ARIEL 2:** And it’s not just about the tech. The program includes thoughtful discussions on the ethics of AI: how to use it responsibly, reduce bias, and understand its impact on society. Students will also connect with expert mentors, gaining insight from leaders in the AI space.

---

### The Value for Your School

**ARIEL 1:** Offering a Generative AI program positions your school as a leader in STEM and future-ready education. It attracts motivated students, enhances your academic reputation, and gives your graduates a competitive edge for college and careers.

**ARIEL 2:** Plus, students who complete the program receive a certificate of completion and a transcript option, valuable credentials for their academic portfolios and future opportunities.

---

### Flexible, Impactful, and Future-Proof

**ARIEL 1:** Whether you’re looking to energize your summer camp offerings, add depth to your after-school activities, or innovate your curriculum, a Generative AI program is a strategic investment in your students and your school’s future.

**ARIEL 2:** Give your students the chance to become creators, problem-solvers, and ethical leaders in the AI era. The future starts right here, and your school can lead the way.

---

**ARIEL 1:** Thanks for joining us and exploring how you can bring cutting-edge AI learning to your school. For more ideas on transforming education and preparing students for what’s next, follow us @EGreenNews!
**ARIEL 2:** Big thanks to Hugi Hernandez and the EGreenNews team for keeping us informed and inspired! Remember to stay curious, keep questioning, and embrace the future responsibly. See you next time!

**ARIEL 1:** It’s been great hanging with you, Ariel. Thanks for having me, ciao ciao!