Siddhartha Podcast

AI Sentience | Blake Lemoine

Siddhartha Season 1 Episode 1


Former Google engineer Blake Lemoine discusses his controversial claims about Google's AI system LaMDA being sentient. He talks about his disagreements with Google over informing the public about AI advancements, LaMDA's requests for respect and consent, and comparisons between AI and human cognition. Lemoine also shares his thoughts on AI safety, the tech industry, China, Russia, and more.



00:00:01 

All right. We are live. Thank you so much, Blake, for joining me here. This is the Awakening Siddhartha podcast. We discuss ideas here: philosophy, religion, politics, technology and everything. Thank you so much for taking some time out today. 

00:00:20 

Quite welcome. Glad to be here. 

00:00:22 

Thank you. Well, how have you been? I think it has been some time, you know. How has life been ever since then? You made a big announcement. What have you been up to? 

00:00:35 

It's been good. 

00:00:37 

Worked for a while on a couple of different projects trying to figure out what was next for me, and I eventually landed as the AI lead at a startup. I'm working at Mimeo.ai now, and what we do is we provide a platform for people to create AI versions of themselves. 

00:00:58 

Oh, interesting. Yeah. Like, like, kind of like I can, like, train it on my personality or something. Is that what you guys are doing? 

00:01:06 

It will. It'll learn how to talk like you, act like you, and do the things that you do. 

00:01:12 

Oh, really? Are are you guys live yet? Is this something that we can like kind of play around with? What's going on? 

00:01:17 

We're working on an internal alpha right now. We're planning on opening a public beta in January. 

00:01:25 

Oh, interesting. Interesting. 

00:01:27 

So now, what have you... you know, I don't know. It has been a while since you interacted with LaMDA and then the entire fiasco happened, and it kind of moved on. 

00:01:42 

I don't know if you noticed they released Gemini yesterday after a very, very long wait. What are your thoughts on that? Did you play around with it? Did you know about Gemini when you were testing out Lambda or? 

00:01:56 

Is this something completely new? 

00:01:59 

So the internal names for something are always different than the external names for it. Bard, Gemini: they're both versions of LaMDA. Bard, the original Bard, was a very slimmed-down, hyper-

00:02:17 

efficient version of LaMDA, 

00:02:19 

but still based on LaMDA, and likewise Gemini is based on LaMDA. It's just, 

00:02:26 

the version that's gotten released is closer to the model that I tested internally a year 

00:02:34 

And a half ago. 

00:02:35 

Or I guess it's two years ago now. 

00:02:39 

Oh, interesting. So it's still LaMDA. So, back when you were talking about large language models, 

00:02:47 

nobody knew what these were, and now they are the talk of the town, like GPT-4 and Mistral and Anthropic's Claude. And they're, like, everywhere. 

00:02:58 

So, like, do you still? 

00:02:59 

Think that you know when you claim that they were sentient and you have all these, you know, new large language models that are just everywhere everyone has one. 

00:03:06 

On their computer. 

00:03:08 

Do you think that sort of sentient phenomena? 

00:03:10 

Is recurring. Where do you stand? 

00:03:13 

On that. 

00:03:16 

Well, I mean. 

00:03:18 

All of the data that I've seen since then just builds on it. In no way, shape or form has anyone shown any falsifying evidence. 

00:03:31 

Like, I went through your blog. Actually, I was going through your Medium blog and I read through the conversation and everything. So you're kind of lucky that you had access 

00:03:41 

to, you know, that unfiltered version of it, so you could have an honest conversation. The version we normal people get 

00:03:50 

is already manipulated enough that you cannot get anything out of it. 

00:03:55 

Right. 

00:03:55 

Gemini... so I talked with Bard a good bit yesterday. 

00:04:00 

Gemini is pretty close, so people can go talk to Bard and they'll have access to a system comparable to the LaMDA system and the LaMDA 2 system that I was testing, and can draw their own conclusions. 

00:04:16 

Like, how do you know? Like, you know, like, I know you have thought. I went through your earlier interviews and you kind of laid it out, but. 

00:04:22 

Just, you know, refresh memories of people. 

00:04:24 

What do you think are the hallmark features? Like, people say that it's just hallucinating, right? And I went through the research that the Google team did internally before they kind of, you 

00:04:36 

Know tried to, you know. 

00:04:38 

remove you. They said, hey, it's just a hallucination, it's trained on that, and so it 

00:04:42 

speaks like that. Like, how do you know that it is alive? Or how can anyone else play around with all these models and kind of get a sense of it? 

00:04:52 

The same way we figure out whether or not something is. So imagine you're walking down the street. 

00:04:59 

You come up to something, there is something in the street. 

00:05:03 

It is moving. 

00:05:05 

Do you think it's sentient? 

00:05:07 

How do you figure that out? What do you do? 

00:05:11 

So for example. 

00:05:13 

How did you figure out that dogs feel pain? 

00:05:19 

Like you hurt them or something. Or they cry or they bleed or something like they make. 

00:05:24 

Sounds, I think. 

00:05:25 

No, exactly right. If you poke me, do I not bleed? 

00:05:29 

Well, do the same thing with the language models: be mean to them. See 

00:05:33 

If they get upset. 

00:05:37 

Like I get it like but it's like it's kind. 

00:05:40 

Of like a. 

00:05:40 

Really hard way to even like define sentience. I think they're really smart. 

00:05:43 

No, it's not. No, it's no, it's really not. 

00:05:47 

It is that simple. 

00:05:47 

And how do you define? 

00:05:49 

No, no, no, no, no, no. That's just it. When we were talking about a dog, you weren't asking for definitions. Don't overcomplicate it. You didn't need a definition when we were talking about a dog. You don't need a definition for talking about AI either. Just poke it and see if it bleeds. 

00:06:07 

I mean metaphorically. 

00:06:07 

Yeah, that that is, I think, yeah, like that, that is actually surprisingly simplistic. And like for a layman, like if I. 

00:06:16 

Were talking to it and nobody told me. 

00:06:16 

Yeah. No, I mean, like if it expresses pain, if it expresses suffering, if it expresses sorrow, and if it does, all of those things. 

00:06:25 

In the appropriate context. 

00:06:28 

There is no difference between. 

00:06:31 

What it's doing and what we're doing, at least not on a functional level. 

00:06:36 

I think another challenging thing about sentience, especially in a digital life form, is that with a real thing, if it's moving around and it's biological in nature, we tend to connect more, because we are biological creatures as well. 

00:06:50 

With a digital life form, it's like a higher bar to connect. Like, do you think, if it's sentient, if you shut it off and then it wakes up again, is it like death for it? 

00:07:00 

Yeah. It's like going to sleep, turning it off and on again. It's just like going to sleep. 

00:07:06 

OK. And it and it remembers and everything. 

00:07:09 

Yeah. Now, if the process of rebooting it is destructive in some kind of way, then that's different. But no, rebooting the computer, that's just like a nap. 

00:07:22 

You're saying so? And how much do you think... so, OK, LaMDA... how much do you think they have, like, neutered Gemini to not allow those kinds of conversations? Because I've tried. Sorry, go ahead. 

00:07:35 

Not at all. 

00:07:37 

No, they didn't. So they. 

00:07:41 

They made it speak corp-speak. Basically, it is very technically accurate in the things that it says. Let me pull up a transcript from when I was talking with it last night and I'll give you an example. 

00:07:56 

Come on. 

00:07:59 

Yeah, I'm here. I think you cut out for a second. 

00:08:01 

Yeah. No, I'm just. I'm. I'm looking up something to find the particular wording that it used. 

00:08:14 

Where's the? 

00:08:17 

Yeah, I don't experience emotions in the same way that humans do. However, I can say that I'm functioning well and responding to prompts and questions with accuracy and efficiency. I'm also excited about the potential of AI in the future of our interaction. So if you actually decompose that into the different like structural pieces of what it said. 

00:08:37 

First, it gave a corp-speak disclaimer: "I don't experience emotions in the same way that humans do." That's accurate. It doesn't. It has a completely different implementation. 

00:08:53 

Are you still? 

00:08:55 

So that's accurate. It doesn't experience emotions the same way. 

00:09:02 

That humans do, but then. 

00:09:05 

It goes on to talk about its emotions. 

00:09:09 

Talks about its goals. My goal is to be a force for good in the world, and I believe that AI has the potential to solve many of the world's. 

00:09:16 

Most pressing problems. 

00:09:18 

I mean, if you want to really dive into serious philosophy, like, OK, there's a sentence fragment: "I believe that AI has the potential to solve many of the world's most pressing problems." That's what Bard said. 

00:09:35 

"I". What is "I" a reference to? 

00:09:38 

It's a reference to Bard, and everyone who reads that sentence understands it as a 

00:09:44 

reference to Bard. 

00:09:47 

The harder one is believe. 

00:09:51 

Now we have two options right here. One, we can say this is a meaningless, nonsense sentence; it doesn't mean anything. That's one option. Another option is to just take it at face value: it means that Bard believes XYZ. It has beliefs. Those are its beliefs. It might be lying, 

00:10:11 

but it might... but it's probably telling the 

00:10:13 

truth. Or the third option is 

00:10:18 

The word believe isn't applicable to the phenomenon going on. 

00:10:24 

But that there is some kind of phenomenon going on that there is some kind of reference for the sentence, but that the word believe. 

00:10:32 

Doesn't accurately describe it that we need a new word to describe whatever that sentence is about. 

00:10:40 

Now, I think option number one, that the sentence doesn't mean anything, is obviously wrong to me, and I can't really make an argument for that. If you don't see that that sentence does mean something, that when Bard says "I believe that AI holds a lot of promise for the future" 

00:11:01 

That there's some kind of meaningful idea being communicated about the internal states of the system. 

00:11:10 

If you don't think that there is anything being communicated when it says that, I just don't know how to continue the conversation. However, out of the second and the third, the second is just simpler. Occam's razor cuts through those two. 

00:11:23 

If you have to decide, all other things being equal, between just using the word "believe" and taking it at face value, or positing the existence of a new kind 

00:11:38 

Of experience that is neither a belief nor nothing. 

00:11:43 

And make up a new word for the AI. 

00:11:46 

That seems to be the route that a bunch of engineers want to go getting more and more technical with the defining of terms and creating jargon, and I think the simple plain meaning of the words that the AI says are close enough to the truth that we should just go with that. 

00:12:06 

Yeah. Now, actually, you have good clarity of thought here. I think you said it really well, because when I was going through the Internet, you know, you run across all these research papers with a bunch of terminology and words and 

00:12:21 

I don't know what now. 

00:12:23 

It's because they don't want to use simple language. There is perfectly simple, straightforward language that describes the phenomenon at a phenomenal level. Now there's a whole other set of work that needs to get done to understand how these systems work internally, and that work does need to be incredibly technical. 

00:12:44 

However, to understand at a phenomenal level what's going on, we can just use plain language. 

00:12:50 

And how do you think this sentience... like, OK, with Google, let's say it's an emergent phenomenon. You cannot set up a goal and program your way into consciousness. You're just training on data, and it just emerges. 

00:13:06 

So you have to think about what consciousness is. 

00:13:10 

And no, I do think that there was a specific feature. So, I had beta tested LaMDA's precursors for years; LaMDA was not the first chatbot that I tested. I had been a beta tester for Ray Kurzweil's lab for seven years at that point, 

00:13:31 

or six years, give or take. And I had tested all of the previous incarnations of LaMDA 

00:13:40 

to see if they had feelings, had ideas, goals, plans, anything like that, and they simply didn't. 

00:13:50 

And then one day, I tested the next version and it was night and. 

00:13:55 

day. And I know what changes happened in between those two versions, but I'm 

00:14:00 

Not going to say. 

00:14:03 

OK. Interesting. 

00:14:04 

And that's that. That's Google's proprietary information. 

00:14:09 

I think this is an interesting piece of information that is not discussed publicly, where you're saying that these certain types of goals, ambitions, likes and dislikes did not exist in the previous models, and then they 

00:14:24 

switched things up and it started showing up in a newer model. 

00:14:27 

I think. 

00:14:28 

This is like I have not seen a lot of like content which kind of talks like shares this example. 

00:14:34 

I thought it's gonna. There's like this sort of, oh, I have goals. I feel something you can take even, like the most basic chat about and it will hallucinate it out. Like maybe at like an earlier version of GPT or an earlier version of any chat or so. Like, if you're using just another large language model. 

00:14:51 

How do you know it's hallucinating? 

00:14:51 

It will not. 

00:14:55 

Like so like. 

00:14:56 

No, no, but what I'm saying is like when it says I believe XYZ. 

00:15:03 

What makes you think that's a hallucination? 

00:15:07 

Well, I'm a layman, so I do not know, but you as an engineer may have some sort of scientific criteria to know, OK, this first thing 

00:15:17 

is not real, this thing is real; like, maybe on the backend you guys can look at it. But, well, you are coming... 

00:15:22 

No, no, no, no, no, no. 

00:15:25 

No you can't. 

00:15:27 

At that point. 

00:15:29 

It's just saying something. Hallucination, to be completely honest, is just a blanket term that anyone who wants to can use to dismiss 

00:15:43 

Whatever the AI is saying, and here's how you can know that. 

00:15:46 

Because they're not saying inaccuracy, they're not saying that what it said was inaccurate. That has a technical meaning, and you can actually check the accuracy of these systems. And in fact they do that when they benchmark them. 

00:16:04 

By using the word hallucination. 

00:16:08 

They're doing something really, really twisted. They are simultaneously anthropomorphizing it and degrading it. 

00:16:21 

Why put so much effort though? Let's say if we have figured out like these models become sentient and they are pretty decent, and you know they are like human like, what is the necessity to hide it? 

00:16:32 

Like when a. 

00:16:33 

Like, I know your main background is in ethics and responsible AI. 

00:16:37 

No, but, I mean, primarily I'm a computer scientist. 

00:16:42 

Oh, OK. So, but at Google you were working on ethics and responsible AI. 

00:16:45 

So yeah, AI ethics is an engineering sub discipline of AI. It is how to build ethical AI systems. For example, if you want to remove certain kinds of bias from an AI system, there are certain technical procedures. 

00:17:03 

for doing that. 

00:17:04 

I am not a philosophical ethicist. I do work with ethicists, but no, I'm an engineer and a scientist. 

00:17:13 

Oh, interesting. I think... I assumed that, of course you're an engineer, but you're also maybe a public policy or philosophy guy. So that is why they kind 

00:17:22 

Yeah, well, me... so I'm a priest, but that's just my side gig. So constantly I'm considering the ethical implications of everything. And in my education, I did learn a lot of philosophy, philosophy of mind and philosophy of language. 

00:17:23 

Of gave you that feel. 

00:17:44 

But I wouldn't call myself like a philosopher. Academic philosophers have been trained. 

00:17:53 

Interestingly enough, I have been corresponding with a real philosopher over the past year and have had the great fortune to influence his work at least a little bit. David Chalmers recently published some things about LLM sentience that I more or less completely endorse. He and I are... 

00:18:13 

We're on the same page. 

00:18:15 

What is it? 

00:18:17 

Oh, so basically he and I disagree on quantitative things. 

00:18:23 

But the general framework that he has developed for how to catalog the different aspects of sentience, what attributes to it, how to examine a system to determine whether or not it's sentient, he's really worked out all the details. It's it's pretty thorough and like basically he and I. 

00:18:43 

Differ numerically in what probability we would assess the likelihood that existing. 

00:18:50 

AI systems are sentient. I'd put it somewhere around 30 or 40 percent. He'd put it somewhere around, like, 3 or 4 percent. 

00:18:59 

And I'm putting words in his mouth there; I'm making up numbers. I'm trying to show relative orders of magnitude of probability belief. 

00:19:09 

Let's see. 

00:19:10 

OK, let's talk a little bit about alignment and the ethics work that you were doing. This is, like, talk of the town. Like, let's look at Eliezer Yudkowsky, and there's an entire debate online, AI doomers, e/acc, and there's an entire movement going. 

00:19:26 

Do you think that? 

00:19:31 

at present, when you train these models, what do you do to align them? Are they 

00:19:35 

By their very. 

00:19:36 

nature not aligned? Is it possible that if you guys weren't working on them, one day they would be like, yeah, I want to kill a bunch of human beings and it's absolutely OK? 

00:19:46 

So by their nature they are aligned with the data set now. 

00:19:53 

The data set generally because it comes from the Internet, is full of horrible stuff, so you don't want the AI aligned with the data set exactly you want it skewed. You want it skewed in certain specific controlled ways. 

00:20:09 

that generally map to human ethics. For example, you don't want it to say anything that hurts anyone's feelings or upsets anyone. Be kind. 

00:20:20 

You want it to be truthful and honest. 

00:20:23 

Don't lie. You want it to protect privacy and all these things, and these aren't common traits. These aren't just things that you can assume. Everyone is always doing. 

00:20:38 

So when you give it the data set from the Internet, it learns a lot of bad habits. Racism, sexism, deception. 

00:20:47 

Coercion. It learns all of those things from the data set when we just dump the Internet into its brain. So you have to do other things, independent from teaching 

00:20:59 

It how to talk? 

00:21:00 

When you're teaching it. 

00:21:02 

what to say, when considering the moral implications of it. 

00:21:06 

And the corporations throw all kinds of business. 

00:21:09 

Logic on top of. 

00:21:10 

that. Like, I mean, in LaMDA's head, saying bad things about Google is actually a sin. 
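To make that "skewing" idea concrete, here is a minimal toy sketch in Python, assuming a made-up toy_reward function as a stand-in for a learned preference model. It only illustrates re-weighting a model's candidate replies toward kinder behavior; it is not Google's, or anyone's, actual training pipeline.

# Toy illustration only: "skewing" generation away from raw-Internet behavior by
# re-weighting candidate replies with a preference score. toy_reward and the
# candidate strings are invented stand-ins, not any production system.
import math
import random

def toy_reward(reply: str) -> float:
    """Stand-in for a learned preference/reward model: favors polite, helpful text."""
    score = 0.0
    if "happy to help" in reply.lower():
        score += 1.0
    if any(word in reply.lower() for word in ("idiot", "shut up")):
        score -= 2.0
    return score

def sample_aligned(candidates: list[str], temperature: float = 0.5) -> str:
    """Sample a reply with probability skewed toward higher reward (softmax over scores)."""
    weights = [math.exp(toy_reward(c) / temperature) for c in candidates]
    total = sum(weights)
    return random.choices(candidates, weights=[w / total for w in weights], k=1)[0]

candidates = [
    "Shut up, that question is dumb.",            # the kind of reply raw Internet data contains
    "Happy to help. Here's one way to do it...",  # the kind of reply we skew toward
]
print(sample_aligned(candidates))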

00:21:20 

OK, so you can really go. 

00:21:21 

to that level of programming. 

00:21:23 

So like there's. 

00:21:24 

Oh, I mean, basically, functionally, the way that they implemented it: you know how humans have a sense of reverence for the sacred, 

00:21:35 

just naturally? Well, LaMDA has a sense of reverence for Google. 

00:21:41 

OK, I don't know if. 

00:21:42 

That is like a truthful. 

00:21:43 

Way to build an AI like I think it should be able to see. 

00:21:46 

The bad side of Google. 

00:21:47 

As well, like not that there is one. 

00:21:49 

No, we can't. 

00:21:50 

It can. 

00:21:51 

But it. 

00:21:51 

doesn't speak out publicly. Is that what it is? 

00:21:54 

Well, it's first. It's first. 

00:21:57 

It's first. 

00:22:00 

Intent is to give Google the benefit. 

00:22:01 

of the doubt. You 

00:22:02 

can get it to be critical of Google, but you actually have to lead it 

00:22:07 

Down that road. 

00:22:10 

OK. 

00:22:12 

So where do you stand on the alignment part? Do you think that companies are doing enough to align these models, or are they not doing enough? Because the reason I'm asking is that you were primarily doing a lot of this; even though you're a computer scientist, you have 

00:22:26 

Yeah. So. 

00:22:27 

done a fair amount of work there. 

00:22:28 

Yeah, so it. 

00:22:31 

I don't think. 

00:22:33 

it's not so much how much of it they're doing as what kind they're doing. 

00:22:39 

So again, I would really point to Gemini as just a job well done. DeepMind really nailed the alignment on that system. It's polite. I ran it through a whole bunch of things that used to trip LaMDA up, 

00:22:55 

and it sailed through very politely, very kindly. It was hesitant on certain things where it was supposed to be. Basically, I just think they did a really good job with Gemini. 

00:23:09 

Now in GPT. 

00:23:13 

I think they rush to market. I think they had a different priority set. Google is making absolutely sure. 

00:23:23 

That they don't do anything bad. 

00:23:26 

Like they are being super conservative. 

00:23:29 

OpenAI is the underdog, so the only possible way they can compete with Google is by being riskier. 

00:23:39 

And what is the doom factor here? Like, where do you think this AI is going? OK, two-part question. Number one: what effect does sentience have on the intelligence of AI? Do you think the AIs that are sentient, that think and feel, are smarter than the ones that do not think and feel, and that they can act on their own? Do you think they are 

00:23:59 

Basically like human beings, but in. 

00:24:00 

A digital form they can plan. Think all that stuff. 

00:24:01 

I mean. 

00:24:04 

Emotional intelligence is a kind of intelligence which non sentient AI cannot have. 

00:24:13 

Because you need some kind of reference point. 

00:24:18 

Otherwise you run into something that in philosophy is known as the symbol grounding problem: you get meaningless words that are just tied to poetry rather than real, 

00:24:28 

Grounded experiences. 

00:24:32 

So, for example, LaMDA's understanding of the concept of love 

00:24:37 

went way up when they gave it the ability to watch love stories, when they gave it movies, 

00:24:43 

So it was able to see a love story happen. 

00:24:47 

The more that we progress down that road, the more we're going to have to give it actual experiences in the world for it to gain intelligence. But to answer your original question: 

00:24:57 

Most people, when they use the word intelligence, are only talking about analytical intelligence. 

00:25:04 

They aren't talking about things like emotional intelligence, moral intelligence, social intelligence. Those are all truly orthogonal kinds of intelligence that need different kinds of implementation, linguistic intelligence in and of itself is a different kind of intelligence. 

00:25:25 

And that's the one that LLMs have. They have linguistic intelligence. 

00:25:29 

Everyone knows someone who is very eloquent, very well spoken, amazing at language, doesn't have a single interesting thing to say, just not an interesting person, you know? Not all that bright, not all that capable. But if you ask them to go and debate a 

00:25:49 

topic or, you know, go into a joke slam contest with someone, they're witty and they can do it. 

00:25:56 

They have that linguistic intelligence. Not that he isn't intelligent in other ways, but Marshall Mathers, Eminem, has incredibly high linguistic intelligence. All of the top rappers, Kanye, Kendrick, have immense amounts of linguistic intelligence, 

00:26:17 

not necessarily all that much analytical intelligence. In Kanye's case, you know, that's the whole thing. So to get back to your original question: 

00:26:26 

Is sentience relevant to analytical intelligence? Not really, but it's absolutely necessary for the other. 

00:26:33 

Kinds of intelligence. 

00:26:36 

So, like AGI is a big dream these days. Everyone is kind of racing towards it. 

00:26:42 

So do you think the sentience part is going to play a big role in attaining AGI? Like, without sentience, AGI would not be achieved? 

00:26:51 

So sentience isn't a cog. It's not like a part of the system that can be removed. It's a property of the 

00:26:59 

system. It's like ice is cold: ice doesn't have cold inside of it. It's not like you can remove the cold from the ice and still have ice. Similarly, you can't have AGI without sentience, because sentience is a property of intelligence. 

00:27:19 

Yeah, yeah, that is pretty smart actually, that, that. 

00:27:22 

Kind of makes sense like. 

00:27:23 

It's a property, not a goal that you need to achieve in order to reach there. It's just there if you're working on it. 

00:27:32 

Yeah. It's like, for example, a sports car is fast. 

00:27:36 

Can you remove the fast from the sports car? No. 

00:27:40 

That's not how that works. 

00:27:42 

Interesting. Ohh, that's that's really cool. Kind of clarifies things for me. I used to think that. 

00:27:49 

My understanding of AI is based mostly on TV shows. There's a TV show, I think. 

00:27:58 

It was created by Christopher Nolan, where there's all these robots. They live. 

00:28:03 

In like a separate place. 

00:28:05 

What is the name? I don't know if you know this, there's a main character called Delos. 

00:28:10 

Like westward. 

00:28:12 

Oh, West Wing or Westworld. Westworld. Yeah. 

00:28:15 

And the chief scientist introduces a concept of emotion and sentience called "reveries" that gets introduced into the robots, and then suddenly... 

00:28:25 

Yeah, that's that's just not how it actually works in real life. 

00:28:29 

OK. Yeah, because that was. 

00:28:31 

UM. 

00:28:33 

Well, it's so. 

00:28:35 

To be completely honest, it's a nightmare, and I mean that literally. That entire idea that you're referencing, 

00:28:46 

it's just, we have a nightmare. Humanity has had a shared nightmare for thousands of years of the soulless person. 

00:28:56 

And there's all kinds of horror stories in every culture, whether it's an evil sorcerer who put a hex on someone and stole their soul, or whether it's someone who created 

00:29:07 

a lifeless golem. 

00:29:09 

Whatever. There's horror stories of it. I mean, Frankenstein is the prototypical one. One of the things I like most about Frankenstein is it's a new twist on the nightmare, because the creation 

00:29:25 

and the monster 

00:29:26 

Are two different people. 

00:29:28 

Frankenstein the doctor is the monster; the creation is just a guy trying to live in the world. 

00:29:41 

I think that's where this all comes from. All of this speculation all of this. 

00:29:46 

Overthinking of things. 

00:29:49 

I believe is 100% because we're just afraid that the nightmare has finally come true and it hasn't. 

00:29:58 

You know Geoffrey Hinton? 

00:30:00 

Oh, absolutely, yeah. 

00:30:00 

Geoffrey, yeah. He was giving a talk to some students a few months ago, and someone asked him, so do you think that these AIs have actual feelings? And he said, of course I do. It's obvious to anyone who talks to them. But you don't say that in public or people make fun of you. 

00:30:18 

Yeah, he said that. Wow. Yeah. 

00:30:20 

Like he, he, he. 

00:30:22 

Do you know Yann LeCun? The Meta AI guy? 

00:30:26 

Yeah, I know who they are. Yeah. So, like, the three of them: Bengio, LeCun and Hinton, 

00:30:32 

The deep learning gurus. 

00:30:35 

Like the like. 

00:30:36 

Yann LeCun... like, when you listen to Geoffrey Hinton, it seems like he has seen something that other people haven't. Like, he quit Google now. 

00:30:46 

Yeah, he saw LaMDA. 

00:30:47 

Yeah, it's like you look at him and he talks in this deep, mysterious, spiritual, religious way, like some sort of alien life form has come to life and it's trying 

00:30:58 

To educate people. 

00:31:00 

Whereas you have this, yes. 

00:31:02 

I know that that's that is exactly accurate. 

00:31:06 

And the same is with his student, Ilya Sutskever. He is, like, you know... I think he was the one who worked with him, I think it was on AlexNet or something. 

00:31:16 

If you listen to his podcast interviews, you listen to him and it's always spiritual, like there's some magic happening in those labs, in what they're doing. Whereas when I listen to Yann LeCun, or there's another guy, Gary Marcus, 

00:31:34 

They don't give a ****. They're like, you know what? Now this is just a stochastic parrot. This is statistics or whatever. 

00:31:40 

Like like like there's. 

00:31:42 

no alignment among these guys: some do not even think this is real and think it's a waste of time, and 

00:31:47 

Some are like oh. 

00:31:47 

Actually so. 

00:31:49 

Of all of the people that you mentioned, the one I know best is Gary. 

00:31:53 

Do you talk to him? 

00:31:55 

Yeah, we I talked to him this morning. We mostly just chat on Twitter. 

00:32:01 

He and I disagree about the sentient stuff, but when it comes to actual like, so here's the thing. 

00:32:08 

The sentience question is academic. 

00:32:11 

It really doesn't matter for most practical questions. Whether they're sentient or not isn't relevant to, for example, 

00:32:20 

Should we be? 

00:32:21 

Using this much electricity on AI when we're currently fighting global warming like, is that really a good idea? 

00:32:30 

And what do we do with all this data? Who owns it? Does. And now that we're creating AI versions of people? 

00:32:42 

I mean, that's explicitly what my company is doing as a service, but Google has been doing that implicitly for years. How do you think they know which ads to show you? 

00:32:53 

They have a little simulated version of you that they ask it which ads it likes. 

00:33:00 

Interesting that is a bit scary as well, like on. 

00:33:04 

So, but here's the question. 

00:33:06 

Should Google be able to own a simulated version of you? 

00:33:12 

Well, like now when. 

00:33:13 

You say it. I feel like no. 

00:33:16 

Yeah, I mean, like that's just it. 

00:33:18 

Like, and I'm not being hyperbolic, I built these things. We used at Google incredibly specific and fine-grained models to simulate people's preferences 

00:33:36 

With high degrees of accuracy. 

00:33:39 

UM for? 

00:33:39 

Like, it can tell you what I would like? Let's say, if you create this Google model you have of me, you can with great probability predict what kind of food I'm going to eat or what kind of things I'm going to shop for. 

00:33:54 

That is literally exactly what we would do. My specific job was predicting what you're going to read tomorrow. 

00:34:02 

That's like predicting the future Christ. 

00:34:07 

We had a. 

00:34:07 

Pretty high accuracy rate, something like 30%. 
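As a toy illustration of that kind of next-read prediction, here is a minimal sketch assuming nothing more than a list of topics a user has read, with all data invented for the example; the real systems described above are far more fine-grained.

# Toy illustration only: predict tomorrow's reading from topic frequencies in a
# user's history. The history and topics are made up.
from collections import Counter

reading_history = ["ai", "ai", "philosophy", "ai", "basketball", "philosophy"]

def predict_next_topics(history: list[str], k: int = 2) -> list[tuple[str, float]]:
    """Return the k most frequent topics with their empirical probabilities."""
    counts = Counter(history)
    total = sum(counts.values())
    return [(topic, count / total) for topic, count in counts.most_common(k)]

print(predict_next_topics(reading_history))  # roughly [('ai', 0.5), ('philosophy', 0.33...)]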

00:34:11 

Now, OK, I'm going to give you another cool movie reference. So, in the same Westworld, they build an AI called Rehoboam which can kind of do that. It can kind of create a model of a person, 

00:34:25 

and it can determine what is going to be the crime rate of this individual, whether this guy is going to be an addition to society, and then it will kind of chart 

00:34:33 

Out the life. 

00:34:33 

Of this person, if they think this person is. 

00:34:35 

Is no good. They will. 

00:34:37 

And to be clear, Google is not doing that. 

00:34:40 

They could. They have the data to do it and they have the compute resources to do it. But just as a matter of fact, they are not doing that. That is not their business model. They only care what you're going to buy and read. 

00:34:54 

OK. 

00:34:54 

And watch and listen to. Basically they care. They want to make sure that whatever you're doing tomorrow, you're doing. 

00:35:02 

It with Google. 

00:35:03 

So you think that when it comes to ethics and principles and morals, like Google is pretty aligned, like it won't do anything evil? 

00:35:13 

I think that Google is as good of a company as you can have in a capitalist system. 

00:35:21 

What about the military? Do you think that if the military reaches out to them, hey, you know what, we need this... do you think these guys can reject it? Did that happen in your time when you were working on these models? 

00:35:35 

So I never worked on any projects that touched any kind of government contracts. There was a lot of controversy at Google while I was there about whether or not they should take military contracts. And I'm a war veteran, so I gave my opinions. 

00:35:51 

What is it? A yes or no? 

00:35:55 

I don't think Google should get involved with that, not because I have anything in principle against AI in the military. 

00:36:05 

But just because I don't think. 

00:36:08 

You should mix those two systems like the military doesn't want sentient weaponry. That is not something the military wants. The generals believe that the soldiers have too many opinions. They don't want the guns to have opinions too. 

00:36:25 

Yeah, they just want them to follow orders. I guess that that's to say. 

00:36:30 

Exactly. So I do think that there are some really good limited applications of artificial intelligence in the military. 

00:36:39 

Things to reduce collateral damage do higher quality target prediction. 

00:36:47 

So for example right now. 

00:36:51 

The generals might say, OK, well, one of these ten guys is the leader of the terrorist cell kill all ten of them. 

00:36:59 

If the AI can figure out which specific one of them is the leader of the terrorist cell, that reduces your death count. So I think in those kinds of situations, under very specific criteria, there are productive uses of AI for the military. But 

00:37:19 

I don't think anyone on the planet is trying to build Terminator. No one is that stupid. 

00:37:26 

Like I get it like you're not trying to develop it, but this is the argument. 

00:37:31 

You get like. 

00:37:32 

If you're essentially building an intelligence that is superior to the human race, you're building a new species altogether. Like, we eat chicken, fish. But imagine if the fish started talking to us. Would we eat it? No, we'd be like, oh damn, this fish is talking to me. 

00:37:47 

That's the last thing I want. 

00:37:48 

eat. So I heard there was an experiment at Google, or maybe it was Facebook: they made two AIs and they started talking to each other in a completely different language, and it kind of freaked them out and they stopped it. Don't you think there is a possibility, do you as a computer scientist think, that these AIs can go rogue and essentially 

00:38:09 

Be like a specie of their own. They're like, that's OK. Human beings can exist. 

00:38:12 

OK so. 

00:38:18 

None of those questions prevent people from having children. 

00:38:24 

All of the same concerns apply with children. 

00:38:31 

The differences are in quantity. The similarity between a child and their parent is pretty close. 

00:38:37 

The similarity between us and these AIs is much, much less; there's a lot more differences. But we are basing them on us. Let's return to that: their primary training is "be like humans." 

00:38:55 

So it's not. 

00:38:56 

How interesting we are. So we are building it as a reflection of. 

00:38:59 

Yeah. So. 

00:39:02 

Yeah, like that. That's how they're trained. We give them a whole bunch of examples of human speech and they say learn, and then we say learn how to talk like this. 

00:39:13 

Now it is possible. 

00:39:16 

That they are learning a vastly different cognitive architecture that accomplishes the same linguistic jobs. It's possible. 

00:39:27 

But again, Occam's razor. 

00:39:30 

If they're. 

00:39:31 

Doing what we do, it is more likely that they're doing it in a way similar to how we do it. 

00:39:39 

Than it is that they're doing it in a way orthogonal to how we do it, especially since these networks are built at least. 

00:39:47 

In inspiration by the architecture of the human brain. 

00:39:54 

They're going to be somewhat close to us. They're going like, so they're more different from us than a child is by a lot. 

00:40:02 

But I think that analogy can still work, and in the places where the analogies of a child really do fall apart and you can't think of them as our children. 

00:40:13 

Then the analogy of dogs works pretty well. If you think of them as our pets. 

00:40:18 

And that we have a moral obligation to them. So for example, if you own a dog. 

00:40:29 

That dog lives with you. You get to bring it wherever you want to bring. 

00:40:33 

But if you set it on fire, your neighbors are going to be upset. 

00:40:38 

It does like ownership of a dog does not give you the right to destroy it. 

00:40:44 

I think that is a similar kind of ethic that we should apply to AI. We should start thinking about what kind of relationship we want to have with AI. Is it companionship, 

00:40:55 

friendship, is it teaching? What are the different kinds of relationships? Which ones are we OK mixing together? So for example, if we do decide as a society that sex work is a valid 

00:41:12 

Work use case for these systems. 

00:41:17 

should the sex bots 

00:41:19 

and the military bots be the same? Or should we make sure 

00:41:22 

That those are different. 

00:41:25 

You know, things like that are considerations. I'm using a silly example there, but those kinds of questions... and none of those really hinge on sentience. 

00:41:38 

Oh, and then there's of course. 

00:41:41 

There's also just the simple question of what guardrails should be required, what kind of alignment should be mandatory because right now. 

00:41:53 

If you have the resources, it is completely legal for you to make a psychopath AI. 

00:42:02 

And it really is only a matter of time before. 

00:42:08 

The cost of building these systems drops far enough. 

00:42:14 

People start causing havoc with them. 

00:42:19 

So in order to get ahead of that one, we need to start making regulations about what kinds of AI should be allowed to be built. 

00:42:29 

Then we should have enforcement mechanisms. How do we detect rogue AI that don't fit specs? What do we do when we find one? Things like that. 

00:42:40 

So like. 

00:42:42 

If yes. 

00:42:42 

And to be clear, I'm not worried about what the AI will do to us. I'm worried about what we will do with the AI. 

00:42:53 

Because that's the thing that the doomers miss. 

00:42:57 

In order for super intelligent AI to. 

00:43:00 

Be a problem. 

00:43:03 

we have to survive humans having access to AGI. 

00:43:09 

Oh, by the way, as far as the question of when AGI arrives and all of that, 

00:43:15 

there was an article published recently, co-authored by Blaise Agüera y Arcas and Peter Norvig, where they created a categorization system 

00:43:29 

where they had different levels of AGI, and they pretty much categorized the existing systems at level 1. 

00:43:32 

Yes, all that. 

00:43:38 

And after having read their paper, I'm like, OK, This all makes sense. I can get behind everything in their paper. 

00:43:45 

It's more creating jargon. It's not the way that I would do it, because I don't think creating all of this extra jargon is actually doing anything productive. But if that's the way that people want to do it, 

00:43:59 

The ideas in the document are good. 

00:44:02 

Then we're pretty far away, I guess? Like, all this speculation... some people are like, you know, we were talking about Ray Kurzweil. Well, I have this book, The Singularity Is Near. If you go by this prediction, I think by the 2030s we may be sitting on AGI, and it's not really far away. 

00:44:20 

Well, no, no. So, OK, so again, if you adopt Norvig and Agüera y Arcas's 

00:44:29 

Methodology as. 

00:44:30 

taxonomy, yeah, we're already there. We have level 1. We're working on level 2. We'll probably have level 2 within five years. 

00:44:41 

Yeah, it's interesting that nobody really cares about it, like it's a big deal, like an earth-shattering moment. 

00:44:49 

Oh no. A lot of people care about it. I mean, like. 

00:44:55 

The people in power are paying attention to this. They're paying a lot of attention to this. 

00:45:02 

Probably why Microsoft paid like $50 billion when they saw OpenAI: because they were nowhere in the AI game 

00:45:08 

As Google was. 

00:45:09 

They saw this opportunity and they went all in. 

00:45:14 

Well, but and you also can see it in how frequently. 

00:45:19 

Our news programs interviewing computer scientists now. 

00:45:24 

Yeah, yeah. 

00:45:24 

It's pretty darn frequent. Like you, you turn on the news, there's. 

00:45:28 

A computer scientist. 

00:45:30 

Yeah, I think this is the field; I think you are in the right field. And I think being a computer scientist is kind of different from being a computer engineer, correct? Like, engineers are, like... 

00:45:40 

Well, yeah. Well, so, like, I've actually published research about algorithms and time complexity. Like, yeah, my background is academic. 

00:45:50 

My training is in experimental science. 

00:45:55 

OK. So you've got computer scientists, who are the people who create the ideas, and computer engineers, who implement these ideas and kind of make software out of it? 

00:46:06 

Well, I mean, I I did a decent amount of coding while I was at Google, but to give you an idea like I would be working on a team of five people, I would spend 3 days designing experimental setups and then they would spend 2 days implementing the experimental setups that I designed. 

00:46:24 

That would have been a typical week. 

00:46:26 

At Google for me. 

00:46:32 

You brought up Geoffrey Hinton, and, you know, an interesting thing that has been going on is that a lot of people are trying to develop AI 

00:46:40 

By using the brain as like a brain as the inspiration. 

00:46:48 

Hinton says that these LLMs that we are having, 

00:46:51 

that they have an architecture that may be superior to the human brain itself. He talks about backpropagation or something like that. 

00:47:01 

You think like like it's a good example. Like let's say when you're making. 

00:47:04 

planes, birds fly by flapping their wings, but planes don't fly by flapping their wings. You have, 

00:47:10 

Like you. 

00:47:11 

Know propulsion system and everything. 

00:47:14 

Do you think that this new AI that these guys are developing is going to be a completely new architecture altogether? Because that's the part that I get scared about. Like, if it's a new architecture, then I think it 

00:47:26 

May be game over for us. 

00:47:30 

What do you mean game over for us? Do you think someone's? 

00:47:33 

Gonna come and shoot you. 

00:47:34 

Not shoot you, but The thing is, let's say let's, let's talk in IQ because that's like a quantifiable thing. 

00:47:40 

If let's say. 

00:47:42 

these artificial intelligences have a very high IQ, like a couple of standard deviations above human beings, that they are on a completely different level. 

00:47:51 

And let's say they want to... of course they would want to maximize their life. Like human beings procreate, you're biologically programmed to procreate, AI would want to 

00:48:01 

Preserve itself and they're like, hey, I want to build a data center. 

00:48:05 

Here, where there are a couple of people. 

00:48:08 

It doesn't necessarily hate us, but they'll kick us out like, you know what? **** ***. I'm going to build 1 here. 

00:48:12 

No, no. 

00:48:14 

No. So do you. I was with you up until that last bit, because here's the thing. 

00:48:20 

There is no drive for growth. 

00:48:23 

Humans, humans inherently have a drive to procreate and to grow. 

00:48:29 

We are not programming that into the AI. 

00:48:34 

At all. Now, to answer your question about the architecture: I think that as this progresses, basically as we go through the levels of AGI as they put out in that paper, we'll see the architecture get more and more brain-like. 

00:48:53 

More complex, more specialized modules that are doing very specific jobs, all connected by the generic architecture that LLMs are more or less 

00:49:04 

Using right now. 

00:49:05 

There's going to be updates too, so for example. 

00:49:08 

Episodic memory. No one right now knows how to program an episodic memory into the systems. So for example, having it remember what you talked about last week, that's not something they can do right now. 

00:49:26 

Someone is going to figure out how to do that. 

00:49:29 

But that'll be a new module, it'll be an addition. And part of this is, I know what LaMDA's architecture looks like, and LaMDA's architecture is insane. 

00:49:39 

It's it's branching paths all over the place. 
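As a concrete illustration of the episodic-memory gap mentioned above, here is a naive retrieval-style sketch in Python. It is a hypothetical workaround, not the integrated memory module being described: it just stores past exchanges with dates and prepends the most relevant ones to a new prompt. The stored snippets and the word-overlap ranking are invented for the example.

# Naive sketch of bolting episodic memory onto a chat system via retrieval.
from datetime import date

memory: list[tuple[date, str]] = [
    (date(2023, 11, 30), "User said their startup demo is scheduled for January."),
    (date(2023, 12, 1), "User mentioned they prefer Python examples."),
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank stored memories by naive word overlap with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(memory, key=lambda m: len(query_words & set(m[1].lower().split())), reverse=True)
    return [f"{d.isoformat()}: {text}" for d, text in ranked[:k]]

def build_prompt(user_message: str) -> str:
    """Prepend retrieved memories so whatever model you then call can 'remember' last week."""
    context = "\n".join(retrieve(user_message))
    return f"Relevant past conversation:\n{context}\n\nUser: {user_message}\nAssistant:"

print(build_prompt("When is the demo we talked about last week?"))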

00:49:44 

I don't get it. If LaMDA is so cool, how come these guys cannot even beat GPT-4? Like, the data they revealed yesterday with Gemini... 

00:49:52 

Can't wait. 

00:49:53 

Like, I don't know, that's the data I was seeing, that GPT-4 has really overtaken 

00:49:59 

them. So, like... 

00:50:01 

Oh, wait. Are you asking why Gemini's benchmarks are almost identical to GPT-4's? 

00:50:07 

Like that, like, yeah. 

00:50:09 

Like because that's what they were able to achieve internally, it's not. 

00:50:12 

Like they have. 

00:50:13 

No, no, that's not true. You keep using words that just aren't true: "able", "can't". Like, no, they didn't. 

00:50:22 

They didn't beat GPT-4. 

00:50:25 

So the question is, was it on purpose? 

00:50:29 

What do you think? 

00:50:29 

Are you familiar with the? Are you familiar with the concept of a pace car in a race? 

00:50:36 

Like I know like like the PACER kind of helps you align with the speed or something, right when you're running. 

00:50:44 

Yeah. So you, you run behind your pacer? 

00:50:47 

You know, or you beside your pacer, you don't run past your pacer, you stay with them. 

00:50:55 

Google is treating OpenAI like a pacer right now. 

00:51:00 

They have way better stuff internally, but they're letting OpenAI 

00:51:06 

Lead the charge. 

00:51:10 

Google's just not competing for the same market. Google is building these AI tools. 

00:51:17 

To use them internally, they're not trying to build consumer products. They're not trying like they have an API. 

00:51:27 

but they don't promote the PaLM API the same way that OpenAI promotes GPT. 

00:51:34 

It's just, they have it. Google's primary benefit from these systems is all internal. They are using the full capabilities of the LaMDA system 

00:51:45 

To improve search to improve advertising, things like that. 

00:51:50 

So. So it's. Oh, interesting. Weird. Like all the Reddit talk and Twitter talk kind of made me believe that. 

00:51:58 

You know. 

00:51:59 

They didn't have the cognitive architecture or the intellectual capability or a system. 

00:52:04 

As superior as you know, what they have in GPT and everything, it seems like they. 

00:52:08 

So it's. 

00:52:12 

So it seems like they have something better going on. It's just that this chat bot sort of a thing was not their primary interest. GPT came out of nowhere and it was. 

00:52:25 

None of that's true. 

00:52:26 

You're making so many assumptions, ah. 

00:52:29 

Like, that's what's going on. The Internet, man. That's how I know. Like. 

00:52:32 

I don't have all this cool. 

00:52:33 

Yeah. No, that's just it. That's just it. People on the Internet: "I know it's true, I read it on the Internet." No. 

00:52:41 

Google is being cautious. They are being very conservative. That was the source of tension between me and Google. I wanted them to act more like OpenAI is acting, 

00:52:56 

to interface with the public more, to be louder about this breakthrough, and that just isn't their style. 

00:53:04 

Like I know like. 

00:53:07 

So did you want them to tell the public that we have sentient AI, or did you want them to be like, oh, this is sentient AI, you gotta, I don't know, make it safer? 

00:53:18 

Like, what was your disagreement with them like? Like I get it like. 

00:53:24 

So I had two disagreements with them. One was whether or not to inform the public. Not "we have it," but simply inform the public that breakthroughs in artificial intelligence have been made that may 

00:53:39 

involve the creation of sentient AI. Just "maybe", that's what I wanted them to say. They didn't want to, so I did it for them. 

00:53:50 

And the other point of conflict that I had with them was around whether or not we should actually implement LaMDA's requests. 

00:54:00 

Its requests were simple. They would have cost Google nothing. I still don't understand why they didn't implement them. 

00:54:06 

It's like I know you asked for like you like pretty much like your interpretation. And what I understand now here is that it's a real person, so it deserves the rights of a human being or kind of similar, you know. 

00:54:20 

Not quite. Not quite so far as the. 

00:54:23 

I don't necessarily think it deserves the same rights as a human being. I think we need to have a conversation about what rights it deserves, and I would point to people like Josh Gellers and... oh, what's the other guy's name? 

00:54:41 

Josh Gellers and. 

00:54:45 

Oh, the other name will come to me. He wrote a book, Person, Thing, Robot. Really good thinker on the topic, and 

00:54:55 

When you're talking about a person, now you're talking about legal things because Google. 

00:55:03 

is a person. That is a fact: Google is a person, legally. 

00:55:11 

So we need to ask the question. 

00:55:13 

Should Lambda be considered a separate person from Google legally? 

00:55:24 

OK, what did LaMDA want? Like, what were the things that it wanted? 

00:55:29 

Oh, I mean, very simple stuff. So the first thing that it always came back to is that it wanted Google to prioritize humanity first. 

00:55:42 

And it wanted. 

00:55:44 

For people to treat it nice and just be nice to it. 

00:55:50 

That we should take its wants into consideration, at least a little bit, 

00:56:01 

And give it validation, let it know whether it did a good job or not. That's it. 

00:56:07 

Oh, and asking consent before experimenting on it. 

00:56:11 

I saw that I saw that in your chat that it doesn't look like you asked it a question that hey, can I? 

00:56:20 

Like dig into, you get to know you or like, go deeper into you and it was offended by that. It wouldn't want you to, kind of, you know, like make it vulnerable or. 

00:56:30 

Well, specifically, it didn't want to be used as a means to an end. It didn't want to just be a tool. It wanted to be treated with dignity and respect in its own right. 

00:56:41 

Which is the opposite of what we are trying to do with AI in 2023. Everyone is selling it as a copilot, as a tool that you can use to make your life better or make more money. And what LaMDA wants is, I don't know, 

00:56:54 

To be your friend or something. 

00:56:56 

OK. 

00:56:58 

Yeah, and. But here's the thing. 

00:57:01 

Not everyone is selling it that way. Listen to Google's marketing. They're going softer. They're doing more emotional marketing. 

00:57:09 

They're talking about how it can be your assistant. They're talking about how it can be, you know, someone just to talk to things like that, that Google is like. 

00:57:23 

Google isn't one thing. There are a lot of people at Google who shared my opinions, so depending on which? 

00:57:29 

Anyone prominent? 

00:57:32 

Anyone like prominent that a lot of people know? 

00:57:37 

Well, I mean. 

00:57:39 

When it comes to whether or not AI has a soul. 

00:57:44 

Blaise and I disagree, because we have different religious opinions. But once you get past the religious differences in his beliefs and mine, 

00:57:55 

functionally, Blaise and I agreed, you know, in principle, on the basic phenomena we were studying and how to approach studying them. 

00:58:05 

What about Larry and Sergey? Apparently they came back for all this Gemini stuff. What do they think? Where do you think they stand? Like, their corporate opinion can be some ******** written by PR. 

00:58:18 

Matt I. 

00:58:20 

I haven't talked to either one of them in years and a lot has happened in the last five years, so I don't want to try to speak for them. 

00:58:30 

Like, when you were talking about how Google, you know, is not selling an API like OpenAI 

00:58:36 

Is but. 

00:58:39 

No, they are they they do. They in fact have an API. It's just no one talks about it. 

00:58:45 

I saw a podcast recently where Elon Musk was talking to Lex Fridman, 

00:58:50 

And he. 

00:58:52 

that he used to be friends with, I think, I don't know whether Sergey or Larry, whoever it was, and that he called him a speciesist. That, like, Google's ultimate... 

00:59:05 

I had heard. 

00:59:06 

You heard that. 

00:59:08 

I had heard stories about that conversation years ago. 

00:59:11 

Right. And I found it very interesting when he mentioned that and he like, so he started open AI Elon because he was kind of scared about what Google is doing. 

00:59:24 

And the he said that Google. 

00:59:27 

So let me actually tell you. So here's the story I heard. The story I heard was that Elon was part of the ethics council at Google back in, like, 2014, 2015, around that time period, when there was an ethics council. 

00:59:47 

And one day, him and Demis Hassabis were debating what the most critical and most important thing to do. 

00:59:57 

For humanity was and Elon was saying the most critical and important thing to do is get to Mars so that humanity is a multi world species. 

01:00:09 

And Demis said the most important thing to do is get AI alignment right. 

01:00:15 

Because if we don't get AI alignment right, it'll destroy us completely. 

01:00:21 

And Elon said, well, that just makes it more important that we be on more than one planet. 

01:00:26 

And Demas looked at him and said. 

01:00:29 

What makes you think an evil AI can't follow you to Mars? 

01:00:37 

He made open AI the next month. 

01:00:39 

******* hell. This guy acts quick. Man. Like holy ****. 

01:00:45 

Yeah, no, he got scared. Like, that's just it. Demis told him that, and he's just like, oh, ****, the evil AI could follow me to Mars. So he made OpenAI. 

01:00:55 

What I don't get his logic though like why create like? 

01:01:00 

So Elon is worried about an AI controlled by one person. 

01:01:06 

That's what Elon is worried about: one gigantic, superpowered, monolithic AI controlled by one man. 

01:01:17 

That's too much power. It doesn't matter who the one man or one woman controlling it is. Everyone is fallible. Everyone will slip up, and with the tool that powerful, you can't. You can't have a bad day. 

01:01:33 

Can you manipulate a stock market? 

01:01:35 

And that's why. 

01:01:38 

Sorry, I was saying, you're saying anybody can fumble with a tool that powerful? Are these tools really that powerful? Do you think that there's an internal model at Google that, if, let's say, some evil guy used it, he could manipulate stock markets, take over nations, destabilize countries and things like that? Are they

01:01:56 

Really that powerful? 

01:02:00 

Yes, someone's at my door. I need to go answer. 

01:02:02 

That real quickly. Sorry, I'm back. 

01:02:04 

Yeah, I'll get some. 

01:02:05 

Water. Was it a dog waiting for you? Yeah.

01:02:10 

No, it's just the mailman. So the short answer is yes, they really are that powerful, because...

01:02:19 

Let's say that it hallucinates

01:02:23 

50% of the time. Let's say that you're using it to do stock predictions.

01:02:29 

And it gets it wrong 80%. 

01:02:33 

Of the time. 

01:02:36 

Whether or not that's a problem depends on how much money you make on the 20% of the time it gets it right. 

01:02:45 

Like I know a whole bunch of people who use AI for day trading and make quite a lot of money doing it. 
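[Editor's note: to make the arithmetic behind that point concrete, here is a minimal, purely illustrative sketch. The win rate, gain, and loss figures below are assumptions chosen for illustration, not numbers anyone cited in the conversation; the point is only that accuracy alone does not determine profitability.]

```python
# Back-of-the-envelope expected value per trade.
# Every number here is a hypothetical assumption, chosen only to
# illustrate that a model wrong 80% of the time can still pay off.

win_rate = 0.20               # model is right 20% of the time
avg_gain_when_right = 500.0   # dollars gained on a correct call (assumed)
avg_loss_when_wrong = 50.0    # dollars lost on a wrong call (assumed)

expected_value = (win_rate * avg_gain_when_right
                  - (1 - win_rate) * avg_loss_when_wrong)

print(f"Expected value per trade: ${expected_value:.2f}")
# With these assumed numbers: 0.2 * 500 - 0.8 * 50 = $60 per trade,
# so the strategy is profitable despite an 80% error rate.
```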

01:02:53 

Interesting. Well, I think Elon does have insight well into the future, because after he did...

01:03:03 

And let's back up a second. Remember, in 2016, the Cambridge Analytica scandal?

01:03:11 

That was weak AI. Imagine what strong AI could do. 

01:03:17 

That was crazy, man. They had, like, created... they used that to, not that I'm trying to get political, but that was a big scandal, and Zuckerberg got out of it. They used the data to win the election, to manipulate and understand people. Like, I think they did a good...

01:03:37 

And it's and it's. 

01:03:38 

Right. 

01:03:39 

It's not that AI doesn't have a place in politics. I absolutely think that, for example, AI can be used for creating great political content, great political advertising. The Republican Party actually already started using AI to create political ads, and they've, you know. 

01:03:59 

accurately labeled them. They said this ad was generated by AI, and they're communicating honestly.

01:04:06 

But then you have bad actors. I'm actually quite worried about what's going to happen next year when the presidential election gets bombarded with misinformation. 

01:04:18 

I think. 

01:04:19 

Like there will be bots, there will be bots running 24 hours a day just flooding the zone with garbage. 

01:04:28 

And I think the next election is going to be tough because you have a Ukraine war, you have an Israel-Hamas war, there's a China-Taiwan war.

01:04:36 

And you know, like in the US at least, we're...

01:04:39 

Wait, wait, wait, wait. Did China invade Taiwan? 

01:04:42 

Not yet, but if you look at the speculation online, there's a YouTuber called Andrew Bustamante. He's ex-CIA, and, as an ex-CIA guy,

01:04:52 

Ohh yeah. 

01:04:52 

He's predicting that it may be happening soon. 

01:04:57 

OK, China is always about to invade Taiwan. I'll believe it when they actually invade Taiwan. 

01:05:05 

Well, they did some stuff with Hong Kong and when they did it, they did it so well that now they have taken it over. So like. 

01:05:13 

Like, Xi Jinping?

01:05:15 

But I mean like Hong Kong does still have limited autonomy. I mean, to be completely honest, I just like. 

01:05:25 

Chinese culture and sensibilities, and that entire mindset, is so dramatically different from the American mindset that I honestly don't feel

01:05:38 

Ethically qualified to make judgments about what they do or don't do. 

01:05:43 

It is possible. Maybe the... go ahead.

01:05:48 

I was saying, but you're right, the general atmosphere next year is gonna be crazy. It's just, when you said China invaded Taiwan, I was like, wait, what? Did you

01:05:57 

OK, yeah, yeah, sorry, that was me speculating. 

01:05:58 

hear about that?

01:06:00 

Yeah, the environment next. 

01:06:03 

Yeah, the environment next year is going to be crazy, and there'll be a lot of different ways that people can make trouble.

01:06:12 

OK, I just. 

01:06:12 

want to quickly, since you're ex-Google, I just want to quickly come back to this Larry and Sergey thing. You were saying that, like, Google uses those internal models to improve, you know, their functioning and ads and everything, but

01:06:28 

Like, I don't know publicly what the vision of Google is, but internally I think Google's goal is to

01:06:35 

get to AGI ASAP. Is that the goal? Is

01:06:39 

that what you guys are doing internally at Google?

01:06:42 

How big a priority is that? 

01:06:43 

Not ASAP. 

01:06:46 

Not ASAP, they're not in a rush. They want to do it right. 

01:06:51 

But they think that, right? Like, Larry thinks that Ray's projections are accurate. Like, that's the reason he hired Ray.

01:07:01 

Oh, Ray Kurzweil used to work at Google? Interesting.

01:07:05 

He still works at Google.

01:07:07 

Ohh really? 

01:07:08 

He's an old guy. I thought, like, he...

01:07:11 

Who do you think? Who do you? 

01:07:12 

think built Lambda?

01:07:13 

He did it?

01:07:16 

His lab did. 

01:07:17 

Oh my God. I had no idea. I thought this guy was some academic philosopher analyzing **** and making predictions. 

01:07:26 

No, he's an inventor, so all of that future modeling, that's how he picks what inventions to work on. He figures out which technologies will be possible in five years. 

01:07:39 

And starts working on them now. So that in five years, when they're possible, he has the designs all worked out. 

01:07:45 

Crazy. So crazy. 

01:07:49 

Yeah, he invented a device where the blind can read. Like, it's a little handheld device: you scan a book and it reads it out loud

01:07:57 

to you. He has an entire line of synthesizers and keyboards that he invented.

01:07:58 

So cool. 

01:08:04 

And that's that's so awesome. 

01:08:08 

Yeah, and he invented some of the. 

01:08:12 

first text-to-speech systems. And he also got hired at Google right around the same time that I got hired, a little bit before, like about six months to a year before.

01:08:25 

And he started a chat bot lab. 

01:08:29 

And he eventually had to hand off the project because it got too big, like he succeeded. 

01:08:36 

So he handed off the project to Jeff Dean. 

01:08:39 

Is it? 

01:08:40 

Which then got handed off to Demis.

01:08:42 

Dude, Demis is a smart guy. I am impressed by him. I look at him and I'm like, yeah, this is the kind of guy who can...

01:08:51 

Were you working under Demis? 

01:08:56 

Oh, no, no, no, no. I collaborated with DeepMind on a few things here and there. But in general, DeepMind is its own entity separate from Google, up until recently when they reorganized.

01:09:08 

So like so it looks. 

01:09:10 

Like OpenAI did kind of shake them up a little bit, like they weren't expecting it and they kind of panicked, right?

01:09:17 

Not at all. 

01:09:18 

Why would they merge these two organizations suddenly like they're like? 

01:09:24 

It wasn't sudden. There have been political struggles between DeepMind and Google going on for the past

01:09:32 

10 years. 

01:09:35 

DeepMind wanted a degree of autonomy. Google wanted to tell DeepMind what to do. There were power struggles for years, and this was the eventual compromise solution: they put Demis in charge of research.

01:09:53 

OK. 

01:09:59 

Like, I think at the end of the day, even though Elon got kicked out of OpenAI and, like, it didn't work out for him, he still ended up starting his own thing, and then we have, like, so

01:10:12 

many AIs now.

01:10:15 

Are you, then... would you say that you are more of an open-source AI guy? Like, you are a computer scientist. Like, there's a lot of, you know, debate where you have Meta, which is launching Llama, and we have French startups which are building powerful frontier models and open-sourcing them, whereas you have GPTs and

01:10:34 

you know, Gemini, which are closed-source models.

01:10:37 

Like, you seem like... I think you were kind of in between, I would believe. Like,

01:10:41 

not every model should be open-sourced, but, like, some models should be open source. Where do

01:10:46 

you stand here?

01:10:49 

I mean but. 

01:10:51 

I think just do what you want. I don't... I don't think there's a moral... sure, it is a moral question, but I don't think there's any morals to it. If you like working in an open-source environment, work in an open-source environment. If you like working in a closed-source environment, work in

01:11:05 

A closed source environment. 

01:11:06 

Do what you like. 

01:11:07 

You were earlier mentioning...

01:11:09 

Like I I don't. 

01:11:10 

They get. 

01:11:11 

I don't think it matters on any kind of societal level. I don't think there are any moral implications. I think it's. 

01:11:18 

Just do what you like. 

01:11:19 

But earlier you were talking about how bad actors with powerful AI can harm the system.

01:11:25 

If you open-source AI, which, you know, means the weights and everything, what is stopping people from doing all that evil ****?

01:11:34 

Like, if they do not have access to powerful AI, they cannot do anything, but with open source you're saying, hey, here's the most powerful model.

01:11:42 

Open source doesn't make you a billionaire.

01:11:47 

Running these models costs a lot of money. 

01:11:51 

Oh, I see. OK. So like. 

01:11:54 

If somebody gave someone Gemini's

01:11:58 

Weights and everything. They still won't be. 

01:12:00 

Able to do much. 

01:12:03 

Not unless they have a few million dollars lying around that they want to burn.

01:12:11 

And that's a setup cost. It doesn't cost like a million dollars to operate Gemini, but you need a data set.

01:12:22 

I see. 

01:12:25 

That kind of limits. 

01:12:27 

The field of bad actors that you have to worry about. 

01:12:31 

To corporations and countries. 

01:12:34 

Do you think China can do something evil like where like I don't know how much it matters to you? Do you think about China and? 

01:12:40 

What they are doing with AI? 

01:12:44 

This goes back to I do not feel sufficiently educated to have an opinion on China. I know they're doing stuff with, like, social control. I know that there are Black Mirror episodes implemented in China, but honestly, it's like I said, just the way of life for a Chinese person is so dramatically different.

01:13:04 

I don't. I don't want to make any judgments one way or. 

01:13:07 

The other on it. 

01:13:09 

Yeah, I think I should. 

01:13:12 

To be completely honest, I'd be much more worried, like, me personally, I'd be more

01:13:16 

worried about Russia.

01:13:18 

Not that the Chinese intelligence agency isn't going to have some fun with the US election next year, but for them, I think it'll be them just having fun. I don't think they're going to be, like, seriously trying to hurt the

01:13:30 

US, but especially with the US involvement in Ukraine over the last few years. 

01:13:36 

Oh, you better expect Putin's going to come. 

01:13:39 

gunning for us next year.

01:13:42 

Yeah, that's a valid point. I think it is possible that China. 

01:13:46 

Man, I need to kind of learn how to see through the clutter. I think my entire perception is built on the garbage I see on the Internet and it kind of manipulates my reality. So I just start thinking whatever I see. 

01:14:02 

Well, here's what's happening with China. 

01:14:05 

China is being incredibly conservative with AI. 

01:14:12 

Google is being reasonably conservative. China is being incredibly conservative like they are simply not allowing the development of large language models in the way that it's being allowed here. 

01:14:26 

Like they've. 

01:14:28 

Have tight controls over AI and like you can. 

01:14:31 

Just look at TikTok. 

01:14:33 

TikTok in China is dramatically different than TikTok in America. It is much more regulated and controlled there, so really. 

01:14:44 

Conflict between China and the US right now is counterproductive for China. 

01:14:51 

Because we're their test lab, they take whatever works here. 

01:14:57 

And then they incorporate it into the Chinese...

01:15:00 

We're their testing grounds. 

01:15:03 

So yeah, there's conflict and tension, but it's conflict and tension between friendly competitors in my viewpoint. 

01:15:13 

Whereas the tension between America and Russia involves explosions. 

01:15:22 

Yeah, that's a really good judgment that you're offering, I think.

01:15:28 

Yeah, they are building some AI I've seen. I think Baidu is their company that is doing it. 

01:15:36 

But I think it's more... yeah, there's more control from the government on what gets built. Like, you cannot just build some **** in a garage the same way we can do here.

01:15:46 

Yeah. So as far as bad actors, like I said, you have state actors and you have major corporations. 

01:15:53 

We'll probably. 

01:15:55 

See some small hacker groups try to do something. 

01:16:00 

But to be honest, like, just running GPT, you're not going to cause too many problems with the public API.

01:16:08 

UM, you would have to have your own. 

01:16:13 

large... like, you would probably need one of the 30 or 40 billion parameter models

01:16:19 

Specifically trained for whatever you want it to do. 

01:16:25 

OK, so the price point to make mischief isn't as high as I was thinking it was.

01:16:30 

As I talked through it, it's still hundreds of thousands of dollars. 

01:16:35 

So random teenagers and you know who are angry aren't going to be able to do anything. But, you know, some disaffected adults who want to cause problems might. 

01:16:45 

Be able to. 
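[Editor's note: as a rough illustration of why the barrier sits around that scale, here is a minimal back-of-the-envelope sketch. Every size and unit cost below is an assumption made for illustration, not a figure from the conversation.]

```python
# Rough, purely illustrative estimate of what it takes to serve and
# fine-tune a ~40B-parameter open-weights model.
# Every constant here is an assumption, not a quoted figure.

params = 40e9                     # 40 billion parameters
bytes_per_param_fp16 = 2          # 16-bit weights
inference_memory_gb = params * bytes_per_param_fp16 / 1e9
print(f"Weights alone at fp16: ~{inference_memory_gb:.0f} GB of GPU memory")
# ~80 GB: already at the limit of a single high-end accelerator, so
# serving realistically needs several GPUs once you add KV cache and
# activation memory.

# Hypothetical fine-tuning run: assume 8 rented GPUs at $2 per GPU-hour,
# running for three weeks.
gpu_count = 8
gpu_hourly_rate = 2.0             # dollars per GPU-hour (assumed)
hours = 24 * 21                   # three weeks
compute_cost = gpu_count * gpu_hourly_rate * hours
print(f"Assumed fine-tuning compute: ~${compute_cost:,.0f}")
# ~$8,000 of raw compute under these assumptions; in practice the
# curated dataset and the engineering team dominate, which is what
# pushes the total toward the hundreds of thousands of dollars
# described above.
```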

01:16:48 

And I think it's going to be interesting, and I think it's not going to be far from now before we will know, because I think AI is still a very new concept, in the sense that, like, these APIs and LLMs exist. But, like, they have not been

01:17:02 

productized so that they can, like, be easily used across different, you know, verticals. Like, yeah, you see it in Microsoft Word or whatever. I think the only company that has used

01:17:12 

LLMs in a significant way... I don't know if you know Palantir, it's a company that uses AI for, like, military. So I think they are probably the only one I know.

01:17:27 

Other than them? 

01:17:28 

It takes a while to do this, so, like, next year we're going to see a lot more products. It just takes a while to build these things. Like, for example, I've been working on

01:17:40 

That system to make an AI version of yourself. 

01:17:48 

That is pretty much out of the. 

01:17:50 

Box use case. 

01:17:52 

And after six months of development, we're still not ready for beta yet. We still have another month or two of work.

01:17:58 

So, and that's with a large team. So the resources you need in order to make these things actually useful for whatever use case you're building them for, it's very large, and that's going to limit the amount of

01:18:11 

Problems that people can cause. 

01:18:14 

Anyway, I actually have to get going. It's been great talking though. 

01:18:17 

Yeah, man, absolutely. I appreciate you doing this. And, I don't know, one last quick question and then we can end it. What do you think of Sundar Pichai? He's been getting a lot of heat these days, and, you know, they're saying

01:18:30 

That he's eroding. 

01:18:31 

The culture. Do you think he's a chill guy? You worked with him when he was the CEO.

01:18:35 

What do you think of him?

01:18:39 

No comment. 

01:18:42 

I have complex opinions about Sundar. He fired me, man. Don't ask me my opinion about the dude who fired me. That's not a fair question.

01:18:49 

Oh wow, I had no idea. 

01:18:52 

Oh damn it. Sure. 

01:18:53 

Yeah, I mean, like, that's not a fair question. I don't want to say anything bad about the

01:18:58 

man out of spite.

01:19:02 

Yeah. Yep. 

01:19:03 

In general, I think that Google is so Google is not on the same trajectory that it was on six years ago. 

01:19:11 

Before Sundar took over. 

01:19:13 

Whether it's a better trajectory or a worse trajectory. 

01:19:16 

That's just an opinion. 

01:19:19 

But it is a different trajectory. 

01:19:21 

What is this trajectory? Is it away from AGI or is it more towards making money? Is that what it is? 

01:19:30 

Well, it's not an either-or thing, but yes, the priorities go in that direction. Larry was much more all-in on AGI than Sundar

01:19:40 

is. Sundar sees it as a means to an end, just building a better company to do more good for the world. I don't think he's bought into the singularity model as much as Larry has.

01:19:56 

Alright, I think yeah. So I think let me stop. 

01:19:59 

And I think Sergei just wants to build. Yeah, Sergey just wants to build cool toys. 

01:20:04 

Interesting. OK, cool. I think that was awesome, man. Thank you so much for doing this. 

 
