Skip to main content

tv   APEC 2023  SFGTV  January 5, 2024 4:30am-5:31am PST

4:30 am
to introduce our discussion this afternoon, please welcome to the stage the honorable london breed , mayor of san francisco. so.
4:31 am
good afternoon, everyone. i am honored to welcome all of you here to the great city of san francisco, bringing the economic leaders from across the asia pacific is an opportunity for us to learn from each other. san francisco has long had a history of serving as the connection between the united states and the asia pacific. our golden gate has welcomed those seeking to bridge our cultures and our economies. in san francisco, we have the spirit to take risks, to seek out what's new and different. we search for the next great idea. we don't just have the dreamers who look beyond what's possible, but we also have the builders with the talent to make the dream a reality. for decades, the san
4:32 am
francisco bay area has provided added the culture, the talent and the spirit to change the world. the spirit of apec, of finding common ground and of power, of economic collaboration , and of celebrating diverse cultures, reflects that same spirit of innovation. it is a spirit that has driven san francisco and the bay area to become the economic engine of the world. it is the spirit that will continue to propel businesses from across the asia pacific region to be part of what is happening right here, right now. we are in the spring of yet another innovative boom in san francisco, driven by the rapid rise of artificial intelligence science. we have more ai job openings of any major city in the country. of the top 20 ai companies in the world, eight are located right
4:33 am
here in san francisco. the conversations happen in this city and the conversation is happening here today. these these are the ideas that are going to transform our world in the decades to come. future generations will look back on these discussions as the start of something in entirely new and it's happening all right here in san francisco. it economies, industry and society change rapidly. we google was started out of a garage down highway 101 freeway meadow was a website for rating classmate appeared agencies openai was virtually unheard of last year at this time. now chatgpt has 100 million users. google has over a billion users and meta has over 3 billion users. the entire
4:34 am
world is using products imagined and built out right here to discuss all of this, we are going to hear about the future of ai from former secretary of state john kerry and salesforce ceo marc benioff, whose companies have been a leader in our city and in efforts to transform the world through innovation in. but first, it is my pleasure to introduce the next panel, a group representing the very best of how we dream in san francisco. so people who take time to listen for new ideas, who envision what that idea can become, who creates successful and thriving businesses, but also make life better for people around the globe. please join me in welcoming laurie powell jobs of the emerson collective. chris cock of meta, james manyika of
4:35 am
google and sam altman of open ai . okay i feel like you guys are a little far away. did we bunch up? it does feel a little far. does it feel free to gather in for our for our conversation foreign hi everyone. good afternoon. i am very very pleased to be here at and hosting the discussion of one of the most important topics in technology and around the world. artificial intelligence. the era of generative ai is moving at breathtaking pace, making step
4:36 am
function changes rapidly in a short amount of time. it's an astonishing time to be alive, and it feels different from other disruptive technologies. this year alone, design discussions have ranged from some possibilities of apoc elliptic doom to the exhilarating promise of unprecedented advancements. as these technologies are speaking and conversing with us, creating images and challenging our understanding of human intent emergence. however many questions remain around around how we can best utilize these ai as humans for the good of humanity and to craft a future that's equitable, responsible and aligned with our core values . to discuss this critical topic, as the mayor said, we're fortunate to be joined today by three of the world's leading thinkers and developers of ai
4:37 am
chris cox, chief product officer of meta. james manyika, google's svp of research technology and society, and sam altman, ceo of openai. let's give them a warm welcome. well, colleagues, ai is a wondrous technology fauci researchers are using ai to create new proteins and discover new drugs. ai tutors may transform the way that children are educated. their their potential financial, male workforce and climate benefits that we can imagine and those that we can't imagine. and there are risks. but before we get into the existence, risks and the regulatory environment that we'd all like to see, i'd like to begin the conversation with looking through your eyes for a minute. chris james and sam, why
4:38 am
are you devoting your life to this work? we can start with any of you, but since i can see you. chris yeah, i'm happy to start. i mean, it's funny. i started studying ai back in 2001 and back when i was a lot more arcane of a science than i think it is today. yeah. and i remember being first attracted to it because. because it felt to me like our ability to understand learning would help us understand ourselves. would help us understand how we learn in our own consciousness and part of what's been so interesting about it today is that the technology that's allowing ai to start to be really good is modeled after the way our minds work. you know, you don't teach a kid, i have two little kids, like you don't teach a kid. this is a noun and this is a verb and this is how you put a prepositional phrase together. you just speak to them
4:39 am
and they learn through experiencing the world. yeah. and i think part of why this is such an exciting period for ai is we're starting to see that the technologies we're building are start to become a little bit closer to the way we learn, which is through exposure to one another. and by building them that way. i believe has the promise of making the technology really humane and really modeled after the way that we interact with the world in our own judgments about what's right and wrong and our own judgments about what feels good and what doesn't feel good. so i think over the years we've come to a pretty neat place. and like you said, i think we all feel the sort of the excitement, but also at the same time, the importance of discipline and seriousness about making sure that the way we usher it into the world is responsible. yeah of course. and for you, is this because it's been a 20 year period for you? focus on this is this to you your life's work? it's certainly
4:40 am
what i've spent the most of my life on. i mean, i started at facebook back in 2005, so i was one of our, i think, 13 engineers or 14th engineers. and most of my work there has been in trying to design software that gives people the content they care about that connects them to their friends and family. and if you go back to at least to our company history, i think similar to google, the fundamental innovation was really about getting good at recommending content for each person, personalizing content, understanding vast amounts of data, helping each person get uniquely the stuff they care about. and so for me, that was i that was sort of behind the scenes that was really important. and part of what's starting to happen now is people are becoming, i think, having contact with it by talking to it . i think that's part of what chatgpt brought us for the first time is like, oh, now i can talk to it. yeah and i think that embodiment is part of what sort of like taking this tech that's
4:41 am
been around for a little while and suddenly giving it a mode of interacting with with each other . yeah. hey, james, what about you? and you. you actually came. you were studying ai and went to mckinsey and then decided to make a real career pivot. well, it's wonderful to be here with sam and chris and you, laurene, as always, i'm looking forward to the conversation. for me, laurene, the very first thing i ever published in my whole life was in 1992. i was an undergraduate. it was a paper on training and modeling neural networks. that was the first thing i ever published in my whole life. i then went on after that to do a phd in ai and robotics at oxford and at the time, by the way, it was a very different time for the field. my advisors actually advised me not to put the word ai in my dissertation because no one would take me seriously. so we called it something else. but when i look back from that time to where we are, are the
4:42 am
progress has been extraordinary. it's been extraordinary. we, you know, in the intervening time i was at mckinsey looking at these big problems in society, key things about economic growth, productivity, anti climate change and so forth. so part of what was realizing that i actually has a possibility of helping us to tackle all of these things. so what i get very excited and when i think about the work we're doing at google, for example, you know, there are several areas that kind of motivate me, excite us. the possibility of actually helping people in very assistive ways. do some of the most imaginative, creative endeavors, learn languages, speak languages, get past access barriers, linguistic difficulty. i will help with that. yeah. the possibility that in fact we could actually transform economy is all the stuff that i spend my time mckinsey global institute thinking about productivity growth, expanding prosperity party how do we power company size economies, sectors. i will
4:43 am
help with that. then i think about science, the possibility of having these extraordinary breakthrough innovation engines to advance science. you mentioned proteins. i think it's quite stunning thing that, you know, my colleagues at deepmind, alphafold, was able to predict the protein structure of all 200 million proteins known to science and then make that available to everybody. it's astonishing. it's astounding. but then i think also about some of our pressing challenges today . think about access to maternal health in low income countries and communities. think about climate change. you spend a lot of time thinking about the effects of climate change. think about all the things we see in california wildfires. you know, the mayor can talk a lot about what we see in california. so all of these things gives us the possibility of actually addressing and enhancing how we tackle all of this. this is what motivates me and excites me.
4:44 am
definitely my life's work and what i always wanted to work on since i was a little kid. i studied it in school. it wasn't working at the time. i got kind of sidetracked for a while, but as soon as it looked like we had an attack vector, it was very clear that this was what i wanted to work on. i think this will be the most transformative and beneficial technology. humanity has yet invented. i i think more generally the 2020s will be the decade where humanity as a whole begins the transition from scarcity to abundance. we will have abundant intelligence that far surpassed our expectations. same thing for energy, same thing for health. few other categories too. but the sort of technological change happening now that is going to so change the constraints of the way we live and the sort of economy and social structures and what's possible. i think this is like going to be the greatest leap forward that we've had yet far yet so far. and the greatest leap forward of any of the of the big technological
4:45 am
revolutions we've had so far. so i'm super excited and i can't imagine anything more exciting to work on. and on a personal note, like four times now in the history of openai, the most recent time was just in the last couple of weeks i've gotten to be in the room when we sort of like push the front, the sort of the veil of ignorance back and the frontier of discovery forward and getting to do that is like the professional honor of a lifetime. so that's just it's so fun to get to work on that. and it's remarkable, though, that for each of you, you've been working on this for decades. and so but now we find ourselves in a particular moment of inflection. so i wonder if you can help ground the audience in in understanding the ai lab landscape. so how does how does each of you think of where we are with the technology in the development overall development of generative ai? james feel free. yeah, i'm happy to start. i mean, i think it's worth reminding ourselves that ai has
4:46 am
actually been with us for a while. actually, a lot of the progress started to happen in the early 2000 with things like image recognition, natural language processing and in fact, many people today, you know, even before sam's extraordinary moment last year, were already using ai. so, for example, there's over a billion people who use google translate. that's a i. if you're using search, that's ai. but i think there was a particular moment that that that is brought us here in 2017 when my colleagues at google research published a paper called attentions all you need it introduced called attentions all you need. that's the paper that introduced these transformer based architectures that are the underpinnings of these large language models. that paper was in 2017. a lot of things and rapidly accelerated from that moment. we all started to train these large language models and they all started to do these very general things, not just narrow things, because
4:47 am
remember before this we had what you might call narrow ai. you could do ai for speech synthesis , for image classification and all of these things. but these large language model systems suddenly be able to do very general things and things just accelerated. so i think that was a pivotal moment that has brought us to where we are and so where are we? well, i think we're at a place where we can talk about this, where these systems are now very broad. they're very general. everything from writing poetry, composing music and all these things. they've also become what's now termed multimodal. so it's not just language and text, but also images and video and all these things and coding. it's a very exciting time. we're starting to see them do very well on benchmark tests on how well do they do on kind of a range of cognitive tasks and
4:48 am
capabilities. there's something called big bench, which has something like 204 kind of metrics. you can evaluate. so they're starting to be very, very, very good. but i think it's worth pointing out that they still have some serious limitations, actually. so as amazing as all these general capabilities are, there's still mistakes, there's still mistakes. factuality fauci and so forth. but these are things i think will get better. but those limitations are quite real. that's why i think it's very important to have a deeper understanding going for where we are now about what these systems are good at, what they're not good at, how we solve and augment those capabilities, link them to other systems. but i'm actually pretty excited because the what i now call the scaling laws and i'm sure sam will also get into this, is as you scale these systems, they seem to get more capable, more powerful, and the possibilities are very, very exciting. i agree. they're very
4:49 am
exciting. they they also can be very concerning. and for some people really frightening. so i want to read to you, sam, a quote from the wise public intellectual, yuval harari, who said, ai is the first tool in history that can create new ideas by itself. there's a danger that we will spend all our effort on developing an ai at the time. we don't understand ourselves, which is right now, and then letting ai take over. and that would lead to human catastrophe. so we obviously don't want that. so that brings up this question of proper regulation and proper guardrails and i think having the industry be come together as it has and take some steps forward around how do we how do we think about this collectively. so the industry has taken a very healthy step forward by launching the frontier model
4:50 am
forum. and just in the last two weeks we've had a lot of regulatory bodies come forward. we had the white house executive order, we had the bletchley declaration, we have the advisory body on ai that the un has convened. so i'd like each of you to talk a little bit about how you think about some of the existential threats like yuval has articulated and others as well as the state of regulation on what's what's proper, what's too much? how do we get it right now and then be open to evolving as the technology evolves. i had dinner with yuval in tel aviv and early june of this year. he was very concerned and i understand. and it i really do understand why if you have not been closely tracking the field, it feels like things just went vertical and we didn't you know, sure, people maybe were doing stuff before, but not not like people had these papers here. this model here, this narrow thing here. but people that use like
4:51 am
machine translation don't really feel like they're using ai and all of a sudden there was the sort of perception of like something has qualitatively changed. now i can talk to this thing. it's like the star trek computer. i was always promised and i didn't expect it to like happen. why this year? why not a year ago? why not in ten years? like what? what happened? so i think a lot of the world has collectively gone through a lurch this year to catch up. now like humans can do with many other things. people are like, yeah, man, where's gpt five? what have you done for me lately ? we've already moved on and that's great. i think that's awesome. that's a great human spirit. i hope we never lose. but the first time you hear about this or use it, it feels much more creature like than tool. like yes. and then you get to use it more and you see how it helps you and you see what the limitations are. and it's just like, okay, we have like another thing on the technology tree that has been unlocked. now i do think this time it's different in important ways. and
4:52 am
this is maybe the first tool that can self improve in the way that we understand it. but we need new ideas is like i think we're on a path to self destruction as a species, right now. we need new ideas. we need new technology. if we want to flourish for tens and hundreds and millions of hundreds of thousands of millions of years, more and i think a lot of people see the potential of that in ai. but it's not like a clean story of victory. it comes with, all right, we do have to mitigate these downsides. so we want this. it's great in the short term, it does all these wonderful things to help us. and we can see in the medium term how it can help us cure diseases and find new ways to do all these, solve some of our most pressing problems. but on the other hand, how do we make sure it is a tool that has proper safeguards as it gets really powerful today? not that powerful, not that big of a deal, but people are smart and they see where it's going. and even though we can't quite intuit exponential as well as a species, much we can tell when
4:53 am
something's going to keep going. and this is going to keep going and so you have this question of how do we get as much of the benefits as possible, not it unduly slow those down. and everything you said, you know, i tutor for everyone on earth. yes please. sounds amazing. medical adviser. yes. cure every disease. great but but in the hands of bad actors. how are we going to what kind of limits are we going to put in place? who's going to decide what those are? how are we going to enforce them? like, what are the rules of the road going to be internationally where we have to have some agreement and people realize the challenge of that? that said, this has been like a significant chunk of my time over the last year. i really think the world is going to rise to the occasion and everybody wants to do the right thing. and what about the executive order? how close does that get to getting it right? lots of things there that are like worthy of quibbling and lots of areas to improve it. but as a start and saying we're going to do
4:54 am
something here, i think it's a good a good start. the real the real concern of the industry right now to paraphrase, is how do we make sure we get thoughtful guardrails on the real frontier models without at us all turning it into regulatory capture and stopping open source models and smaller companies create, i think, open source is awesome. i'm not not everybody agrees with that. i'm thrilled you all are doing it. i hope we see more of it. i think i think we should have a conversation about that. sure but keep going because you have some elements. but but it's just it's a hard message to explain to people that like, current models are fine. we don't need heavy regulation here, probably not even for the next couple of generations. but at some point when the model can do like the equivalent output of a whole company and then a whole country and then the whole world, like maybe we do want some collective global supervision of that and some collective decision making,
4:55 am
but to land that message and not say it's like, hey, we're not telling you, you have to totally ignore present harms. we're not saying you have to like you should go after like small companies and open source models . we are saying, you know, trust us, this is going to get really powerful and really scary and you've got to regulate it later. very difficult needle to thread through all of that. yeah. if i could before, we should obviously discuss the open source question, but i wanted to take a step back just on the regulatory question, first of all, which is i think it's worth thinking through. first of all, what are the kinds of concerns that seem to come up that then drive the need for thinking through regulation? i think it's worth teasing that out a bit. so, for example, what you typically have as concerns are in a few categories is one, are concerns about about the kind of outputs you get from these systems. there could be bias, there can be toxic, there can be non-factual and all of those things and those kinds of outputs can cause harms in themselves that we don't want that may be harmful to society.
4:56 am
so there's the there's this need to think through that. question. yes, you've got a second. both the data and the algorithm, the outputs of them and so forth. so there's that question. there's a second question which has to do with how do we think about use and misuse, even if the systems work very well, how do we want to think about what uses are appropriate, what uses are not appropriate? it you know, are we misinformation, disinformation, that's all about kind of use and misuse. we should think through that and what the rules should be for that. then you've got all these other concerns about when this technology works its way through society. implications for labor markets, for things like intellectual property or copyright and so forth. then of course, you've got the kinds of safety questions that sam was talking about as we get more capable systems, how do we want to think about the approach to safety and those? so i think it's worth because what happens in these regulatory conversations, often people are
4:57 am
coming from any one of these different consideration options. yes, i imagine there's a lumping in that happens exactly. and so when you look at, for example, i think many of the conversations that have come up so far, whether it's the uk summit, which is billed as a safety summit, was mostly concerned with that latter question about safety in these systems. if you look at some of the what were the white house commitments that came out of many of our companies, they're mostly again oriented around these safety questions. so i think it's worth thinking through the whole set. so the advisory council or the advisory board of the un that your co-chairing are you breaking down these these four different categories and trying to articulate appropriate regulations around each of these ? well, we're thinking through first of all, this is the un high level advisory body. so we've kind of organized ourselves to think about three kinds of areas. one is to think
4:58 am
about the opportunities and enablers of the opportunities. keep in mind, as we all said, there's so much that's exciting that could benefit the world. so we're thinking about those opportunities and enablers. then we're also second thinking about the these complexities and risks of all these different kinds that i just described. then the third area we're thinking a lot about the governance questions. what's the right way to think about co ordination and governance? and so forth in ways that benefit the whole world? because one of the risks that we have is we likely could end up with a patchwork of regular relations and frameworks that are just so confusing. so i just thought it was useful to think about the broad landscape. but we should come back to the open source question because that is one of the we will just we will just to pile on with james like part of what's happening now that is useful. well, i mean, the first thing that's happening is a lot of people are paying attention to ai, which is a good thing. yes i think if you look
4:59 am
at earlier chapters of the internet, it was not the case that everybody was paying attention. i think when the internet began. i think a lot of people thought it was a fad. and i think, you know, the silver lining of all of the attention on ai is there's a lot of attention on ai, and that means that you have the scrutiny of not just folks outside of the company, but folks inside of the company. i think one of the things a lot of people don't understand about our companies is like, you have a whole lot of very serious people who've spent their whole life thinking about ai safety and what does bias mean, what is an unbiased data set look like? what is toxicity mean? how would we measure it? how would we measure it in hindi? how would we measure it in bengali? how would we measure it in a language we don't have a huge corpus of on the internet? these are each of these questions. it turns out, is hard but important. and i think one thing that i think a lot of outsiders to new tech don't see that i'm inspired by is the amount of people inside of the companies who are incredibly serious and dedicated to each of
5:00 am
the sort of nuanced chapters inside of the whole story of ai safety. yeah. thank you. that's such a good point. yeah. the other thing that i think james is helping to do that we need as an industry is not sort of like 32 different versions. us of regulations like we all are like totally down for regulation. and i agree with everything sam said that my general perspective and my company's perspective and my team's perspective is the current models are pretty, pretty good. if you look at the vast majority of good use cases and like we have not really been been able to find lots of ways that it can be misused, but that may not be true. 2 or 3 generations out. and i think that's something that the industry generally shares as a perspective right now that is very frequently lost in translation, as i think lots of folks start to peer in and try and pull apart the ai questions. and so returning to the open
5:01 am
source question, how do you at meta balance use the benefits of open science and open research and open data sets or algorithms with the fact that bad actors could actually cause a lot of trouble and harm a lot of people and who decides what's open and what's not open and how? how is that decision made? yeah, sure. so just as history meta built, designed and deployed lama, which was the first large language model that we open sourced back in february, as well as lama to which was the most recent open source large language model. we open sourced in june. our company was built on open source technologies and i think it's worth remembering when we look back on the early days of the internet as was apple, as was as google. a lot of the tools that we use to build companies in the early
5:02 am
days was open source technology. it was linux, it was apache, it was mysql, it was php, it was technologies that allowed entrepreneurs to not have to go pay huge licensing fees in order to get access to tools that allowed them to build companies. that became amazing things. so we all owe a lot to open source and the individuals who, whether for technologies or whether for services like wikipedia, sort of volunteered to contribute to technology changes for the benefit of technology artists and for the benefit of ultimately science and medicine and education and all these other things. so information exactly. and so i think, again, as folks are starting, some folks are looking in on this for the first time, like what does open source mean? and i think it's important to remember we are here because of open source technology. is now how did we make the decision to open source lama and lama to first of all,
5:03 am
there was an enormous amount of inbound from scientists, from chemists, from folks who were at serious institutions working on the hardest problems, from folks researching cancer, from folks working in e commerce, working on fraud prevention. i mean, name a problem. we were seeing that there was an enormous appetite to have access to a model that was close to the frontier, that was safe, that had like gone through all of the hoops of fine tuning and adjusting system cards and red teaming, which is where you get various category of sophisticated people to pretend they are bad actors in order to test the veracity of the system. we also spent a long time talking to other folks, other sort of luminaries in our industry. we spent a bunch of time with government, with the white house, with elected officials in other countries to sort of just like bounce off them. the idea that we were
5:04 am
contemplating this, i think for these sorts of technologies, those steps are necessary. the other step that's necessary is publishing all of the work that you did. and so not just publishing the model, but publishing how you what led you to conclude that the model was safe for us? that was a 65 page paper written by the ai safety research hours walking through every single step that we took. i think without that level of seriousness, to show me your work, the model, you can't just say, trust us. and some of this is in the executive order as well. yes. and that's where i think the executive order is pushing at the right topics. it's like for should everything be open source? no so should some things be open sourced? almost certainly. where do you draw the line? hard question. but in the meantime, let's talk about the steps that we're taking to answer it and then have the debate out in open. yeah. yeah. and i look back on the decision in june. i feel very good about it based on the immense amount of i mean, every
5:05 am
week i hear an incredible story of lama2 being used to, you know, it was this week, it was like stanford students writing a creating some glasses that allowed a blind person to understand what was being said in front of them. and that was just some stanford students using an open source model. it wasn't a massively multi billion dollar company. it was a group of students. yeah, no, these tools, when put in the hands in a democratic way, will create marvelous things. yeah, it's true. sam how do you respond to chris's point of view, of course, you're very familiar with it. strong strongly. i don't think i have anything interesting to say. i strongly agree with all of that. i wish i could find something to pick apart for a better panel, but what about you? well, i think i would agree also, but also emphasize a few other things. i think one of the things that open source matters for while in addition to encouraging innovation, entrepreneurship, ship, you know, people to innovate, the stanford students
5:06 am
chris talked about to do that. i think it also gives researchers and others a chance to understand what we're doing. yeah which i think is also quite important. so all those benefits i think are extraordinary. i think we have to take that quite seriously. i think the and i think we want to all of those. i mean, google's grown up, as chris said, as an open source company. i mean, android is open source. we actually have some models that are open source. i think the question for us going forward is, as chris and i think sam implied, but you can jump in, is as these models become more capable 2 or 3 generations from now, how do you want to think through that question? not with the concern about innovators and entrepreneurs and researchers, but with the possible of bad actors. so that's the question we're all going to have to grapple with. i think in a perfect world, i'd like to do all of it. i'd like us to get all the benefits of open source, which are big advocates for. but we are going
5:07 am
to have to think through these other questions going forward. and i think what i find a lot of a lot of comfort in is the fact that that you all are actually having these conversations and you know each other. and there's open discourse. there's some disagreement. but but it's healthy. it's healthy to have these conversations. what's not healthy is not talking about it. and going off and doing our own. and then endangering the rest of us. but and in the next several generations of products, i'm sure that the conversations will continue to be interrogated in 2024, which is this coming year. however, we have right in front of us not only the us election, but elections in 40 other countries around the world. and we've already seen disinformation in over the last eight years and ai has the potential to supercharge disinformation. and so how do you how are you all thinking
5:08 am
about that? how are you all thinking about ensuring that if ai is used in political advertising, it's disclosed like meta is positive thing, but also how do we make sure that we can understand if there are deepfake videos, if there are deepfake audios, if there's personalized. convincing adverts rising, how do we navigate through this in this coming year? yeah, i think one one thing that we've learned over the past ten years, i would say, of focusing quite seriously on our role in elections is, is first of all, ai is the sword as well as the shield. and what that means for us is so much of your ability to scale. how do i find every let's say i find a photo that was misleading? how do i find every other instance
5:09 am
of a photo that looks like it quickly? that's the kind of problem you want to be able to solve, to operate at scale, because that allows you to take for us, for example, if we have something that's reported, start to going viral, factcheckers can look at it. they can decide if it's misleading. they can label it. and then i can help us quickly detect everything that looks like it and those sorts of systems, whether or not an image was generated by ai, whether it was generated with photoshop or whether it was an actual image or whether it was a piece of text agnostic to how the piece of content was created. a lot of the systems that we built over the years can be used and be deployed against this sort of behavior, but also i remember and this may be a sore spot, but i remember there was a deepfake with nancy pelosi and facebook didn't take it down. so i'm not blaming you, but but that happened. and i know that that was a source of a lot of tension. so how do we as consumers trust what we're seeing and how do public
5:10 am
personas trust that their likenesses are not going to be manipulated? yeah. so i think the first thing is, sam and i were just talking about this. there is a fair amount of public awareness on people should be skeptical about deepfakes, like what's interesting about this chapter of where we are with the internet, is there already is a broadly held understanding that like deepfakes are something you should keep an eye out for. i think that's good. um hum. the second thing is, you know, we've gotten a lot more sophisticated at understanding and detecting, detecting this information. the way we do that is working with 90 different fact checkers certified fact checking institutions across 60 languages in order to understand that for content that goes viral, let's have somebody let's have sort of professionals take a look at it and then make sure we label that content. so those are the kinds of things that we've learned over the years to make a huge difference in protecting people
5:11 am
during elections. and for us, all those tools keep getting better every year, and we'll make sure that sort of whatever ai throws at us, like we'll use that for the benefit of our users. what do you think, james yeah, i would actually underscore something chris said. maybe add something to it. one is it's worth remembering that in fact these tools and technologies can actually be part of the solution when you have platforms, i'm sure it's true for you, for you, chris but in our case, like youtube and other platforms at the volume that content is being uploaded, it far outstrips any human's ability to review every single thing. so in many cases ai is actually assisting in that task to be able to do it at scale. we also find that even in our ai systems, when we're doing adversarial testing, ai tools are actually part of helping with this. they should be. exactly. so that's, that's kind of one part. the thing i was going to add is one of the important research tasks we all
5:12 am
have. and we're starting to make progress on it, is how do we develop new techniques to improve our confidence in information? so in our case at google, we've been spending a lot of time investing in research on watermarking and kind of provenance technology. you know, earlier this year we actually, for example, introduced something called synth id, which is a watermarking technique we're building into all our generative systems. it's still early days, but even that already is going to make a difference. so there's a lot of research still to be done on how do we safeguard these systems, how do we try to explain, make clear the provenance of the content and where it came from and how it was created and all of that. that's still work to be done. but this is an area where there's still so much more work to do. and, you know, i think we're just getting started. sam yeah, let's talk about next year's elections and what you anticipate. i really think the
5:13 am
thing chris and i were talking about, i really do think we underrate how much of like societal antibodies have already been built, but it's imperfect. and also that the dangerous thing there is not what we already understand, which is the sort of existing images and videos, but it's all the new stuff, the known unknowns, the unknown unknowns that are going to come from this so we can talk. we talked recently a little bit about this idea of personalized one on one persuasion and how we don't quite know how that's going to go, but we know it's coming. there's a whole bunch of other things that we don't know because we haven't all seen what, you know, generative video or whatever can do and that's going to come fast and furious during an election year. and the only way we're going to manage through that is a very tight feedback loop. we collectively, the industry, society, everything. i suppose the problem is that often the damage is done and then then we notice
5:14 am
and then we correct and, and i also understand about broad antibodies at the societal level because we've now been swimming in a sea of propaganda and misinformation. however where we still have a lot of people ill in this country and elsewhere who believe in conspiracy theories that are easily debunked. but nevertheless, yes, they believe in them. and that has to do with human nature and the way that the brain latches on to information and that's something that that we can't quickly evolve past. yeah, that's we've struggled with that for a long time in human history. you know, conspiracy theories are always that thing somebody else believes. but i don't want to i don't want to like discuss that problem at all. like that is something deep about human psychology. but it's not new with this technology. it may be amplified more than before. amplified. i agree. it's not new relative to what we've already gone through with the internet. it's not clear that ai
5:15 am
generated images are going to amplify it much more. the way you it's all of the other. it's the new things that i can do that i hope we spend a lot of effort worrying about. um, and, and a lot of the three of you and others speak often with elected officials and political leaders, and these are all us based companies. and so how are you thinking about obviously wanting to be inclusive of and have equitable distribution to benefits across the world, but also balancing that with us national security? and i'm sure that comes up in a lot of private conversations. but what can you share with this audience about that? well, i was going to pick up that question. lorraine, was something that sam just said. i think one of the things that is worth keeping in mind, especially in a setting like this at the asia-pacific kind of summit, is that a lot of these questions are really about us, not about the technology, energy
5:16 am
about us as societies, because a lot of these questions about whether it's bias or in all these questions often are very different from place to place. country to country. we found, for example, as we've rolled out these technologies, having to have very, very different conversations in different countries about what counts as bias, what's not in different communities. but these are questions about society and not so much about technology. i think we, you know, society kind absolve itself of this question, of these questions and sam said societies faced these things for centuries, but maybe this technology amplifies this. i think we all have to think about these normative different questions in all our different countries. i think that's important to keep, to keep in mind. but at least hopefully there's a baseline we all agree with, which is hopefully we all adhere to things like basic human rights, human freedoms. those kinds of things safety, violence and so forth. i think
5:17 am
we can agree on. but that's just the floor, not the ceiling. the rest, i think, is up to us collectively. i think in conversation. asians, to your question with various governments, i think there is clearly a serious concern around safety, around misinformation. and i think we all have the responsibility to do more technical work on that front to safeguard these systems and do as much as we can. one of the things many of us have encourage urged governments to do is to help us set up mechanist isms so we can actually all share and learn from each other because we're all experiencing, i'm sure, sam and his team and our teams and chris and we're all learning different things from how our technologies are being used. so there are some mechanisms to be able to share that. i think that's part of the hope that comes out of some of the uk summit, some of the announcements at least in the us, about the ai institute and the kind of the framework birx
5:18 am
that nest and others are going to put in place for how we can share kind of best practices. i think we're going to need that base infrastructure to quite frankly and have it be somewhat harmonized around different countries around the world. so i'm very hopeful about what the kinds of conversations that have been happening recently. yeah and to your question on working outside of the us, it's funny. i would say the biggest difference between operating in the us and operating in a country like brazil or india or indonesia is whatsapp and i think a lot of americans still don't understand the primacy of whatsapp as an experience that matters a lot for folks in huge swaths of the world. it's the world's largest messaging platform. we spend a lot of time with governments and also with ngos in each country where we're big during important events. so during covid we spent a bunch of time in brazil and india and indonesia and mexico trying to understand how folks
5:19 am
were getting information and what information they were getting, figuring out how we could play a role in helping them get up to date accurate information from the public health ministry. turns out the answer on whatsapp was like, just put all over the country like text this number on whatsapp and we'll send you up to date information on social distancing, on vaccines, on masks, on whatever. we do similar things during elections to give folks in region one basically access to some institution they trust in the tool that they're using to get a lot of their information. so we do that. we've learned that it's important not just for the people in those countries, but also for like the ongoing relationship of trust with for us to be brokers of those services like we do really need to do a good job of showing up during a public health emergency, during during a national natural disaster, during an election in during a war or a crisis. we put a lot of time and energy into how are we
5:20 am
helping people get accurate information. and i think that's something i know google does quite seriously as well. and it's something i think we as american companies operating global services have tried to really improve upon every year. yeah, i have to say one more thing, though, on this point, which i'm sure we've all had the experience, which is it's always exciting when you go to other countries, especially developing countries, especially in the global south, where you go to latin america or to africa or asia, where the conversation options are somewhat different. actually, often there's a lot of excitement about the possibility to transform, leapfrog and get access to the world's knowledge and information, but also some different other challenges about how do we make sure that they are including and participate and have the tools and the capabilities to capture relies on these technologies and the possibilities for their own economies. we get for example, we're spending an enormous amount of time on things like
5:21 am
these language moonshots. so for example, google translate works with roughly 133 languages, but we're trying to get to a thousand because is that opens up inclusivity in ways that are extraordinary to people in many, many different languages. us in latin america and africa and other places that i think i think so i think part of the exciting opportunity is to how do we enable economies entrepre occurs everywhere, quite frankly, to take advantage of this and pursue their own ambitions and do amazing things for their countries and for their communities. that's pretty exciting. yeah. well u2's up for the perfect last question. james so let's assume that the four of us are sitting on the stage here again in 2024 because the mayor did such a brilliant job at hosting everyone at apec. there are all coming back. so we'll come back to and we'll have this
5:22 am
conversation and i will ask each of you, what is the most remarkable surprise rise that happened in ai, in your field, in your company in 2024? and what is it that you'll be telling us about sam? do you have a thought? the model capability will have taken such a leap forward that no one expected? wait, say it again. the model capability, like what? these systems can do, will have taken such a leap forward. no one expected that much progress. and why is that a remarkable thing? why is it brilliant? well, it's just different to expectation. i think people have like in their mind, how much better the models will be next year. and it'll be remarkable how much difference it is. okay that's intriguing. plus one to that. but i'd also add that when is gemini going to ship? well, that's the thing we're about to enter an exciting we're all waiting to know when you would like to know. let's give us a
5:23 am
little information. i'm very curious trying to get them. that's going to be so exciting. i think it'll be a story about gemini. we'll be telling you about gemini and the kind of capabilities that these systems are unable to do. but more importantly, i think we'll be talking, i hope lorien about 3 or 4 big societal breakthroughs will have done with this technology. ideally in science, ideally on something that really , really matters for society. i hope we can show you 3 or 4 such things. i mean, proteins and biology are one thing, but i hopefully there'll be more in education, in more in health care. i hope to come back with at least a few of those. yeah, i really agree with what jane said . i think what we'll be talking about is something none of us anticipate that is positive, that comes from somebody who previously would not have been able to build somebody or some institution that would not have been able to accomplish something until one of these models and the associated tech that came with it. i think that's part of why we're all so
5:24 am
excited about this. yes. yes yes. i think in health care alone and in disease. quest alone, there will be some amazing breakthroughs. amazing. yeah could i add one more? yes, i also think we'll have gotten to a place, lorraine, to your earlier question, where we've we're figuring out and struck the right balance between making sure we have mechanisms to limit and address the harms and things we're concerned about. but at the same time have some extraordinary kind of pro-innovation things that enable the opportunities that we all want. i think we somehow get stuck in this debate. it's one or the other. we have to do both. we have to do both. well i'm i'm just pinning my hopes on the three of you to get that one right. so thank you. thanks to the panel. thank you. thank you. thank you.
5:25 am
>> driver, bye. >> hi. i'm will b. mixture weltake a walk with me. >> i just love taking strolls in san francisco. they are so many cool and exciting things to see. like -- what is that there? what is that for? hi. buddy. how are you. >> what is that for. >> i'm firefighter with the san francisco fire department havings a great day, thank you for asking. this is a dry sand pipe. dry sand pipes are multilevel
5:26 am
building in san francisco and the world. they are a piping system to facilitate the fire engineaire ability to pump water in a buildings that is on fire. >> a fire truck shows up and does what? >> the fire engine will pull up to the upon front of the building do, spotting the building. you get an engine in the area that is safe. firefighters then take the hose lyoning line it a hydrant and that give us an endsless supply of water. >> wow, cool. i don't see water, where does it come from and where does it go? >> the firefighters take a hose from the fire engine to the dry sand pipe and plug it in this inlet. they are able to adjust the pressure of water going in the inlet. to facilitate the pressure needed for any one of the floors
5:27 am
on this building. firefighters take the hose bunked and he will take that homes upon bundle to the floor the fire is on. plug it into similar to this an outlet and they have water to put the fire out. it is a cool system that we see in a lot of buildings. i personal low use federal on multiple fires in san francisco to safely put a fire out. >> i thought that was a great question that is cool of you to ask. have a great day and nice meeting you. >> thank you for letting us know what that is for. thanks, everybody for watching! bye! [music]
5:28 am
>> >> (indiscernible) faces transformed san francisco street and sidewalks. local business communities are more resilient and our neighborhood centers on more vibrant ask lively. sidewalks and parking lanes can be used for outdoor seating, dining, merchandising and other community activities. we're counting on operators of shared spaces to ensure their sites are accessible for all and safe. hello, san francisco. i love it when i can cross the street in our beauty city and not worry whether car can see me and i want me and my grandma to be safe when we do. we all want to be safe. that's why our city is making sure curb areas near street corners are clear of parked cars and any other structures, so that people
5:29 am
driving vehicles, people walking, and people biking can all see each other at the intersection. if cars are parked which are too close to the crosswalk, drivers can't see who is about to cross the street. it's a proven way to prevent traffic crashes. which have way too much crashes and fatalities in our city. these updates to the shared spaces program will help to ensure safety and accessibility for everyone so we can all enjoy these public spaces. more information is available at sf dot gov slash shared
5:30 am
>> bob morts, global german and chief executive officer of pwc. >> good morning everyone. thank you to be the early birds. sitting here, fighting all the possibilities so that we guarantee a great conversation and my name is kim way, the moderator for this panel. the gobel economy and state of the world. i am