APEC 2023 SFGTV December 20, 2023 5:00pm-6:00pm PST
5:00 pm
5:01 pm
good afternoon, everyone. i am honored to welcome all of you here to the great city of san francisco, bringing the economic leaders from across the asia pacific is an opportunity for us to learn from each other. san francisco has long had a history of serving as the connection between the united states and the asia pacific. our golden gate has welcomed those seeking to bridge our cultures and our economies. in san francisco, we have the spirit to take risks, to seek out what's new and different. we search for the next great idea. we don't just have the dreamers who look beyond what's possible, but we also have the builders with the
5:02 pm
talent to make the dream a reality. for decades, the san francisco bay area has provided the culture, the talent, and the spirit to change the world. the spirit of apec, of finding common ground, of the power of economic collaboration, and of celebrating diverse cultures, reflects that same spirit of innovation. it is a spirit that has driven san francisco and the bay area to become the economic engine of the world. it is the spirit that will continue to propel businesses from across the asia pacific region to be part of what is happening right here, right now. we are in the spring of yet another innovation boom in san francisco, driven by the rapid rise of artificial intelligence. we have more ai job openings than any major city in the country. of
5:03 pm
the top 20 ai companies in the world, eight are located right here in san francisco. the conversations happening in this city, and the conversation happening here today, these are the ideas that are going to transform our world in the decades to come. future generations will look back on these discussions as the start of something entirely new, and it's all happening right here in san francisco. economies, industry, and society change rapidly. google was started out of a garage down the highway 101 freeway. meta began as a website for rating classmates' appearances. openai was virtually unheard of last year at this time. now chatgpt has 100
5:04 pm
million users, google has over a billion users, and meta has over 3 billion users. the entire world is using products imagined and built right here. to discuss all of this, we are going to hear about the future of ai from former secretary of state john kerry and salesforce ceo marc benioff, whose companies have been leaders in our city and in efforts to transform the world through innovation. but first, it is my pleasure to introduce the next panel, a group representing the very best of how we dream in san francisco: people who take time to listen for new ideas, who envision what those ideas can become, who create successful and thriving businesses but also make life better for people around the globe. please join me in welcoming laurene powell jobs of
5:05 pm
the emerson collective, chris cox of meta, james manyika of google, and sam altman of openai. >> okay, i feel like you guys are a little far away. did we bunch up? it does feel a little far. feel free to gather in for our conversation. hi, everyone. good afternoon. i am very pleased to be here hosting a discussion of one of the most important topics in
5:06 pm
technology and around the world: artificial intelligence. the era of generative ai is moving at a breathtaking pace, making step-function changes in a short amount of time. it's an astonishing time to be alive, and it feels different from other disruptive technologies. this year alone, discussions have ranged from possibilities of apocalyptic doom to the exhilarating promise of unprecedented advancements, as these technologies speak and converse with us, create images, and challenge our understanding of human intelligence. however, many questions remain around how we humans can best utilize ai for the good of humanity and craft a future that's equitable, responsible, and aligned with our core values. to discuss this critical topic, as the mayor said, we're
5:07 pm
fortunate to be joined today by three of the world's leading thinkers and developers of ai: chris cox, chief product officer of meta; james manyika, google's svp of research, technology and society; and sam altman, ceo of openai. let's give them a warm welcome. well, colleagues, ai is a wondrous technology. researchers are using ai to create new proteins and discover new drugs. ai tutors may transform the way that children are educated. there are potential financial, medical, workforce, and climate benefits that we can imagine, and those that we can't imagine. and there are risks. but before we get into the existential risks and the regulatory environment that we'd all like to see, i'd like to begin the conversation by
5:08 pm
looking through your eyes for a minute. chris, james, and sam, why are you devoting your life to this work? we can start with any of you, but since i can see you: chris. >> yeah, i'm happy to start. it's funny, i started studying ai back in 2001, back when it was a lot more arcane of a science than i think it is today. and i remember being first attracted to it because it felt to me like our ability to understand learning would help us understand ourselves, would help us understand how we learn and our own consciousness. part of what's been so interesting about it today is that the technology that's allowing ai to start to be really good is modeled after the way our minds work. you know, you don't teach a kid, i have two little kids, like you don't
5:09 pm
teach a kid: this is a noun and this is a verb and this is how you put a prepositional phrase together. you just speak to them and they learn through experiencing the world. and i think part of why this is such an exciting period for ai is that we're starting to see the technologies we're building become a little bit closer to the way we learn, which is through exposure to one another. and building them that way, i believe, has the promise of making the technology really humane, really modeled after the way that we interact with the world, our own judgments about what's right and wrong, what feels good and what doesn't feel good. so i think over the years we've come to a pretty neat place. and like you said, i think we all feel the excitement, but also, at the same time, the importance of discipline and seriousness about making sure that the way we usher it into the world is responsible. >> yeah, of course. and
5:10 pm
for you, since it's been a 20-year period of focus, is this your life's work? >> it's certainly what i've spent the most of my life on. i started at facebook back in 2005, so i was, i think, our 13th or 14th engineer, and most of my work there has been in trying to design software that gives people the content they care about, that connects them to their friends and family. if you go back to our company's history, i think similar to google, the fundamental innovation was really about getting good at recommending content for each person: personalizing content, understanding vast amounts of data, helping each person get uniquely the stuff they care about. for me, that was sort of behind the scenes, and that was really important. and part of what's starting to happen now is people are, i think, having contact with it by talking to it. i think that's part of what chatgpt brought us for the first
5:11 pm
time: like, oh, now i can talk to it. and i think that embodiment is part of what's taken this tech that's been around for a little while and suddenly given it a mode of interacting with us. >> james, what about you? you were studying ai, went to mckinsey, and then decided to make a real career pivot. >> well, it's wonderful to be here with sam and chris and you, laurene; as always, i'm looking forward to the conversation. for me, laurene, the very first thing i ever published in my whole life was in 1992, when i was an undergraduate: it was a paper on training and modeling neural networks. i then went on to do a phd in ai and robotics at oxford, and at the time, by the way, it was a very different time for the field. my advisors actually advised me not to put the word ai in my dissertation because no one would take me seriously. so we
5:12 pm
called it something else. but when i look back from that time to where we are, the progress has been extraordinary. in the intervening time, i was at mckinsey looking at these big problems in society: economic growth, productivity, climate change, and so forth. part of it was realizing that ai actually has the possibility of helping us tackle all of these things. so when i think about the work we're doing at google, for example, there are several areas that motivate and excite us. the possibility of actually helping people in very assistive ways: to do some of the most imaginative, creative endeavors, to learn languages, to speak languages, to get past access barriers and linguistic difficulty. ai will help with that. the possibility that, in fact, we could actually transform economies, all the stuff i spent my time at the
5:13 pm
mckinsey global institute thinking about: productivity growth, expanding prosperity, how we power companies, economies, sectors. ai will help with that. then i think about science, the possibility of having these extraordinary breakthrough innovation engines to advance science. you mentioned proteins; i think it's quite a stunning thing that my colleagues at deepmind, with alphafold, were able to predict the protein structures of all 200 million proteins known to science and then make that available to everybody. it's astonishing. but then i think also about some of our pressing challenges today. think about access to maternal health in low-income countries and communities. think about climate change; you spend a lot of time thinking about the effects of climate change. think about all the things we see in california wildfires; the mayor can talk a lot about what we see in california. all of these things give us the possibility of actually
5:14 pm
addressing and enhancing how we tackle all of this. this is what motivates me and excites me. >> definitely my life's work, and what i always wanted to work on since i was a little kid. i studied it in school. it wasn't working at the time, and i got kind of sidetracked for a while, but as soon as it looked like we had an attack vector, it was very clear this was what i wanted to work on. i think this will be the most transformative and beneficial technology humanity has yet invented. more generally, i think the 2020s will be the decade where humanity as a whole begins the transition from scarcity to abundance. we will have abundant intelligence that far surpasses our expectations, same thing for energy, same thing for health, a few other categories too. the sort of technological change happening now is going to so change the constraints of the way we live, the sort of economic and social structures, and what's possible, that i think this is going to be the
5:15 pm
greatest leap forward that we've had yet, the greatest leap forward of any of the big technological revolutions we've had so far. so i'm super excited; i can't imagine anything more exciting to work on. and on a personal note, four times now in the history of openai, the most recent just in the last couple of weeks, i've gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward. getting to do that is the professional honor of a lifetime, so it's just so fun to get to work on that. >> it's remarkable that each of you has been working on this for decades, and yet now we find ourselves in a particular moment of inflection. so i wonder if you can help ground the audience in understanding the ai lab landscape. how does each of you think of where we are with the technology and the overall development
5:16 pm
of generative ai? james, feel free. >> yeah, i'm happy to start. i think it's worth reminding ourselves that ai has actually been with us for a while. a lot of the progress started to happen in the early 2000s with things like image recognition and natural language processing, and in fact many people today, even before sam's extraordinary moment last year, were already using ai. for example, there are over a billion people who use google translate. that's ai. if you're using search, that's ai. but i think there was a particular moment that brought us here, in 2017, when my colleagues at google research published a paper called "attention is all you need." that's the paper that introduced the transformer-based architectures that are the underpinnings of these large language models. a lot of things rapidly accelerated from that moment.
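The transformer architecture James describes is built around one central operation, scaled dot-product attention, from "attention is all you need" (Vaswani et al., 2017). Below is a minimal NumPy sketch of just that operation; the shapes and the toy usage are illustrative, not taken from any production model discussed on the panel.

```python
# minimal scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V
# real transformers add multiple heads, masking, and learned projections.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K: (seq_len, d_k); V: (seq_len, d_v). Returns (seq_len, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # token-pair similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted mix of values

# toy self-attention over 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```

Real models stack many such layers; the sketch shows only the core formula.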
5:17 pm
we all started to train these large language models, and they all started to do these very general things, not just narrow things. remember, before this we had what you might call narrow ai: you could do ai for speech synthesis, for image classification, and all of these things. but these large language model systems were suddenly able to do very general things, and things just accelerated. so i think that was a pivotal moment that has brought us to where we are. and so where are we? well, i think we're at a place where these systems are now very broad, very general: everything from writing poetry to composing music. they've also become what's now termed multimodal, so it's not just language and text but also images and video and coding. it's a very exciting time. we're starting to see them do very well on
5:18 pm
benchmark tests of how well they do on a range of cognitive tasks and capabilities. there's something called big-bench, which has something like 204 tasks you can evaluate. so they're starting to be very, very good. but i think it's worth pointing out that they still have some serious limitations. as amazing as all these general capabilities are, there are still mistakes: factuality and so forth. these are things i think will get better, but those limitations are quite real. that's why i think it's very important, going forward from where we are now, to have a deeper understanding of what these systems are good at, what they're not good at, and how we solve and augment those capabilities and link them to other systems. but i'm actually pretty excited, because what are now called the scaling laws, and i'm sure sam will also get into this, mean that as you scale these systems, they seem to get more capable and more powerful.
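The scaling laws mentioned here are empirical power-law fits relating a model's loss to its parameter count and training-data size. As one concrete published example (a general result, not specific to any company on the panel), the "chinchilla" fit of Hoffmann et al. (2022) can be evaluated in a few lines; the constants below are that paper's fitted values.

```python
# "chinchilla" scaling-law fit (hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# N = parameters, D = training tokens, L = predicted loss.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28  # fitted constants

def predicted_loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# scaling parameters and data together keeps driving predicted loss down,
# which is the "they seem to get more capable" regularity james describes.
for scale in (1, 10, 100):
    n, d = 1e9 * scale, 20e9 * scale  # roughly 20 tokens per parameter
    print(f"{n:.0e} params, {d:.0e} tokens -> loss {predicted_loss(n, d):.3f}")
```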
5:19 pm
the possibilities are very, very exciting. >> i agree, they're very exciting. they also can be very concerning, and for some people really frightening. so i want to read to you, sam, a quote from the public intellectual yuval harari, who said: ai is the first tool in history that can create new ideas by itself. there's a danger that we will spend all our effort on developing ai at a time when we don't understand ourselves, which is right now, and then letting ai take over, and that would lead to human catastrophe. we obviously don't want that. so that brings up this question of proper regulation and proper guardrails, and i think the industry coming together as it has and taking some steps forward around how we think about
5:20 pm
this collectively. the industry has taken a very healthy step forward by launching the frontier model forum, and just in the last two weeks we've had a lot of regulatory bodies come forward: the white house executive order, the bletchley declaration, the advisory body on ai that the un has convened. so i'd like each of you to talk a little bit about how you think about some of the existential threats that yuval and others have articulated, as well as the state of regulation: what's proper, what's too much? how do we get it right now and then be open to evolving as the technology evolves? >> i had dinner with yuval in tel aviv in early june of this year. he was very concerned, and i really do understand why. if you have not been closely tracking the field, it feels like things just went vertical. sure, people maybe were doing stuff
5:21 pm
before, but not like this. people had these papers here, this model here, this narrow thing there, and people that use machine translation don't really feel like they're using ai. all of a sudden there was the perception that something has qualitatively changed: now i can talk to this thing. it's like the star trek computer i was always promised, and i didn't expect it to happen. why this year? why not a year ago? why not in ten years? what happened? so i think a lot of the world has collectively gone through a lurch this year to catch up. now, as humans do with many other things, people are like, yeah, man, where's gpt-5? what have you done for me lately? we've already moved on, and that's great. i think that's awesome; that's a great human spirit i hope we never lose. but the first time you hear about this or use it, it feels much more creature-like than tool-like. >> yes. >> and then you get to use it more, and you see how it helps you and what the limitations are, and it's just like, okay, we have another thing on the technology
5:22 pm
tree that has been unlocked. now, i do think this time it's different in important ways, and this is maybe the first tool that can self-improve in the way that we understand it. and we do need new ideas: i think we're on a path to self-destruction as a species right now. we need new ideas, we need new technology, if we want to flourish for tens of thousands, hundreds of thousands, millions of years more, and i think a lot of people see the potential of that in ai. but it's not a clean story of victory. it comes with: all right, we do have to mitigate these downsides. so we want this. it's great in the short term; it does all these wonderful things to help us, and we can see in the medium term how it can help us cure diseases and find new ways to solve some of our most pressing problems. but on the other hand, how do we make sure it is a tool that has proper safeguards as it gets really powerful? today it's not that powerful, not that big of a
5:23 pm
deal, but people are smart and they see where it's going. even though we can't quite intuit exponentials as a species, we can tell when something's going to keep going, and this is going to keep going. so you have this question of how do we get as much of the benefit as possible and not unduly slow it down. and everything you said: an ai tutor for everyone on earth? yes, please, sounds amazing. medical adviser? yes. cure every disease? great. but in the hands of bad actors, what kind of limits are we going to put in place? who's going to decide what those are? how are we going to enforce them? what are the rules of the road going to be internationally, where we have to have some agreement? people realize the challenge of that. that said, this has been a significant chunk of my time over the last year, and i really think the world is going to rise to the occasion; everybody wants to do the right thing. >> and what about the executive order? how close does that get to getting it right? >> lots of things there that are worthy of
5:24 pm
quibbling and lots of areas to improve, but as a start, as saying we're going to do something here, i think it's a good start. the real concern of the industry right now, to paraphrase, is how do we make sure we get thoughtful guardrails on the real frontier models without turning it into regulatory capture and stopping open-source models and smaller companies. i think open source is awesome; not everybody agrees with that. i'm thrilled you all are doing it, and i hope we see more of it. >> i think we should have a conversation about that. >> sure, but keep going, because you have some elements. >> it's just a hard message to explain to people: current models are fine, we don't need heavy regulation here, probably not even for the next couple of generations. but at some point, when the model can do the equivalent output of a whole company, and then a whole country, and then the whole world,
5:25 pm
maybe we do want some collective global supervision of that and some collective decision-making. but to land that message and not say, hey, you have to totally ignore present harms, or you should go after small companies and open-source models; to say instead, trust us, this is going to get really powerful and really scary and you've got to regulate it later; that's a very difficult needle to thread through all of that. >> if i could, we should obviously discuss the open-source question, but i wanted to take a step back on the regulatory question first. i think it's worth thinking through, first of all, what kinds of concerns seem to come up that then drive the need for regulation; it's worth teasing that out a bit. the concerns typically fall into a few categories. one is concerns about the kind of outputs you get from these systems: they can be biased, they can be toxic, they can be
5:26 pm
non-factual, and those kinds of outputs can cause harms in themselves that may be harmful to society, so there's a need to think through that question: the data, the algorithms, and their outputs. there's a second question, which has to do with how we think about use and misuse: even if the systems work very well, how do we want to think about what uses are appropriate and what uses are not? misinformation and disinformation are all about use and misuse; we should think through that and what the rules should be. then you've got all these other concerns about when this technology works its way through society: implications for labor markets, for things like intellectual property or copyright, and so forth. then, of course, you've got the kinds of safety questions that sam was talking about: as we get more capable systems, how do we want
5:27 pm
to think about the approach to safety? i think it's worth teasing these apart, because in these regulatory conversations, people are often coming from any one of these different considerations. >> yes, i imagine there's a lumping-in that happens. >> exactly. so when you look at many of the conversations that have come up so far: the uk summit, which was billed as a safety summit, was mostly concerned with that latter question about safety in these systems, and if you look at the white house commitments that came out of many of our companies, they're mostly again oriented around these safety questions. so i think it's worth thinking through the whole set. >> so is the advisory body of the un that you're co-chairing breaking down these four different categories and trying to articulate appropriate regulations around each of them? >> well, first of all, this is the un
5:28 pm
high-level advisory body, and we've organized ourselves to think about three kinds of areas. one is the opportunities and the enablers of those opportunities: keep in mind, as we all said, there's so much that's exciting that could benefit the world. second, we're thinking about the complexities and risks of all the different kinds i just described. the third area is the governance questions: what's the right way to think about coordination and governance in ways that benefit the whole world? because one of the risks we have is that we could likely end up with a patchwork of regulations and frameworks that are just confusing. so i thought it was useful to lay out the broad landscape, but we should come back to the open-source question. >> just to pile on with james about part of what's happening now that is useful: well, the first thing that's happening
5:29 pm
is a lot of people are paying attention to ai, which is a good thing. if you look at earlier chapters of the internet, it was not the case that everybody was paying attention; when the internet began, a lot of people thought it was a fad. the silver lining of all the attention on ai is that you have the scrutiny of not just folks outside of the company but folks inside of the company. one of the things a lot of people don't understand about our companies is that you have a whole lot of very serious people who've spent their whole lives thinking about ai safety: what does bias mean? what does an unbiased data set look like? what does toxicity mean? how would we measure it? how would we measure it in hindi, in bengali, in a language we don't have a huge corpus of on the internet? each of these questions, it turns out, is hard but important. and one thing that a lot of
5:30 pm
outsiders to new tech don't see, that i'm inspired by, is the number of people inside of the companies who are incredibly serious and dedicated to each of the nuanced chapters inside the whole story of ai safety. >> thank you, that's such a good point. >> the other thing that james is helping to do, which we need as an industry, is not to have sort of 32 different versions of regulations. we are all totally down for regulation, and i agree with everything sam said. my general perspective, and my company's and my team's perspective, is that the current models are pretty good if you look at the vast majority of use cases, and we have not really been able to find lots of ways they can be misused. but that may not be true 2 or 3 generations out. i think that's something the industry generally shares as a perspective right now that is very frequently lost in translation as lots of
5:31 pm
folks start to peer in and try to pull apart the ai questions. >> so returning to the open-source question: how do you at meta balance the benefits of open science, open research, and open data sets and algorithms with the fact that bad actors could cause a lot of trouble and harm a lot of people? and who decides what's open and what's not, and how is that decision made? >> sure. so just as history: meta built, designed, and deployed llama, the first large language model that we open-sourced, back in february, as well as llama 2, the most recent open-source large language model, which we open-sourced in june. our company was built on open-source technologies, and i think it's worth remembering, when we look back on the early days of the internet, as was
5:32 pm
apple, as was google: a lot of the tools that we used to build companies in the early days were open-source technology. it was linux, it was apache, it was mysql, it was php; technologies that allowed entrepreneurs not to have to pay huge licensing fees in order to get access to tools that let them build companies that became amazing things. so we all owe a lot to open source and to the individuals who, whether for technologies or for services like wikipedia, volunteered to contribute for the benefit of technologists and ultimately for the benefit of science and medicine and education and all these other things. >> and information. >> exactly. so again, as some folks are looking in on this for the first time, asking what open source means, i think it's important to remember we are here because of open-source
5:33 pm
technologies. now, how did we make the decision to open-source llama and llama 2? first of all, there was an enormous amount of inbound from scientists, from chemists, from folks at serious institutions working on the hardest problems, from folks researching cancer, from folks working in e-commerce on fraud prevention. name a problem: we were seeing that there was an enormous appetite for access to a model that was close to the frontier, that was safe, that had gone through all of the hoops of fine-tuning, adjusting system cards, and red-teaming, which is where you get various categories of sophisticated people to pretend they are bad actors in order to test the veracity of the system. we also spent a long time talking to other folks, other luminaries in our industry. we spent a bunch of time with government, with the
5:34 pm
white house, with elected officials in other countries, to bounce off them the idea that we were contemplating this. i think for these sorts of technologies, those steps are necessary. the other step that's necessary is publishing all of the work that you did: not just publishing the model, but publishing what led you to conclude that the model was safe. for us, that was a 65-page paper written by the ai safety researchers walking through every single step that we took. without that level of seriousness, of show-me-your-work, you can't just say, trust us. some of this is in the executive order as well, and that's where i think the executive order is pushing at the right topics. should everything be open-sourced? no. should some things be open-sourced? almost certainly. where do you draw the line? hard question. but in the meantime, let's talk about the steps that we're taking to answer it and then
5:35 pm
have the debate out in the open. and when i look back on the decision in june, i feel very good about it. every week i hear an incredible story of llama 2 being used: this week it was stanford students creating some glasses that allowed a blind person to understand what was being said in front of them. and that was just some stanford students using an open-source model. it wasn't a massive multibillion-dollar company; it was a group of students. >> these tools, when put in people's hands in a democratic way, will create marvelous things. >> yeah, it's true. >> sam, how do you respond to chris's point of view? of course, you're very familiar with it. >> i don't think i have anything interesting to say; i strongly agree with all of that. i wish i could find something to pick apart for a better panel. >> and what about you? >> well, i would agree also, but i'd emphasize a few other things. one of the things that open source matters for, in
5:36 pm
addition to encouraging innovation and entrepreneurship, enabling people to innovate, the stanford students chris talked about, to do that, is that it also gives researchers and others a chance to understand what we're doing, which i think is quite important. so all those benefits i think are extraordinary; we have to take that quite seriously, and we want all of those. google has grown up, as chris said, as an open-source company: android is open source, and we actually have some models that are open source. i think the question for us going forward, as chris and i think sam implied, but you can jump in, is that as these models become more capable 2 or 3 generations from now, how do we want to think through that question? not with concern about innovators and entrepreneurs and researchers, but with the possibility of bad actors. so that's the question we're all going to have to grapple with. i think in a perfect world, i'd
5:37 pm
like to do all of it: i'd like us to get all the benefits of open source, which we are big advocates for, but we are going to have to think through these other questions going forward. >> what i find a lot of comfort in is the fact that you all are actually having these conversations and you know each other. there's open discourse; there's some disagreement, but it's healthy to have these conversations. what's not healthy is not talking about it, going off and doing our own thing, and then endangering the rest of us. and in the next several generations of products, i'm sure the conversations will continue to be interrogated. in 2024, this coming year, however, we have right in front of us not only the us election but elections in 40 other countries around the world. we've already seen disinformation over the last eight years, and ai has the potential to supercharge
5:38 pm
disinformation. so how are you all thinking about that? how are you thinking about ensuring that if ai is used in political advertising, it's disclosed, as meta is doing, which is a positive thing? and also, how do we make sure we can tell if there are deepfake videos, deepfake audio, or personalized, convincing advertising? how do we navigate through this in the coming year? >> i think one thing that we've learned over the past ten years, i would say, of focusing quite seriously on our role in elections is, first of all, that ai is the sword as well as the shield. what that means for us is that so much of this is about your ability to scale. how do i
5:39 pm
find, let's say i find a photo that was misleading, how do i find every other instance of a photo that looks like it, quickly? that's the kind of problem you want to be able to solve to operate at scale, because that allows you, for example, if we have something that's reported and starting to go viral, to have fact-checkers look at it, decide if it's misleading, and label it, and then ai can help us quickly detect everything that looks like it. those sorts of systems are agnostic to how the piece of content was created: whether an image was generated by ai, generated with photoshop, whether it was an actual image or a piece of text. a lot of the systems that we built over the years can be deployed against this sort of behavior.
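Chris doesn't name the matching technology here, but the "find every photo that looks like it" problem is commonly handled with perceptual hashing (Meta has open-sourced one such algorithm, PDQ). A minimal sketch of the general idea using a simple average hash; the filenames are hypothetical and this is not Meta's production system.

```python
# near-duplicate image matching with a simple perceptual "average hash":
# visually similar images get hashes that differ in only a few bits.
# illustrative only; production systems use stronger hashes such as pdq.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """8x8 grayscale thumbnail; each bit = pixel brighter than the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# hypothetical usage: flag uploads within a small hamming radius of an
# image that fact-checkers have already labeled as misleading.
labeled = average_hash("labeled_misleading.jpg")
for upload in ("upload1.jpg", "upload2.jpg"):
    if hamming(labeled, average_hash(upload)) <= 5:  # tolerates small edits
        print(upload, "matches the labeled image")
```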
5:40 pm
>> but also, and this may be a sore spot, i remember there was a deepfake with nancy pelosi and facebook didn't take it down. i'm not blaming you, but that happened, and i know it was a source of a lot of tension. so how do we as consumers trust what we're seeing, and how do public personas trust that their likenesses are not going to be manipulated? >> so the first thing is, sam and i were just talking about this: there is a fair amount of public awareness that people should be skeptical about deepfakes. what's interesting about this chapter of where we are with the internet is that there already is a broadly held understanding that deepfakes are something you should keep an eye out for. i think that's good. the second thing is, we've gotten a lot more sophisticated at understanding and detecting disinformation. the way we do that is by working with 90 different certified fact-checking institutions across 60 languages, so that for content that goes viral, professionals can take a look at it, and then we make sure we label that
5:41 pm
content. those are the kinds of things we've learned over the years that make a huge difference in protecting people during elections. and for us, all those tools keep getting better every year, and we'll make sure that whatever ai throws at us, we'll use it for the benefit of our users. >> what do you think, james? >> i would actually underscore something chris said and maybe add something to it. one is that it's worth remembering that these tools and technologies can actually be part of the solution. when you have platforms, and i'm sure it's true for you, chris, but in our case with youtube and other platforms, the volume of content being uploaded far outstrips any human's ability to review things. so in many cases ai is actually assisting in that task, to be able to do it at scale. we also find that even in our ai systems, when we're doing adversarial testing, ai tools are actually part of helping with this. >> they should be.
5:42 pm
>> exactly. so that's one part. the thing i was going to add is that one of the important research tasks we all have, and we're starting to make progress on it, is how do we develop new techniques to improve our confidence in information? in our case at google, we've been spending a lot of time investing in research on watermarking and provenance technology. earlier this year, for example, we introduced something called synthid, which is a watermarking technique we're building into all our generative systems. it's still early days, but even that is already going to make a difference. there's a lot of research still to be done on how we safeguard these systems, how we make clear the provenance of content: where it came from, how it was created. this is an area where there's still so much more work to do, and i think we're just getting started.
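Google has not published SynthID's internals, so as a stand-in here is a sketch of one publicly documented approach to watermarking generated text: the "green list" scheme of Kirchenbauer et al. (2023), in which generation is biased toward a keyed subset of tokens and a detector measures how over-represented that subset is. Purely illustrative, and explicitly not how SynthID works.

```python
# "green list" text-watermark detection (kirchenbauer et al., 2023 style).
# a keyed hash splits the vocabulary per context; watermarked generation
# favors "green" tokens, so their frequency in suspect text is the signal.
import hashlib
import math

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    h = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return h[0] % 2 == 0  # about half of tokens are green in each context

def z_score(tokens: list[str]) -> float:
    """Standard deviations above the 50% green rate expected by chance."""
    n = len(tokens) - 1
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return (greens / n - 0.5) * math.sqrt(n) / 0.5

# unwatermarked text scores near 0; text sampled with a green-token bias
# would score high (e.g. z > 4 suggests the watermark is present).
print(z_score("the quick brown fox jumps over the lazy dog".split()))
```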
5:43 pm
>> sam, let's talk about next year's elections and what you anticipate. >> the thing chris and i were talking about: i really do think we underrate how much societal antibodies have already been built, though it's imperfect. and the dangerous thing there is not what we already understand, which is the sort of existing images and videos; it's all the new stuff, the known unknowns, the unknown unknowns, that are going to come from this. we talked recently a little bit about this idea of personalized one-on-one persuasion and how we don't quite know how that's going to go, but we know it's coming. there's a whole bunch of other things that we don't know, because we haven't all seen what generative video or whatever can do, and that's going to come fast and furious during an election year. the only way we're going to manage through that is a very tight feedback loop: we collectively, the industry, society, everything. >> i suppose the
5:44 pm
problem is that often the damage is done, and then we notice, and then we correct. and i also understand about broad antibodies at the societal level, because we've now been swimming in a sea of propaganda and misinformation. however, we still have a lot of people in this country and elsewhere who believe in conspiracy theories that are easily debunked; nevertheless, they believe in them. that has to do with human nature and the way the brain latches onto information, and that's something we can't quickly evolve past. >> yeah, we've struggled with that for a long time in human history. you know, conspiracy theories are always that thing somebody else believes. i don't want to dismiss that problem at all; that is something deep about human psychology. but it's not new with this technology. it
5:45 pm
may be amplified more than before. >> amplified, i agree. but it's not new relative to what we've already gone through with the internet, and it's not clear that ai-generated images are going to amplify it much more. it's all of the other, new things that ai can do that i hope we spend a lot of effort worrying about. >> the three of you and others speak often with elected officials and political leaders, and these are all us-based companies. so how are you thinking about wanting to be inclusive and to have equitable distribution of benefits across the world, while also balancing that with us national security? i'm sure that comes up in a lot of private conversations, but what can you share with this audience about that? >> well, i was going to pick up that question, laurene, with something that sam just said. i think one of the things that is worth keeping in mind, especially in a setting like
5:46 pm
this asia-pacific summit, is that a lot of these questions are really about us, not about the technology: about us as societies. a lot of these questions, whether about bias or anything else, often are very different from place to place, country to country. we've found, for example, as we've rolled out these technologies, that we have to have very, very different conversations in different countries about what counts as bias and what doesn't in different communities. these are questions about society and not so much about technology, and i don't think society can absolve itself of these questions. as sam said, societies have faced these things for centuries, but maybe this technology amplifies them. i think we all have to think about these different normative questions in all our different countries; that's important to keep in mind. but at least hopefully there's a baseline we all agree on, which is that hopefully we all
5:47 pm
adhere to things like basic human rights and human freedoms; those kinds of things, safety, violence, and so forth, i think we can agree on. but that's just the floor, not the ceiling; the rest, i think, is up to us collectively. in conversations, to your question, with various governments, there is clearly a serious concern around safety and around misinformation, and i think we all have the responsibility to do more technical work on that front to safeguard these systems and do as much as we can. one of the things many of us have urged governments to do is to help us set up mechanisms so we can all share and learn from each other, because we're all experiencing, i'm sure, sam and his team, our teams, and chris and his, we're all learning different things from how our technologies are being used. so there should be some mechanisms to share that. i think that's part of the hope that comes out of some of
5:48 pm
the uk summit and some of the announcements, at least in the us, about the ai safety institute and the frameworks that nist and others are going to put in place for how we can share best practices. quite frankly, i think we're going to need that base infrastructure, and to have it be somewhat harmonized across different countries around the world. so i'm very hopeful about the kinds of conversations that have been happening recently. >> and to your question on working outside of the us, it's funny, i would say the biggest difference between operating in the us and operating in a country like brazil or india or indonesia is whatsapp. i think a lot of americans still don't understand the primacy of whatsapp as an experience that matters a lot for folks in huge swaths of the world; it's the world's largest messaging platform. we spend a lot of time with governments and also with ngos in each country where we're big during important events. so during covid we spent
5:49 pm
a bunch of time in brazil and india and indonesia and mexico trying to understand how folks were getting information and what information they were getting, figuring out how we could play a role in helping them get up-to-date, accurate information from the public health ministry. it turns out the answer on whatsapp was to put up notices all over the country: text this number on whatsapp and we'll send you up-to-date information on social distancing, on vaccines, on masks, on whatever. we do similar things during elections, to give folks in a region access to some institution they trust in the tool they're using to get a lot of their information. we've learned that it's important not just for the people in those countries but also for the ongoing relationship of trust, for us to be brokers of those services. we really need to do a good job of showing up during a public health emergency, during a natural disaster,
5:50 pm
during an election, during a war or a crisis. we put a lot of time and energy into how we're helping people get accurate information. that's something i know google takes quite seriously as well, and something i think we as american companies operating global services have tried to really improve upon every year. >> i have to say one more thing on this point, and i'm sure we've all had the experience: it's always exciting when you go to other countries, especially developing countries in the global south, when you go to latin america or africa or asia, where the conversations are somewhat different. often there's a lot of excitement about the possibility to transform, to leapfrog, to get access to the world's knowledge and information, but also some other challenges about how we make sure they are included and participating and have the tools and the capabilities to capitalize on these technologies and the
5:51 pm
possibilities for their own economies. for example, we're spending an enormous amount of time on things like these language moonshots: google translate works with roughly 133 languages, but we're trying to get to a thousand, because that opens up inclusivity in extraordinary ways to people speaking many, many different languages in latin america and africa and other places. so i think part of the exciting opportunity is how we enable economies and entrepreneurs everywhere, quite frankly, to take advantage of this and pursue their own ambitions and do amazing things for their countries and for their communities. that's pretty exciting. >> well, you've teed it up for the perfect last question, james. let's assume that the four of us are sitting on this stage here again in 2024, because the mayor did such a brilliant job at
5:52 pm
hosting everyone at apec, they're all coming back. so we'll come back and have this conversation, and i will ask each of you: what is the most remarkable surprise that happened in ai, in your field, in your company, in 2024? what is it that you'll be telling us about? sam, do you have a thought? >> the model capability will have taken such a leap forward that no one expected. >> wait, say it again. >> the model capability, what these systems can do, will have taken such a leap forward that no one expected that much progress. >> and why is that a remarkable thing? why is it brilliant? >> well, it's just different from expectation. i think people have in their mind how much better the models will be next year, and it'll be remarkable how different it is. >> okay, that's intriguing. >> plus one to that. but i'd also add: when is gemini going to ship? >> well,
5:53 pm
that's the thing, we're about to enter an exciting... >> we're all waiting to know. >> you would like to know? i'm very curious too. it's going to be so exciting. i think it'll be a story about gemini; we'll be telling you about gemini and the kinds of capabilities these systems are able to do. but more importantly, i think we'll be talking, i hope, laurene, about 3 or 4 big societal breakthroughs we will have achieved with this technology, ideally in science, ideally on something that really, really matters for society. i hope we can show you 3 or 4 such things. proteins and biology are one thing, but hopefully there'll be more in education, more in health care. i hope to come back with at least a few of those. >> i really agree with what james said. i think what we'll be talking about is something none of us anticipate that is positive, that comes from somebody who previously would not have been able to build something, or some institution that would not have been able to accomplish something, until one of these
5:54 pm
models and the associated tech that came with it. i think that's part of why we're all so excited about this. >> yes. i think in health care alone, and in disease research alone, there will be some amazing breakthroughs. >> amazing. >> could i add one more? i also think we'll have gotten to a place, laurene, to your earlier question, where we've figured out and struck the right balance between making sure we have mechanisms to limit and address the harms and things we're concerned about, while at the same time having some extraordinary pro-innovation approaches that enable the opportunities we all want. we somehow get stuck in this debate that it's one or the other; we have to do both. >> well, i'm just pinning my hopes on the three of you to get that one right. so thank you. thanks to the panel. >> thank you. >> thank you.
5:55 pm
>> hello. sfgovtv has served the city and county of san francisco for thirty years, so all san franciscans can watch their government in action. thank you, sfgovtv, for all you do. >> 90 charlie, go ahead. >> we moved to san francisco in 1982. we came from the philippines. i have three kids: nathan, jessica,
5:56 pm
and iva. i was really young when i had nate: i turned 19, and then two weeks later he was born. when he was five, i used to watch cops all the time, all the time, and so he would watch with me. he had his little handcuffs. at sheridan, because he and i attended the same elementary school, there was an officer bill. he would just be like, mom, officer bill was there. then one day he said, mom, i touched his gun, and he was just so happy about it. everything happened at five; i would say everything happened at 4 to 5 years old. it's like one of those goals where you just can't let go. in
5:57 pm
high school, i think, you know, everybody kind of strays. he was just riding the wave, and, i mean, he graduated, thank god. one day, i think he was about 20 or 21, he told me, he said, mom, i want to be a cop or a firefighter. i said, no, you're going to be a firefighter. but that's really not what he wanted to do. his words were, i want to make a difference, and that was a really proud moment for me when he said that. that played a role in his becoming a cop. my dad was really happy about it. my mom, she was kind of worried, but i just figured i can't stop him; he can make his own decisions. >> stu, i just want to say what's up. how you doing?
5:58 pm
good, good. i'm trying to look good for us. >> looking good. >> so when he was in the police academy, mind you, this kid was not a very studious kid, but i've never seen him want something so bad. when he was home, he'd be in his room studying the codes. he really fought for it. >> hi, what's your name? >> i'm nate. >> nate is great with kids, and he would give them hugs or give them stickers. i think that's a positive influence on the kids, and then the people around you see it. once he makes that connection with people and they trust him, that foundation, that respect... people look at you and see your actions more than your words, and so that, i think, will reach people more than anything you could say. >> see you later, brother. thank you. all right, see you.
5:59 pm
it's a really hard job. i know you see a lot of the negative. for me, i would not put myself through that if i didn't care. you know, you have to be the right kind of person; you have to have the right heart to want to do that. >> when people ask me, you know, what my son does, um, i just tell them he's a cop, and i just feel like i'm beaming with pride. i always told him when he was young that he would do something great, and so to see it... i'm having a moment. i'm very proud of him.
6:00 pm