
tv   Nobel Minds  BBC News  December 30, 2024 1:30pm-2:01pm GMT

1:30 pm
nobel minds: physics and chemistry in this programme, this year's nobel laureates in physics and chemistry tell us about the benefits and pitfalls of artificial intelligence. this is nobel minds 2024. nobel laureates, this is the first time that some of you have been brought together in discussion on television, and we're also joined by some of your family and friends, as well as students from here in stockholm. before we start, let's just give them a really big round of applause. renewed congratulations to all of you.
1:31 pm
i guess you're all getting very used to the sound of applause now, aren't you? so tell me, how has winning the nobel prize changed your life? er..who shall i start with? gary. well, the level of attention is something that's a thousand x what it ever was for other awards. you know, the nobel is a brand and it's 120—something years of history. yeah. completely, um.. mesmerising. daron, one of the economists, what about you? how has it changed your life? i mean, i'm here. being in stockholm for one week in december, that's a life—changing event. but i'm amazingly grateful and happy, honoured, and i'll take it as it comes. your diary is going to be super full from now on.
1:32 pm
you're going to be running around from lecture to lecture and guest appearances, so.. professor geoffrey hinton, what about you? um.. yeah, it makes an amazing change. i get huge amounts of email asking me to do things. i luckily have an assistant who deals with most of it. i get stopped for selfies in the street, which is, um.. it's very annoying. but if it went away, i'd be disappointed. they laugh. but also, you've been teaching for many years at the university of toronto, and you said after you won the nobel prize, they at last gave you an office? yes. they didn't think i was worth an office before that. james? i've noticed that people take what i say much more seriously. i've always proceeded on the assumption that no—one was ever actually listening to anything i said. so now i have to really choose my words carefully. yeah. and does that extend to your family members as well? do they listen to what you say now? um.. i'd have to think about that one! david baker? well, actually, a highlight has really been this week and having
1:33 pm
all my family and colleagues here, it's been a great celebration. and, um, yeah, i've had to give up email, which has been a positive. and i've learned to completely avoid selfies, so.. but on the whole, it's been very exciting. and you don't travel light, do you, if i can put it that way? just remind me, how many people have you come with to stockholm? 185. i think that must be a record. i'm going to have to check, but i'm pretty sure that must be a record. well, it's quite a party you're going to have. and sir demis hassabis. well, of course, it's been an honour of a lifetime. and to tell you the truth, it hasn't really sunk in yet. so maybe i'll do that over the christmas holidays. but it's also, you know, an amazing platform to talk about your subject more widely and i have to think about that responsibility in the coming years. mm—hm. ok. let's turn now to the awards that were made this year, and let's start with the physics prize. and here's a brief summary of the research behind that prize.
1:34 pm
this year's physics prize rewards research that laid the foundations for the development of ai, enabling machine learning with artificial neural networks. john hopfield created a structure that can store and reconstruct information. geoffrey hinton built on his ideas and made it possible to create completely new content with the help of ai, so—called generative ai. this opens up numerous potential areas of use, for instance by providing techniques for calculating and predicting the properties of molecules and materials. their research has also prompted extensive discussion of the ethics around how the technology is developed and used. so, geoffrey hinton, you actually wanted to find out how the human brain works.
1:35 pm
so how does it work? we still don't know. we've made lots of efforts to figure out how the brain figures out how to change the strength of connections between two neurons. we've learned a lot from these big systems that we've built, which is if you could find any way to know whether you should increase or decrease the strength, and then you just did that for all of the connections, all 100 trillion connections, and you just kept doing that with lots of examples, slightly increasing or decreasing the strength, then you would get fantastic systems like gpt—4. these big chatbots learn thousands of times more than any one person, so they can compress all of human knowledge into only a trillion connections. and we have 100 trillion connections, and none of us know much. they laugh. but that's.. speak for yourself! but that's interesting.. he says, "speak for yourself!" but anyway.. he does know a lot actually! but not compared with gpt—4! so you make it sound, though, as if this is the best computer and it's never been bettered,
1:36 pm
we don't quite know how it works, and yet you also say that artificial intelligence, artificial neural networks could outsmart humans. oh, i think we've been bettered already. if you look at gpt—4, it knows much more than any one person. it's like a not—very—good expert at everything. so it's got much more knowledge in far fewer connections, and we've been bettered in that sense. do you agree with that, demis hassabis? well, look, i think so. i mean, just going back to your initial question. originally with the field of ai, there was a lot of inspiration taken from architectures of the brain, including neural networks and an algorithm called reinforcement learning. then we've gone into a kind of engineering phase now where we're scaling these systems up to massive size, all of these large foundation models or language models. and there's many leading models now, and i think we'll end up in the next phase where we'll start using these ai models to analyse our own brains and to help with neuroscience as one of the sciences that ai helps with.
1:37 pm
so, actually, i think it's kind of come sort of full circle. neuroscience has sort of inspired modern ai, and then ai will come back and help us, i think, understand what's special about the brain. will machine intelligence outsmart humans? i mean, what kind of timeframe are you talking about? are you saying it's already happened? so in terms of the amount of knowledge you can..
1:38 pm
do you think that machine intelligence could outsmart, outwit us to the extent that, actually, they start ruling the roost? no. well, look, i think for now.. so i disagree with geoff on the fact that today's systems are still not that good. they're impressive. they can talk to us and other things. they have quite a lot of knowledge, but they're pretty weak at a lot of things. they're not very good at planning yet or reasoning or imagining and creativity, those kinds of things. but they are going to get better rapidly. so it depends now on how we design those systems and how we decide to sort of, as a society, deploy those systems and build those systems. all right. so we'll look at what we do about it. but, gentlemen, this is a very big fundamental question. gary, and then you, david. i think you're overrating humans in this. so we make up a lot of untruths as well. and there's so many examples of false ideas that get propagated, and it's getting worse, of course, with social networks.
1:39 pm
so the standard for ai to.. it's pretty low. you know, humanity is way overrated. right. ok. david? i'll take a contrarian view here. you know, humans, since really the beginning of civilisation, have created things that are better than them in almost every domain. you know, cars can go infinitely faster than humans. planes can fly. humans can't. you know, for a long time, we've had computers that can do calculations that humans can't do. demis has developed programs that solve go and chess, so we're very comfortable, i think, with machines being able to do things that we can't do. chatgpt, you know, gpt—4, has much more knowledge than a human being. i think we just take this kind of thing in stride. i don't think we worry about losing control. so i guess.. but that's the key issue. we know that computers can do a lot that we can't, but it's this question of control. i think we'll remain in control.
1:40 pm
because, i mean, planes fly, but it's the human pilot who's in the cockpit.. that's.. assisted by technology, obviously. and we still drive cars. yeah. what about you two, the economists? where do you stand on this question? i'll take the opposite position to gary. i think humans are incredibly underrated right now. they chuckle. human adaptability, fluidity, creativity, but also community. i think humans are just amazing social animals. we learn as collectives, and as collectives we are able to do a huge number of things in very quick succession. so i would worry about those people controlling ai before ai itself turning on us. humankind's greatest enemy is humankind. the sort of doctor evils that we see in popular science fiction. or doctor do—gooders who think they are doing good. i wouldn't put those past doing huge damage. yeah. yeah, i would agree. i mean, as the tools get more powerful, i think the worry is not
1:41 pm
the machines themselves, but people using the tools, misinformation, autonomous military weapons, all kinds of things. humans have a great track record of inventing things that jeopardise the human race, such as nuclear weapons. i mean, just think about how close we've been to obliterating the planet with the cuban missile crisis and..you know. so we've done this already. we can do it again in a different form or, you know, with a different.. so i guess i would like to ask demis, you know, i take the point of view, everyone's saying, "yes, we need to regulate, we need to.." but who has the incentive to do that? i don't.. you know, like, it's one thing to say that, but i suspect the politicians and the governments, they're just playing catch—up. you know, the thing is moving faster than they can get their hands on. and who in the private sector..? they just want to make money and get this stuff out there. and so where are the incentives to actually do something about that? yeah. well, look, i mean, there is.. obviously the reason that many of us are working on ai is because we want to bring to bear
1:42 pm
all of the incredible benefits that can happen with ai, in medicine, but also productivity and so on. but i agree with you. there is going to be a kind of coordination problem where i think there has to be some form of international cooperation on these issues. i think we've got a few years to get our act together on that. and i think leading researchers and leading labs in industry and academia need to come together to kind of demand that sort of cooperation as we get closer to artificial general intelligence and have more information about what that might look like. but i'm a big believer in human ingenuity. and as david says, you know, we're unbelievably adaptive as a species. and, you know, look at our modern technology we already use today that we sort of seamlessly.. the younger generation just seamlessly adapts to and takes as a given. and i think that's also happened with these chatbots, which, you know, 25 years ago, we would have been amazed, those of us in the era of ai, if you were to transport
1:43 pm
the technologies that we have today back then. and yet we've all.. society seems to have sort of seamlessly adapted to that as well. geoffrey hinton, do you see that happening? you've raised the alarm bells about humans becoming subservient, in a way, to machines. um, do you think that there's enough of a debate at an international level? do we need more ethics in science to debate these kind of issues? do you see that happening? so i want to distinguish two kinds of risks from ai. one is relatively short term, and that's to do with bad actors. and that's much more urgent. um, that's going to be obvious with lethal autonomous weapons, which all the big defence departments are developing, and they have no intention of not doing it. the european regulations on ai say none of these regulations apply to military uses of ai. so they clearly intend to go ahead with all that. and there's many other short—term risks like cybercrime and generating bad pathogens, fake videos, surveillance, all of those short—term risks are very serious, and we need to take them seriously. and it's going to be very hard to get collaboration on those.
1:44 pm
then the long—term risk, that these things will get more intelligent than us, and they'll be agents, they'll act in the world and they'll decide that they can achieve their goals better, which we gave them the goals, and they can achieve them better if they just brush us aside and get on with it. um, that particular risk, the existential threat, is the place where people will cooperate, and that's because we're all in the same boat. nobody wants these ais to take over from people. and so the chinese communist party doesn't want ais to be in control. it wants the chinese communist party to be in control. you know, for somebody who's described as the godfather of ai, you sound quite a bit down on it in so many ways. well, it's potentially very dangerous. it's potentially very good and potentially very dangerous. and i.. you know, i think we should be making a huge effort now into making sure we can get the good aspects of it without the bad possibilities. and it's not going to happen automatically, like he says. well, we've got some students in the audience here, and i know that some of them
1:45 pm
want to pose a question to you laureates. prashant yadava from the kth ai society, your question, please. i'd like to know in what ways ai can be put to use in bringing truly democratic values and bringing economic equalities to the world. so in what way can ai promote democracy and equality in the world? who's going to answer that one? demis, go on, have a stab. i can start off. i mean, i think, um, as we've discussed actually for most of the conversation, i think powerful technologies in and of themselves are, um, kind of like neutral. they could go good or bad, depending on what we as a society decide to do with them. and i think ai is just the latest example of that, in that case. maybe it's going to be the most powerful thing and most important that we get right. but it's also, on the optimistic end, i think it's one of the challenges, it's the only challenge i can think of that could be useful to address the other
1:46 pm
challenges if we get it right. so, um, so that's the key. um, i don't know, democracy and other things, it's a bit out of scope. maybe it's for the economists to talk about. well, i'll just say, i think ai is an informational tool and it will be most useful and most, er.. enriching for us in every respect if it's useful, reliable and enabling information for everybody, not just for somebody sitting at the top to manipulate others, but enabling for citizens, for example, enabling for workers of different skills to do their tasks. all of those are aspects of democratisation, but we still have a long way to go for that sort of tool to be available in a widespread way and not be manipulable also. ok, so let's turn now to some of the work that has contributed to the award for the chemistry prize this year for demis hassabis, david baker, along with john jumper. and let's just get a brief idea of the research that led to the chemistry nobel prize award. the ability to figure out
1:47 pm
quickly what proteins look like and to create proteins of your own has fundamentally changed the development of chemistry, biology and medical science. by creating the ai program alphafold2, this year's chemistry laureates, demis hassabis and john jumper, have made it possible to calculate the shape of proteins and thereby understand how the building blocks of life work. the second half of this year's award goes to david baker, for what's been described as the almost impossible feat of building entirely new kinds of proteins, useful not least for producing what could block the sars—cov—2 virus. making new proteins can simply open up whole new worlds. so let's start with you, david baker. you've been applauded for creating these new proteins. and actually, you didn't even want to become a scientist
1:48 pm
in the first place, so it's quite amazing that you've now got this nobel prize. but just tell us, what kind of applications, implications do you think your work has led to or could lead to? yeah, i think following up on our previous discussion, i think i can really talk about the real power of ai to do good. so some of the.. proteins in nature solve all the problems that came up during evolution. and we face all kinds of new problems in the world today. you know, we live longer, so neurodegenerative diseases are important. we're heating up and polluting the planet. these are really existential problems. and now, you know, maybe with evolution, another 100 million years, proteins would evolve that would help address these. but with protein design, we can now design proteins to try and deal with these today. and so we're designing proteins, completely new proteins to do things ranging from breaking down plastic that's been released into the environment to, um, combating neurodegenerative disease and cancer.
1:49 pm
mm. and demis hassabis, of course, you're well known for being a co—founder of deepmind, the machine learning company. and, i mean, you were a chess champion. you were a child prodigy, really. you know, making video games when you were only in your teens. so here you are, you've got a nobel prize under your belt as well, but you've already actually started using the research for which you were awarded the prize, along with john jumper. that's right. so we are, with our own collaborations, we've been working with institutes like the drugs for neglected diseases, part of the who. and indeed, because if you reduce the cost of understanding what these proteins do, you can go straight to drug design, that can help with a lot of the diseases that affect the poorer countries of the world, where big pharma won't go because there isn't a return to be made.
1:50 pm
but, in fact, it affects, you know, a larger part of the world's population. so i think these technologies, actually, going back to our earlier conversation, will help a lot of the poorer parts of the world by making the cost of discovery so much lower, you know, that it's within the scope then of ngos and non—profits. anybody else want to chip in on this? i mean, obviously i think this is just an amazing opportunity for science. anything we can use to improve the scientific process can have, not necessarily will have, great benefits. but that doesn't change some of the tenor of the earlier conversation. great tools also still create great risks. fritz haber, you know, a nobel prize winner for work on which we depend every day with synthetic fertilisers, you know, also made chemical weapons for the german army in world war i, directly causing the deaths of hundreds of thousands of people. so the responsibility of scientists with powerful tools is no less.
1:51 pm
mm. we're seeing scepticism in all sorts of positions of power now, aren't we, all over the world. is that something that worries you, that policymakers don't perhaps understand the full complexity of science, be it climate science or, you know, other difficult issues? well, i would say it's also part of our responsibility that we have to work harder in getting people to trust science. i think there is much greater scepticism about science. and i don't know.. i don't think anybody knows exactly why, but it is part of the general polarisation. but it's also probably the way that we are not properly communicating the uncertainties in science, the disagreements in science, what we are sure and what we are not sure. so i think we do have a lot more responsibilities in building the public's trust in the knowledge that's usable in order for that knowledge to be seamlessly
1:52 pm
applicable to good things. demis, and then maybe gary. yeah. i think i agree with that. and i think in, just in the realm of ai, i feel like one of the benefits of the sort of chatbot era is.. ai is much more than just chatbots, it's scientific tools and other things, but it has brought it to the public's consciousness and also made governments more aware of it and sort of brought it out of the realm of science fiction. and i think that's good, because i think in the last couple of years, i've seen a lot more convening of governments, civil society, academic institutes to discuss the broader issues beyond the technology, which i totally agree with, by the way, including things like, what new institutes do we need? how do we distribute the benefits of this widely? um, that's a societal problem. it's not a technological problem. and we need to have a broad debate about that. and we've started seeing that. we've had a couple of global
1:53 pm
safety summits about ai, one in the uk, one in south korea, and the next one's in france. and i think we need actually a higher intensity and more rapid discussion around those issues. gary, do you want to come in here? yeah, i.. the engine of western economies, in terms of the revolution in the last 50 years, has been technology and science and silicon valley and that sort of thing in terms of.. and if you wanted to.. if you're an enemy of the west, you want to destabilise that. and so i think this whole social network, "i don't trust technology, "i don't trust any of the enterprises," i don't think that's evolved naturally. i think that's been manipulated by bad agents. and we have to be aware of that. which bad agents? i think it's russia and iran. i don't think it's stupid to say that. and.. geopolitics. yeah. they're not looking in our best interests. i think there's other
1:54 pm
bad agents too. sure. probably the energy industry would like you not to believe in climate change, just like the tobacco industry knew very well that cigarettes caused cancer, but they hid that fact for a long time. you know, if we cannot trust the energy companies, we cannot trust pharmaceutical companies, tobacco companies, can we trust the tech companies, which are extremely concentrated? and if ai is so important, what about the power of tech companies? i don't know why you're asking me. i don't work for a tech company. laughter. so you have an objective opinion! no, no. but that's one aspect of the risks of ai that we didn't talk about. ok. well.. just to take a more positive point of view again, i mean, despite the scepticism about science, and certainly you don't have to look far in the us, it should be pointed out that the response to covid with the mrna vaccines was truly miraculous. yes. it was a technology that really had not been proven at all and in very little time.. because it was this thing about having a common
1:55 pm
enemy and a threat. um, you know, we were able to mobilise very quickly, try something really completely new and bring it to the point where it did a huge amount of good. so, um..so there are reasons to be optimistic that, were other threats to appear, a lot of the silliness would sort of filter out and the correct actions would be taken. and the sceptics died. ok. well, on that positive note, i think we can say that's all from the first of two programmes with this year's nobel laureates. next time, we'll be discussing the questions prompted by the prizes in economics and medicine. till then, from me, zeinab badawi, and the rest of the team, from the royal palace in stockholm, goodbye.
1:56 pm
hello there. we're saying goodbye to christmas grey sky and dense fog, and hello to wet and windy weather to see in the new year. a conveyor belt of weather fronts sitting out in the atlantic, waiting to dominate the story over the next few days. first signs of rain pushing into northern ireland and southern scotland. very windy across the tops of the trans—pennine routes. to the south of that we've got some brightness and temperatures, generally at around 9 to 11 degrees, somewhat colder conditions in the far north east of scotland. but it's overnight tonight and into the first half of new year's eve that we see this relentless heavy rain continue to push into scotland. and that means that rainfall totals are going to start to add up. so the met office has issued an amber weather warning stretching from inverness down to fort william. here we could see the possibility
1:57 pm
of some flooding and some travel disruption on a very, very busy travel day. and that amber weather warning will remain in force throughout new year's eve as that heavy rain continues to push its way steadily south into the north of england, but still feeding in plenty of wet weather across scotland to the south of that. a slightly quieter story. still pretty windy with it, but largely fine and dry. so if you are heading out towards midnight, it looks likely that the heaviest of the rain is going to be across northern england and parts of north wales. it will be relatively mild with it, windy, but the wind direction coming from the southwest. so as we move towards new year's day, we'll be able to split the country into two. a band of heavy rain and strong gale—force gusts of wind moving their way steadily south across england and wales. behind it, the wind direction changing to a northerly. some of those showers will turn wintry in nature and it's going to be noticeably colder, but there will be some sunshine, windy day for all the strongest of the winds with that heavy rain as it moves through east anglia
1:58 pm
and down through the kent coast. here we might just see double figures, but noticeably colder across the country. and that colder theme is going to stay with us as we head through thursday and into friday. at least we've got some sunshine, but it will feel bitterly cold, particularly when you factor in the direction of the wind. so our week ahead, heavy rain and snow accompanied by some strong winds to begin with. as that moves through from new year's day onwards, it turns colder but crisper.
1:59 pm
live from london. this is bbc news. south korea's acting president orders a review of the country's aviation safety procedures — as grieving families call for more support after sunday's deadly plane crash. tributes are being paid to jimmy carter — the 39th president of the united states, and winner of the
2:00 pm
nobel peace prize — after his death aged 100. joe biden has announced a national day of mourning will be held on january ninth. this is the scene in washington dc — where flags are flying at half mast. reports from argentina say five people have been charged in connection with the death of one direction star, liam payne. and — we'll tell you how you might be able to glimpse a very rare star — that hasn't been visible to the naked eye — for 8 decades. hello. i'm annita mcveigh. in south korea, families of the victims of the nation's worst aviation disaster in decades are criticising the lack of updates from officials. the acting president has visited the crash site — and ordered an emergency safety
2:01 pm
inspection of the country's entire airline operation system, a day after the plane
