
BBC News, June 3, 2023, 3:00am-3:30am BST

3:00 am
...written by the Centre for AI Safety. But despite the threat of an AI apocalypse, there have also been ethical concerns. Lawyers in cases before him are required to attest that they did not use AI to draft their filings without a human checking the accuracy. This comes after a New York lawyer made headlines for filing an error-ridden legal brief that he drafted with the help of ChatGPT.

But it is not all doom and gloom. Late last month, Canadian researchers discovered a promising treatment for an antibiotic-resistant superbug with the help of artificial intelligence. This new technology may help revolutionise our approach to medical research. Streaming platforms are also using AI technology to help boost consumer content. Spotify announced this week it is
3:01 am
releasing its new personal AI DJ for Spotify Premium members. The DJ will use AI to curate playlists and videos based on what the user likes and dislikes. So, there is certainly a lot to talk about with AI, and we have two guests on with me tonight to discuss it all. Andrew Rosso is a legal expert on AI and... Good to see you both. First of all, what do you make of these different stories we have seen over the past two weeks, talking about the benefits and the potential harms of AI?

Thank you so much for having me. It is a great question, because everything comes down to what is the framework that we want to see implemented. Where are the parameters? Where do we want to operate within? I think all of these good news cases, but also some of the more scary instances, beg the question of
3:02 am
where do we stand, what are we allowed to do?

Margaret, what is your take on this?

As someone who has worked on AI ethics for a long time, I'm actually excited that people care now; that is a big sea change. There are so many details that are getting lost in the weeds, so many powerful people putting forward statements without the necessary background to really ground what they're saying and the solutions, so I have this mix of excitement and then frustration with how all this is playing out.

Margaret, some well-known names in tech put out this letter calling for a pause in AI development, and you and your fellow AI ethicists criticised that letter for failing to point out some existing problems.
Both the letter that came out the other day and that letter came from groups of people who were really into effective altruism, which is a philosophical
3:03 am
approach to life in general, not just AI, and the way that things are prioritised there, in effective altruism, is fundamentally different to what I think a lot of AI ethicists have done, for example. We focus a lot on what is actually happening in the real world, what is the context of what is happening now, and addressing those so that people aren't harmed right now. That further informs how things can progress. By getting solutions to the table now, we can inform AI to move in a really nice path. These other approaches are sort of like: never mind what's happening now, let's envision some interesting scenario which, by the way, is aligned with effective altruism, and then let's just talk about it without really proposing solutions.
Andrew, what are we thinking about this? Are we focusing too much on the doom and gloom of future AI harms and not focusing on what is happening right now?

I think
3:04 am
that is what we have all been trained to believe through mainstream media and also Hollywood. We have seen for years these different depictions of how AI can be valuable and enrich our lives, but what happens when something goes wrong? When does technology turn against us, and why? I do agree, to an extent, that we are missing the mark in terms of the questions we're asking and where we're placing our effort in trying to answer those questions. At the same time, we're still at a starting point. We still need to figure out what we have before us, how we're going to apply it, what sector we want to apply it to, right? Healthcare? Privacy right now is such a big question, and privacy impacts us no matter what area of the workforce you're in, so I think these are very important questions that we still need to consider.

Margaret, I saw you nodding. Do you think privacy is something we should be addressing right away?

Yes, absolutely.
3:05 am
Going back to the discussion we've already had: there is so much at stake, so much already happening. And when will technology be turned against us? It has actually already been turned against some subpopulations by people, so we can think of how it is actually being turned against some people by other people, and address those sorts of issues right now. Privacy is a big one of them, and that goes into surveillance and the ways that surveillance is weaponised against people; it goes into the right to be forgotten and those sorts of issues. So there are tons to be done right now that I think a lot of this rhetoric is currently overlooking.

We're going to talk about the benefits and risks of AI in a moment, but first I want to look at another aspect.
In January, a Democrat congressman made news after he delivered a speech on the floor of the US House of Representatives that was generated by ChatGPT. I spoke with him about that and his concerns about AI development. First, here is a clip of the speech he delivered in Congress.
3:06 am
I stand here today because I'm planning to reintroduce the United States-Israel bipartisan bill that will cement a mutually beneficial partnership between the United States and Israel on artificial intelligence research. This is a critical step forward, where AI and its implications are taking centre stage in public discourse. We must collaborate with international partners like the Israeli government to ensure the United States maintains a leadership role in AI research and development, and responsibly explores the many possibilities AI technologies provide. The United States-Israel Artificial Intelligence Center Act will allow us to tap into the expertise of both countries and draw upon each other's resources to explore and develop AI advancements.

Congressman, it is great to have you back. We saw a clip of your speech in Congress. What point did you want to make?

First of all, thanks for having me back on. Since my time at MIT, and then through the years working in high-tech, I have been
3:07 am
following AI closely, and I could tell it was hitting an inflection point where it was going to be disruptive to wide swathes of our economy. I really wanted to draw attention to this issue, because AI cannot be like social media, where policymakers were five, ten, 15 years behind the curve. It got so big so fast that its negative effects on society and democracy were unrealised and unmitigated, and have now become very hard to repair. We've got to be ahead of the curve, not behind the curve.

It does seem to be reaching an inflection point, and this week we saw leaders in AI sign a statement warning of AI risks. What steps do you think Congress should take immediately?

First and foremost is deepfakes. I am working on legislation right now that would make providers and companies liable for defamation through AI-generated
3:08 am
deepfakes. I am deeply concerned that deepfakes generated for the next presidential election are going to make the Twitter bots from Russia look like child's play. Politicians seeming to say or do things they did not say or do, in ways that move voters and move elections, really undermines the bonds of trust and shared reality that any healthy civil society requires. We cannot allow social media platforms to hide behind Section 230 and liability shields and claim that deepfakes are fair game when they are defamatory and when they drive disinformation.

Congressman, you just mentioned social media platforms, and we have just seen Axios reporting that YouTube is reversing its election denialism policy and will leave up content claiming there was fraud in the 2020 election, because they say leaving the policy in place might have the effect of "curtailing political speech without
3:09 am
meaningfully reducing the risk of violence". What do you think about that?

Those are two different issues. One is the ability of a platform to censor speech, and while the First Amendment does not prohibit private companies from censoring speech, there is also nothing that requires them to do that. So a robust give-and-take of ideas, even ideas we find despicable or downright false, is part of democratic debate. What I am discussing right now is defamatory deepfakes. What I'm talking about is generated videos and images showing people doing or saying things they never said or did, in order to drive disinformation and to create confusion. For example, you may have seen, a couple of weeks ago, doctored images of the Pentagon under attack, which briefly caused the stock market to flutter. That wasn't a
3:10 am
particularly sophisticated deepfake. What we're going to see, I'm worried, in the next couple of years are deepfakes at ten times that level of sophistication. We are already seeing the Chinese Communist Party do this in Burkina Faso and in other areas, and I worry that the Russians, and even individuals, can start deploying them to undermine our democracy.

What do you want your fellow colleagues in Congress to do?

I want internet service providers and social media companies to be liable for knowingly hosting deepfakes that are defamatory, or for not taking reasonable care and being negligent in allowing them to be disseminated. Right now, under Section 230, they can plausibly assert that it is not their problem. These are trillion-dollar companies; it is their problem.

You mentioned Russia and China. I want to ask you about something we heard from one of your colleagues in Congress last Friday. I spoke to Seth Moulton. Here is what he said.
we can't expect our adverse areas _ he said. we can't expect our adverse areas like _ he said. we can't expect our adverse areas like russia i he said. we can't expect our| adverse areas like russia and china to follow the same rules
3:11 am
of the road when it comes not only to the truth but to their autonomous weapons. That is why we need to get ahead of this problem and set some international norms, the same way we have come to agreement on things like chemical weapons.

Do you think that China and Russia are willing to work with the US and its allies on setting these types of norms on how AI is used?

More important are guardrails with the Chinese Communist Party. Right now we are seeing Putin sending weapons into maternity wards; my faith in him abiding by rules is severely limited. With the CCP, we know they are trying to deploy AI through both the People's Liberation Army as well as through the navy, and for starters, our
3:12 am
response needs to be stronger and better than theirs. Then we can negotiate from a position of strength. We do not want to negotiate from a position of weakness, or from being behind.

Congressman, thank you again for joining us on the BBC.

That was a conversation we had with the congressman a little bit earlier. Bringing back my panel now. Margaret, I want to stay with you, because I saw you nodding vigorously through certain parts of that interview. Let's start with the congressman's concern about deepfakes. Do you share that concern, especially looking to the election year next year?

Yes, absolutely. This is something that has been concerning AI ethics-oriented people for a long while, if you work through the most likely harms, deepfakes being one of them. And revenge porn is also in the deepfake realm; it disproportionately harms women and is also happening now. The thing about deepfakes is that it is so possible to do now, so easily, along with fake
3:13 am
language, so that you can make it seem like news events have happened that haven't, and have lots of people think that they are surrounded by like-minded people when it is only bots generating fake content; so it can give rise to extremism.

Andrew, I'd like to get your take on that as well.

You know, there is another trend happening with the use of AI-generated deepfakes: now you have murder victims, videos of murder victims being recreated, bringing back a deceased victim, such as a young child, recounting the details behind their own murder, and the ethics and questions of: is that OK, and how do we minimise the risks that can come from that? And I think, from the standpoint of revenge porn, absolutely, it is a big problem, and the problems that can come from the illegitimate use of this technology, if not guarded against, can be exemplary.
3:14 am
When I asked the congressman what should be done specifically in Congress, he said Congress should support legislation that would make internet providers liable for knowingly hosting deepfakes, or for being negligent in allowing them to be disseminated. Do you think that is the right approach?

So, what is meant by "deepfake" I think is really critical here, because a deepfake is a fake image generated by a deep learning system, so a lot of the images being shown in this generative AI technology could be called deepfakes. So I think there are really a lot of details that need to be worked out about what counts as an inappropriate deepfake; but minimally, disclosing that something is an image generated by an AI system should be required, and it is not, in the US.

How difficult would that be to do?

I think it is one that will take time, but that is necessary, to reasonably inform the public,
3:15 am
those coming onto a platform, to understand what it is that they are looking at, where it is coming from, how it is generated, and how the data is used. Is it an overnight process? Absolutely not, but it will take time, and starting somewhere is the first step in the right direction, especially if we are looking to see how we can adapt Section 230 to a digital age.

That takes us right to the next point I want to discuss with you, which is regulation, so I'll ask you both to stay there again while we listen to a conversation we had a little bit earlier. We had an opportunity to speak to the co-founder of the Centre for Humane Technology. He previously worked as a design ethicist at Google. Let's watch this and talk about it with our panel afterwards.

Thank you so much for joining us here on BBC News. Max Tegmark is one of the leading AI experts, from MIT, and earlier this week he signed a statement along with other AI leaders
3:16 am
about the risks of artificial intelligence, and he spoke to the BBC. Here is what he said.

We are not saying that doom is guaranteed; we are just saying that this is really a serious possibility. I think, in fact, that you and I are most likely to die from some AI catastrophe rather than any other cause of death, so we should take it seriously.

Do you agree with that?

I think the important thing about the statement that Max is referencing is that it is the first time. With the pause letter that happened a few months ago, the one that Elon Musk signed, only people in the AI and safety community around the world, but not the CEOs of the AI companies themselves, signed that letter saying we need to pause. This time, the letter said we need to acknowledge that this is sincerely an existential risk to humanity, on the scale of climate change and the possibility of nuclear war, and having the CEOs of the companies
3:17 am
saying that this is how the set of risks should be treated is, I think, a very important signal. If the engineers of Boeing said that there is an existential risk with this aircraft being deployed too quickly, we should listen to Boeing when they say this is happening. One of the things that people find to be a contradiction is: why are they building it if they say it is an existential risk? Some people believe they are hyping the technology, that somehow they don't believe it is an existential risk, because if they did, they wouldn't be building it. What the public needs to understand is that the CEOs of these companies are trapped by a race dynamic: if I don't release it, I will just lose to the guy who does release it, and I think I am a good person, so if I release it, I think I will have a better chance of doing it safely. But we are progressively racing into what he called a suicide race. And somehow the weird belief is that we can go as fast as possible to beat that risk and hit the brakes before going over.
3:18 am
If I could jump in, who is responsible for hitting the brakes?

This is why the conversations about regulation and multilateral co-ordination are so important. Because if I am just Sam Altman, who runs OpenAI, can I hit the brakes and cause the whole space to hit the brakes? I need everybody to hit the brakes at the same time, which means we need a view and a trustworthy agreement that we will all hit the brakes, because we all recognise where the risk is and where the lines we are crossing are that amount to that risk. Some people believe we have already crossed some of those lines with the release of these open-source models; we can go into that later, but we need co-ordination. I was just on a podcast, and she asked me, what would you have Sam Altman do, and my answer was: put $1 billion into global co-ordination on decelerating the race and creating a managed safety outcome. It is not up to one actor; it is about all of
3:19 am
the actors agreeing together, and that is why what the EU and the US are doing in terms of working in co-operation is good news.

One of the arguments we hear counter to that is that if the US doesn't develop these capabilities, countries that are adversaries will. What is your response to that?

It's true that if China viewed this as risk-free, and it isn't risk-free, but if they viewed it as risk-free and they just hit the gas pedal to get to this outcome while the other actors did not, that would still result in a catastrophe. And this is why, in our work, we often cite the effect of the film The Day After, about nuclear war, which aired in 1983, because what it did was create a shared, believable fate: this is the aftermath of what would happen in the event of nuclear war. Right now we don't have a shared fate about where AI goes if everybody races to get there as fast as possible, when no-one
3:20 am
knows how to control it or even stop the malicious actors from doing dangerous things, whether it is creating lethal bioweapons or accelerating cyber attacks or doing a lot of other dangerous things. So everyone is racing, and we do need a shared view among everybody. And the reason that letter chose a six-month pause is the view that China is not right on our heels, and the belief that this is intrinsically a more dangerous technology than people in the West might think.

Let's talk about regulation. You mentioned that the EU and US are working together on what might be a voluntary code of conduct. Do you think that is going to be enforceable? Is that the right first step?

It is a good first step, in that we need alignment and agreement on the need to create global enforcement mechanisms. One of the challenges is enforcement: how do you prevent people from shipping code that, once it is out there, is released on the open internet,
3:21 am
and there have already been models like that. Facebook, actually, Meta, the company that owns Facebook, released the open weights for their model, and that is very dangerous. We need global co-ordination on enforcing and preventing these open-weight models from getting released. Because think about it like this: I have just released a genie that can answer questions about anything, and if it had the ability to answer questions about how to build nuclear weapons or bioweapons, you would not want that genie, with the ability to answer those questions, to be out there on the internet. The EU AI Act has recently been updated to try to prevent this form of open-sourcing of models, so the EU is ahead on that front. What we need is for the EU and the US to collaborate and agree and further co-operate on putting up those guardrails as quickly as possible. This is a big, long road to travel, and we just need those first steps of agreement to happen, and happen quickly, because what people need to get is that AI moves on a double exponential curve. People are feeling like there is this time warp; it is like human
3:22 am
years in dog years, ais is moving so much faster. final question. — moving so much faster. final question, what _ moving so much faster. final question, what would - moving so much faster. final question, what would be - moving so much faster. final question, what would be the| question, what would be the very first guardrail, if you were to go to congress right now and said this is something you need to put in place right now, what would that be? the simple one — now, what would that be? the simple one is _ now, what would that be? tue: simple one is preventing now, what would that be? tte: simple one is preventing the further release of these open weight models, you cannot put them back in the bottle and people can tune in to do more and more dangerous things. the eu ai acted update to this provisionjust a few eu ai acted update to this provision just a few weeks ago and the us should follow suit. there are those who argue that we should prevent the training of gpt five and beyond level systems, those who argue we should not have more than gpt three model systems. think of this like enriching uranium, thatis this like enriching uranium, that is considered a national security threat. we just can't agree on how far we should be going enriching the ai capabilities because people have different views and it offers us so many benefits in the meantime so we have to get a shared view of the risks and
what do we not want to have out of the bag, what do we want publicly available, and how do we lock down the more dangerous capabilities?

Really interesting conversation, thank you so much for joining us.

Of course, great to be with you.

All right, back to our panel now. I'm not sure if you agreed with what he was saying there. What would your first guardrail be?

Data transparency. In order to predict anything the system does, if we knew about the data, we would be really well informed, and that is super easy: be transparent about the data.

What needs to be done to ensure data transparency?

There are a lot of documentation proposals out there that are really useful and helpful. For example: where the data is sourced from, what different kinds of topics are covered and how adjustable they are, things like the kinds of toxicity captured in it, the kinds of scientific facts versus fiction. All of these things can be used to give you, think of it as a nutrition label, the content of your dataset. That is really a way to think about it. Datasets shouldn't be released without the equivalent of nutrition labels for data.

That is a really interesting point. And Andrew, just quickly, your take on this as well?

Absolutely. Identifying what data is out there, how it is sampled, how it is collected, how it is dispersed, and then from there, putting together a group of experts, a task force, the attorneys, the ethicists, the researchers, so that you can have this collective conversation to be able to put some of these changes through.

It has been a terrific conversation with both of you, certainly getting your perspectives on this, Margaret and Andrew, and I think there is certainly a lot to discuss here. We would love to have you both back on the programme soon as we see these developments in AI moving forward, but thank you very much for joining us tonight on BBC News.

Thanks. Thank you for having us.
And that is our show at this hour. We will be back at the top of the next hour with more news. Don't forget, you can download
our app for all the latest news and headlines, and also follow us on Twitter for the latest breaking news. I'm Sumi Somaskanda in Washington, thank you so much for watching.

Hello there. No shortage of sunshine in the weekend forecast for most. One thing we will continue to see a shortage of is rain. There are many places, particularly in the south and west of the UK, that have not seen any measurable rainfall for more than three weeks. That's not going to change very much over the next few days. In fact, over the next five days, while southern parts of Europe will continue to be very wet indeed, across our shores there is very little, if any, rain in the forecast. That is because high pressure continues to hold firm to the north-west of Europe, allowing these thunderstorms to pop up down towards the south, but keeping us largely dry, fine and settled, with some spells of sunshine. That sun is strong at this time of year, with high UV levels. Most will be starting on a sunny note on Saturday, with a little bit of patchy cloud here and there in parts of England and Wales,
tending to retreat back to the east coast. A bit more cloud around northern Scotland, particularly for Orkney and Shetland. There's a small chance of a shower over high ground in Scotland and Northern Ireland, but really only a small chance. A slightly warmer day: 16 or 17 on the east coast, and 23 further west. For the FA Cup final at Wembley, the weather is set fair, with quite a lot of sunshine in the afternoon. At Hampden Park for the Scottish FA Cup final, a very similar forecast: sunshine and 22 or 23. As we head through Saturday evening, fine, with clear skies overnight. That will mean a rather chilly night, but with low cloud in the north-east of Scotland and maybe some patches of low cloud across eastern parts of England as well. Temperatures generally seven to ten degrees, but it may be a little chillier in some spots in the countryside. On Sunday, a bit more low cloud in northern and eastern Scotland, threatening to roll onto the east of England. Further west, spells of sunshine and temperatures up to 22 or 23 degrees. As we look ahead to the coming week, little, if any, rain in the forecast. It's going to stay dry and, if anything, it's set to turn warmer later in the week.
This is BBC News. We'll have the headlines and all the main news stories at the top of the hour, straight after this programme. Vladimir Putin is now a wanted man. Russia's president has been indicted as a suspected war criminal...

[woman chatters happily]

...for the forced removal of children from Ukrainian territory.

[air raid siren wails]
