BBC News, June 3, 2023, 1:00am-1:30am BST
1:00 am
one-sentence statement to lawmakers: mitigating the risk of extinction from AI should be a global priority. There have also been legal and ethical concerns. A US federal judge in Texas announced that he is now requiring lawyers in cases before him to certify that they did not use AI to draft their filings without a human checking the accuracy. This comes after a New York lawyer made headlines for filing an error-ridden legal brief that he drafted with the help of ChatGPT. Canadian researchers discovered a treatment for an antibiotic-resistant superbug with the help of AI. This new technology may revolutionise our approach to medical research. Streaming platforms are also using AI technology to help boost consumer content.
1:01 am
Spotify announced this week that it is releasing its new DJ for Spotify Premium members, and it will use AI to curate playlists based on what the user likes and dislikes. Today we have experts focusing on preventing and exposing AI harm. So, first of all, what do you make of these different stories we have seen over the past few weeks about the benefits and potential harms of AI? Thank you so much for having me. It's a great question, because everything comes down to what framework we want to see implemented. Where are the parameters? Where do we want to operate? And some of the scarier
1:02 am
instances beg the question of where we stand, what we are allowed to do. What is your take on this, as someone who has worked on AI ethics for a long time? I am excited that people are paying attention now. That's a big sea change. There are so many details getting lost in that, though, and we have this mix of excitement and frustration with how this is actually playing out. In March, some well-known names called for a pause in AI development, and you and some of your fellow AI researchers are trying to address some of the existing problems. Tell us more about that. The letter that came out the other day came from people who are really into effective
1:03 am
altruism, a philosophical approach to life in general, not just AI, and the way things are prioritised there. This effective altruism mindset is fundamentally different from what a lot of AI ethics work has done. For example, we focus a lot on what is actually happening in the real world, what the context is now, and addressing how people are being harmed right now, and that further informs how things can progress. So by getting solutions to the table now, we can inform AI to move along a really nice path. These other approaches are like: never mind what's happening now, let's envision a futuristic scenario which is aligned with effective altruism, and let's just talk about it without really proposing solutions. What do you think? Are we focusing too much on the doom and gloom of future AI harms and not looking at what is happening
1:04 am
right now? I think that is what we have all been trained to believe by mainstream media, but also Hollywood. We have seen for years these different depictions of how AI can be valuable and enrich our lives, but what happens when something goes wrong? When does technology turn against us? I do agree to an extent that we are missing the mark in terms of the questions that we are asking, and where we are placing our efforts in trying to answer those questions. At the same time, we are still at the starting point. We need to figure out what we have before us, how we're going to apply it, what sectors we want to apply it to: healthcare, Medicare. Privacy right now is such a big question, and privacy impacts us no matter what area of the workforce you are in, so it is a very important question we still need to consider. Margaret, I saw you nodding. Do you agree that privacy is something we should be addressing right away?
1:05 am
Absolutely. This goes to the discussion we have already had, where there is so much at stake, so much already happening. When will technology be turned against us? It is already being turned against some subpopulations by people, so we can think about how it has been turned against some people by other people and address those issues. Privacy is a big one of them, going to surveillance and the way surveillance is weaponised against people. That goes into the right to be forgotten, these sorts of issues, so there's tons to be done right now that I think a lot of this rhetoric is overlooking. Stay right there, because we will talk about more aspects of AI and the benefits and possible risks in just a moment, but I want to look at another aspect first. In January, a Democratic congressman made news after he delivered a speech on the floor of the US House of Representatives that was generated by ChatGPT. I spoke with him about that and his concerns about AI development. So first, here is a clip of what he delivered in Congress.
1:06 am
I stand here today because I'm planning to reintroduce the United States-Israel Artificial Intelligence Center Act, a bipartisan piece of legislation that would cement a mutually beneficial partnership between the United States and Israel on artificial intelligence research, a step forward in an era where AI is taking centre stage in public discourse. We must collaborate with international partners to ensure that the United States maintains a leadership role in AI research and development and responsibly explores the many possibilities of evolving technologies. The United States-Israel Artificial Intelligence Center Act will allow us to tap into the expertise of both countries and draw upon each other's resources to explore advancements. It's great to have you back on BBC News, thank you for joining us. We saw a clip of your speech in Congress there. What point did you want to make? Thanks for having me back on. Since my time at MIT and through the
1:07 am
years working in high-tech, I have been following AI closely. I could tell we were sitting at an inflection point where it was going to be disruptive to lives and the economy, and I wanted to draw attention to this issue, because AI cannot be like social media, where policymakers were five, ten, 15 years behind the curve. It got so big so fast that its negative effects on society and democracy were unrealised and unmitigated, and have now become very hard to repair. We need to be ahead of the curve, not behind the curve. It does seem to be reaching an inflection point, and this week we saw leaders in AI sign a statement warning of AI risks. What concrete actions do you think Congress should take immediately? First and foremost is deepfakes. I'm working on legislation right now that would make internet service providers and social media companies liable for defamation through AI-generated deepfakes. I
1:08 am
am deeply concerned that the deepfakes generated in the 2024 general election will make what we have seen look like child's play. We will be seeing situations in which politicians appear to be saying or doing things that they did not do, that move voters, move elections and undermine the bonds of trust and shared reality that any healthy civil society requires. We cannot allow social media platforms to hide behind Section 230 and liability shields and claim that deepfakes are fair game when a defamatory statement is made. You just mentioned social media platforms, and YouTube is reversing its election denialism policy and will leave up content which says there was fraud in the 2020 election, because they say leaving the policy in place might have the effect of curtailing political
1:09 am
speech without meaningfully reducing the risk of violence. What do you think of that? Those are two different issues. One is the ability of a platform to censor speech, and while the First Amendment does not prohibit private companies from restricting speech, there is also nothing that requires them to do so. A robust give-and-take of ideas, even ideas that we find despicable or downright false, is part of democratic debate. What I'm discussing right now is defamatory deepfakes: generated videos and images showing people doing or saying things they never said or did, in order to drive disinformation and create confusion. For example, you may have seen, a couple of weeks ago, doctored images of the Pentagon under attack, which briefly caused the stock market to flutter. That wasn't a particularly sophisticated
1:10 am
deepfake. What I am worried we are going to see in the next couple of years is deepfakes at ten times that level of sophistication. We have already seen the Chinese Communist Party do this in other areas, and I worry that the CCP, the Russians and even individuals can start deploying them to undermine democracy. So what do you want your colleagues in Congress to do about it? I want them to support legislation that will make companies liable for knowingly hosting deepfakes that are defamatory, or for not taking reasonable care and being negligent in allowing them to be disseminated. Right now, under Section 230, they can plausibly assert that it is not their problem. These are $1 trillion companies; it is their problem. You mentioned Russia and China. I want to ask you about something we heard from one of your colleagues. Here is what he said.
1:11 am
We can expect them to follow the same rules of the road when it comes not only to their troops but to their autonomous weapons, and that's why we have to get ahead of this problem and set some international norms for their use, the same way that we have come to agree on things like chemical weapons. Do you think China and Russia are willing to work with the US and its allies on setting these types of norms as to how AI is used? More importantly, can we get guardrails with the Chinese Communist Party? Right now we're watching Putin send cruise missiles into maternity wards, so my faith in his willingness to abide by international agreements is severely limited, and when it comes to Russia, our job is to secure strategic victory for the Ukrainians and be with them for as long as it takes. With the CCP, we know that they are trying to deploy AI through both the People's Liberation Army and the PLA Navy, and for starters, our response
1:12 am
needs to be to be stronger and better at it than they are. We can negotiate from a position of strength, but we do not want to negotiate from a position of weakness or being behind. Thank you so much for joining us on BBC. That was a conversation we had with the congressman a little bit earlier. Let's bring back my panel, Margaret and Andrew, who were watching that interview. I want to start with you, because I saw you nodding vigorously through certain parts of that interview, so let's start with the congressman's concerns about deepfakes. Do you share that concern, especially looking to the election year in the US? This is something that has been concerning AI-ethics-oriented people for a long time. If you work through the most likely harms, deepfakes are one of them. Revenge porn is often in that realm, and it disproportionately harms women. And the thing about deepfakes is that it's so possible to do
1:13 am
now, so easily, along with fake language, that you can make it seem like news events have happened that have not, and have lots of people think that they are surrounded by like-minded people when it is only bots generating fake content. I would like to get your take on that as well. There is another trend that has been happening, to Margaret's point, with AI-generated deepfakes: you have videos of murder victims being recreated, bringing back a deceased victim, such as a young child, recounting the details behind their murder, and the ethics and questions of whether that is OK and how we minimise the risks that can come from that. And I think, from the standpoint of revenge porn, absolutely, it is a big problem, and the harms that can come from the illegitimate use of this technology, if not guarded against, can be
1:14 am
severe. When I asked the congressman about what should be done specifically in Congress, he said Congress should support legislation that would make internet providers and social media platforms liable for knowingly hosting deepfakes, or for being negligent in allowing them to be disseminated. Do you think that is the right approach? What is meant by deepfake is really critical here. A deepfake is an image generated by a deep learning system, and a lot of the images being shown come from generative AI, a new technology, and could be called deepfakes. There are a lot of details that need to be worked out as to what counts as an inappropriate deepfake. But just disclosing that something is an image generated by an AI system should be required, and it's not in the US. Andrew, how difficult would that be to do? I think it is one that will take time but is necessary, to
1:15 am
reasonably inform the public, so those coming onto a platform understand what they are looking at, where the information is coming from and how it is generated. How is the data used? Is it an overnight process? Absolutely not. It will take time, and starting somewhere is the first step in the right direction, especially if we are looking to see how we can adapt Section 230 to a digital age in a way that makes sense. That takes us to the next point I want to discuss, which is regulation, so I will ask you to stay there so we can listen to one conversation we had earlier. We had the chance to speak to the executive director and co-founder of the Center for Humane Technology, Tristan Harris. He previously worked as a designer for Google, and we talked about efforts to regulate AI. Let's watch and talk about it after. Tristan, thanks for joining us. Max Tegmark is one of the leading AI experts from MIT, and this week
1:16 am
he signed a statement along with other AI leaders about the risks of artificial intelligence. He spoke to the BBC. We are not saying that doom is guaranteed, we are just saying that this is really a serious possibility. In fact, you and I are most likely to die from some AI catastrophe rather than any other cause of death, so we should take it seriously. Do you agree with that? The important thing about the statement that Max is referencing is that it is the first time the CEOs have signed. With the pause letter that Elon Musk signed a few months ago, only people in the AI safety community around the world, the experts in the field, but not the CEOs of AI companies, signed the letter calling for a pause. This time they acknowledged that this is sincerely an existential risk on the scale of climate change and the possibility of nuclear
1:17 am
war. Having the CEOs of companies say that this is how the set of risks should be treated is a very important signal. If the engineers at Boeing said there was an existential risk from an aircraft being deployed too quickly, we should listen to them when they say it is happening. One thing people find to be a contradiction is: why are they building it if they say it's an existential risk? Some people believe that they are hyping the technology, that somehow they don't believe it's an existential risk, because if they did, they wouldn't be building it. The public needs to understand that the CEOs of these companies are trapped by a race dynamic. If they don't release it, they will lose to the guy who does, and they think they are a good person, so if they release it, they think they will have a better chance of doing it safely. But we are all racing in what Max Tegmark called a suicide race, a race to the cliff, and somehow they think they can go as fast as possible but hit the brakes.
1:18 am
Who is responsible for hitting the brakes? This is why the conversations about regulation and multinational co-ordination are so important. If I am just Sam Altman, who runs OpenAI, can I cause China or the others to hit the brakes? Everyone needs to do it at the same time, which means we need a collective view and a trustworthy agreement that everyone will hit the brakes, because we all recognise where the risk is and what the lines we are crossing are that amount to that risk. Some people believe we have already crossed those lines with the release of the open-source, open-weight models. We can go into that later. But we need co-ordination. I was recently on a podcast and I was asked what I would have Sam Altman do, and my answer was: put $1 billion into global co-ordination on decelerating the race and creating a managed safety outcome. It is not up to
1:19 am
one actor; it's about all the actors agreeing together, and this is what the EU and the US are doing in announcing co-operation, and that is good news. One of the arguments we hear counter to that is that if the US does not develop these technologies further, countries that are adversaries or rivals, like China, for example, will. What's your response to that? It is true that if China viewed this as risk-free, then they would hit the gas pedal to get to this outcome while the US and other actors did not, and that would still result in a catastrophe. This is why in our work we often cite the effect of The Day After, the film that aired in 1983, because it gave a believable shared fate about the aftermath of what would happen in the event of nuclear war. Right now we don't have a shared fate about where AI goes, and everyone races to get there
1:20 am
as fast as possible, and no one knows how to control it, align it, or stop malicious actors from doing things, whether creating bioweapons or cyber attacks or a bunch of other dangerous things. Instead, everyone is racing, and we need a shared view. The reasoning behind the call to pause for six months was that China is not on the heels of the US, and right now the Chinese Communist Party believes these are more dangerous technologies than people in the West might think. Let's talk about regulation. You mentioned that the US and the EU are working together on a voluntary code of conduct. Do you think that's going to be enforceable, and is that the right first step? What we need are good first steps and alignment and agreement on the need to create a global enforcement mechanism. One of the challenges is enforcement. How do you prevent people from shipping code that, once it's out there, is released on the
1:21 am
open internet? There have already been models. Facebook, or Meta, the company that owns Facebook under its new name, released open weights for its model, and that's very dangerous. We need global enforcement on open models being released. Think about it like releasing a genie that can answer questions about everything. If it had the ability to answer questions about building nuclear weapons or bioweapons, you would not want it to be out there. The EU AI Act was recently updated to try to prevent the further open-sourcing of these models, so the EU is ahead on that front. We need the US and the EU to collaborate and agree and further co-operate on where the guardrails are put in as quickly as possible. This is a big, long road to travel, and we need the first steps of agreement to happen, and happen quickly, because people need to understand that AI moves on a double exponential curve. People feel
1:22 am
like there's a time warp, like human years and dog years, but AI is moving faster. A final question: what would be the first guardrail? If you went to Congress and suggested a guardrail, what would it be? The first is preventing the open-weight models, the genies that, when they get out of the bottle, cannot be put back in and are available for ever. The EU AI Act was updated a few weeks ago to include this provision, and I would say the US should follow suit. There are those who argue that we should prevent the training of GPT-5 and beyond. There are those who argue that we should not have gone beyond GPT-3. It's like enriching uranium. Israel does not let Iran enrich uranium beyond a certain degree, as it's a national security threat. We should have the same with AI, the new uranium, because people have different views, and it offers so many benefits in the meantime. We need a view of the risks and
1:23 am
then say what we should not have out of the bag that's publicly available, and how do we lock down the more dangerous capabilities? Really interesting. Thank you, Tristan. Of course, great to be with you. Back to the panel. Margaret, what do you think the first guardrail should be? Data transparency. To predict anything a system does, if we knew about the data, we would be really well-informed, and that's not hard. Data transparency. What needs to be done to ensure data transparency? There are a lot of documentation proposals out there that are really useful and helpful. For example, where the data is sourced from, the different types of topics, how trustworthy they are, the kinds of toxicity captured, the kinds of scientific facts versus fiction. All those things can be used. Think of it like nutrition labels: what is the content of your data set? That's a really helpful way to think about it. Datasets shouldn't be released without the equivalent of nutrition labels.
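The "nutrition label" Margaret describes corresponds to dataset documentation proposals in the research community (datasheets for datasets, data statements), though no single schema is standard. As a minimal sketch only, with every field name here an illustrative assumption rather than any established format, a machine-readable label might look like this:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class DatasetNutritionLabel:
    """A minimal 'nutrition label' describing what is inside a dataset.

    All field names are hypothetical choices for illustration, not a standard.
    """
    name: str
    sources: list[str]             # where the data was collected from
    topics: list[str]              # broad subject areas covered
    collection_period: str         # when the data was gathered
    estimated_toxicity_pct: float  # share of documents flagged as toxic
    known_gaps: list[str] = field(default_factory=list)  # known blind spots

    def to_json(self) -> str:
        """Serialise the label so it can ship alongside the dataset."""
        return json.dumps(asdict(self), indent=2)

# Example: a label published with a hypothetical web-crawl corpus.
label = DatasetNutritionLabel(
    name="example-web-crawl-v1",
    sources=["news sites", "forums", "public blogs"],
    topics=["politics", "science", "sport"],
    collection_period="2022-01 to 2022-12",
    estimated_toxicity_pct=2.4,
    known_gaps=["little non-English text", "no data after 2022"],
)
print(label.to_json())
```

Publishing something like this alongside a dataset release is the transparency step being described: anyone building on the data can see its sources, coverage and known gaps first.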
1:24 am
That's a really interesting point. Andrew, your take in about 30 seconds, if you can? Identifying what data is out there, how it is sampled, collected and distributed. From there, putting together a collective group of experts, a task force, the attorneys, the researchers, so you can have this collective conversation to put some of these changes through. It's been a terrific collective conversation with both of you, getting your perspectives on this, Margaret and Andrew. There is certainly a lot to discuss. We would love to have you both back on as we see these developments in AI moving forward. Thanks for joining us tonight on BBC News. Thanks. Thank you for having us. That is our show at this hour. We will be back at the top of the next hour with more news. You
1:25 am
can download our app for the latest news and headlines, and you can follow us on Twitter for the latest breaking news. Thank you for watching. Hello. No shortage of sunshine in the weekend forecast for most. One thing we will continue to see a shortage of is rain. There are many places, particularly in the south and west of the UK, that have not seen any measurable rainfall for more than three weeks. That's not going to change very much over the next few days. In fact, over the next five days, while southern parts of Europe will continue to be very wet indeed, across our shores there is very little, if any, rain in the forecast. That is because high pressure continues to hold firm
1:26 am
to the north-west of Europe, allowing these thunderstorms to pop up down towards the south, but keeping us largely dry, fine and settled, with some spells of sunshine. That sun is strong at this time of year, with high UV levels. Most will start on a sunny note on Saturday. A little bit of patchy cloud here and there in parts of England and Wales, tending to retreat back to the east coast. A bit more cloud around northern Scotland, particularly for Orkney and Shetland. There's a small chance of a shower over high ground in Scotland and Northern Ireland, but really only a small chance. A slightly warmer day, 16 or 17 on the east coast, and 23 further west. For the FA Cup final at Wembley, the weather is set fair, with quite a lot of sunshine in the afternoon. At Hampden Park for the Scottish final, a very similar forecast: sunshine and 22 or 23. As we head into
1:27 am
Saturday evening, it stays fine, with lots of sunshine and clear skies overnight. That will mean a rather chilly night, but low cloud in the north-east of Scotland and maybe some patches of low cloud across eastern parts of England as well. Temperatures generally seven to 10 degrees, but it may be a little chillier in some spots in the countryside. On Sunday, a bit more low cloud in northern and eastern Scotland, threatening to roll onto the east of England. Further west, spells of sunshine and temperatures up to 22 or 23 degrees. As we look ahead to the coming week, little, if any, rain in the forecast. It's going to stay dry and, if anything, it's set to turn warmer later in the week.