tv The Media Show BBC News September 2, 2023 1:30pm-2:01pm BST
1:30 pm
india launches its first observation mission to the sun, just days after the country made history by becoming the first to land near the moon's south pole. the studies will help scientists understand solar activity. now, on bbc news: the media show: ai — destroyer of journalism? hello. ai, it is all we seem to hear about these days. but what does it mean for the news business and the way we all find out about what's going on in the world?
1:31 pm
what sources will i rely on to deliver trustworthy news? will it put journalists out of a job? the chances are you've already, perhaps unknowingly, read a news article that wasn't entirely written by a human. so what's going on? today, we're dedicating the whole programme to these questions. with me are the artificial intelligence editor at the financial times, sky news's science and technology editor tom clark and liz mizen from independent media cooperative the bristol cable, as well as jackson ryan, science editor at cnet. welcome to you all. and i think we should start with the basics. if i could bring you in from the financial times, explain what we mean by ai and why, particularly in terms of its role in journalism, it's getting so much coverage now. so ai is artificial intelligence. and i mean, supposedly it's a mechanical computer version of human intelligence. so at least that's the hope, right? but today what we have is basically a powerful statistical system, computer software which finds patterns in large amounts of data. what this means is that it can,
1:32 pm
you know, find diagnoses from pictures of x—rays or it can look through lots of words and help translate them into different languages. and what we're talking about today is generative ai, which is software that can actually create and generate things that include words, images, code, even video. and how widely is it being used in newsrooms, do you think? i mean, what's the financial times doing, for example? so i think over the last six months, it would be impossible to ignore it if you were a newsroom with a digital operation that was trying to reach people online. i think you'd have to be aware and, you know, have to be experimenting with it. most big, large news publishers are doing it. the ft is — we've put out a letter
1:33 pm
saying we're not going to be publishing any stories that are written by ai, but we will be looking at how it might help journalists do their jobs better, things like summarising complex documents like, you know, tax documents or, you know, readouts from court cases, things like that, that are difficult for humans to read in volume very quickly. it could help to sort of pull out trends, though that doesn't mean it will be great at it. we're trying it out. we will continue to experiment. but i would say nothing, nothing that we're putting out into the world for our readers is ai generated today. and when it comes to concerns around accuracy and bias, just talk us through that. so the way that generative ai works, text generation, let's look at writing words. so something like gpt, which we know, you ask it a question and it comes out with an answer. the way it works is it's been trained on billions of words that it's taken from the internet. and those could be words from books, from websites, blogs, reddit posts, youtube comments, think anywhere where there's been words written by humans on the internet. if you think about that corpus of data, you can also see that it's not necessarily fact
1:34 pm
checked or accurate. in terms of bias, it's also pulling in a lot of the sort of implicit assumptions, stereotypes and so on. and all of that is kind of pulled into the software to be trained from. and the way it works is by predicting the next word based on all of the words that it's already been able to analyse. so you can see then why it's not going to be 100% accurate, because it's just telling you what it thinks is most likely based on the past, which is usually true, but not always. ok. and when we talk about ai, it's sometimes discussed as an existential threat to humans. i suppose what we'd be talking about here is whether ai in journalism is going to put us all out of a job. tom clark, you recently did an experiment for sky news asking if ai could replace your reporting. let's just hear some of it. and here you're watching a report by a visual avatar whose image is based on a real life colleague. we associate things like this
1:35 pm
with hotter, drier countries. the next task is to use different types of ai and a human volunteer to give our reporter a personality. our producer, hannah, lending her face and voice to train an avatar. i have been trained using a four minute video clip of hannah speaking into camera. it's pretty convincing to me. yeah, that would fool me. ok, well, there you are. you were impressed. talk us through what you found by doing that series of reports. yeah, they were visually pretty impressive. and weirdly, we watched them get better during the process of even making the report. so it was the pace at which things are getting better that also really blew us away. the other thing we tried to look at, though, is these natural language models that we just heard about there and how much potential they have for doing journalism, with the help of someone who understands far more than i do.
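(an aside for readers: the "predict the next word" mechanism described earlier can be sketched in a few lines of python. the corpus and function names here are invented for illustration; real systems use neural networks trained on billions of words, but the principle of emitting the statistically likeliest continuation, which is usually but not always right, is the same.)

```python
# toy next-word predictor: counts which word follows which in a tiny
# corpus, then always suggests the most frequent continuation.
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat chased the mouse . "
          "the dog sat on the rug .").split()

# tally each word's observed successors
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """return the most frequent continuation seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" (follows "the" most often here)
print(predict_next("sat"))  # "on"
```

note that the predictor has no idea whether "cat" is true, only that it was common; that is the accuracy problem described above in miniature.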
1:36 pm
we came up with this little wheeze where we basically got two agents powered by gpt—4 to sort of talk to each other. one played the role of a reporter, the other of an editor, to kind of pitch stories, refine them, pitch them again, then go and find sources for them, another prompt to go off and do that, to build something up. and we were feeding it news from this thing called a web crawler to give it some sort of awareness of what's going on out there. and you know what? it was quite impressive. it was quite cool. they could come up with reasonable sort of pitches for stories, and it could certainly do a really quite convincing job of writing an article. were you encouraged by the things it couldn't do? i mean, were there things that it couldn't do that made you think, "oh, i've got a job for a bit longer?" heaps. so while it was quite good at coming up with pitches, stories that sort of were credible, they weren't particularly great. and i think there's reasons for that. we gave it the news, what was out there in the news, and it was coming up with stories. it was like, take event x happening in the uk. so house prices and interest rates, oh, there must be a connection.
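(the reporter/editor loop described here boils down to a simple control flow. in this sketch, `call_llm` is a hypothetical stand-in for whatever text-generation api the experiment used; it is stubbed with canned replies so the loop itself is self-contained and runnable.)

```python
# two-agent pitch loop: a "reporter" proposes stories from crawled
# headlines, an "editor" critiques, and the critique feeds the next prompt.
def call_llm(role, prompt):
    # hypothetical stand-in for a real text-generation api
    canned = {
        "reporter": "PITCH: link rising interest rates to falling house prices",
        "editor": "FEEDBACK: name an economist and cite recent sales data",
    }
    return canned[role]

def pitch_cycle(headlines, rounds=2):
    """run pitch/critique rounds, accumulating feedback into the context."""
    transcript = []
    context = " / ".join(headlines)
    for _ in range(rounds):
        pitch = call_llm("reporter", f"pitch a story from: {context}")
        note = call_llm("editor", f"critique this pitch: {pitch}")
        transcript.append((pitch, note))
        context = f"{context} | {note}"  # feedback shapes the next round
    return transcript

log = pitch_cycle(["uk house prices dip", "bank raises rates again"])
print(len(log))  # 2 rounds of pitch and feedback
```

the structure also shows why such a system only recombines what it is fed: everything it "knows" arrives through the context string.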
1:37 pm
it would sort of take two things and pitch a story around that. it would pitch feature ideas, basically, but it doesn't know what news is. it wasn't breaking news every day. well, it can't. it's a natural language model designed to predict what the next text will be based on a sort of training data that's a little bit out of date. it doesn't have any awareness of what's going on in the world beyond what we could feed it from google and the sky news website and other sources. and it also didn't have the capacity for abstract thought or imagination or the sort of ideas that we need to make, sort of... news. the other interesting thing was the hallucination thing, the kind of... where it really gets a bit more worrying. it's really convincing. it does a very good job of presenting you text that's quite believable, but it can be really wrong. so one story it came up with: there was a lorry crash on the m6 that spilt i think it was 20,000 litres of milk all over the m6 motorway. it was a news story, and it came up with this pitch that scientists
1:38 pm
had discovered a hidden benefit of spilling milk on motorways, that it actually made road surfaces safer. and i thought, that's... i mean, it was really bizarre. i mean, it just creates this idea. it even found a piece of academic research from a university in new zealand that supported this, that didn't exist. it gave me the academics' names and a journal it had been published in. and i couldn't find any record of it. and it said that this discovery was made sort of overnight after the accident had happened. absolutely untrue, and a really good example of where you don't want to be letting an ai do anything approaching the kind of editorial side of journalism. i wanted to bring in jackson ryan here to talk about transparency. you work for the american tech website cnet, and they've been using ai to help write stories. tell us a little bit about that. i think cnet's approach was a little controversial,
1:39 pm
but we have been using a generative ai tool to create articles. and then those articles were fact checked by a human and then published on our website. this was what was called an experiment at the time. it actually happened very, very early after gpt sort of exploded across the web. and i think we were kind of like one of the very early movers on the generative ai movement, where, for a tech website, that seems pretty like something that we would do. but unfortunately a lot of these articles that were generated by the tool were incorrect, and they were generated in an area that we know the tool is not very good at generating text for, which is with numbers. even simple things like, what is a credit card? we were getting an ai to generate an answer to that. unfortunately, more than half of those articles that we published were incorrect in some way, needed correction.
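(the numeric slips described here are instructive: one widely reported cnet correction reportedly implied roughly $10,300 of interest on a $10,000 deposit at 3%, when the real figure is $300. the sum a human fact checker has to verify is a one-liner; the figures below are illustrative.)

```python
# compound interest: the kind of figure a fact checker must verify,
# and exactly where a next-word predictor tends to go wrong.
def compound_interest(principal, rate, years):
    """interest earned over the period, not the final balance."""
    return principal * (1 + rate) ** years - principal

earned = compound_interest(10_000, 0.03, 1)
print(round(earned, 2))  # 300.0, not 10_300
```

the point is not that the sum is hard; it is that a language model produces plausible-sounding numbers rather than computed ones.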
1:40 pm
so we have had to kind of change tack a little bit. we haven't stopped doing it altogether, but we put a big pause on it at cnet. and this is one of the things that i'm personally quite worried and concerned about in the ai world. it's the silicon valley mentality: move fast, break things, see what happens. and in a piece i recently wrote, you know, i don't want to compare this too much to the atomic bomb. i had just seen oppenheimer when i wrote this piece. but basically, like, for me, in some ways it's like standing and watching the gadget at trinity be assembled, right? this first test of kind of a world changing technology. and we haven't really grasped the consequences of what deploying that technology in full means. and unfortunately for us, it seems that we did deploy it at cnet without really thinking about what it could mean or perhaps even, i guess, what it could do to some of our credibility. and it was a real harsh lesson that we had to learn. in terms of what you have learned, then, you know, are there things
1:41 pm
that you're now putting in place at cnet? yeah, yeah, definitely. so we put a pause on articles once these articles were discovered, and essentially we rewrote a whole a.i. policy for cnet, which now basically states that we will not use a.i. to write entire articles. we will also not use it for photos or images on our website. but what we will do, it's actually even got a funky name. it's called ramp, which means responsible ai machine partner. and basically this ramp tool is meant to assist us with creating articles. have i used it in any of the reporting i do as a science editor? no, i haven't. there's not anything that i can really use it for, especially when i'm talking about breaking news or new studies. but recently we published a piece about the best broadband provider in tulsa, oklahoma. and for an article like this there's probably a lot of work that can be reused, and that ai tool that we're using is trained on our own data rather than the whole web.
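(formulaic "service" pieces like the broadband example are largely structured data poured into reusable prose, which is why a tool trained on a publisher's own data can help. a minimal template sketch; the provider names and figures are invented.)

```python
# invented plan data; a real tool would pull this from the
# publisher's own database of tested providers
providers = [
    {"name": "AcmeNet", "speed_mbps": 500, "price_usd": 30},
    {"name": "FibreCo", "speed_mbps": 900, "price_usd": 45},
]

def service_article(city, plans):
    """render structured plan data as a short service piece."""
    best = max(plans, key=lambda p: p["speed_mbps"] / p["price_usd"])
    lines = [f"best broadband providers in {city}"]
    for p in sorted(plans, key=lambda p: p["price_usd"]):
        lines.append(f"- {p['name']}: {p['speed_mbps']} mbps at ${p['price_usd']}/month")
    lines.append(f"best value for speed per dollar: {best['name']}")
    return "\n".join(lines)

print(service_article("tulsa", providers))
```

because the output is computed from the data rather than predicted word by word, the numbers cannot hallucinate; only the template prose is fixed.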
1:42 pm
now, there are still some questions that have to be asked, and that's why i'm saying we need to slow down with a lot of this. both of you have raised the fact that, you know, ai generated articles that, you know, you're aware of, or in tom's experiment, actually produced things that were factually inaccurate. why was it not getting it right? it's trained to generate what the next best word is. it's like a really fancy autocomplete, the predictive text on your phone. your phone learns what you're texting all the time to your friends, to your partner. it knows you're going to say i love you. and if you go to say it to me and we've just met, it will still tell you i love you. it's just the way that the models are trained. and i think we also discovered they're really lousy experts. they're written to give you an answer to whatever you ask. if it can't come up with a good answer, it'll make one up. that's the worst kind of expert. a real expert says, i'm sorry,
1:43 pm
i don't have the relevant information. i don't know. but i think we mustn't underplay how good these language models are. they're extremely clever at creating text. it's hard to know where you might be looking at one of these hallucinations or not. you know, they're very unreliable experts. you have to be pretty careful because they can give you very convincing, wrong answers. in terms of cnet's journalism on the subject, you know, you now carry a declaration, how we will use artificial intelligence at cnet. now, i think the guardian does the same. but how much do you think this matters to audiences? i mean, that's the million dollar question. i don't think audiences necessarily care that much where the news comes from. like, this experiment that cnet ran originally was not found for three months, because we had a dropdown box that said this was generated with the help of an ai, and it was only that someone had scraped google basically and seen that we were publishing it, that it became known. i don't think audiences even care what bylines are in an article
1:44 pm
half the time. also, i don't even know that audiences read past the headlines all that often. i don't want to denigrate our audiences, because thank you for reading our site. but at the same time, i don't know that it matters too much, and that's scary to me. i would much prefer that that wasn't the case, but i feel like it is. but you are trying to be transparent. is there the same commitment to transparency across the news industry? well, i'd say i disagree. i think that maybe it's true that people don't recognise different bylines always. but i think people expect that there was a person there who went off and did their job, which is to fact check what they wrote and tell you some version, at least, of the facts or the truth, right? i think there is a sense with audiences of breaking some sort of implicit trust that you have, whether you're broadcast or print media. and i think any media organisation that wants to maintain that relationship of trust will have to be transparent going forward, partly because of the problems around hallucinations and inaccuracy, but also
1:45 pm
because it's a huge shift in how we as a society are consuming information. you know, you can't just go from saying, you know, humans no longer do this job, this is just all written by a machine, and that's just ok with everybody. i don't know that everyone would accept that. well, let's look at this from a local news perspective. i'm very aware liz mizen has been sitting there very patiently while this has been going on, from the bristol cable. i mean, it must be tempting, liz, for local news publishers, if you can produce news cheaper than with humans. i mean, i think for a lot of people, probably what we do at the cable is we're really trying to do something different with our co—op. and a lot of that is just to do with the business model. and i think that there's a bit of maybe a bit of a paradox in that we don't necessarily have the resources to be doing lots
1:46 pm
and lots of research into how we can use a.i., if we did use a.i. we haven't so far. i think there's a general feeling that we want to see kind of where the dust settles, if it indeed settles. but we don't have lots of money and lots of resources to start experimenting with this kind of thing. and we are really committed to investigative slow news, if you like. so there's a bit of a paradox there, in that, you know, you'd think, oh, it would be really useful for local journalism, which has been worst hit by the kind of collapse, if you like, of the journalism business model, particularly print journalism. but then at the same time, we don't necessarily have the resources to be picking up new tools and doing new things and learning new software. so it's kind of six of one and half a dozen of the other. i mean, i suppose you're bucking the trend, aren't you, because you say you're very much focused on investigative stories and the sort of proper, meaty end of journalism. for local news more widely, i suppose, which is often those little local organisations that are owned by bigger organisations that are looking to cut costs, i suppose that's where this might
1:47 pm
accelerate trends that are already under way in terms of cost cutting and cutting journalists. yeah, definitely, i think so. there are kind of two things that i think are really interesting, and certainly the business model for me is the most important thing. so the business model of print journalism particularly has kind of collapsed, and the worst hit of that are the local outlets. and i think that there is a situation in which this is going to be really useful for some of those outlets. so news corp australia, for example, recently started using ai in order to essentially kind of aggregate information. so one of the really good examples that they used was looking at the cheapest fuel prices in an area, for example. now, i wouldn't necessarily call that reporting, but i think that that's really useful. and i think they even called it service information, providing service information. and i think they also made it clear that it was overseen by humans, so they weren't just letting the ai tool go off and print stuff. exactly. so i think that's the ways in which it can be really useful. but i think what is a problem and what is likely to become
1:48 pm
a problem is that the collapse of the business model is simply going to continue if we start thinking, oh, well, ai can just do human beings' jobs and we can sack more people because now we can get gpt to write all of our articles for us. i think that really misunderstands, particularly with investigative journalism, how much emotion there is, how much empathy there is. i mean, one of the examples that i really like to use is, it will be really useful for an ai to take the minutes of a council meeting and to write that up into an article, and then we can fact check it. but an ai is never going to be able to convince some whistle—blowers at the council to tell you what was never minuted in the first place. in journalism, you know, if we think about the upcoming news cycle, the us election, for example, a general election coming in the uk, you know, we worry about disinformation and deepfakes, but do we need to be concerned about the impact of ai produced disinformation and fakery that they might have on those stories, for example? definitely.
1:49 pm
i think this is probably the kind of near—term top thing that everybody is worried about. and we've already seen examples of, you know, political fake news, misinformation and disinformation being generated by ai tools. i reported a while ago on this. it happened in venezuela, where they had ai generated news readers reading out government propaganda. much of it wasn't true, and it was being generated using a technology based here in the uk. so this stuff goes global really quickly. and you know, there's been hundreds of examples over the years and particularly recently, you know, dozens where we've seen how even images can be manipulated: pictures of boris johnson being arrested. there was a fake image of trump hugging anthony fauci. you know, and this can be deployed and employed as political tactics. so until there's some kind of law that forbids it and a way for people
1:50 pm
to tell real from ai generated, the flood of it is just going to make it harder for us to tell the difference. yeah, i was just going to add, we can't underestimate the power of the tools. the same way that they could benefit local news by giving you sort of hyperlocal targeted information, we found it was very easy to get gpt—4, through a few prompts, to generate emails and send those emails automatically. you know, we did that with off the shelf stuff, very unsophisticated. you could have a targeted political ad campaign posting on social media on a hyper local level. you could get inside particular electoral areas with particular memes or messages or whatever in an extremely powerful way. and i think while we in journalism are here sort of discussing how it might change our jobs, we also have to think about how we need to understand what it's doing in order to do our jobs. i mean, we have to get much, much smarter about how we understand ai,
1:51 pm
what we know about how it's being used, who's using it. and we were talking earlier about how newsrooms are using ai. if we flip it around and look at how ai has been using newsrooms, if you like, often without their permission. what it is is essentially bots extracting data from all sorts of online sources. it's not clear if the tech companies have been paying news outlets to suck up, you know, all those years of news stories that have been paid for by whoever it might be, the bbc, mail group or whoever it is, and then train their ais on that. or maybe it is becoming very clear that they haven't paid for this. they definitely do this, to build ai systems out of news publishers' data. but what does that mean? because clearly there's a tension. are these news organisations wising up to this now and saying, you need to pay us? or how's that going to work? yeah, we reported a few weeks ago about basically all the biggest tech
1:52 pm
companies building ai models being in talks with the biggest media publishers to kind of strike deals, proper financial deals, about how they might be compensated for the use of news content, because they have to scrape news websites in order to learn. and what happens when you ask them a question about a news topic and they generate an answer? that's essentially generating journalism, but kind of sidestepping all of the sources that they used to train themselves. because i did read that the daily mail is looking at potentially suing google, taking legal action over the scraping of its news articles. yeah, you know, i think that they're definitely wise to it. everybody's wise to it. and as i said, you know, i think we named news corp, axel springer, the new york times, the guardian, all of these we know to have been in discussions around, you know, do they just pay you a cheque? do they build something for you? so there are definitely going to be partnerships that will have some kind of financial shape to them. i mean, we keep mentioning
1:53 pm
google, and obviously there are lots of other organisations allegedly doing this as well. just to explain what you were saying, because i think for audiences, what this is going to mean is that at the moment you might go into a search engine and you'll put in a question about something and it'll come up with a whole load of different articles that you can choose to read. and what we're saying is, in the future, it'll be a one stop shop, the summary of all those articles, and that's what's different. jackson, is this something you're worried about? yeah, absolutely. this is the thing i'm most worried about as a digital publisher. i mean, from my point of view, you know, we know that nine out of ten people use google, right? so basically all the internet search traffic goes through google right now, still. and although google says that it has this idea that if it summarises something, it will provide links so you can go deeper, i think we know that the behaviour of a searcher is not to try and go that deep. i don't think the second page of google is hardly ever clicked on. so we already know that summarising those articles is going to take away a lot of this traffic, and some of our digital publications are propped up by how much search
1:54 pm
traffic they get, right? like, i know that cnet's google traffic is like a big, big chunk of where we get our eyeballs from. it's predominantly through google. the business model gets broken by this summarising. and what are the digital publications going to do? especially like very, very new stuff. and that's why in a recent piece i argued, like, we should not be allowing this to happen. we should have some sort of moratorium on how quickly these models can suck up data. i don't see any other way to prevent some of this from happening. i think we're coming towards the end, and i would like to end by just looking to the future. you know, is journalism doomed, or is the debate about ai's application in news reporting actually a reminder of the value of human journalists? if we don't get this right, it could be. i think there is a kind of existential crisis for information we're looking at. it's kind of what jackson was just touching on. microsoft, which invested,
1:55 pm
is investing $10 billion in openai, the company that created gpt—4. they've already put that into bing, so you can effectively use their search in the way that we were describing already. these tech companies are throwing everything at it. and think on this, because this is what really troubles me. the more ai generated content we put on the web without knowing whether it's accurate or not, the more data there is out there to scrape for the ais of the future. if we don't somehow manage to step in and separate what is human generated, whether it's true or not, from the ai stuff, we get to a point where we're actually feeding the ai with ai generated stuff, and we might end up in that situation in very, very short order, because don't forget how much computing capacity goes into these, how much data they're able to scrape. we end up polluting the wellspring of the information that goes into the ais in the first place, and we could be really, really stuck. so i think there's a real crisis there, but there are also
1:56 pm
very important questions for journalists about how we can use these tools to make our jobs better, assuming we survive this and continue gathering that information. i think to turn our back on a.i. and say we have to just get rid of it... these tools could be so powerful for doing investigations, for freeing up time if it's done in the right way, for streamlining the work we do, getting stories out there faster. so i think a.i. tools have enormous potential in news that we mustn't overlook. thank you so much to you all for taking part in this media show, with contributions from the ft, sky news, cnet, and the bristol cable. goodbye. hello there. we started off this morning with a bit of patchy mist and fog, but much of that is starting to clear away, and for many of us, we're looking
1:57 pm
at some warm, sunny spells into the afternoon. that was the scene this morning in gwynedd, a lovely calm start to the day there across the sea. now, this is the satellite image. you can see we've got quite a bank of clouds to the far northwest of the uk, but generally speaking, across the uk, this is where we've got the clearer skies, the finer weather as we go through this weekend. so, still a bit of cloud across some central areas continuing to clear away, and we could catch just the odd shower across the far south of england and across wales. but really, for most of us, it's going to be a dry day with those warm, sunny spells, temperatures getting up to about 21 to 24 degrees celsius quite widely, about 15 to 18 degrees further north and west across scotland where the cloud will thicken later in the day. tonight, there'll be some patches of mist and fog developing mainly across southern areas of england, the midlands, towards east anglia as well, that cloud thickening in the far north and west of scotland. some outbreaks of rain here, but a milder night to come across scotland
1:58 pm
compared to last night. temperatures last night were down close to freezing in the northeast, but tonight 13 to 15 degrees. throughout sunday then, still that cloud and that rain across the far northwest, any of that mist and fog across southern areas clearing away and lots of sunshine expected during sunday. and with light winds across england and wales, that's going to feel really quite warm, breezier across scotland and the far northwest, really, with that cloud, that rain, that brings temperatures to probably more like about 15 to 17 celsius, but towards eastern scotland, 23 degrees celsius there in aberdeen, 24, 25 degrees, the further south we come. on through next week, high pressure will move its way a bit further eastward. but what it does is it keeps things relatively settled. and with that, a southeasterly wind will bring in much warmer conditions. so, this is the air mass picture. there's a lot of orange here in the map, and that's just showing us that we've got this warm air coming in from the south east. temperatures will rise throughout the week, particularly for england and wales,
1:59 pm
25 to 28, perhaps 30 degrees on wednesday or thursday with that sunshine. temperatures even across scotland, northern ireland in the low—to—mid 20s. it will start to break down a little bit though by the end of next week. bye—bye. live from london, this is bbc news. concerns in england over the presence of lightweight concrete in schools and hospitals — labour calls for urgent checks on all public buildings. blasting off to study the sun — india launches its first solar mission to learn more about the star closest to earth. and new polar researchers start their four—year training programme — with the aim to redress the historic
2:00 pm
gender imbalance in polar science. hello, i'm anna foster. in the uk, labour is calling for an urgent audit of the concrete in public buildings, with some hospitals and courts known to contain the potentially dangerous concrete known as raac. it comes as more than 150 schools in england and 35 in scotland were found to have the material, with some fully or partially closing. investigations in wales and northern ireland are continuing. with more, here's harry farley. emergency classrooms being set up in bingley, west yorkshire. more schools are expected to close next week. parents are facing an anxious wait to know if it's safe for their children to return to the classroom. in one school, i have ten rooms and a staff room i cannot use. my second school, 16 rooms,