DW News, Deutsche Welle, July 21, 2019, 12:00am-12:16am CEST

12:00 am
Predicting flu epidemics with the help of Twitter data: researchers at a university took 500 million tweets from around the world and fed them into a Watson computer system. Watson found the relevant tweets; what's more, it recognized what they were about, for instance whether the writer had gotten a flu shot or already had flu symptoms. This process is known as cognitive computing, the digital simulation of human thought processes: the system looks, for example, for key words and puts the information into the right context (a rough keyword-based sketch of that idea follows at the end of this segment).

Berlin's Charité hospital is also using big data to diagnose illnesses more quickly and treat them more effectively. These are tumor cells, which carry specific molecular markers. No two tumors are the same, and ideally therapies would be targeted to the individual tumor. That is what precision medicine hopes to achieve: to determine which therapy is likely to be most effective, researchers identify the tumor's genetic characteristics to select a targeted treatment.
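The sketch referenced above: the transcript does not describe Watson's internals, but the basic idea of sorting tweets by whether they mention getting a flu shot or having symptoms can be illustrated with a naive keyword classifier. The keyword lists and the classify_tweet function below are invented for illustration and are not IBM's method.

```python
# Minimal sketch of keyword-based tweet triage, assuming two illustrative
# categories: "vaccinated" (got a flu shot) vs. "symptomatic" (has the flu).
# This is NOT how Watson works; cognitive computing also weighs context,
# negation, sarcasm, etc. The keyword lists here are invented.

VACCINATION_KEYWORDS = {"flu shot", "vaccinated", "flu jab"}
SYMPTOM_KEYWORDS = {"fever", "sore throat", "aching", "caught the flu"}

def classify_tweet(text: str) -> str:
    """Return a coarse label for a single tweet."""
    lowered = text.lower()
    if any(kw in lowered for kw in VACCINATION_KEYWORDS):
        return "vaccinated"
    if any(kw in lowered for kw in SYMPTOM_KEYWORDS):
        return "symptomatic"
    return "irrelevant"

tweets = [
    "Finally got my flu shot today",
    "Day three of fever and a sore throat, this flu is brutal",
    "Great weather for a bike ride",
]
counts = {}
for tweet in tweets:
    label = classify_tweet(tweet)
    counts[label] = counts.get(label, 0) + 1

print(counts)  # e.g. {'vaccinated': 1, 'symptomatic': 1, 'irrelevant': 1}
```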
12:01 am
Dr. Claudia Vollbrecht from Berlin's Charité hospital is using big data to improve cancer treatment. She is collaborating with the data-analysis company Molecular Health. Data is generated worldwide through various clinical studies and research experiments and then collected in databases. Molecular Health, for example, checks these databases regularly, on a daily basis, and compares the results with those from the patient samples.

To get such results, the tumor cells' molecular markers are analyzed in a process called sequencing. Doctors at the Charité send the results to Molecular Health, where the tumor cells' molecular markers are compared with those of thousands upon thousands of others stored in the company's database. The database also contains information about therapies. A report is produced that provides doctors with a recommended treatment tailored to the specific characteristics of the tumor cells.
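As a rough illustration of the matching step described above, the report generation can be pictured as a lookup of the sequenced markers against a therapy knowledge base. The marker names, therapy entries and the THERAPY_DB structure below are invented placeholders; the transcript does not describe Molecular Health's actual pipeline or data model.

```python
# Toy sketch of matching a patient's sequenced markers against a therapy
# database, assuming a flat dict keyed by marker name. Marker and drug names
# are placeholders, not a real clinical knowledge base.

THERAPY_DB = {
    "MARKER_A": {"therapy": "targeted_drug_1", "evidence": "clinical study X"},
    "MARKER_B": {"therapy": "targeted_drug_2", "evidence": "research experiment Y"},
}

def build_report(patient_markers):
    """Return a recommendation for every sequenced marker found in the database."""
    recommendations = []
    for marker in patient_markers:
        hit = THERAPY_DB.get(marker)
        if hit is not None:
            recommendations.append({"marker": marker, **hit})
    return recommendations

# Markers found by sequencing one hypothetical tumor sample:
print(build_report(["MARKER_B", "MARKER_Z"]))
# -> [{'marker': 'MARKER_B', 'therapy': 'targeted_drug_2', 'evidence': 'research experiment Y'}]
```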
12:02 am
This is what we're looking for; it's the direction we hope things will go in the future. We'd like patients to receive personalized treatment based on molecular changes we can identify during sequencing.

The idea behind the project is revolutionary, but using data from so many people as the basis for medical decisions, and possibly superseding the diagnosis of the patient's own doctor, is also controversial. In Germany, people are still quite cautious. We're afraid to give others access to our data, which is understandable. As a researcher, I'd like to manage my own data and know exactly what's happening to it. But I think we must do away with this idea of keeping it all to ourselves; the amount of data is just too large for that.

Big data has already allowed Berlin's Charité hospital to identify individual therapies for some 30 cancer patients.
12:03 am
Treating illnesses with the help of big data: that's real progress, and it shows that AI and humans can work together for the benefit of mankind. Another thing that big data has done is make human behavior more predictable. That's especially interesting for companies who want to target us online with personalized advertising, which can still be hit or miss: just because I research diving expeditions doesn't mean I want to buy a wetsuit right away. But data analysts in Thailand are working on ways to optimize that targeting.

What you play, what you like, what you search and see: all that data goes somewhere. We have the technology to help the brand know how to talk the right way to the right consumer, with information.

The company's co-founders have been
12:04 am
actively collecting data since 2013. Today the firm employs more than 160 people and mainly analyzes data from the Asian market. They help authorities and companies manage their image, and there has been little criticism of how they process the data. Our job is not to own the data but to help the brand understand it; in the end, we help them understand their own data.

But one data-security specialist is more critical. He believes that the global trade in data is a multi-billion-dollar business from which only a few players profit. Google alone earns over $100 billion a year with online ads. And of course, that $100 billion has to be recovered somehow through the products that are being advertised, so a single company earns hundreds, or even thousands, of dollars a year from each internet user. Then there are the data brokers, who profit from
12:05 am
collecting and analyzing this flood of data. Using special software, we try to find out who is tracking user behavior. The triangles here represent the trackers, the circles the websites visited. Even users who don't log in aren't surfing anonymously: with every click, the tracker network grows. In this test, there were close to 20 trackers for every website visited. Big data analysis helps link that information and produce a digital profile of the user. A profile like this describes a person, their fears, their needs and possibly their financial situation, allowing advertising to be tailored to their budget. It describes us better than even our best friends could. So companies might know me better than my friends do.
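The tracker visualization described above (triangles for trackers, circles for websites) is essentially a bipartite graph. The sketch below shows how such a graph could be tallied from browsing observations; the log format and the example domains are assumptions made for illustration, not data from the test mentioned in the report.

```python
# Sketch of building a website-to-tracker map from hypothetical browsing logs.
# Each observation pairs a visited site with a third-party request it triggered.
# In graph terms: circles (sites) connected to triangles (trackers).

from collections import defaultdict

# Invented example data; a real measurement would come from browser instrumentation.
observations = [
    ("news.example", "tracker-one.example"),
    ("news.example", "tracker-two.example"),
    ("shop.example", "tracker-one.example"),
    ("shop.example", "tracker-three.example"),
]

trackers_per_site = defaultdict(set)
sites_per_tracker = defaultdict(set)
for site, tracker in observations:
    trackers_per_site[site].add(tracker)
    sites_per_tracker[tracker].add(site)

for site, trackers in trackers_per_site.items():
    print(f"{site}: {len(trackers)} trackers")

# A tracker seen on many sites can link those visits into a single profile:
print({tracker: len(sites) for tracker, sites in sites_per_tracker.items()})
```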
12:06 am
Even very sensitive data, like that used by health apps, is often passed on to data collectors without users' knowledge. The legal basis for this is sometimes highly questionable.

A massive amount of data is generated every day, and it comes from a variety of sources, not just the internet. Whether on Facebook, Instagram or Netflix, every day we humans generate 2.5 million terabytes of data, but not all of it on the net. Visit the doctor and your symptoms and diagnoses are stored on servers; this data is often sold, anonymized, to market researchers. When you phone someone, the caller's location and contact details are scooped up and become part of big data. The Brockhaus encyclopedia defines big data as data sets so large, so fast-changing or so varied that they can't be processed with standard software. Exactly how much data counts as big is hard to say, as it's not stored or analyzed centrally.
12:07 am
IDC analysts estimate that in the next six years, the global datasphere will rise to 175 zettabytes per year. One zettabyte is equal to one billion terabytes, one trillion gigabytes, or one quadrillion megabytes. In comparison, a three-minute MP3 track is around three megabytes in size, so one zettabyte can store around 333 trillion songs (a quick check of these figures follows below).

Processing such masses of data isn't easy; there are three aspects to consider. First, there is the hardware: what hardware can handle it? Second is the software that processes the data directly. And third, there are the algorithms, which glean information and knowledge from this data.
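The quick check referenced above, assuming decimal (SI) prefixes where one zettabyte is 10^21 bytes:

```python
# Sanity check of the figures quoted in the report, assuming decimal prefixes.
ZETTABYTE = 10**21          # bytes
TERABYTE = 10**12
GIGABYTE = 10**9
MEGABYTE = 10**6

print(ZETTABYTE // TERABYTE)   # 1_000_000_000          -> one billion terabytes
print(ZETTABYTE // GIGABYTE)   # 1_000_000_000_000      -> one trillion gigabytes
print(ZETTABYTE // MEGABYTE)   # 1_000_000_000_000_000  -> one quadrillion megabytes

mp3_track = 3 * MEGABYTE       # a three-minute MP3 at roughly 3 MB
print(ZETTABYTE // mp3_track)  # ~3.33e14, i.e. roughly 333 trillion songs
```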
12:08 am
Hardware, software, algorithms: it's a big business. Big data is analyzed using software platforms called frameworks, which divide the data between several high-performance servers where it can be processed simultaneously. Processing that data quickly is key, and that's where data Artisans, a Berlin-based startup, comes in. They analyze very large amounts of data very fast using an open-source platform called Apache Flink, which they helped create. It processes incoming data in real time and can simultaneously analyze data that has already been stored (a simplified sketch of this idea appears at the end of this section). Stream processing is a big new thing, so it's no surprise that Chinese conglomerate Alibaba snapped up data Artisans for an estimated 90 million euros earlier this year. That's great for the startup's founders, but is it good for society?

We feel that this poses a risk: the data could be compiled and analyzed in such a way that, for instance, human behavior becomes more predictable and transparent.
12:09 am
And that could end up limiting individual freedom. Honestly, I suppose I'm pretty generous when it comes to my personal data: if I look into a service and like it, I'm willing to pay for it with my data. But...
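As a footnote to the Apache Flink segment above: the core idea of stream processing, reacting to records as they arrive instead of re-scanning a stored batch, can be sketched in a few lines of plain Python. This is a conceptual illustration only, not Flink's API; the event source and the counting logic are invented for the example.

```python
# Conceptual sketch of stream processing: maintain a running aggregate as
# events arrive, instead of re-reading everything from storage. This mimics
# the idea behind frameworks like Apache Flink but does not use their APIs.

from collections import defaultdict

def event_stream():
    """Stand-in for an unbounded source (click log, sensor feed, ...)."""
    # Invented sample events: (key, value) pairs arriving one at a time.
    yield from [("page_a", 1), ("page_b", 1), ("page_a", 1), ("page_a", 1)]

running_counts = defaultdict(int)

for key, value in event_stream():
    # Each incoming record updates state immediately ("real time"),
    # so results are available before the stream ends.
    running_counts[key] += value
    print(dict(running_counts))

# A batch job, by contrast, would wait until the whole data set is stored
# and then compute the same counts in one pass over the stored data.
```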
