
Lorien Pratt, "Link" (C-SPAN, February 9, 2020, 11:00pm-12:03am EST)

11:00 pm
at the college level, or in the Harvard admissions lawsuits, you see the idea that when it comes to African-Americans, in so many ways there is this inherent belief that somehow you have not earned what you have, that you don't belong, and you are constantly fighting to prove that you do.
11:01 pm
We live in a fragile world today, and there is a class of difficult situations that we as a society have yet to figure out. On a personal level, I make decisions like this every day. I bought a car, and it's a hybrid, but I have no idea if the CO2 saved by my 70 miles per gallon, as a benefit to all of us, outweighs the cost of creating the car and the potential pollution impact of the battery after it is disposed of.
11:02 pm
>> This is my mom. I bought this card a couple of months ago at Christmas. If I had bought it 100 years ago, fine, no problem. Today, I don't know if it was made in a sweatshop, using cotton grown with pesticides. My choices set actions into motion, and I have no idea what those impacts will be. That is what I'm going to be talking about today. You have a new superpower. I think of it like a fish in a pond, maybe about the size of
11:03 pm
the societies where our brains evolved, and everything is going okay for a few years. Then a storm happens, and a river opens up, connecting the fish pond to the ocean. New fish swim upstream, and the environment is fundamentally different, because there is some saltwater now. I think we are in that situation today as a society. Things are changing radically. Our actions have outcomes at a distance, and that is what "Link" is about: the actions we take, the decisions we make as we consider those actions, and the path through which those actions become reality. How can we be responsible for the outcomes of our actions if they are invisible and we can't see them? We are fish in an ocean, no
11:04 pm
longer in a small pond; we simply didn't evolve for this situation. I'm lucky: I went through the computer revolution, when computers were things nobody could use, and my mom didn't know the difference between hardware and software. I had to explain that to her. We've gone through a massive democratization of computing technology. We are on the same cusp today with data and this new technology stack. AI feels like something done to us, something we don't have control over. The data is overwhelming, and at best we get a data visualization. I had the honor of interviewing hundreds of people as an analyst for a few years, asking them: what are you frustrated about? If technology could solve one problem for you, what would it be? And over and over again I heard
11:05 pm
a similar answer, and it looks sort of like this. Here is why I'm pretty sure I know what I'm doing, just as background on me: I've been building applied machine learning systems for a really long time, over 30 years. I worked on the Human Genome Project in graduate school, I've overseen hundred-million-dollar budgets for the government, and I've hand-built machine learning models, closely supervised. I had an early publication; who would have known it would still be with us so many years later? My machine learning friends know what I'm talking about. I believe this background has given me an insight that is key. Something is missing: we have been coming up from the technology instead of putting humans at the center of the equation. I liked that she
11:06 pm
called it augmented intelligence, because it sort of turns AI upside down: putting humans at the center of the equation again. For the decision architect, what is a decision? It's a process that leads to an action, and in a complex world that action moves through some chain of effects. I don't know what buying that hybrid car is going to do to the world. It's going to have some impact, but I don't feel very motivated, because I can't see the impact. It isn't visceral, and it doesn't grab my brain in a way that makes me think I really need to buy a hybrid car. I can't see it. And the data of today isn't
11:07 pm
giving it to me. I'm training my dog to be a service dog. I've had him pretty much his whole life; he's about 11 months old. And this thing happened to me: I have a trainer who is teaching me to train a service dog, and she talks about antecedent, behavior, consequence. That is exactly what I heard from the humans I've been interviewing, the executives. They are always talking about an antecedent, which is the context. We are in the kitchen and I say sit; the dog sits down and gets a cookie. So this is a universal archetype. I will tell you how it fits into this in a moment. I think, and I'm pretty certain, this is the way to think about it, because it has the lowest friction, the lowest
11:08 pm
friction with how humans actually think. They don't have much optimization or inference methodology or any of those fancy things. We have to meet them where they are, because otherwise it creates a giant cultural barrier between people at the head of governments, the heads of businesses, and even me, as they try to make decisions and use evidence and data to make sure those decisions have the right ripple effects. The farmers I'm working with on a proposal have to decide what crops to plant. They are out on the land and don't know if a crop will be productive or what is going to happen, because they have fewer migrant workers and the situation has changed. Or it might be a company deciding where to locate a new plant. And you hear much of what my dog hears:
11:09 pm
there's an antecedent, which is a situation; there is a behavior, we are going to launch the project; and down the road somewhere there is a consequence. We can think through long chains of consequences, but our ability is limited, and we need computer help as we think through the actions we take, in some context, that lead to some results. Remember this template: context, action, outcome. You can take it home and use it immediately. That is my promise if you stick with this talk.
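As an illustration of that template, here is a minimal sketch in Python. The class and field names are assumptions made up for this example, not anything from the talk or the book.

    # A minimal sketch of the context -> action -> outcome template described above.
    # All names here (Decision, context, actions, outcomes) are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class Decision:
        context: str                                         # the antecedent: the situation we are in
        actions: list[str] = field(default_factory=list)     # the behaviors we could take
        outcomes: list[str] = field(default_factory=list)    # the results we care about

    # The dog-training archetype translated to a project decision.
    launch = Decision(
        context="we are deciding whether to launch the project this quarter",
        actions=["launch now", "delay six months", "cancel"],
        outcomes=["revenue after two years", "team morale", "reputation"],
    )
    print(launch)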
11:10 pm
How do we make decisions today? I had a sense of how this was happening, especially in a complex world. Going back through human evolution, we don't really think through the consequences of our decisions very deeply. We are much more likely not to think through things rationally and instead to use social signaling. It turns out that is very effective. It has been hugely successful for the human race, and in fact it is part of what separates us from many other species. We are great copiers, and cultural evolution produces behaviors and patterns that no individual can fully understand; much like the unconscious process of genetic evolution, cultural evolution lets a society come up with these behaviors. We copy some prestigious or dominant person instead of thinking through the consequences ourselves, and that was great, but the situation
11:11 pm
has changed. First of all, if there is a bad actor here or there, and they tell us what to do, and they are smart about the situation, they can subvert that copying behavior and influence us to make decisions that benefit them but not us. Second, the context is changing. We need to develop new ways of coping in this ocean that is different from our pond, because it keeps changing, with the water flowing back and forth, and the old ways of thinking through problems at the societal and crowd level are no longer working. These are complex system dynamics with feedback effects. We see winner-take-all effects, where large companies get the benefit, and that worsens inequality. And there is action at a distance; we talked about the importance of that.
11:12 pm
For anybody that's worked with data: we tend to focus on the things we can measure easily, like money, size, price. We tend to overlook reputation, happiness, morale. And I've never built a decision model that didn't have at least one feedback loop involving something intangible, a soft factor. We must start talking to the sociologists, the cultural evolutionists, and all the other disciplines to understand the soft factors. Decision intelligence creates a roadmap for how to do that. The other thing I didn't say is that the future is no longer like the past, so if you look at a problem and assume it is the same as ones we have seen, and base your thinking on the past, you don't realize the situation has changed, and suddenly, what do I do? I believe that AI, and DI,
11:13 pm
decision intelligence, can solve this problem. I grew up in a period of technology optimism. We were all sharing code, the internet was going to democratize everything, and we were going to collaborate. We had a dream, and I don't think we have realized that dream. I think decision intelligence will help us get there. We've created links in the chain, the internet and social media, and there is one more link we need in order to start to have a nonlinear impact. I will start to talk a little practically right now. How do we do DI? We start with people. We don't say, where is the data? We don't say, we can't do this without data. Data is great, but there's a huge amount of human knowledge that is in no
11:14 pm
data set whatsoever. We are good at knowing what our actions lead to. Your homework is to go home and ask a friend who didn't come to this talk how they think about a complex decision, and I promise they will talk about actions that lead to some intermediate effects, which lead to outcomes, in some context. So what I do is sit down with a diverse group of experts and ask, what are the outcomes you are trying to achieve? They've usually never sat down and brainstormed through the outcomes. I've consulted at senior levels of many organizations, and when I ask what they are trying to achieve, the list of outcomes is different for each person. Let me tell you, you don't need technology to get better. You just need a process where
11:15 pm
you think through the outcomes you are trying to achieve as a team. Is it higher revenue, net revenue after two years, some kind of military advantage? Do we want a military advantage that doesn't create a backlash that will hurt us in terms of the psychological reputation of the country? What are the outcomes we are trying to achieve? Make sure you ask that question. Second, brainstorm through the actions. Many folks don't take the time. Move to the creative side, because when the blood is in the analytical side of your brain, which I guess is over here, you don't have room for the creative side; and then spend time
11:16 pm
being analytical. Most decision models, I believe, as we democratize this, will follow this pattern. What made her so compelling is that we have a climate crisis, and the way it was sold is simple: at the very least, stop worrying about the analysis. There are organizations all over the world that will take your money, and if enough people do this, perhaps on its own it would make a big difference. But I don't know how it would lead
11:17 pm
to a chain of events, to an outcome. If I'm going to use AI to benefit me, I want a visceral, immersive experience. So this is what I think is the future. It's going to look like a videogame, and I hope we can get to where we can walk through the spaces. What are we doing in those spaces? Experimenting with actions we might take and letting the computer help us understand the chain of events that leads to the consequences. That is valuable at the personal level and at the organizational level. Let's see if this works; it's just a demonstration, with no purpose beyond that.
11:18 pm
We are trying to make a decision about how much money I'm going to pay to sequester carbon. As I change the decision, the data tells me what it puts into motion. I spend the money, I change my decision here, and here's the carbon sequestered, here is the total atmospheric carbon, so I can see the chain of events. Now, Linda is an expert who, in the background, has done the research on how the actions connect to the effects in here. I can not only change my decisions but also see whether Sam is right and whom I can trust, because I can see that different people, different experts, claim different things. Ultimately I can also click on their name and go to a site where I can see them making
11:19 pm
their case, and that's going to be like Wikipedia: we will have a curated site that we can use to understand people. Sorry, to understand the situation. Now, this sort of looks like a business intelligence dashboard. It looks like stuff we've been building for a long time. Let me tell you, it's not. We are not looking at a data set here. We are looking at the future. Let me summarize so you can understand this. In the background, depending on the choices, we also have a physics engine generating the implications of the choices. There's a level of investment, and as I change my investment I can see how my decisions interact with the situation, as the experts have characterized it, in order to impact the outcomes I care about. This is a universal pattern.
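As a rough illustration of that background engine, here is a toy sketch in Python of one decision lever driving a chain of events. The rates, units, and starting values are invented assumptions for illustration, not the model behind the demo.

    # Toy sketch of the decision lever -> chain of events idea from the demo.
    # Every number below is an invented assumption, purely for illustration.
    def simulate(spend_per_year, years=10):
        """Follow one lever (money spent on sequestration) out to the outcomes."""
        tons_per_dollar = 0.002        # assumed link: carbon removed per dollar spent
        emissions_per_year = 40.0      # assumed background emissions
        atmospheric_carbon = 880.0     # assumed starting stock

        for year in range(1, years + 1):
            sequestered = spend_per_year * tons_per_dollar
            atmospheric_carbon += emissions_per_year - sequestered
            print(f"year {year}: sequestered {sequestered:.1f}, "
                  f"total atmospheric carbon {atmospheric_carbon:.1f}")

    # Changing the lever and re-running shows the whole chain of events update.
    simulate(spend_per_year=5_000)
    simulate(spend_per_year=20_000)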
11:20 pm
This example is something you might do in your head, but large organizations really struggle with understanding the impact of today's decision on tomorrow. So let me give you a couple of examples. These are machine learning models here, the triangles, and this one detects whether an intrusion is happening right now. That is a common kind of machine learning model; it is pretty widespread in intrusion detection. It might say there is a 20% chance an intrusion is happening right now. The diagram has some spaghetti in it, and that can be maddening, but it is how people naturally think.
11:21 pm
If that is how you are thinking about things, it is a lot better to have it on paper than to try to keep it in your mind and explain it using the mechanisms we have today, which are inherently linear, to communicate the decisions. We need a blueprint like this; it ends up being an architecture diagram. Here, I can send the information to the police to investigate the intrusion and improve the outcome. But if I call the police every time, it gets costly. I need to try to call them only when necessary.
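A minimal sketch of that call-or-don't-call decision, assuming made-up costs: alert the police only when the model's intrusion probability makes the expected loss of staying quiet larger than the cost of the callout. The dollar figures and probabilities are hypothetical.

    # Sketch: turn the intrusion model's probability into a call / don't-call decision.
    # Both cost numbers are hypothetical assumptions, not from the talk.
    COST_OF_CALL = 500.0                   # cost of one police callout
    COST_OF_MISSED_INTRUSION = 20_000.0    # expected loss if a real intrusion is ignored

    def should_call_police(p_intrusion: float) -> bool:
        expected_loss_if_ignored = p_intrusion * COST_OF_MISSED_INTRUSION
        return expected_loss_if_ignored > COST_OF_CALL

    for p in (0.01, 0.05, 0.20):
        print(p, "call" if should_call_police(p) else "don't call")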
11:22 pm
The next one is a decision model we labeled with a bunch of farming experts as part of that big ag project I talked about in the past: 23 PhDs across institutions, and we are finishing the proposal this week for a national center for agriculture. What's important about this is that I didn't have to explain decision intelligence to anybody. I sat down with my team of experts and asked, what is a typical farmer trying to achieve? They want to be profitable at the end of the year. And since we were brainstorming, they also said, you know what, they don't want to take any actions that are going to put them out of business in a few years, so that is a second goal they need to balance against the first. Then we talked about the actions they might take. This gives us a map to understand how the machine learning technology fits together: one model tells us how the decisions impact the yield, and there is another model that detects the type of disease or contaminant. And these are
11:23 pm
sensors farmers might have on a drone, or that someone carries in the field, which we can use, again with AI interpreting the data, to get the earliest warning possible, so that we can spray as little as possible and still achieve our goal. I promise you, having this spaghetti on paper is a lot better than the alternative, which is that it stays invisible in people's heads. It becomes an artifact, and we talked about design thinking: we can bring people back to the decisions, we can continuously improve it, and it acts as a blueprint that connects the end users.
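One way to read that blueprint is as a chain: a sensing model feeds a disease estimate, which feeds a yield estimate, which drives the spray action. The sketch below stands in for both machine learning models with simple functions; the thresholds and names are assumptions for illustration.

    # Sketch of chaining models along the farm blueprint:
    # sensor reading -> disease risk -> expected yield loss -> spray decision.
    # The two "models" are stand-in functions; real ones would be trained ML models.
    def disease_risk(sensor_reading: float) -> float:
        """Stand-in for the model that interprets drone or in-field sensor data."""
        return min(1.0, max(0.0, (sensor_reading - 0.3) / 0.7))

    def expected_yield_loss(risk: float) -> float:
        """Stand-in for the model linking disease risk to the fraction of yield lost."""
        return 0.4 * risk

    def spray_decision(sensor_reading: float, loss_tolerance: float = 0.05) -> str:
        # Spray as little as possible: only act when the predicted loss is intolerable.
        loss = expected_yield_loss(disease_risk(sensor_reading))
        return "spray" if loss > loss_tolerance else "wait"

    print(spray_decision(0.25))   # low reading -> wait
    print(spray_decision(0.80))   # high reading -> spray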
11:24 pm
My stakeholders built this model, and it connects them down to the people doing the work, so they know where they fit in. There was a moment when the team came together, before we did this diagram, and I don't think anybody knew how the whole thing fit together. Now we don't have to explain it in words anymore. We get it back out and we know how it fits together and where each piece will go. Those of you who have known me for a few years know it's been quite a journey; I've been evangelizing this all over the world. I'm really happy that, to the extent this matters, decision intelligence is on the hype cycle now, and I consider that a big accomplishment. They've trained 20,000 people in
11:25 pm
DI, and she's the other big evangelist in the space. Then there's a bunch of companies that have started to identify with this, most of them in specific areas. These are all people who recognize that if they go beyond building a machine learning model and embed it into a decision model surrounding the machine learning model, one that connects actions to outcomes, that is going to make them more successful. It is essentially the same thing. It answers the question: if I make this decision today, what will be the outcome tomorrow?
11:26 pm
People have different approaches; the causal decision diagram is mine, and other people have other approaches. Some people don't even come from technical fields: the sociologists, the economists, and more. What is common to us is that we've taken seriously the link from action to outcome as the way the technology and the science interact. We also made it to Hollywood. Who knows what this is? It's The Good Place. This is Ted Danson. I'm not going to do a spoiler, but this is episode ten, and I recommend you take a look at it. He has basically discovered DI by the end of the series. It's the big reveal, the mystery. And this is Janet.
11:27 pm
Is she human? No, she is AI. So Ted is trying to understand why there are these negative unintended consequences. With great power comes great responsibility. You have the ability to take actions, as an individual and as an organization, that have a giant impact. I am an optimist, and I believe we are at the beginning of a solutions renaissance, not just AI, but all of these technologies coming together under a common blueprint. The cultural evolutionists, all of these fields, many of them traditionally viewed as soft
11:28 pm
fields, they are the experts who focus on the outcomes. What we start to be able to do, using some version of DI, is to crystallize solutions, to understand how a solution to one problem impacts another. The impacts hit all of us: government, democracy, companies.
11:29 pm
The only way we can solve them is to have a new approach to understanding how actions ripple around the whole world, so that we can get to the best outcomes. Thank you. [applause] >> Do you have a case study or story you can tell us? >> I think the example I showed with farming is a good one. We also built an initial model for a government decision regarding conflict in a
11:30 pm
sub-Saharan African country, where the model showed, and it was only a preliminary study, that you have to take action from two places at once: you have to do work on the rule of law and also strengthen the police. It was less complex than this one, just an initial preliminary study, where I spent time interviewing experts. >> I wonder how much you feel this could capture that, because one of the special things about people is that they think in analogies, in models of how some things are like other things.
11:31 pm
>> Yes, what happened in one place can carry over to another. And there is a pattern where we are taking an action that we think is good, and it is good in the short term. I call this the lobster claw: it leads to a negative consequence in the long term, beyond the visibility horizon that would have told us about it. >> I would like to bring in other people to talk as well.
11:32 pm
[inaudible], the CEO of Inquirer. Would you like to ask a quick question? >> Good to see you again. I have a background in science, and it's an interesting time; everybody is kind of weighing in on climate science. [inaudible] I think about your metaphor, and it's challenging. Would you be able to teach us the complexity?
11:33 pm
Have you thought about ways to make this a tool... >> As a metaphor to answer the question: I can drive a Ferrari without seeing what is under the hood, but certain people are impressed when you open it up and see all that complexity. Some of us don't want to hear about that at all; we want someone we trust to tell us that if we take this action, it will lead to that outcome. For some people, we want the ability to open it up, so we need a multilayered approach to this. We need beautiful video games that really grab your attention, and then you also have to be able to click on that expert and see the process that led to the mechanism of that model.
11:34 pm
[inaudible] >> I think we need both. As technologists, we are obsessed with the under-the-hood stuff and haven't paid attention to the user interface. >> He would be happy we are talking about this. >> A chief scientist in AI. >> Thank you for the wonderful talk and a great book. Great insight. I loved what you were saying about the vision of the internet when things got started, that it was going to be this next positive thing and bring humanity together, that we could understand each other and increase empathy and connection. And I
11:35 pm
think the reality of the last few years has been much more divisive, a sort of tribalism, breaking into pieces. I'm trying to imagine a world in which this is everywhere, where everybody has the tools to make decisions in better ways, and I wonder if you think that will help bring people together. >> I think there are a lot of people who feel overwhelmed by the information: I don't want to talk to them, I can't even understand a word they say. If they have interfaces like this, they will engage with the evidence, and with the data, in more solid ways, and if they get that assistance, they can do better. Speaking as an individual, I am not a sociologist, and it's complex, I can't figure it all out, but I think the biggest initial impact would be the way
11:36 pm
that we balance out the inequities that come when technologists dominate the world, because instead we start to use it for our own personal needs. So I think we will move toward more equality. We democratized the computer; let's democratize AI. >> Let's make this do that. >> This technology really empowers people and enables a kind of democratization of complexity. >> Let me repeat myself, because it's important: you don't need to read the book or learn the tech. Just make sure
11:37 pm
that you brainstorm your actions, if you do nothing else. We are not doing it; we are sort of lost in our specialties. >> It became somewhat of an argument. You were organizing a mountain bike ride, and his view was that it was just too simple; he wanted it to be much more robust. But I think, from what you are talking about, one of the rules is that most of the really big changes take 30 to 50 years to become an overnight success. The term was coined 60 or 70 years ago.
11:38 pm
>> It was artificial intelligence. >> That was at Dartmouth in 1956. >> To me, I think that is part of the evolution. >> She is a brilliant technologist, and you might resonate with the view that this is a tricycle, that it needs to be more sophisticated and complex than it is. We don't have to make it that way for the user, though.
11:39 pm
You have a different set of controls, and then we can be fancy under the hood. Those are classic computer science principles, to use another metaphor. >> [inaudible] With something like Wikipedia, there are rules on how people can edit. Should it be possible to have multiple versions of these different connections, rather than having only one set of
11:40 pm
outcomes? >> This will be an area we work out over the next year or so: how to get the most accurate information from the crowd-sourced links. When you do this, that causes this; you spend ten and you get twenty, and someone else has a different answer. Each tree will capture this much biomass, and another expert says it would be a different amount.
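A small sketch of what storing those crowd-sourced link estimates might look like, keeping the disagreement visible rather than averaging it away. The experts, links, and numbers are hypothetical.

    # Sketch: keep multiple expert estimates for the same causal link side by side.
    # The experts, links, and figures are hypothetical.
    from statistics import mean

    link_estimates = {
        ("spend $10 on sequestration", "carbon captured (tons)"): {
            "expert_a": 20.0,
            "expert_b": 1.0,
        },
        ("plant one tree", "biomass after 10 years (kg)"): {
            "expert_a": 150.0,
            "expert_b": 600.0,
        },
    }

    for (action, effect), estimates in link_estimates.items():
        values = list(estimates.values())
        print(f"{action} -> {effect}: estimates {values}, "
              f"range {min(values)}-{max(values)}, mean {mean(values):.1f}")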
11:41 pm
So I think what is critical here is that we create something like a beta version of Wikipedia that talks about these links. It is what we have been missing for years. >> One of the reasons we are getting these unintended consequences is because we are not modeling the context in which they happen. So when I start a new project, the first thing they say is, we have the data, we've been cleaning the data. I say, put the data aside. Show me what the decision is that it's going to be used for, because unless I understand the context of the decision and the action that will lead to some outcomes, I might build the wrong system. They say, you mean you can do this without data? The data might help, and we can bring it in
11:42 pm
later, but if you don't, it's the same as in software engineering: if you don't understand the requirements, you are going to sit there and code and build the wrong things. Nine out of ten projects fail, and this is part of why. >> I think that is an interesting point. One topic that came up that I'm aware of is the correlation with access to birth control. It's a simple link, actually: women can choose the size of the family they would like to support, through family planning. It is a missing link, and yet people talk around it: what are we going to do about overpopulation, we need to educate women and do the things that are more visible, and so it becomes secondary. We also see
11:43 pm
this with the primary link. So I am curious what technology can do. >> I would love to see the technology be able to resolve that, when we have these multiple interpretations, and somehow find a way to build a system where people could see the bigger link. >> I don't think the problem is a lack of research demonstrating these kinds of things. Science has given us randomized trials: in this situation, with this intervention, you get this result. We've got a lot of results. I think what's missing is taking those individual links and connecting them to actions I could take today. Let's say I care about the status of women globally; what can I do about that? Show me some evidence, in a picture that is fun, immersive, and cool, that I can see as I move
11:44 pm
the lever for that particular link. I can see how it's individualized; we need displays that show the effects immediately as they play out. So that is the piece that is missing. I think we have great science and data, but we don't have this last piece, where we democratize AI through the chain of events and give people agency. >> [inaudible]
11:45 pm
That is different for an organization, and you can see that people don't know what a decision is going to do. I think we've been through 2,000 years of treating intelligence as inference and logic, and AI is the extreme example of viewing it that way. But there is a subconscious rationality that's very smart, where individuals in a group, none of them can explain what they are doing, none of them even know what they are doing on their own, and somehow the signals they are
11:46 pm
giving each other, a kind of cultural evolution, does that. But I think we are in a new fish pond. I think we are in a situation where our natural instincts for how we behave don't work anymore. You asked how this fits in, and I think it surfaces assumptions, and it creates a challenge to the cultural evolution folks to really study the emergent behavior of humans as they make decisions. It is an invitation to that part of the world to work together with the data people of the world in a coherent way. It's a great point that we do have these behaviors, and we can model complex systems, as my friend here knows better than I do, and build models that help us understand how we get that behavior. It's very nascent, and you can
11:47 pm
help. >> [inaudible] How do you get these ideas across more effectively? What tools have you found? >> I go in with not a lot prepared. We would brainstorm, and then sketch
11:48 pm
in the causal chain. There would be a really ugly picture, and it would be just gross. But at that point they can start to get their intuition going: given the decisions we're making and the situation we are in, here is where our decisions are going to take us.
11:49 pm
>> [inaudible] It is partly because they get, in some ways, put off. I don't have the huge career that you do in this space, but I would like to make an invitation to you: they need to
11:50 pm
be invited into this picture, and it just has not happened yet. >> [inaudible] given that the incentive is the opposite of what you are proposing, as that is the only reason that it needs to exist.
11:51 pm
It came together in pieces and portions, instead of being used responsibly. I was very interested in your talk, thinking of how businesses make decisions. I actually find that the way we have designed our economy is for the same people, with the same outcome, to make more money. So just looking at outcome and action, the actions don't change, and people don't get the space
11:52 pm
to use these amazing tools in any larger sort of way. >> There's a lot there, and you are doing amazing work. I have one point, which is that the reason I believe in this is because it makes the invisible visible. No matter what it does, even if we have bad actors using this, it invites them to draw a map. I would love it if, before we made any big government decisions, we insisted on somebody doing a decision model, with a collaborative team across races, genders, and ages that all say they agree to that model, so that we surface what is happening. Right now, I don't know if you have seen it, but with the executives I talk to, it's like the loudest guy in
11:53 pm
the room gets his way, and I think we have to combat that by taking the invisible and making it visible. When you have this perspective, it becomes a way to talk about the decision from different perspectives and to get to the point where the loudest and the richest don't just win; we can say, we gave you all a platform, you got to say your piece, and now this is what we are going to do. >> So again, really, really good point.
11:54 pm
Let me speak to one aspect of it, which is a principle in the methodology, one of the key best practices: the outside of the box versus the inside of the box. When you talk about a technology like an agent-based system, don't let people get into the math inside of the box; it will overfill your brain. Information hiding is another good architecture principle: focus on the interface definition. My view is that we don't need more insights or answers, or even more science; we have a lot of scientific results sitting around unused because nobody has connected them to the chain. So yes, we can get overwhelmed, but if we keep using these best practices that keep people focused on surfacing their mental models, we can start to overcome that, and that
11:55 pm
is exciting. They are trying and trying, and I let them go on for a day, and then there is the moment when they see it rendered in a picture. >> Over here. >> [inaudible] So, you have talked about your approach. What is the normative side, and
11:56 pm
being a technologist, I get biased towards that. [inaudible] Do you see the need for them to work together? >> Let's take the behavioral economics space and look at cognitive biases. Those fit into the diagram in some way and help us learn, when we take this step, whether it causes that to be true. So the way I see these areas fitting in is that they inform what the links are. The picture is an integration point for how we pull together different understandings, whether we are modeling how another company or another person
11:57 pm
will behave; that becomes pretty important. I realize that you need that structure, not necessarily inside of this approach, but it is helpful for understanding the outcome. >> Think of the book Weapons of Math Destruction, which talks about algorithmic models. I think it is chapter two, where a woman who was regarded as a great teacher is scored by an AI system, and it's opaque; they don't know why. If they had built a decision model... here are the variables this thing is using, and here is how the prediction will be used in the larger context, I think they could have surfaced that
11:58 pm
much faster. If we understand the context, it isn't perfect, but it makes things visible around the context. >> Thanks for the question. That's great. >> I'm wondering if you can talk about the role this might play in setting up outcomes. It depends on who is involved, and whether, in developing the methodology, you have thought through what you do when outcomes conflict.
11:59 pm
>> Often the outcomes that appear to be in conflict actually aren't, and if we do this kind of math, and there is earlier work whose whole purpose was to resolve conflict, if you have two people with different opinions about the actions we can take and the outcomes we are trying to achieve, and you capture those on a common map, then some of the technology can find the holy grail: it can find a set of actions that works for both. We have these complex links in our heads and we think we can work them out, but we don't want to admit that we can't, so we fall back on arrogance and assertiveness and yelling.
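A toy sketch of that idea: score each candidate set of actions against both people's outcomes and keep only the ones acceptable to both. The plans, scores, and thresholds are invented for illustration.

    # Sketch: find actions that satisfy two stakeholders' outcome thresholds at once.
    # The candidate plans, their scores, and the thresholds are invented.
    candidates = {
        "plan A": {"profit_this_year": 0.9, "sustainability": 0.2},
        "plan B": {"profit_this_year": 0.6, "sustainability": 0.7},
        "plan C": {"profit_this_year": 0.3, "sustainability": 0.9},
    }

    # Each stakeholder states a minimum acceptable level for the outcome they care about.
    requirements = {"profit_this_year": 0.5, "sustainability": 0.5}

    acceptable = [
        name
        for name, outcomes in candidates.items()
        if all(outcomes[key] >= threshold for key, threshold in requirements.items())
    ]
    print("actions acceptable to both:", acceptable)   # -> ['plan B']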
12:00 am
It's not my opinion used against yours; we are working on a common model. So it is not a conflict; it's, let's join together to get a shared view, and that facilitates the process. Great question. Thank you. >> [inaudible]
12:01 am
>> ...those negative outcomes of the time and effort. >> And then to engage with each other, to continue this conversation with each other, and to bring us together. Thank you very much. >> [applause]
12:02 am
>> Host: It's an honor to sit down with Howard Bryant; his book is called "Full Dissidence." How are you doing? >> Good to see you again. >> An interesting book; I had a chance to read through it. It touches on a lot of different topics, and that mainstream America is not
