tv Public Affairs Events CSPAN December 30, 2016 11:37am-1:38pm EST
11:37 am
particular don't want to talk about that because that's "bad press." >> i once covered a guy who was building a repository of failures so other scientists could see what didn't work. and i'm trying this thing, there was a social network, i'm going to try this. anybody else ever tried it? yeah, it failed, this guy in spain did it, forget it, doesn't go. >> cool, that would be so helpful. >> just the other thing is we're kind of covering delusional people, because you have to be. if you really knew the odds on what a startup is and how often they fail, you wouldn't get out of bed. and the technology they're making has been so transformative that they see themselves shaping the world, they don't just see building a product. elon musk gets up last week. >> oh my goodness. >> and says, here's how i'm going to colonize mars and everybody's like, okay. >> but he might just be the lunatic to do it. >> no he's the far end of a true
11:38 am
thing. >> so #delusionaltechpeople, and then i apologize we're running a little bit short on time for questions, but we have time for a couple of questions. there's a mic up here if anybody has a question for the panelists. and since this is being recorded, if you would ask your questions at the microphone, we'd appreciate it. >> appreciate all of you being here. now we've got a variety of panelists here, two editors, two reporters, and i think emily mentioned waking up in the morning and wondering what am i going to cover today. for the reporters, how do you decide what to cover, and for the editors, how do you assign and how has that changed over the last few years? >> okay well i'll start. so i am the morning news editor at "wired" as well as the national affairs editor so there's always kind of two different time lines. there's one of the stories that
11:39 am
we're working on, on a longer time scale so we've assigned them, the reporters are working on them for weeks or months depending on how much research they need to do, and that's something i have in the back of my mind, knowing when are those going to come in. but in terms of morning assignments and looking at how i decide what other people need to be talking about, you know, social media has really changed this equation for me and twitter has become a resource that i rely on almost assuredly too much at this moment, because there are two different rubrics. one is, is it important enough that people are already talking about it and it's something that is out there that we need to weigh in on because it's this moment that's happening, or there's the thing that no one else has discovered yet and we need to be the first to bring it to their attention. so i look at social media sort of for the former, and to discover the latter, i either depend upon my
11:40 am
beat reporters who will bring me pitches or i have my, you know, proprietary secret places of the internet and world that i am looking at. i would never let anyone see all the tabs that i have because that's my secret way of trying to get us scoops. >> for me it's different because i'm not a national reporter; i'm focused solely on central indiana. the national stuff is the stuff that everybody has, and if it is important enough we'll weigh in. we pride ourselves on finding the distinctive stories nobody is telling. i'll give you an example: a month or
11:41 am
so ago now, geofeedia, a hot tech company out of chicago, opened up an office in indianapolis last year and wanted to hire 300 people, i want to say by 2018 but you can fact check me on that, and last month i got a text from a source saying did you hear what's going on with geofeedia, and come to find out they were laying people off. it's like, okay, if this company just signed a deal with the state to hire this many people and get this much in tax credits and is now laying people off, this is a big deal. so i started calling, and that source was largely unreachable. he was at some conference and couldn't talk, but i started calling folks that used to work at geofeedia and i looked at the linkedin pages and finally got the story, and when i talked to the ceo of it all, this is obviously a story they're not going to want to push.
11:42 am
they're going to want to make the layoffs and keep it moving, but when i talked to him, he was like, how did you find out about this so quickly? i just told the employees. and you know, i give credit to my sources for that one. those are the types of stories that, to kind of go back to what you were saying earlier about cheerleading, you know, we need to cover the good stories. we need to cover the companies that are hot and do have a lot of promise, but we also need to chronicle the ones that are struggling or that fail to really make sure that we're getting both sides of it. >> thanks, jared. we'll take one more question and make it quick. >> thank you so much for attending. sometimes you look at tech news and it's chasing cool. as a bystander from lafayette, indiana, who has no involvement in terms of what uber is doing, do you think tech reporters also have responsibility when there's
11:43 am
new technology coming out in terms of explaining it, so that it makes sense for people somewhere in indiana? for example instagram or snapchat, they come up with new features. we say hey, snapchat came out with the new, what do you call it? >> filter. >> the snap glasses. >> exactly, and speaking to a 35-year-old male, he's probably like, well why should i care about that. is it also imposed upon you as reporters to kind of demystify it? sometimes when i read tech news it's all about what is new, but less so in terms of, well, how do i make sense of the technology. >> that's a great question. it sort of gets at what you were saying earlier about the different audiences. >> yes, so the snapchat news comes out and we have the chunk of the population that uses snapchat and can't wait for the features to come out, and they're updated on the latest ones and before seeing it probably understand how they're going to use it. then we have people that really need a news story that
11:44 am
says like, this is exactly how your physical app is going to change and what you're going to need to touch instead to see new things, and then we're going to need to tell someone like, okay, guys, there's this thing called snapchat and people are on it and talking to each other. but all three are very, very valid forms of storytelling that need to happen. people that don't understand how snapchat works are missing out on the fact that there is a segment of the population that is communicating with disappearing messages, and the impact that that's going to have down the line is going to be important, so it is really important that we try to figure out how to break down the story for all of these. i guess the answer is yep, that's super important and we're trying to do it all the time. >> yes. >> i'm afraid that's all the time we have. would you join me in thanking our panelists as well as our graphic artists. [ applause ] thank you all.
11:45 am
tonight we continue our look at american history tv programs normally seen weekends on c-span 3, with a discussion of the origins of the cold war, then u.s. democracy and international relations, and then the legacy of world war ii. american history tv, prime time tonight at 8:00 eastern. new year's night on q&a. >> while people were starving, van buren was having these fancy parties in the white house. it was part of the image making where harrison was the candidate, poor man for the poor people, and here was this rich man in washington sneering at the poor people. harrison had thousands of acres and estates so he was actually a very wealthy man, but he was portrayed as the champion of the poor. women came to the parades and
11:46 am
gave handkerchiefs. they were criticized by the democrats saying these women should be home baking. >> "the carnival campaign," how the campaign changed presidential elections forever, sunday night at 8:00 eastern on c-span's q&a. join us on tuesday for live coverage of the opening day of the new congress. watch the official swearing in of the new and reelected house and senate and speaker of the house. our live coverage begins at 7:00 a.m. on c-span and cspan.org, or listen to it on the free c-span radio app. a look now at the history and future of artificial intelligence and machine
11:47 am
dialogue systems called chat bots. the speaker is jennifer neville, a purdue university computer science and statistics professor who focuses her research on machine learning and data mining. this is just over an hour. >> so that's good on my side of things. >> are we good? >> well good morning ladies and gentlemen and welcome. it's my pleasure, my great pleasure to introduce jennifer neville to you, professor here at purdue university.
11:48 am
professor neville, she's come to be with us even though her department is enjoying an external review right now, so she's giving up quite a lot to be here and i appreciate that. her research interests lie in the development and analysis of relational learning algorithms and the application of those algorithms to real world tasks. today she'll present a talk entitled ai easy versus ai hard. if you haven't done so, i ask you to please silence your electronic devices but don't put them away. as we learned in the last session, 15 years ago, 10 years ago, if you were heads down on your phone during a presentation, that was regarded as a bad thing, because you weren't paying attention to the talk. now it's regarded as a bad thing if you're not on your phone, because you're obviously not tweeting about it all the time and communicating, snapchatting or whatnot.
11:49 am
but please, do silence your devices. we hope to see you tweeting, #dawnordoom, and any equivalent posting to facebook, instagram or snapchat, whatever social means you prefer. please join me in welcoming dr. jennifer neville. [ applause ] >> okay, thanks, gerry. is the mic on? you can hear me okay? can you hear me now? yes, good. okay. okay, thanks for having me here again at the dawn or doom symposium. this is one of my favorite places to come and talk because i can think at a higher level about what i do and talk to a more general audience than at my typical research conferences. my area of research is data mining and i specifically focus
11:50 am
on very complex relational domains, where you need to take into account interactions between many entities in order to make efficient, accurate predictions. but today i'm not really going to talk about my own research. i'm going to really try to give you a sense of what's going on in the ai community right now. there have been a lot of interesting breakthroughs over the last few years. there's a lot of excitement about ai, and what i'm going to try to do today is compare and contrast two events that happened earlier this year and talk to you about how to think about what's hard and what's easy in this space given those two occurrences. so really i want to contrast these two recent events that happened in march of this year.
11:51 am
so the first major breakthrough, which you can see here, happened on march 15: google's alphago system, a computer game-playing system that plays go, won over a professional go player, lee sedol, and that was a major breakthrough for the community. it was something that, at least when i started in the field of ai, was something that i thought about doing. i taught myself how to play go and realized i couldn't even learn how to play go, let alone learn how to program a computer to play it. so success happened much earlier than people anticipated, and so in the news this was touted as a major breakthrough and something that was going to transform what we saw in terms of successes in ai. less than 10 days later we had another event happen which you may or may not have heard about in the news. here, this is march 24: microsoft released an ai chat
11:52 am
bot on to twitter. it was called tay and it was designed to mimic a young teenage girl; the ai bot was supposed to interact with millennials and learn how to engage them in conversations on twitter, and this experiment went horribly wrong, if you didn't see this in the news. within 24 hours this pleasant teenage chat bot turned into a racist, abusive, sexist entity, and microsoft removed it from the internet within 24 hours of releasing it and apologized profusely about the event. the important thing here is that they didn't anticipate this was going to happen. some of the news explaining what was going on blamed us as humans for being horrible people on twitter and turning this cute little chat bot into a horrible mess. what you might not realize is
11:53 am
that microsoft didn't do this unknowingly; they had a similar type of chat bot called xiaoice that had been used for two years previously, very successfully, in china, where it was interacting with as many as 40 million people. so what was the difference between when they rolled it out in china versus when they rolled something out on the general internet, for the general public, on twitter? that's what i'll talk about when i get more into the systems, but the important thing i want to contrast from a technological standpoint is that algorithmically, there's a lot of shared machinery. you might think the computer that is going to play go would be different than a chat bot that is going to interact conversationally with people, but many of the methods underlying these are the same, and so why
11:54 am
was one system a major success and the other a major catastrophe? to understand this, let's go back to the beginning of ai to learn about the history. ai has been around for 60 years now. it was started in 1956 when john mccarthy, marvin minsky and claude shannon proposed to have a two-month summer conference at dartmouth to explore and define what this field should be, under the conjecture that all aspects of human intelligence could be encoded in a computer system, including aspects of learning. so when they jump started the field with this conference, there was a lot of excitement, a lot of effort in many different directions in the field. i'll focus mostly on machine learning because that's my area, but there are a lot of
11:55 am
other areas included that i won't touch on today. so the early work in the field was inspired by considering how we as humans might be learning. one of the first machine learning methods was the perceptron. this is a mathematical model where they tried to mimic neurons: once a neuron's input reached a threshold it would fire a signal, and they tried to encode this mathematically into the perceptron. another was samuel's checkers program in 1959. it was one of the first game-playing systems, where it would formulate the problem of how to learn and play a game as
11:56 am
a game tree, where you have to look ahead as to what the outcome of the game might be in order to decide a strategy of what moves to play, and the methods he used to develop the system were the precursors to the subfield of machine learning right now that is called reinforcement learning. in that case there's not an immediate signal that tells you whether you're right or wrong in terms of what decision you made; you have to wait for some amount of time before you get feedback as to whether you're headed in the right direction or not, and that is the case with games. finally, the third thing that i want to point out here is that at the same time there was another area of ai that was focused on developing dialogue systems. these were the precursors to what are being used in chat bots. in these systems there was input from users and the ai system would have to figure out how to make a response.
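that basic loop the speaker describes, take the user's input, match it against patterns, and fill in a templated response, can be sketched in a few lines of python. this is a toy illustration of the general idea only; the patterns and canned replies below are invented for the example and are not taken from any historical system.

```python
import re

# Toy pattern-matching responder: match the input against simple
# patterns and fill in a canned template (an illustrative sketch of
# the early dialogue-system idea, not any historical implementation).
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "How do you feel about that?"

def respond(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            # Echo words captured from the user's message back in the reply.
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious about the review"))
# -> Why do you feel anxious about the review?
print(respond("It rained today"))
# -> How do you feel about that?
```

systems like this can feel surprisingly conversational because the user's own words are echoed back, even though nothing about the message is actually understood.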
11:57 am
one of the first examples was the eliza system invented by weizenbaum in 1964. this was built, initially, as a parody of a rogerian psychotherapist, and so what the system would do is interpret some information about what the user put in through some pattern matching and processing of the language that they had inputted, and then pick responses based on certain templates they had in the system. if you were going to parody a psychotherapist, you can imagine that a lot of the answers were things like "how do you feel about that?" and "tell me more." and so at first when weizenbaum created the system, he created it to show people how difficult the problem of an ai system interacting with humans would be, and he was surprised at how many people
11:58 am
actually were fooled into believing that this was a real person. so this jump started work in that field. so how do these basic systems work? let's consider the checkers scenario. the way we frame these as learning systems reflects how you might think, if you self-reflected, about how you yourself learn how to play a game; this is really what we put into our computer algorithms as well. so if you were going to teach somebody how to play checkers, you would tell them about the board, you would tell them about the pieces and what kind of moves they can make, how you win the game and so on, but not every strategy. instead they would watch other people playing the games to
11:59 am
figure out choices of moves, strategies or scenarios, or you might play yourself and lose a lot at the beginning, but you would start to understand which moves were good choices and which moves were bad choices, so you can improve your strategy over time. so that's what we do when we put that into an algorithm, and the way that you -- i'm sorry, i messed up my order here. so before i tell you about how the algorithm would do that, let me tell you about the history of the successes we've had. this is the area where a lot of the general public is very excited to see what is being achieved by these computer programs. and in 1994
12:05 pm
then you would be able to search over the entire tic-tac-toe tree in a tenth of a second; it's entirely practical. so what we need to do to solve these larger problems is evaluate how good a particular situation is without having the entire tree, and that's how we deal with it. so how does the alphago system work? there are papers and many talks on this, and i'll try to characterize it very simply here for you. it's really a combination that evolved from these two historical methods here. so deep learning is a very
12:06 pm
complicated version of a perceptron, so the learning is much more complicated than it was with a single node here in the middle, but that's what deep learning has grown out of. so what alphago does is combine together deep learning and reinforcement learning, so the ideas that come from the checkers-playing game, where we have to wait and understand what the reward is going to be after we've won or lost the game, are also combined into their system, and the way they do this is by learning multiple models and combining them together. so in particular they have two neural networks that they learn with deep learning, and i think i have an animation here. so here are the details from their paper. one neural network, and you can see there are more complicated networks here, is designed to predict the next move given the current
12:07 pm
state of the board, and so that's very immediate feedback, because they're not trying to predict whether they're going to win or lose, they're just going to predict the next move. and they train their model on 30 million positions that they acquired from traces of expert games, so human decisions about which moves to make. and then they learned a second model that's going to predict the likelihood that they're going to win the game given the current state of the board. so this is a much longer-range prediction of how good the situation they're in is at that point. then they update these two models using reinforcement learning and combine the two together in a complicated monte carlo search procedure, which -- so this is only to tell you that it's not a simple approach, it's a very complicated system to solve this problem, but it's really using many of the simple foundational methods that we've built up in the areas of machine learning and ai.
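schematically, the division of labor between the two networks the speaker describes can be caricatured in code. everything below is a deliberately tiny stand-in, a two-move toy "game" with hand-made scores instead of trained networks, meant only to show how a policy (propose and prioritize moves) and a value function (score the resulting positions) fit together; the published alphago system couples real deep networks inside monte carlo tree search, which is far more involved.

```python
# Toy "game": from the empty root state there are two legal moves,
# "a" and "b"; hand-made value scores say "b" leads to the better
# position. These dicts stand in for a real game and real networks.
TOY_MOVES = {"": ["a", "b"]}
TOY_VALUES = {"a": 0.3, "b": 0.7}

def legal_moves(state):
    return TOY_MOVES.get(state, [])

def apply_move(state, move):
    return state + move

def policy_network(state):
    # Stand-in for the policy net (trained on ~30M expert positions in
    # the real system): assign each legal move a prior probability.
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}

def value_network(state):
    # Stand-in for the value net: estimated chance of winning from here.
    return TOY_VALUES.get(state, 0.5)

def select_move(state, top_k=2):
    # Policy proposes and prioritizes moves; value scores the positions
    # they lead to. AlphaGo interleaves this inside tree search.
    priors = policy_network(state)
    candidates = sorted(priors, key=priors.get, reverse=True)[:top_k]
    return max(candidates, key=lambda m: value_network(apply_move(state, m)))

print(select_move(""))  # -> b
```

the design point is that neither model alone decides a move: the policy cuts the branching factor down to promising candidates, and the value function supplies the long-range judgment that the game's delayed win/lose signal would otherwise withhold.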
12:08 pm
so when you think about these game-playing systems, the two dimensions of complexity that decide whether the problem is easy or hard depend on the size of the search space, as we go from small tractable trees with tic-tac-toe to things that are intractable like the size of the state space for go, where things get harder, and also the amount of delay you have until you get some feedback that we can use in the algorithm to do learning. that's another dimension of complexity. i might have forgotten to say that one of the factors in these games that impacts the amount of delay you have is how long the game is, so if your game requires 10 or 15 moves, you're going to get feedback after 10 or 15 moves, but if it takes you on average 150 moves, then you have to go much further down in the game tree and your reward is
12:09 pm
much more delayed. so now let's contrast this with dialogue systems and what went on with tay. so in learning how to interact it's pretty much the same. what are the experiences you're going to have? they won't be games with a fixed length of time and a clear signal of what you've won or lost in the end. you're going to have conversations with people and subtle feedback about whether they like you and want to engage with you or not, so this is something maybe even humans are not good at, because you hear about things like social intelligence:
12:10 pm
people who are able to interpret those signals from other people more effectively are viewed as having more emotional intelligence. so you can interpret them and learn from the feedback good strategies of how to engage with people and how to extend and make that engagement more effective. so in terms of the history of dialogue systems, we've also had a number of successes along the way in the ai community. the history started with eliza, the psychotherapist model i talked about before. then in 1984 we had the alice system, which was able to convincingly mimic patterns of conversation. this was inspiration for the spike jonze movie "her" that came out in 2013 and was aired
12:11 pm
at the dawn or doom conference in 2013. and then you might think that this next one should have gone in the game-playing list: in 2011, ibm's watson system beat top players at "jeopardy." but much of the technology that it was using to play "jeopardy" is the same type of technology you need for dialogue systems, so they needed to understand the input from the clue and figure out how to answer the question using natural language processing and information retrieval techniques. so although there wasn't the same sort of interaction, it was really a big success for the natural language processing community. so then, something you may have heard about: in 2014, there was this system called eugene goostman which passed the turing
12:12 pm
test. in a competition where chat bots try to convince a set of judges that they are human, it was able to convince 33% of the judges that it was human, so they claim it passed the turing test. there are some people that disagree with that claim, because this chat bot here, eugene goostman, was designed as a teenage ukrainian boy, and they used a lot of particular tricks to hide the limitations of its processing system. because it was a young boy who was from another country, people were very forgiving when it didn't quite understand the language or questions they were asking it, and it was also designed to be more curious, to ask more questions and deflect the conversation when it didn't know what was going on. so this system, while people think maybe it hasn't really passed the turing test, did use techniques like humor and deflection in conversation in a very interesting way that
12:13 pm
was able to convince these judges that it was human, so really that should be considered as an achievement in this area. so then in 2016, i guess this is -- you could disagree whether this is a success or not, but when tay was released by microsoft, really it was successful. in fact, maybe it was a victim of its own success: tay acquired 50,000 followers and tweeted 100,000 times in the first 24 hours after tay was released, so there was a lot of interest in tay from many groups on the internet. but really it was vulnerable to this coordinated troll attack. hundreds of users identified a way to interact with tay in order to change the language and structures the chat bot was using in a way that was not anticipated.
12:14 pm
so how do these work? i can't describe them with a simple game tree; they are much more complicated architectures with many components, but the main flow is that a user has a message, either typed or spoken. there's some sort of speech recognition and natural language processing that tries to understand the context and intent and the task that the user is trying to achieve. this goes into the dialogue system that has the guts of the natural language processing and response generation, and there are two basic ways methods have tried to generate responses. the older method is a rule-based matching sort of system that takes some words from the input and tries to match them to certain kinds of templates of the types of response that they
12:15 pm
should generate, and then goes to a database of responses and figures out how to generate those and sends it back to the user. the more current methods are retrieval-based methods that are based on what happens in the information retrieval systems used at google and bing to identify which documents to return for your queries. they take the same basic technology, treat the input as an information need, and use language models and contextual cues to match it to some database of responses, and figure out from the relevant responses what to return to the user. so what's the complexity in developing these dialogue systems? the two major dimensions of complexity that contrast with a game-playing system are that, although language has structure and rules like a game, it's not as clear-cut as the actions and the
12:16 pm
outcomes in games. for one reason, it's continually evolving: words are being added to our language all the time, rules are broken. the way people speak, they don't necessarily follow those rules; they invent slang, there are colloquial terms, there's the use of sarcasm and irony and humor, using words in ways they were not intended. and so this means that the search space for what these algorithms have to consider, and how they learn how to behave with this user input, is effectively unbounded. there are an infinite number of possible scenarios the users could engage in, and you have to develop a system that's going to be robust and able to adapt to scenarios that it hasn't seen before. the second type of issue is that the feedback about whether you're doing something correct is much more vague and unclear, and you would know that
12:17 pm
from your own social interactions and how you try to interpret who's happy with you, who's not, who likes you, who doesn't. the feedback is often unclear and often there's a much longer delay before you get such feedback, and so you can see this as an issue in terms of what we put as objective functions into our systems. right now they would be trying to optimize something like the length of conversations that people have with the chat bots, in order to use that as a proxy for something like satisfaction or user engagement. so what does tay do as a system? again, just like with alphago, it's using deep learning. i guess i should say that the microsoft people have not written papers published in "nature" describing the entire system, so the information i have here is gleaned from many talks that i've seen from people at microsoft research about component technologies that they're developing, as well as some of the information that's
12:18 pm
been discussed about tay. it's pretty clear they're using very complex natural language processing that is based on deep learning techniques as well. and so what they've done is move beyond the simple matching and retrieval-based systems to use complicated deep learning models to predict what kind of responses are likely to be a good match to particular user inputs, and again, they're using massive amounts of training data like the alphago system, so they're using millions of examples of interactions between users that they get from twitter or other kinds of online sources where you can see these kinds of engagements over time. but the big difference here is that this chat bot system is what i would call an open system versus a closed system. there are no clear bounds to the types of interactions that it might have, the types of behaviors it might see from people, and this makes it very vulnerable to attack. so, for
12:19 pm
example, they never anticipated that this kind of coordinated trolling attack would happen when they were developing the system. so the dimensions of complexity here with respect to dialogue systems are similar to the things that we talked about with games, and in this case now we have a third size of space, which is an open system or an unbounded search space, and now we have feedback that is not even just delayed but is vague or unclear. and so we see, as we go on in the world in the development of ai, we tackle harder and harder problems with these systems. so when we compare the two, the game-playing systems and the chat bots, we can see that when the problem can be formulated with clear, immediate feedback in a tractable search space, then we're
12:20 pm
able to solve it easily, so those are really the situations where it's considered easy and we have our major successes in the community, but as we go further up to this upper right-hand corner, that's where our problems are still hard and where the current work needs to happen. so if we go back to discussing the difference between alphago and tay, when i said they were built on the same underlying technologies, really it's that they both use deep-learning methods that have shown very significant and impressive improvement in some areas of machine learning lately, as long as they have massive amounts of data to train on. but one major difference between what alphago did and what tay could do is that tay can't learn from playing against herself, which is what alphago did. so in the alphago system,
12:21 pm
because the rules of the game are very clear and whether you win or lose is very clear, they could take different versions of their system and play them against each other to develop more types of experiences of the different sequences of moves and outcomes you would see, but that's impossible to do with tay -- or, you could do it, but it wouldn't be sufficient to make the system robust to the kinds of interactions that you would get in the real world with real users, because of the variety of kinds of behaviors you would see from users. so what tay needs to do to get the kind of feedback and experience that alphago is getting is to be out in the real world interacting with users and using that to figure out how to behave. so that meant when tay was released on to twitter she was still learning, and so what happened with this
12:22 pm
coordinated troll attack was that users created conversations with tay and used language in a particular way that eventually made tay think this is how people talk and this is what i should say, and so eventually, after enough interactions with these kinds of people, she was turned into a horrible person. what i like to use as an example to explain this to students is that, although tay was very successful as an algorithm and had learned a lot of things about interacting with humans, you might think that her amount of knowledge would be equivalent to a seven or eight-year-old child who had had some experience with interactions with people in a very limited environment, and you don't let a seven or eight-year-old child interact with the general public on twitter.
12:23 pm
and so the difference when xiaoice was operational in china is that in china there are more restrictions on what kinds of information are on the internet, because of the cultural control, and so they don't really get the same kinds of interactions as you might on twitter. and so if you think about this, you might say, as i did from an algorithmic perspective, we should have known that this could happen and the algorithm should have been able to detect that people were specifically changing their behavior and how they talked in order to make this chat bot behave differently, because of the adversarial setting where they know they can change the input in a way that changes the output in a terrible way. but this is very complicated to detect algorithmically, because it's very easy to say in hindsight that there were hundreds of people that participated in this coordinated attack, but when it started it
12:24 pm
was very subtle and it's very hard to detect and know that one individual interaction is not a valid one, that it's an adversarial one. so it's difficult to encode that in a program to figure out how to identify it and adapt and this is why when we have young children and we're teaching them how to interact with people, we don't let them go on twitter right away, we send them to kindergarten and they act in a constrained environment with people who are kind and loving to them and maybe there are bullies in the class but they get them in small doses until they figure out how to interact. so we have to push in this direction where we're moving to unbounded situations.
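The kindergarten analogy above corresponds loosely to what machine learning calls curriculum learning: train in a small, controlled environment first, then widen the range of inputs. Below is a minimal sketch of that idea; the "model" (a bare word counter), the update rule, and the two tiny corpora are all invented for illustration and have nothing to do with Tay's actual implementation.

```python
def train(model, conversations):
    """Stand-in update rule: the 'model' just counts the words it hears."""
    for sentence in conversations:
        for word in sentence.split():
            model[word] = model.get(word, 0) + 1
    return model

# Curriculum: start with a kind, constrained environment, then widen it.
# These corpora are made up for the sketch.
kindergarten = ["hello friend", "thank you", "be kind"]
playground = ["hello friend", "go away", "thank you"]

model = {}
model = train(model, kindergarten)   # constrained, friendly environment first
model = train(model, playground)     # then a slightly rougher one

# By the time rough input appears, kind phrases already dominate the counts.
print(model["hello"] > model["go"])
```

A real system would of course learn distributions over responses rather than raw counts; the point of the staging is only that hostile input arrives after, and diluted by, a trusted baseline.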
12:25 pm
and this is something -- some people are able to do this easily -- but we expect people to adapt in new situations and learn how to behave with relatively few examples as they have -- after they have developed into an adult and so the way that microsoft is trying to encode this information in their chat bot system is that they're working with improv actors so this might be surprising to you that this is something very non-computer sciency, right? what do improv actors have to figure out how to do? they have to be thrown into new situations all the time and they have to figure out how to keep the act going, right? which is exactly what they want to do with chat bots and so this is something where they're working with humans who seem to have the skill to figure out how to algorithmize that ability
12:26 pm
and maybe from how we view their interactions we'll be able to put that into algorithms to deal with this open-ended system. the second issue is with the dimension here of feedback and so we really have to figure out how to deal with this subtle, inconsistent, maybe long-delayed feedback but that's also something that we do fairly well, maybe people with emotional intelligence do it better than others, but that's something we can figure out how to do ourselves so it should be something we can figure out how to do in the algorithms and so the new areas of computational social science or computational humor are the kinds of research directions that are trying to take these into account so computational social science is nominally the area i'm in, so maybe we'll take into account users, and what i want to move to are complicated situations where
12:27 pm
people are interacting with other people and we'd like to learn how to predict their behavior in those situations but it's not as fixed an environment as a game-playing system where you have continual one-on-one interactions over time. people interact with different sets of people and then change groups and things like that. so the ideas from social science of interpersonal communication, impression management, ostracism, all of these kinds of ideas need to be put into our algorithms and learning methods. computational humor is another area where we're going to have at least a few speakers later on in this conversation, including bob this afternoon, talking about how do we understand what's funny? how do we know what's funny? how do we put that into -- how might we put that into an algorithm is another interesting direction to try to deal with the situations where there's not clear feedback.
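One way to make the "long-delayed feedback" problem concrete: when a reward shows up well after the actions that may have earned it, a system has to decide how to split the credit. The sketch below uses a simple decaying backward pass, loosely in the spirit of eligibility traces from reinforcement learning; the decay value and the action names are invented for illustration, not drawn from the talk.

```python
def assign_delayed_credit(action_log, reward, decay=0.5):
    """When a delayed reward finally arrives, spread credit backward
    over the recent actions that may have caused it, most recent first.
    The decay rate is an assumed knob, not a principled choice."""
    credits = {}
    weight = 1.0
    for action in reversed(action_log):
        credits[action] = credits.get(action, 0.0) + reward * weight
        weight *= decay
    return credits

# e.g. a chatbot said three things, then a user (slowly) reacted positively
print(assign_delayed_credit(["greet", "joke", "follow_up"], reward=1.0))
```

The most recent action gets full credit and earlier ones geometrically less, which is one plausible heuristic when the true cause of the feedback is ambiguous.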
12:28 pm
okay. so to wrap this up, we've made really great progress in ai over the last 60 years, we have things now that are starting to become reality where we have self-driving cars that are able to automatically identify what they see from sensors and figure out how to adjust the car as it's driving. we have smart buildings that are able to automatically adjust temperatures and air flows based on optimizing energy consumption and we have the beginnings of personalized medicine where instead of deciding one treatment plan for large populations, subpopulations of the general public, we're starting to have methods that are being personalized to someone's genome or their history of illnesses over time and so this is really an exciting time for this field in -- so computer science in
12:29 pm
general, ai and machine learning specifically and so you might be tempted to say oh, well these dimensions of complexity that i talked about, it's not too hard, we will eventually go into that upper right-hand corner without much effort but i would caution you here that as a community we've always notoriously underestimated the difficulty of the problems we're addressing and so what i'll end with here is this story that all ai students know about but you might not know about, so in 1966 -- so ten years after the beginning of ai, seymour papert, one of the group of m.i.t. faculty that started the field, told his graduate students to solve the computer vision problem as a summer project. and why did he think they could solve it over the summer? he said because unlike many other problems in ai, computer
12:30 pm
vision was easy because it was clear what we were trying to solve and it was easy to encode that algorithmically. so 50 years later we're still working on the problem that was supposed to be a summer project. but maybe we're close to solving computer vision fairly soon based on all the work going on in ai. this is -- so i'll close here with this image from facebook's vision system which is able to take images that are uploaded on facebook and it can automatically identify components of the image and know what they are so it can produce a textual representation of what's in the image and that can be used to produce a verbal description for blind people. and so this shows you the capabilities that we have with our systems now in terms of how much can be identified. so from this image here, these
12:31 pm
are going to be identified as sheep so the little blobs are surrounding the objects in the image and not only can we segment the image and identify the important components, we can also decide what those things are and produce labels for them so that the image can be described to people who can't see it. and obviously we can use that information for many other aspects of the facebook system but the benefit to larger society is clear from this. so i will stop there. thank you very much. [ applause ] >> thank you, that was a great presentation. jennifer will take questions from anybody who wants to ask. because we're recording the event, i'd ask you to come up like i did to the microphone. we'll go until quarter of. okay? thank you. >> hi, jennifer. >> hi. >> so you mentioned emotional
12:32 pm
intelligence and i have heard some scientists talking, specifically this biologist, george church, who fancies himself a futurist, and is a futurist. he's mentioned trying to teach ai emotional intelligence and it seems like if we could have taught tay to have morality or a sense of emotion that maybe she wouldn't have fallen into the trap she fell into and i wonder, is there a way to pre-program morality or is that itself something we still don't understand, such that we can't do that? >> yeah, that's a very good question. i think that issue of morality also comes into play when you think about the self-driving cars and the current requirements on how safe the self-driving cars will have to be before they're allowed on the road is a threshold of safety that we as humans can't even meet because somehow we think
12:33 pm
that the system has to be much, much safer, we can't possibly release a system out in the world that we think will eventually end up killing someone even though we as humans might go out and get into a car accident and have somebody die. so i think the complexity in that, absolutely there are people who are thinking about how to algorithmize that and i think we can, but we have to think very differently about it than we would with a game-playing system. i think with game-playing systems everybody agrees on which outcomes are good and which outcomes are bad because you've either won or lost and when you think of the supervised learning systems that we develop to predict credit card fraud or to detect spam in your e-mail, there is still a fairly clear signal as to whether a transaction is spam or fraudulent. but when it comes to something as complex as morality, we don't even all agree on what the answer is and so that is
12:34 pm
something that we could encode very easily if we were willing to be satisfied with that kind of encoding. so if you somehow decided on how to value human life and what kind of action you would take if you were about to get into a car wreck -- say you're driving and you would get into a car wreck and possibly kill 10 people in front of you, or there's one person on the side of the road but it's your grandmother -- what would you decide to do? we would all make different decisions and even if we can't all agree on one answer, we could look at a distribution of what humans would find acceptable and we could put that into algorithms but we have to think a lot more carefully about acquiring that feedback and encoding it in order to use it. so i think it will come but i
12:35 pm
think there's going to be some tough philosophical discussions amongst us as humans as to how we value these things because what the algorithms are doing is they're simply valuing different kinds of outcomes so once you put a value on things we can optimize for it. so if we as a general public can put a value on human life then we can start developing systems to optimize for it. >> so at various points in your talk the ideas came to me of the possibility or probability of things like computational propaganda and computational surveillance. i wonder if you'd comment on that. >> absolutely, yeah. so those are two good issues to bring up so computational propaganda would be the
12:36 pm
situation where we would inject bots into the world that would try to persuade people to do what we wanted them to, pretending to be, you know, just regular people, just like the way humans would try to propagate that propaganda. trying to be robust to that or detect that is as difficult as detecting the troll attack because we really have to understand what's valid behavior, what's propaganda, and the line between what's valid and what's propaganda is very tricky to define because i could be trying to persuade someone to vote for a particular person in the election but i wouldn't necessarily think of that as propaganda but eventually if i go too far in doing that persuasion you might think of it as propaganda so i think related to what we just talked about, trying to probabilistically identify these scenarios is going to be much more robust
12:37 pm
than making a decision yes or no and having the system self-monitor or interact with humans to say this is a situation where maybe i think this is going on, let's detect it or not. that's something that happens in fraud detection right now where some systems are detecting automatically whether particular credit card transactions are fraudulent and they might immediately shut down your card if it's very clear it's a fraudulent transaction but in other cases they'll notify you as a user to get your feedback about that. so what was the second topic? >> computational surveillance. >> surveillance, yes. that's a very important thing to discuss as well. so you might not realize that these systems are really tracking everything about what you do, everything you say online, every search that you do and it's very easy, as the security and privacy people have shown, to
12:38 pm
reidentify you from the trace -- your electronic traces of behavior online. so if you had access to my search history from my computer, there would be very few people that live in west lafayette, travel to the bay area a lot, look at machine learning topics and are vegan. right? so very quickly from just a few aspects of my profile you can narrow it down to probably being me from that search history and so it's a sort of scary thing and i guess my answer to that is when that data is out there we can choose to use our powers for good or evil. there are always people who are going to try to use them for evil but that doesn't mean we shouldn't be developing the computational methods because if we understand how things are happening computationally, maybe we can
12:39 pm
identify this type of thing is going on. and i would say the privacy-preserving machine learning and data mining community is focused on developing methods that can be applied for learning in a situation where personal information is aggregated but yet you can guarantee that there's no leakage of information about any one individual. this is that setting and i think those are the types of methods that should give some solace to people. so i also say to my students: you enjoy being targeted. so in your systems, when it gets personalized to your own specific behavior, you enjoy the fruits of that analysis all the time. so your e-mail is figuring out which people are valid senders and receivers for you so it can identify what's spam. the search engines are looking
12:40 pm
at your history so it already knows what you're searching for, so this is making your life easier without you maybe even knowing it, so that is the good from it. but if people get to the point where they deny you health insurance because you searched for something particular online and they've identified you might have a pre-existing condition, that would be a bad scenario. >> that was a really great talk. thank you. and to me you're describing a kind of 50-year process of interaction between data and computation/algorithms where the data sets and the rules move from a relatively finite area to increasing complexity and richness in terms of the data sets, and the computation comes up because of moore's law and better algorithms, and that dynamic kind of holds and explodes with search, which kind of gives birth to cloud computing but also an incredible level of unstructured data that
12:41 pm
you can play with, and social media does that, so we've had these big watersheds over the last ten years. with tay, you get into this whole other interesting dynamic which you touched on in terms of the social research, which is not simply the activity of speech but speech as it is received and speech in its social context, which is this next level of complexity we have to take on for algorithms to handle. it seems like there are almost two fixes to this. one, the elementary one, is recognize that it's bad to be a nazi. so cross that stuff out. the other one is to look at social circumstances. this is coming from the gamergate guys, suspect it. or even more, create countervailing messages that will correct tay, because part of the tay problem was once she started to get a little nazi, all the nice people stopped talking to her. >> that's a good point, yes.
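The self-reinforcing loop just described -- the bot sours, friendly users leave, the remaining input gets worse -- can be simulated in a few lines. Everything here (user counts, the tolerance threshold, the running-mean update) is invented purely to illustrate the dynamic, not to model Tay:

```python
def simulate_drift(steps=50, leave_threshold=-0.05):
    """Toy feedback loop: the bot's 'tone' is a running mean of what it
    hears (friendly users say +1, hostile users say -1), and friendly
    users quit one per step once the tone drops below their tolerance."""
    tone = 0.0
    friendly, hostile = 10, 12
    for step in range(1, steps + 1):
        if tone < leave_threshold and friendly > 0:
            friendly -= 1                      # a nice person stops talking to her
        total = friendly + hostile
        avg_input = (friendly - hostile) / total
        tone += (avg_input - tone) / step      # running-mean style update
    return tone, friendly

tone, friendly_left = simulate_drift()
print(tone, friendly_left)  # the tone collapses once the friendly users are gone
```

Even though friendly users start at nearly half the population, the slight initial negative skew is enough to trigger the exit cascade, after which the only remaining input is hostile.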
12:42 pm
>> so it became a self-reinforcing loop. so one counter would be more nice people come here and talk to this poor lady. i guess the question is as we move into this kind of ai, aren't we also reprogramming ourselves? >> yeah, that's a very good observation. i think that the observation you made about that we should have more nice people talk to tay affects not only the algorithms but humans as well so you hear about people leaving twitter after they have horrible interactions with people and so we can't even fix that problem for humans but maybe just maybe if we understood how to fix it for the algorithms we would be able to improve things for ourselves as well and so that's -- some of the social research, social science research that i was alluding to focuses on things like
12:43 pm
encouragement, positivity, creativity and how to foster those kinds of feelings and behaviors in people and if we understood better how that worked in humans we might then be able to put it into algorithms but at the same time if we come up with solutions algorithmically that would fix tay's divergence into this horrible state, that might also inform the social science to think about the structures that they've been looking at more computationally and so something like 10 positive interactions a day might make everybody feel happier. i don't know. but i think it's an important thing to think about and i guess my point of interacting more with people in social science and philosophy and humanities is that as engineers we tend to not think about these things. we like to put it into a
12:44 pm
mathematical equation. i guess i did not put any math in this talk but usually every talk i give has equations. i just want to distill everything into a precise mathematical equation but all of these issues are maybe hard to codify in a mathematical equation and so it's really from discussions with these people with more social and emotional intelligence than we as engineers have that we would be able to put that into our algorithms and our systems. i think we will have -- if we have the right kind of back and forth between those two communities i think we will make great progress in the systems while at the same time learning a lot about ourselves. >> hello, professor. i was wondering your opinion on tay. what if she was trained --
12:45 pm
isolating it to just kindergarten conversations, like where kind people talked to it and then it learns from them, and then you can track progress, like first train it with people that you usually talk to when you're a kid and then you grow up, teenager and finally adults. what would have happened if we tried to train tay as a perfect human rather than just letting it out open in the twitter stream? >> well, so i guess -- i'm not sure quite how to answer that. absolutely one of the issues with tay is that tay was doing continuous learning and so there are two things that we could have done. we could have had more training to reinforce tay's behavior in kind, trustworthy environments before rolling it out to the general public and then stop our learning. so we could say we've learned only in these very fixed
12:46 pm
environments and then we're going to stop so we'll be robust to these attacks moving forward. but that kind of system would be brittle because it would not be able to adapt to new situations that it hasn't seen before so we really need to be able to adaptively learn from the new situations that we see. but one of the issues that could be adjusted and probably is being adjusted in new developments of these systems is that the algorithms learn based on the data that we have. so you could imagine a scenario where we weight the training data coming to us based on trustworthiness so that for particular kinds of interactions, if they involve words like "nazi" or "genocide," maybe we say that's fairly untrustworthy so we should weight those very low and we won't update the aspects of the model very much when we see those interactions but if
12:47 pm
there's particular people we trust, like our parents or our siblings or relatives, and we see new interactions with them, we would upweight those and say now adjust to those new interactions because those are probably valid interactions. so work on figuring out -- there is work in the area of reviews and online systems in trying to automatically identify how trustworthy something is, so we could potentially use those kinds of things to make the system adapt and learn more robustly by basically understanding what kind of data is coming in. so yeah. >> good morning, professor. i had a question. the difference between artificial intelligence and machine -- intelligent machines is very gray. you never know exactly where the line is drawn. so, for example, people here, like, you have been discussing about tay the whole talk and we have ideas like constraining her to specific topics, specific
12:48 pm
people, specific situations, but the thing is if we do that can it really be called artificial intelligence because if we are constraining it, then it's just another intelligent machine and not exactly thinking for itself. >> yes, sure. the definition of what is intelligence, what is learning is very fuzzy so these early systems like eliza were not doing anything you would think of today as being intelligent but at the time they talked about it as an ai system. there are things we do in machine learning -- i had this discussion with my grad students last week -- where if we simply write out an equation and optimize it, but we hand crafted that equation to reflect a particular scenario, are we learning? i don't know. we have a system that does something based on optimizing that equation but maybe it hasn't learned anything about the environment. and so in everything that we develop as computer scientists
12:49 pm
we tend to go back and forth between the two, so really when we focus more on the engineering and making a system that works we often put in things that are hand coded or manually specified in order to just get the kind of outcome we're looking for, then we tweak it to behave well in the scenarios we want it to, and then as we try to abstract that theoretically we try to move to a more general concept, so i would say those two kinds of things are always there mutually hand in hand in these scenarios and when we're making better progress of understanding at a higher level we're pushing to these more theoretical abstractions but we also make a huge amount of progress by just making specific decisions so i would think they're both useful systems but maybe it's a philosophical issue to decide whether you think of them as actually intelligent or not. >> so professor neville always
12:50 pm
gives a fantastic speech. i think my next career will be providing consolation to social media users so i would like to get an early start on that by telling you you're better looking and smarter because you attended this talk. with no fear of contradiction, i'd like to give a very warm thank you to jennifer. [ applause ] tonight we continue our look at american history tv programs normally seen on weekends on c-span. it begins with a discussion on the origins of the cold war, then u.s. democracy and international relations. and then the legacy of world war ii. american history tv, prime time tonight at 8:00 eastern.
12:51 pm
>> sunday in-depth will feature a live discussion on the presidency of barack obama. we're taking your phone calls, tweets, e-mails and facebook questions during the program. our panel includes april ryan, white house correspondent for american urban radio networks and author of "the presidency in black and white: my up-close view of three presidents and race in america." princeton university professor eddie glaude, author of "democracy in black: how race still enslaves the american soul." and pulitzer prize winner david maraniss, author of "barack obama: the story." watch on c-span2. ♪ ♪
12:52 pm
>> the presidential inauguration of donald trump is friday, january 20th. c-span will have live coverage of all the day's events and ceremonies. watch live on c-span and c-span.org and listen live on the free c-span radio app. ben & jerry's co-founder jerry greenfield was the keynote speaker at a university of michigan entrepreneurship event in november. he talked about the origins of his vermont-based company and its mission beyond making ice cream. the annual entrepalooza brings together entrepreneurs from around the country to share insights. this is 45 minutes. >> so it is now an absolute pleasure of mine to introduce our guest for this morning. he is known for a couple of
12:53 pm
different things. iconically for the brand of ice cream that has helped many of us get through long nights of depression when you need a tub of ben & jerry's to help you through that rough spot but also for really becoming one of the faces of social entrepreneurship and positive business. and so all the way from vermont, please help me welcome jerry greenfield. [ applause ] >> so swag aplenty. we've got t-shirts, sunny's for you. so welcome. >> wonderful to be here. thank you so much. >> thank you. >> really thrilled to be here. >> you were here last night scooping ice cream. >> i was not actually scooping
12:54 pm
ice cream, so ben & jerry's has a local franchise scoop shop, and i was doing a scoop shop appearance, a jerry appearance. i told stewart, i was jerry for an hour last night. it was very exciting. >> and there is so much in your story. but i think for many who haven't heard the origin story of ben & jerry's, i'd love you to share that. >> should i talk with you or the people? >> yes, some combination of those things. >> so just briefly, ben and i are friends from junior high school. we met in seventh grade gym class running around the track where we were the two slowest, fattest kids in the class. i went to college and was rejected from all the medical schools i applied to. ben went to three or four
12:55 pm
colleges, which he dropped out of. the last program ben went to -- so ben had gone to colgate, he dropped out. he went to skidmore, he dropped out. he went to nyu, he dropped out. he signed up with a progressive program called university without walls. so at university without walls, you don't have to go to class because the world is your campus. and you don't have to take tests, you get credit for learning. and ben dropped out of there, too. so there is the entrepreneur that we're all looking for. the person who drops out, doesn't finish whatever. so ben and i were essentially failing at everything we were trying to do. we just decided to open up a home made ice cream parlor, thought we would do it for a couple of years and then move
12:56 pm
onto something else. we learned how to make ice cream from a $5 correspondence course. we started with $8,000. we borrowed another $4,000. we opened up in 1978 in burlington, vermont, in an abandoned gas station and essentially had no idea what we were doing. >> when you hear those things in the same sentence -- all right, i'm going to go to one of the coldest cities in the country and sell ice cream in a place that used to sell gas -- if somebody pitched you on that today, how would you respond? >> we would respond the same way all the banks responded when we asked them for money. it doesn't sound like a really good idea. >> despite that, despite that ben & jerry's still became ben & jerry's. how did you get past that?
12:57 pm
>> not only despite that but in some ways because of that, the fact that -- so ben & jerry's was a real artistic success as a home made ice cream parlor. however, it's a very short ice cream season in burlington, vermont, and we were not really sustainable. so we ended up packaging ice cream in pint containers and selling it to grocery stores locally and eventually found some other markets to go to. and so it was actually the idea that we couldn't sell enough ice cream in the summer in vermont to stay in business that forced us to look for other markets. >> so from that initial beginning, you know, sales through restaurants, sales through mom and pops, sales
12:58 pm
through national distribution, there was a point at which you were no longer two guys and a hand-cranked ice cream maker in a small shop, you were becoming a business, an enterprise. how did you manage that transition? >> we were becoming a business. >> you were becoming a business. >> we thought we were going to be ice cream guys. sure enough -- and you know, it was shocking to us, because, you know, ben and i wanted to have a little ice cream shop. we wanted to make and scoop ice cream for our customers. when we found ourselves becoming a business, we really didn't like it. for you guys who are students here, of that age, you may have
12:59 pm
grandparents who lived through the '60s. you guys have read about the '60s in history books and peace and love and hippies and all that stuff. that was ben and me. we had all these negative feelings about business. and so we were actually going to get out of the business and then ben ran into a friend of his who convinced him that if there were things we didn't like about business -- if we thought of businesses as these entities that took advantage of employees or spoiled the environment or exploited communities, and that's the way we saw business, clearly, that's who we were -- that if we didn't like those things, we could just change it. we could essentially make our business anything we wanted it to be. so we made a conscious decision to try to have our business
1:00 pm
evolve to something that supported employees, supported the community and did not destroy the environment. >> when a lot of especially growing companies run into those transitions of growth, one of the first things they need is more money. often having that extra layer of owners, directors leaning in and guiding the path of the business makes it hard for the entrepreneurs to stay on the path they wanted to be on. how did you deal with that pressure? >> we were actually lucky, because we were able to combine these values we had with our business along with the raising of money. and the way we did it, ben discovered this law in vermont that said you could raise money from people in vermont, just in
1:01 pm
the state of vermont. and so we decided to do that, to raise money from the local community and do it in a way that had a very low minimum purchase for people. that way -- we had a minimum purchase of $126 so vermonters could buy stock in ben & jerry's at the time. we found a way to get the local community to become owners of the business so that we didn't have to go to a big financial institution, didn't have to go the more traditional route, find venture capitalists. we raised $750,000, which was remarkable to us. and at the end of the offering, 1 out of every 100 families in vermont had bought stock in ben
1:02 pm
& jerry's, so we were truly a community-owned company at that point. >> and the next stage. then you went fully public. >> yes. we decided to have a national public offering about a year and a half later when we needed more money. in conjunction with that, we decided to establish the ben & jerry's foundation as the charitable arm of the company, because, you know, to us at that time businesses were essentially machines for making money. so we felt like if we were going to be this business, we needed to give away some money. so we set up the foundation where the foundation would get 7.5% of the company's pretax profits, which was the highest percentage of any publicly held company. for you guys out there in case you're curious, the corporate
1:03 pm
average for contributions is around 1.5%. so we were essentially at five times that to see if we could really make a difference with that much money, and we couldn't. that's what we learned. we set up the foundation, and despite giving this high percentage of money to the foundation, the foundation was overwhelmed, as all the foundations in the country are. if we wanted more social and environmental impact and we couldn't do it by giving the money away, how could we really do it? it was at that point we realized the real power of the business was in how we did our business, how we did our day to day business activities.
1:04 pm
>> can you give us some examples of that. what does it look like when running a business for profit, still kicking money into the foundation, what other sorts of tradeoffs do you have to make to stay on that path? >> you know, that's a really good question, stewart. we had no idea. this is a great thing to know about entrepreneurship or a business. there are so many points where you have no idea. you come up to these questions and you do not have an answer. for us, what it was was trying to figure out the answer. that usually involves trying some things. sometimes they work. usually they don't work. so you go back and try again. so we started off by thinking about the normal day to day
1:05 pm
business activities that a business does. whether it's for us sourcing ingredients, selling ice cream, marketing ice cream, doing your finances. that whole range of things. i think there are a lot of businesses now that actually have an advantage in that the core of their business is involved in either producing products or services that have social benefit. we didn't really have that. we were making ice cream, which you could make the case that there's social benefit for ice cream. it's a little bit of a tough case to make. but you know, i think you see a lot of entrepreneurs in businesses who are starting out as social entrepreneurs, who are trying to meet a social or
1:06 pm
environmental need. for us it meant going back and kind of reengineering what we were doing. >> it seems like there were a couple of things that happened at the same time. one, you were able to connect back to your core values of what you were doing. at the same time you were also becoming the face not just of your company and your brand, these iconic friends from junior high that had done this, but also the face of this social business movement. >> we were one of a handful of pioneering entrepreneurial companies that were really pushing the idea of socially responsible business. you guys are probably familiar with patagonia, which is still doing amazing things. the body shop at that time was pioneering, stonyfield farm
1:07 pm
yogurt. ben & jerry's has always been more visible than other companies because -- probably because we make ice cream. i think the other thing about ben & jerry's is we tend to be fairly outspoken about what we're doing. you know, that brings up another interesting issue around, say, doing socially responsible business or whatever. we would sometimes get criticized: you guys are talking about all these things you're doing, aren't you just patting yourselves on the back and trying to take a lot of credit? ben would always say, you need to be willing to talk about what you're doing so that you give consumers a choice of either supporting this kind of business
1:08 pm
or supporting a more traditional business. you know, consumers are always supporting one business or another, whether they like it or know it or not. when you're buying somebody's product, you are voting with your dollars. >> at some point there were also critics saying you're just cynically using this pro social approach to drive marketing dollars -- drive customers to your business. >> yes. so here is another thing to know. i hate to keep getting into these obstacles. whenever you do anything that is not the normal mainstream way to do things, you're going to get criticized. that is the nature of the beast. and you're either going to do everything the way everybody else does it and not get criticized or blaze your own
1:09 pm
path. the whole mainstream environment is always pushing you to do things the way everybody else does it. so for ben & jerry's, we started getting criticized that we were trying to get customers to buy ice cream by talking about doing good things. and we simply said that it's who we are, it's what we do, and that, yes, it is marketing. yes, if you are trying to be community-based and community helpful, that helps to market who you are. and if you're doing bad things, that markets who you are also. if you want to be honest and transparent and genuine, that markets your company. >> let's talk about the ice
1:10 pm
cream, too, for a minute. >> yes. >> how about we stop for a second. are we doing all right? just want to check in. >> check in. >> just like our midterm. just got the grade. ice cream, you have, you know, loyal, loyal customers, iconic flavors. >> heavy users. >> heavy users. dedicated users. >> ben & jerry's defines a heavy user as averaging one pint a week. >> that's a lightweight. that's nothing. >> in the world of ice cream a pint a week is a heavy user. >> you've got heavy users. >> the flavors, how much of that was sitting around in the kitchen throwing stuff in a bowl. where did that come from? >> ben came up with all the flavors. i alluded to it, ben is the
1:11 pm
entrepreneurial guy between the two of us. the classic entrepreneur. loves to try new things. always making mistakes. always said he would rather fail than do something that's already been done before. i like a manageable agenda. ben did essentially everything: established the social mission for the company. ben was outspoken about social and political stands. ben came up with big chunks of cookies and candies. ben can't really taste so he came up with strongly flavored
1:12 pm
ice cream. that's the flavor story. so ben & jerry's has all these unusual creative flavors that other companies weren't willing to make, because they are more difficult to make. it's hard to put big chunks of cookies and candy into ice cream. for those of you thinking of manufacturing out there, a lot of manufacturers make what is easy to make as opposed to what customers want to buy. it's a bizarre thing. >> how did the two of you become the team you did? it's one thing to meet in junior high but to spend 40 years in business is an incredible
1:13 pm
accomplishment. >> yeah. we're good friends. ben and i have i think a very similar world view, but we also have very different skills. as i said, ben is very creative. i have in over 40 years of ben & jerry's never created one flavor. some murmur out there. how can -- ben & jerry's. i used to make the ice cream. i would manufacture the ice cream. i would scoop the ice cream. and we have a lot of trust. i think that helps. ben did all the sales and marketing, all the creative stuff. i worked on distribution, manufacturing. i think i am smart enough to
1:14 pm
know when ben came up with good ideas that were pushing the envelope of what's on the edge, that it was okay to do. you know, i think that's been one hallmark of ben & jerry's: it's been willing to be on the edge both in business practices and on issues of social justice and some of those stands. >> now, one thing we see a lot with startups as a goal is often to grow the company large enough that it can be taken public or acquired. you've gone through both those experiences. you've taken your company public, and it was subsequently acquired by unilever.
1:15 pm
that wasn't your goal? >> no. when ben and i started, our goal was to do an ice cream shop for a couple of years, make $20,000 a year apiece, and then move on to something else. we talked about becoming cross-country truck drivers together. we were not looking for a career. >> when that moment arrived and you've got this giant multinational corporation that wants to buy ben & jerry's, how did you guys respond to that? >> for context for folks, we started in 1978 and ben & jerry's ended up getting acquired in 2000 or 2001. we first went public in the state of vermont in 1984. we went public nationally in 1985. so we've been a public company
1:16 pm
for about 15 years. we were what i would describe as fiercely independent. at that time -- about 10 years into the business, we crafted a mission statement that formalized a three-part mission, so there was a product mission, an economic mission, and a social mission. and the social mission explicitly talked about using the power of the business to innovatively address social and environmental issues. so that was part of the mission. and we actually started doing social impact reports starting in 1985. so we did those annually. at that point, we were measuring
1:17 pm
our success on both financial and social impact. and so in 19 -- in 2000, unilever and another company came along wanting to purchase ben & jerry's. and our response was, we don't want to be owned by anybody, because we were not convinced that any acquirer would be committed to the social mission. it was nothing about unilever, nothing about anybody, we just felt like any big company was going to be concerned about the financial bottom line. so we tried to find ways to stay independent. ultimately our board decided that the money unilever was offering was compelling. we sold the business. i can just tell you that emotionally for ben and me it
1:18 pm
was horrible. it's been up and down ever since. >> but you've remained with the company, both of you. >> we remained with the company. we remained employed at the company. up until that point, we had been on the board of directors. there was -- when unilever acquired the company, they set up an independent board that was made up essentially of the same board that had been the board of ben & jerry's. ben and i declined to continue on that board. and ever since -- even prior to then we've been on year to year employment agreements with ben & jerry's that renew annually unless either party decides not to renew. >> do you want to hear my job description? >> yes, i'd love to. >> so ben and i work at the
1:19 pm
company, no responsibility, no authority. that's the deal. for better or worse, right? so we essentially do whatever we want. in our employment agreement -- this is pre-unilever -- it calls for us to provide similar services at a similar level to what we had been doing before. >> that's nicely vague. that's a job. so when you look at the landscape of the business today. >> yes. >> are you optimistic about the direction the world is moving in from a positive point of view, neutral, negative? >> you know, i think if i had to tell the truth -- usually i say optimistic because i think it's
1:20 pm
a better answer. but i'm pretty neutral about it. realistically, the most powerful force in the world is business. it's more powerful than governments. it's more powerful than religion. businesses pretty much control governments. if you look at our country, businesses have huge impact on elections through unlimited campaign contributions. businesses essentially control legislation through lobbying. businesses control all the mainstream media through ownership. and businesses have huge impact on how we're all treated as
1:21 pm
employees and consumers. and big business essentially has one goal. who knows what the one goal of big business is? maximize profit, right? so by and large, oversimplifying, generally speaking, what business is trying to do is make more money. it's not that business is good. it's not that business is evil. business is just trying to make money. it uses this power without regard to social impact, without regard to environmental impact, because it doesn't measure those things. it measures one thing, the financial bottom line. so am i optimistic?
1:22 pm
i'm optimistic that people who are starting their own businesses or people who are going to work for other businesses have a larger picture in mind. the young people today going into business are going to change business. >> going back to that moment when you and ben said, well, let's try to do some things differently. the idea you get what you measure. it's hard to identify and keep track of those things but you did it. a couple of examples just to give people a sense of, a, the challenges you had and -- >> okay. so a few examples. i mentioned ben & jerry's looks at how it sources ingredients. one thing we did was came up with an ice cream flavor that uses brownies from a bakery
1:23 pm
outside of new york city that works with people who are formerly homeless, or people who are formerly in prison or people who have all sorts of issues. the good news is we have that flavor here today. chocolate fudge brownie. ben & jerry's sells ice cream. i mentioned some of the ice cream shops. ben & jerry's has some number of franchise ice cream shops that are owned and operated by nonprofit social service agencies. so those are some examples of trying to integrate into the product or into the sales these social or environmental concerns. i think other businesses do a good job at that. i think one area that ben & jerry's has been unusual in and
1:24 pm
continues to be unusual in is speaking out about issues. if ben were here, what he would tell you is probably the most powerful thing business has is its voice. that when businesses talk, people listen, politicians listen, media listen because business people -- business people, people like me are well respected and well regarded. we are people that know how to meet payroll. we look at a bottom line. we know what's going on. so businesses are looked at. so ben & jerry's has used its voice to talk about political issues. sometimes controversial issues. ben & jerry's has been publicly supportive of occupy wall
1:25 pm
street, which was essentially an anti-corporate movement. ben & jerry's has been outspoken about marriage equality. ben & jerry's has been outspoken in campaigning for mandatory labeling of genetically modified ingredients in food. ben & jerry's is outspoken about voting rights and voter suppression. and as you might imagine, not everybody agrees with all these positions. so ben & jerry's is actively speaking out on issues that are losing it customers. amazing. [ applause ] >> why would a business do that, stewart? >> i know there are some folks in the audience who i'm sure have questions. i've had the pleasure of
1:26 pm
monopolizing you for a few minutes here. one at the front. holly? a couple of these. >> can i answer one question first, why would a business do it? >> yes, please. >> it turns out it's not really bad for business. it turns out that as ben says, you're never going to get 100% market share. not everybody is going to buy your product or service, and it is much more powerful to connect with your customers, with your employees over shared values of trying to make the world more humane, make the world a better place. if you lose some customers in the process, that's okay. what people want is businesses that are genuine, that are honest, you know, that have some
1:27 pm
edge. that's what everybody is talking about, right? you want to have edge? stand for something. tell the truth. make it part of your business. >> okay. thank you. question. >> hello. >> thank you so much. this talk took some unexpected turns that i really enjoyed. my question is brief. i'm curious about how being acquired by unilever shaped or reshaped the mission and the conflicts that came up. >> did everybody hear the question? how the acquisition shaped or reshaped the mission. so, it didn't reshape the mission at all. but what that mission is really based on is the people at ben & jerry's and the people within unilever wanting to carry out the mission.
1:28 pm
so the first several years after ben & jerry's was acquired, i think most of the energy from unilever's end went into integrating ben & jerry's into the unilever family and making it more like everything else, so the social mission was not really as active. and within the last five years, there's been a real resurgence of the social mission. the ceo at ben & jerry's is committed to the values of the company. the ceo of unilever has a real strong sustainability focus. so to a certain degree it matters less what's written on the mission statement, and it matters much more about people's commitment to making it happen. >> another one over here.
1:29 pm
>> hi. it was really nice hearing you talk. so a quick question, what is your favorite flavor of ben & jerry's ice cream? i'm just curious. >> vanilla ice cream with caramel swirl, fudge-covered pieces of waffle cones. for you guys that are interested, i think within the top five flavors, cherry garcia, half baked, which just moved into the top five. i think chunky monkey is in the top five. oh, and cookie dough is always up there. >> we have a few of those today. >> we do. for those of you who are curious, cookie dough, chocolate fudge brownie, cherry garcia -- the sooner we stop talking. >> so no pressure but we're going to make this the last question, so it will be the one we remember. >> hi.
1:30 pm
[ inaudible ]. >> i was wondering if you have a flavor you think embodies you or something you're really passionate about, like stands for a social mission you care about. >> so as i mentioned, ben & jerry's has a campaign about voter suppression and reauthorizing the voting rights act, which was recently overturned at the supreme court, so that flavor is called empower mint, which is a peppermint ice cream with fudge brownies and a chocolate swirl. can i tell one last -- i'll tell one story. i talked about ben & jerry's using its voice. this is around a product. and it was really part of the
1:31 pm
transition within ben & jerry's of becoming this company that was willing to use its voice and take on issues. so this is probably in the early '90s, ben & jerry's was coming out with a chocolate-covered ice cream bar on a stick. ben wanted to call it a peace pop. this was during the height of the cold war, which you've all read about in your history books. there was this big military buildup between the united states and the former soviet union. and ben wanted to call this a peace pop and wanted to use the copy on the packaging, instead of talking about what an indulgent delicious product this was, to talk about redirecting money out of the military budget and redirecting it to peace through understanding
1:32 pm
activities, so that people in the united states and soviet union would get to know each other. as they got to know each other, they would realize that they had the same interests, they wanted to take care of their families, and they would be more interested in each other and less interested in bombing the crap out of each other. so ben brought this to the company. it was this huge internal debate, right? we're going to call it peace pop. we're going to be talking about the military budget on the packaging. we're going to be talking about redirecting 1% of the military budget to this nonprofit called 1% for peace, which conveniently was going to be set up by ben, and he was on the board of directors. and you know, so people were saying, we're going to be seen
1:33 pm
as criticizing a government program. we're going to be seen as unpatriotic. people are going to boycott the product, whatever. and ben as the entrepreneur, as the founder -- all you guys out there, entrepreneurs, founders, are you ready for this? he shoved it down the company's throat. he said we're going to do this. and we did it. nothing bad happened. right? nobody boycotted the company. sales were fine. even people who didn't agree with the position respected that ben & jerry's was willing to take a stand on an issue that was not in its own financial self-interest, right? it was for the common good. and that was the big difference.
1:34 pm
business is always taking political stands. business is a very political animal. it's always lobbying to not raise the minimum wage. business is always lobbying to not have any more environmental regulation. business is always looking out for its financial self-interest. but when business takes a stand for the common good, people stand up and take notice. that's what is really different. that set ben & jerry's on the path to being a different kind of company. as ben said, soon thereafter the cold war ended so it was very successful. [ laughter ] >> well, i must say there are most days i have to say i love my job. today is one i especially love my job. i got to have this amazing
1:35 pm
conversation. on behalf of all of us that put this together, a little something for you. so thank you very much. >> look what you've got me -- this is where it's at, right? >> where do you find the ice cream? head down the hallway, ben & jerry's ice cream is waiting for you. >> i'll be hanging out, happy to chat. >> follow the transition of government on c-span as
1:36 pm
president-elect donald trump selects his cabinet and the republicans and democrats prepare for the next congress. we'll take you to key events as they happen without interruption. watch live on c-span. watch on demand at c-span.org or listen on our free c-span radio app. >> tonight we continue our look at american history tv programs normally seen weekends here on c-span 3. it begins with a discussion on the origins of the cold war. then u.s. democracy and international relations. and then the legacy of world war ii. american history tv prime time tonight at 8:00 eastern. >> new year's night on q&a. >> while people were starving, van buren was having fancy parties in the white house. it was part of the image making where harrison was the candidate, a poor man for poor people, and
1:37 pm
here was this rich man in washington sneering at the poor people. harrison had thousands of acres and estates, so he was actually a very wealthy man, but he was portrayed as champion of the poor. women came to the parades and waved handkerchiefs. some gave speeches, wrote pamphlets. it was very shocking. they were criticized by the democrats, who said these women should be home making pudding. >> ronald shafer, author of the book "the carnival campaign," on how the campaign changed presidential elections forever, sunday night on c-span's q&a. join us on tuesday for live coverage of opening day of the new congress. watch the swearing in of new and re-elected members of the senate and the election of the speaker of the house.