Witness, LINKTV, March 13, 2013, 4:30pm-5:00pm PDT

4:30 pm
[ blows ] >> boxcars! what are the odds i'll roll a pair of sixes again? [ blows ] six. not this time. how can we make sense out of the seemingly random results of
4:31 pm
throwing a pair of dice or even the haphazard flow of heavy traffic in the city? how can we talk meaningfully about any situation that is unpredictable or has an uncertain outcome? well, welcome to the mathematics of probability. >> boxcars! my point, my roll. >> you're right, mr. jerry. you're one lucky guy. >> no, no, no, my friend. really luck has nothing to do with this. these dice, they love me. >> okay, sure, whatever. you're up three to one. >> yes, and as we agreed, the first one to four points wins it all. which means, odds are, i'm about to win. >> right, right. >> hard eight. so roll again. [ knocking on door ] >> open up in there! >> the police? i thought you said this game was protected. >> hey, hey, hey. what are you doing there? >> i'm taking my half. i'm out of here. >> what? no, wait a second here, i was winning, i was winning this! >> game's over. [ knocking on door ] >> i said, open up!
4:32 pm
>> talk about a situation with an uncertain outcome. ancient people believed that randomness and chance were the work of the gods, and in some cultures, they cast dice to answer questions of inheritance. in others, they actually chose their rulers in this random fashion. but despite the fact that chance has been a part of life from the beginning of time, it wasn't until the 1600s that the idea of chance was seriously considered in mathematical circles. there it becomes the subject of probability. the study of probability was born on the gaming tables and, in particular, with games of dice. dice go back thousands of years. now, these early dice were made from the bones of animals, often the knucklebones of sheep, from which we get that phrase "roll 'dem bones." the knucklebones approximate a mathematical shape that's called a tetrahedron, a four-sided solid object, so that these early dice were often four-sided as opposed to the six-sided cubes that we usually see today.
4:33 pm
of course, animal bones are not perfectly regular objects like perfect tetrahedrons or the perfect cube, so you might have thought that the egyptians, who were fairly sophisticated both scientifically and mathematically, would begin to notice that generally, some sides came up more than others and that you might be able to quantify this and also win a few more bets. but instead, it was centuries later, in the consideration of games involving "fair dice," that is, dice where it's equally likely that each side will come up, that probability first appeared. now, one of the first to record mathematical ideas about probability was gerolamo cardano. cardano was a 16th-century italian doctor, an eccentric, and a genius mathematician, famous for his skills in solving algebraic equations. cardano was also a compulsive gambler. the combination of a keen mathematical mind and a taste for the gaming tables made probability a natural interest. and his book, liber de ludo aleae, contains many of the basic ideas
4:34 pm
of probability. almost a century later, blaise pascal and pierre de fermat expanded on cardano's thinking, and they came up with a method to actually calculate probabilities. fermat is probably best known for the famous "fermat's last theorem," a simple-to-state problem about a generalization of the pythagorean theorem that took the world of mathematics 350 years to solve. his day job was as a lawyer, but his passion was mathematics and physics. pascal was the son of a tax collector, and to help his father, he invented the first digital calculator. his deep interests were in both philosophy and mathematics. together, fermat and pascal were widely regarded as two of the most powerful mathematical intellects of their time. although they never met, in 1654 they exchanged a series of letters that most academics today agree was the foundation for the modern theory of probability.
4:35 pm
their correspondence started after a friend, the gambler chevalier de mere, called upon pascal to get help with a nagging problem. de mere asked, "how would two gamblers fairly split the pot if their game was interrupted before being played to the finish?" this became the famous problem of points. until fermat and pascal took it on, the various solutions that had been proposed had been pretty unsatisfactory. in their correspondence, fermat and pascal addressed the problem by considering all the possible future plays of the game. by assuming them all to be equally likely, we can compute how much each person deserves when the game is interrupted, like for jerry and aldo. [ knocking on door ] >> i said, open up! >> go ahead, let him in. ah, ray! awful nice of you to stop by. >> hello, jerry. how come you didn't answer the door? >> uh... >> i heard you had a game going.
4:36 pm
i wasn't invited? >> well, um, we were, but -- hey, you know that aldo guy? >> yeah. >> well, i was on a roll, and he took half the pot and he left! >> really? >> yeah. >> well...he can't have gotten far. so what happened, exactly? >> well, it was me and aldo, a private game, we each laid down $500, and the first one to four points wins it all. okay, so a seven gets you a point. snake eyes, boxcars, minus a point. i was up three to one. >> and? >> and i had just rolled a hard eight, was just about to roll the come-out, and he thought you were the cops, he grabbed half the pot, and he was gone. >> it seems to me -- other than not inviting me to your little game -- that what you have here is a simple problem of points.
4:37 pm
>> that's what i was saying. >> not the points you have, the points you could have. you only needed one point to win; aldo needed three. now, if we examine all the possibilities, had the game been played out -- pardon me, sir. now, follow along. three being the number of rounds it could take for either of you guys to win the whole pot, assuming each of you has a 50-50 chance of winning or losing each roll. >> i got it. each of these branches is one way the rounds could go. if i win the first, it's over. aldo has to win the first, the second, and the third rounds. >> and that means you have seven ways to win; aldo only has one. which means, to be fair and just, you are due 7/8ths of the pot, and aldo's due 1/8th. >> i knew it! the cheater! >> hey, mr. ray. nice evening.
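A quick sketch of Ray's tree-chart argument, not part of the program: enumerate the eight equally likely ways the remaining rounds could go and count how many of them Jerry wins. The 50-50 odds per round and the 3-1 score come from the dialogue; the code itself is only an illustration.

```python
from itertools import product

# Jerry leads 3 to 1 in a race to 4 points, so he needs 1 more point and
# Aldo needs 3.  Ray's tree chart treats each remaining round as a fair
# 50-50 toss-up; at most 3 more rounds can decide the game.
jerry_needs, aldo_needs = 1, 3
rounds_left = jerry_needs + aldo_needs - 1

def winner(rounds):
    """Play out one possible future and report who clinches first."""
    j = a = 0
    for r in rounds:
        if r == "J":
            j += 1
        else:
            a += 1
        if j == jerry_needs:
            return "J"
        if a == aldo_needs:
            return "A"

futures = list(product("JA", repeat=rounds_left))   # 8 equally likely branches
jerry_share = sum(winner(f) == "J" for f in futures) / len(futures)
print(jerry_share)   # 0.875 -> Jerry is due 7/8 of the pot, Aldo 1/8
```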
4:38 pm
>> aldo, you came back to see me. you, my friend, have a problem of points. >> a problem with points? >> wonder what the odds are aldo's going to come out of this with any money at all? let's go ask raissa d'souza, professor at the university of california at davis. not yet. hi, raissa. thanks for coming. i hope we can clear up the confusion that we saw with aldo and jerry. i have a feeling jerry's getting the short end of the stick in this deal, is that right? >> definitely. and to understand how to clear up the confusion, we're going to have to go back in time, back to the 17th century. >> so fermat and pascal were trading these letters back and forth, effectively trying to understand how to fairly divide the pot in a game of chance that had been interrupted. so aldo and jerry are in the same situation, i guess. so now we know that jerry has three points, aldo has one, and there's a 50-50 chance at every
4:39 pm
roll now that either one of them will win, is that right? >> exactly. so now we have to go forward and iterate all possible outcomes. so what we're going to use is going back to the pizza box and look at the tree graph, which lets us calculate all the possible outcomes of a game. >> so the entire future is on ray's pizza box over there, is that right? >> exactly. so jerry wins in seven possible futures and aldo only wins in one possible future. so to split the pot fairly... >> right, 7/8ths for jerry seems right, and 1/8th for aldo. >> exactly. >> a problem with points? >> that's what i said. >> yeah, you cheated. i knew it, ray proved it. >> but, mr. ray, i was trying to get away. the police were -- >> that was us, dummy. >> my apologies. i was just trying to be fair and split the money in half. >> fair, ha! you owe me 7/8ths of the pot. i was winning, see? >> what's this? >> this is a tree chart. >> okay.
4:40 pm
>> it's based on the great mathematicians pascal and fermat. you've heard of them, right? they worked in the field of probability. now, had you continued to play, you would've had only one chance in eight of winning the whole pot. >> one out of eight? but that's not fair! >> hey, hey, hey, you got a problem with my math? >> absolutely not. >> good. because we're going to continue to play until we get a nice, normal distribution. ante up. come on, boys. and listen, the traffic's so bad, i might as well play a little bit myself. that's a lot of lettuce. >> well, so there's probability in action. of course, the casinos are interested in what happens when many, many people play these games -- maybe the slots, maybe something else -- over and over and over again. and they, of course, want to know that in the long term, they're going to win money and we're going to lose money, actually. >> exactly. so when we looked at the game between jerry and aldo, that was
4:41 pm
one single game. and that was the work of fermat and pascal. but we're interested in what happens when games are played over and over. >> the first person to think about this was jacob bernoulli, and he was the first one to really think about the sort of long-term behavior, is that right? >> that's correct. and that's sort of related to something called the law of large numbers. and if we have an event that we repeat over and over, a random event, and we do it independently many, many times, we find that it's going to eventually converge to a finite number that's between zero and one, and we call that the probability of observing the event. so the law of large numbers basically tells us that if we do a random event, a random outcome, over and over, it'll eventually converge to a well-defined probability. >> so we could take a very simple example, right? we could just do coin tossing. so i toss a coin, and if i get heads, i get a dollar. if i get tails, you get a dollar. so if we did this a million
4:42 pm
times -- a million is a pretty big number -- we would expect that after a million times i'd get about half of them i'd win, half of them you'd win, so that we should be even. >> exactly. and if you think about it, the first time we play the game, there will only be one winner and one loser, so there will only be one outcome. so we can't use our probability equation, because the probability is, one, that it's heads or tails, but we don't converge to the 50-50 until many, many coin flips. >> so if you look at the game and you look at all the possible outcomes, and you look at the probability of each possible outcome with its associated payoff, and you take that simple sum of the outcome times the fraction of times that you achieve that outcome, then you get this quantity called "the mean." and the amazing fact that bernoulli proved is that if you actually play the game a large number of times, you know, in the limit, an infinite number of times, and then you compute your average payout for a very large number -- >> over that collection.
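A minimal simulation of the dollar-a-toss wager just described, assuming a fair coin: heads pays the first player a dollar, tails costs him a dollar. The theoretical mean payoff per toss (payoff times probability, summed over outcomes) is zero, and by the law of large numbers the running average over a million tosses lands close to it. The million-flip count comes from the dialogue; the rest is illustrative.

```python
import random

random.seed(0)                      # fixed seed so the illustration is repeatable

flips = 1_000_000                   # "a million is a pretty big number"
total = 0
for _ in range(flips):
    total += 1 if random.random() < 0.5 else -1   # heads: win $1, tails: lose $1

theoretical_mean = 0.5 * 1 + 0.5 * (-1)   # sum of payoff times probability
empirical_mean = total / flips            # real payout averaged over the collection

print(theoretical_mean)   # 0.0
print(empirical_mean)     # typically within a few thousandths of 0.0
```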
4:43 pm
so what you were saying is that if we look at the tree graph and we multiply the probability of an outcome times its payoff and we look at all outcomes, we'll compute the mean payoff expected. and what bernoulli showed is that we can -- that we have the theoretical mean from the tree graph, and what bernoulli showed is that if we look at a collection of games, so i'm actually going to play out a series of games and i'm going to calculate the average payoff over that collection of games that i've just played. >> so just look at how much money did i win after 1,000 games and divide that by 1,000. >> and what he showed was that as the number of games becomes infinite, that average that i've calculated over that collection of games that i've just played -- so it's my real payout -- actually converges to the theoretical mean that i calculated from the tree graph. >> gets closer and closer and closer to that ideal number. >> so that's the law of large numbers. >> so probability theory was born on the gaming tables
4:44 pm
hundreds of years ago. but, in fact, it's still important on the gaming tables today. and now we're going to go visit with anthony baerlocher of international game technology to see how probability makes its way in the real world of today's casinos. [ slot machine playing electronic music ] >> the fun part about my job is that i actually get to use the math that i really truly enjoy doing and i also get to be creative. i'm anthony baerlocher. i work for international game technology. i do product creation and design games, and more importantly, the mathematics behind the games that make them fun for the players and profitable for the casinos. there's a long history of the slot machine. it started in the early 1900s in san francisco with mechanical reeled machines that were designed using levers and gears and springs.
4:45 pm
those machines lasted for many decades until electricity and some technology started to be introduced into the games in the 1960s. that technology progressed into more full electronic machines, which the big revolution was the introduction of the game processor that would allow for complete control of the game, instead of through mechanical means, by a computer program. with that, the games changed dramatically because we could now vary the odds, whereas on an old mechanical reel, the number of symbols to create these different combinations was fixed. now we can use what we call "virtual mapping" and have a random number generator create different odds of symbols coming up, which allowed us to take the number of combinations into the millions and offer a wider variety of pays. we use bernoulli's theory in a few different ways. a lot of it is in determining --
4:46 pm
creating our formulas to do the calculations of the game. it's very helpful, especially with a series of events coming together. we look at it and we'll determine what we expect the return to be for a series of games. it'd be very difficult to go through and in a rote manner figure out every possible combination. we can use some of the equations available to us to simplify that and determine what an expected payout would be over a given set of rules over a time frame. the slot machines are based off of a normal distribution of outcomes which is associated with the random probabilities created. although anything can happen on any given game, we can expect over a long run the payout percentage will progress towards its expected return, or its mean. this is a wheel of fortune
4:47 pm
game, which is one of our most popular games and has been for over ten years, since the first versions of it were created. we have a couple different dynamics we play with, and that really makes it more of an art than just a pure science of calculations. on the base game, again, each symbol has a probability that we define that the game processor controls. it picks random numbers through a random sequence once i spin the reels and will determine which one of those symbols it wants to come up based on the probability we assign it. so...in this case, i was very close to getting 7-7-7, which would be a good combination and would have paid 120 coins. but unfortunately, the symbol next to the 7 was selected on the first and second reel, and i only got the 7 on the third reel. on any given game, i have no idea what's going to come up. it's completely randomly determined each time.
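A toy version of the "virtual mapping" idea Baerlocher describes, not IGT's actual code: a random number generator picks each reel's symbol in proportion to designer-assigned weights, and the long-run payout percentage drifts toward the expected return computed from those weights. The symbols, weights, and paytable below are invented for the sketch; only the 120-coin 7-7-7 pay is taken from the interview.

```python
import random

random.seed(3)

# Hypothetical virtual mapping: the RNG picks a reel stop in proportion to
# the weight the designer assigns, not the physical count of symbols.
reel_weights = {"7": 2, "bar": 8, "blank": 22}      # invented weights
paytable = {("7", "7", "7"): 120,                   # 120 coins, per the interview
            ("bar", "bar", "bar"): 10}              # invented pay

symbols = list(reel_weights)
weights = list(reel_weights.values())

def spin():
    """One spin: each of the three reels is drawn independently by weight."""
    return tuple(random.choices(symbols, weights=weights, k=3))

# Expected payout per one-coin spin: probability of each paying combination
# times its pay, summed -- the same "probability times payoff" sum as before.
total_w = sum(weights)
prob = {s: w / total_w for s, w in reel_weights.items()}
expected = sum(prob[a] * prob[b] * prob[c] * pay
               for (a, b, c), pay in paytable.items())

# Long run: the average payout over many spins approaches the expected return.
spins = 200_000
average = sum(paytable.get(spin(), 0) for _ in range(spins)) / spins
print(expected, average)
```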
4:48 pm
but over the long run, we know on average how often blank-blank-7 will come up based off the probability of blank on reel one, blank on reel two, and 7 on reel three. the longer the player plays, or the more they play, on average they probably will end up losing more. we're always looking for new, creative concepts and ways to create gambling events so that the player has the chance to win some real good money, have fun, and be entertained. >> dan, you found a galton box. >> i did find a galton box. and the fact is that this is going to give us a very interesting physical model of what happens in a very simplified casino. because what we've been studying up to this point is just one measure of uncertainty, one measure of long-term behavior, that is, the mean, or the expected value, something that we got out of the law of large numbers. but a casino, as you know, is much more interested in not just
4:49 pm
one number but the variety of possibilities. they want to know that there's not too much fluctuation around that average behavior, because they don't want to go broke in finite time, for example. >> exactly. so we might think about a simplified casino: you flip a coin, and it's either heads or tails, with 50-50 probability. >> so this is our casino in the sense that when it hits the peg, if it moves to the right, we can think of that as being a head, let's say, and the casino wins on a head. and if it moves to the left, then i win, that's a tail. and so what we're seeing at the bottom is really a summary of the number of moves to the right and to the left that tells you how many of those bets the casino won and how many of those bets i won. and so we've got our galton board here. and this is -- each drop of a ball here -- why don't you drop one there. >> sure. so it ends in the center. and if i start dropping several of them, i still haven't seen anything go too far from the center. and if you wanted to ever
4:50 pm
observe something hit the right-most column, it would be the equivalent of hitting heads every time it hit a peg. so we'd have to get a sequence of about 30 heads in a row in this case to make it go all the way to the right, or only tails to make it go all the way to the left. so this is basically a physical illustration of how likely it is to get 30 heads in a row or 30 tails in a row. >> uh-huh, right, right. and so now we could play this game, and bernoulli probably played this game with a -- you know, a casino where we do many, many, many, many coin tosses, and so the number of pegs, the number of levels of pegs goes to infinity. and ultimately what you get out of there is a very specific shape. >> and what it is, is a bell curve, also known as a gaussian distribution. >> so the point is that these many, many, many decisions, again, in the limit all give us a distribution of possible outcomes that look like the bell. >> exactly. so we started out with total randomness, and now when we step
4:51 pm
back, we have a probability distribution -- >> right, which has a very simple description. >> so we started with total disorder, and now we have a well-defined shape that characterizes what the randomness will converge to. so we started with randomness, and now we have a beautiful finite description of it. so that's an example of the central limit theorem, that we started out with random events, and after a collection of many such random events, we find that it converges to a well-defined gaussian probability distribution. >> right, so this beautiful bell curve shape. and there are various bell curves, and each one of them will characterize what are so-called repeated identical and independent events. >> and it's not just for that simple example where we looked at heads and tails, 50-50, it's for many other kinds of probability distributions that they eventually converge to this bell curve.
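A small simulation of the Galton box experiment just discussed: each ball takes 30 independent left-or-right bounces, and the counts of final positions pile up into the bell shape the central limit theorem predicts. The 30 levels of pegs come from the dialogue; the number of balls and the crude text histogram are arbitrary choices for the illustration.

```python
import random
from collections import Counter

random.seed(1)

levels = 30            # pegs each ball hits, as on the board discussed above
balls = 10_000

# Each peg nudges the ball right (+1, a "head" for the casino) or left (-1)
# with equal probability; the final bin is the sum of those 30 nudges.
bins = Counter(sum(random.choice((-1, 1)) for _ in range(levels))
               for _ in range(balls))

# Text histogram: the counts bulge in the middle and die off toward the
# edges -- the gaussian bell curve.  Reaching the far right bin would take
# 30 heads in a row, which essentially never happens.
for position in range(-levels, levels + 1, 2):
    print(f"{position:+4d} {'#' * (bins[position] // 25)}")
```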
4:52 pm
>> as much as we think the casinos and games of chance are places where probability is really important, it's things like traffic where the frontiers of probability research lie these days. >> exactly. it brings us to some modern approaches. and there's a really simple model of traffic invented by three physicists -- biham, middleton, levine -- called the bml model. and it's a really simple probabilistic model that shows beautiful self-organization and phase transitions and is really connected to car traffic. so think about having an infinite grid, and we're going to have a simple model. there's only two kinds of cars: you either want to go to the east and you're red, or you want to go to the north and you're blue. and we're going to let them alternate time steps, so on even seconds, all the red cars will try and move east, and they succeed as long as the site they want to occupy is empty. and on odd time steps, all the blue cars will try and advance in the same manner. so it's almost like a model of gridlock, where you have a traffic light at each site, and
4:53 pm
the red cars go and then the blue, and the red and the blue. and what's really amazing is if we start with low density of cars, we're just going to populate the grid at random. >> so just at every site where a car could be, we either put a car or we don't. >> so we'll flip a coin with some probability, "p." and if p is really low, we find that all the cars manage to get out of each other's way. and they all move with unit velocity, so they move -- every time they try to advance, they succeed. so they self-organize onto these stripes that don't interact with each other anymore. >> okay, so if the probability of putting a car at every place is small, then we actually get these interesting patterns in movement, is that right? >> exactly. and you can see everybody getting to their destination with no congestion whatsoever. >> uh-huh, and that's something that we can prove mathematically? >> no, we can't prove anything about this model. it's really a puzzle. and what's interesting is that if we start at very high
4:54 pm
densities, we don't get that kind of behavior at all. we find that everybody gets clumped into one big traffic jam and no one can ever move again. >> so this is a phenomenon begging for a new kind of mathematics, is that right? >> it is. and it's an example of a phase transition where we had free flow for low density and jam for high density. and it's really a function of what probability we threw the cars down with at random. at low density, free flow; at high density, jam, so that's the phase, free flow or jam. but there's a whole intermediate regime where we're finding tremendously interesting organized structures that organize from the cars themselves. >> like moving jams somehow. >> exactly. interfaces of jams that move throughout the space. so what's interesting is that we can actually achieve flow of traffic in a regime where people previously thought everything jammed. >> oh, because you get clumps, but they're still moving. >> exactly.
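A bare-bones sketch of the Biham-Middleton-Levine (BML) model Raissa describes, run on a small wrap-around grid instead of an infinite one: every site independently gets a car with probability p, eastbound cars try to move on even steps and northbound cars on odd steps, and a car advances only if the site ahead was empty at the start of the step. Grid size, density, and step count are arbitrary choices for the illustration.

```python
import random

random.seed(2)
N = 32        # periodic (wrap-around) grid standing in for the infinite lattice
p = 0.3       # seeding density: try a low value for free flow, a high one for jams

# each occupied site holds "E" (eastbound, red) or "N" (northbound, blue),
# chosen 50-50 when the site is seeded.
grid = {(x, y): random.choice("EN")
        for x in range(N) for y in range(N) if random.random() < p}

def step(kind, dx, dy):
    """All cars of one colour try to advance; each moves only if its target
    site was empty at the start of the step."""
    moves = {(x, y): ((x + dx) % N, (y + dy) % N)
             for (x, y), k in grid.items()
             if k == kind and ((x + dx) % N, (y + dy) % N) not in grid}
    for src, dst in moves.items():
        grid[dst] = grid.pop(src)

for t in range(400):
    if t % 2 == 0:
        step("E", 1, 0)    # even steps: red cars try to move east
    else:
        step("N", 0, 1)    # odd steps: blue cars try to move north

# Fraction of cars with an empty site ahead of them: near 1 means free flow,
# near 0 means a global jam -- the two phases mentioned above.
free = sum(1 for (x, y), k in grid.items()
           if (((x + 1) % N, y) if k == "E" else (x, (y + 1) % N)) not in grid)
print(free / max(len(grid), 1))
```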
4:55 pm
>> so you're still getting movement, not gridlock. >> so it's really -- one of the challenges to probability theory is trying to understand how random initial conditions -- and you saw how everything else was deterministic, so the only randomness here -- >> deterministic in the sense of once you know where you start, then sort of what happens to you is fixed: if there's a space, i move, if there isn't a space, i don't move. >> so the dynamics is totally deterministic, and the only randomness came into how i flipped a coin to populate the lattice. and those kinds of problems are really major challenges to us right now in probability theory. >> well, raissa, thanks for taking us from the past of probability to the future of probability. and now why don't we see how things play out for ray and the gang. >> seven! >> traffic's clear, boss. >> well, gentlemen, i think i'll call it a night. good game, huh? >> yeah, ray, pleasure having you by. >> you're a real lucky player, mr. ray. >> well, i know the odds.
4:56 pm
>> it's an odd thing, thinking about how to quantify uncertainty. none of us likes to think we're reducible to a single number or a single ball in a machine. but probability says, in fact, our actions aren't predictable, not on the individual level. but it also says we can make a mathematical model of uncertain behavior, attach formulas to it that can be processed on a computer, put numbers in, and come out with an understanding of a real-world event that we usually think of as random. today probability is a living, breathing, evolving field, not just a tool for game players. from mathematicians to stock analysts to scientists in all sorts of disciplines, they all work with probability, helping us make sense of the uncertainties in life. boxcars, finally! captions by lns captioning portland, oregon www.lnscaptioning.com
4:57 pm
4:58 pm
>> for information about this and other annenberg media programs, call... and visit us at...
4:59 pm
