Earth Focus | LinkTV | March 4, 2023, 6:00am-6:31am PST
6:00 am
(soft chiming music) - [narrator] this is what the future of hyper intelligence looks like. most people no longer own cars. instead, artificial intelligence operates fully electric, networked self-driving vehicles. as a result, air pollution and traffic congestion plummet across the planet. self-navigating aerial drones are on the front lines for disaster response and search and rescue missions. most people live and work side by side with self-aware androids. these ai companions boost productivity and liberate humans from tedious tasks, completely revolutionizing modern life. - feel like i'm in a superhero movie.
6:01 am
today, scientists are blazing a trail to this very future. - the fact that we're enabling the system to make its own decisions, i don't even know where to begin with that. - [shivani] i want to know what breakthroughs are being made. - it's talking, and it's having this dynamic conversation with you. that's the wonder. - machines can be self-aware in ways that we can't. - [shivani] that will shape the future. oh my gosh, it's looking at me. hyper intelligence. (bright orchestrated music)
6:02 am
- my name is shivani bigler. as an engineer and neuroscientist in training, i'm obsessed with artificial intelligence. as a kid, my father took me to tech and robotics trade shows, where i became dazzled by science. every year, the inventions were smarter and smarter. artificial intelligence has come a very long way in the last couple of years. most ai technologies are programmed to think for themselves, to learn from examples, simulating human intelligence in the way it learns from past experience. but how does ai actually work? in the future, will ai achieve human traits like emotion, consciousness, or even free will? and how will humans and robots work together? (humming music) today, the clearest road to the future is the self-driving car. unlike a regular car, which is just a machine, a self-driving car is a robot that can make decisions.
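(a minimal sketch of what "learning from examples" looks like in code: a toy perceptron that adjusts its own weights from labeled examples instead of following hand-written rules. the data and feature names are hypothetical, not from any system in the program.)

```python
# a toy perceptron: no hand-coded driving rules -- it nudges its own
# weights whenever a labeled example proves it wrong.
# each example: [obstacle_distance, speed] -> 1 (brake) or 0 (don't)

examples = [
    ([0.9, 0.2], 0),  # obstacle far, moving slowly -> no brake
    ([0.1, 0.9], 1),  # obstacle near, moving fast  -> brake
    ([0.2, 0.7], 1),
    ([0.8, 0.4], 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

# "learning from past experience": repeat the examples, correct mistakes
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)           # 0 if right, +/-1 if wrong
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print(predict([0.15, 0.8]))  # near and fast -> 1 (brake), never hand-coded
```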
6:03 am
in the future, will every car on the road become driverless? to find out, i've come to a hotbed of self-driving car research: pittsburgh, pennsylvania. every single person has started to have conversations about self-driving cars, because essentially, they're the future. but in order to understand it, we have to look under the hood. making decisions on the fly, even simple ones like these, does not come easy for computers. to discover the inner workings, i'm meeting a true pioneer in the field. - please get in. - thank you. dr. raj rajkumar of carnegie mellon university. carnegie mellon is the birthplace of self-driving car technology, thanks in large part to the work of raj and his colleagues. they've been the leading innovators in this field for more than 25 years. so how does his self-driving car make decisions to safely navigate the world like a human driver? should we get started?
6:04 am
pretty eager. - yes, we can. - since raj is distracted by our conversation, for safety reasons, the state of pennsylvania requires another driver in the front seat to monitor the road. this is so cool. i'm nervous but excited. what's the longest you've ever driven a vehicle autonomously? - we have gone hundreds of miles. - [shivani] awesome. - i'm going to go auto by pressing this button. - oh my gosh (laughs). it really is driving itself. while most self-driving cars are built from the ground up, raj just bought a regular used car and hacked it with powerful onboard computer systems, making it more adaptable than other regular cars. - we've installed a bunch of sensors in there. it is able to shift the transmission gear. it is able to turn the steering wheel, apply the brake pedal and the gas pedal. there's software that runs on the computer that makes this capability really practical. and there are some very key,
6:05 am
fundamental artificial intelligence layers that try to mimic what we humans do. - [shivani] to mimic human decision making, most self-driving cars use a combination of cameras and advanced radar to see their surroundings. the ai software compares external objects to an internal 3d map of streets, signs, and transportation infrastructure. - [raj] a map is something that is static in nature; traffic, people, and objects are dynamic in nature. the dynamic information it figures out on the fly. - [shivani] comprehending dynamic information allows it to understand where it is heading in space and react to changes in traffic signals. ah-ha, it recognized the stop sign. - yes. we have a pedestrian. we definitely should not be running into that person. (shivani laughs) the ai challenge to make a vehicle drive itself is not an easy task. safety is priority number one, and priority number two and number three as well.
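(a minimal sketch of the static-versus-dynamic split raj describes: the map is known ahead of time, while traffic is figured out on the fly. the names and coordinates are hypothetical, not cmu's software.)

```python
# separate what's in the prebuilt map (static) from what must be
# tracked frame by frame (dynamic). positions are hypothetical (x, y) meters.

static_map = {
    "stop_signs": [(120.0, 45.0)],  # surveyed once, never changes mid-drive
}

def classify(position, tolerance=2.0):
    """a detection matching the map is static; anything else is dynamic."""
    for sx, sy in static_map["stop_signs"]:
        if abs(position[0] - sx) < tolerance and abs(position[1] - sy) < tolerance:
            return "static"   # already known -- no per-frame tracking needed
    return "dynamic"          # pedestrian, car, etc. -- re-estimate every frame

for detection in [(119.5, 44.8), (60.2, 1.1)]:
    print(detection, classify(detection))
# (119.5, 44.8) static   <- the mapped stop sign
# (60.2, 1.1) dynamic    <- something moving in the lane
```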
6:06 am
- [shivani] but what happens when the ai system doesn't understand specific objects in its surroundings? - [narrator] a pedestrian in tempe, arizona was killed last night by a self-driving taxi. it's believed to be the first fatality caused by an autonomous vehicle. - [shivani] a tragic accident happened because a self-driving vehicle didn't recognize something in its environment: a jaywalker. in the future, advanced self-driving cars will have to make life and death decisions on the fly. if avoiding the jaywalker means crashing head-on with another car, potentially killing the driver, what should it choose? how will scientists address monumental problems like these? the first wave of artificially intelligent robots were programmed by engineers with static sets of rules to achieve their goals. these rules are called algorithms, but not all rules work in all situations. this approach is very inflexible, requiring new programming to accomplish even the smallest changes in any given task. a new approach called machine learning
6:07 am
has changed everything. with machine learning, computers can absorb and use information from their interactions with the world to rewrite their own programming, becoming smarter on their own. to see machine learning in action, i'm meeting another carnegie mellon team at an abandoned coal mine. dr. matt travers leads a group that won a challenging subterranean navigation competition held by the department of defense's research agency, darpa. - they're affectionately known as r-1, r-2, and r stands for robot. (all laugh) - [shivani] these robot twins are designed for search and rescue missions too dangerous for humans. and unlike the self-driving car, they operate without a map. to achieve this, they have to learn to identify every single object they encounter on the fly. - they are programmed to go out and actually act fully autonomously, and they will be making 100% of their own decisions. so they're recognizing objects,
6:08 am
they're making the decision of where to go next, where to explore. - [shivani] to see this in action, the r-2 robot is starting on a simulated search and rescue mission to find a stranded human dummy in the mine. - imagine having a map of a collapsed mine before you sent a team of people in to rescue someone; it's a game changer. - [shivani] how the robot discerns elements in this environment parallels how an infant learns about her environment. a three-month-old uses her senses to cognitively map out her environment and learn to recognize her parents. she ultimately uses this map to interact with everything in her world, just like this robot. - okay, so we're ready to roll. (bright chiming music) - [shivani] artificial intelligence makes this learning curve possible, but how does it create its own map and identify a human on its own? and without an external mapping system, like the internet? test engineer steve willits shows me how the r-2 robot can detect a stranded person.
6:09 am
- [steve] when you're in a search and rescue scenario, that's the kinda situation where you'd want to deploy one of these. - [shivani] as it explores and maps the cave, it drops devices called signal relays to create a wifi network trail. - it drops those just like bread crumbs along the path. - [shivani] using this network, the robot sends data back to home base to create a map. at the same time, the robot must look at every single object to identify the stranded human. - so the lidar system is giving a full laser scan. - [shivani] lidar stands for light detection and ranging. similar to its cousin radar, which uses radio waves, lidar systems send out laser pulses of light and calculate the time it takes to hit a solid object and bounce back. this process creates a 3d representation of the objects in the environment, which the onboard computer can then identify.
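(a minimal sketch of the time-of-flight arithmetic behind lidar ranging; the pulse timings and angles below are hypothetical.)

```python
# lidar ranging: distance = speed of light * round-trip time / 2.
# the division by 2 is because the pulse travels out AND back.

import math

C = 299_792_458.0  # speed of light, m/s

def range_from_pulse(round_trip_s):
    return C * round_trip_s / 2.0

print(range_from_pulse(66.7e-9))  # a ~66.7 ns echo -> an object ~10 m away

# sweeping pulses across many angles turns echoes into a 3d point cloud
def point_from_pulse(round_trip_s, azimuth, elevation):
    d = range_from_pulse(round_trip_s)
    return (d * math.cos(elevation) * math.cos(azimuth),
            d * math.cos(elevation) * math.sin(azimuth),
            d * math.sin(elevation))
```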
6:10 am
this process is similar to how the eye feeds visual data to the brain, which then recognizes the objects by tapping into our preexisting knowledge of what things look like. by fully understanding its environment, r-2 can then make better decisions about where to go and where not to go. - [steve] what our robot's doing right now is exploring. so the robot came to a junction, and off to the left it could see that it couldn't get past. - [shivani] right. - [steve] so it saw the opening to the right, and that's where it went. (dramatic music) - it kind of looks like it's making decisions about whether or not to climb over these planks and obstacles all up in this area, right? - [steve] that's exactly what it's doing at this point. - just like a baby, r-2 learns through trial and error. it's like a little dog wagging its tail. but there's no one here to rescue, so it moves on.
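(a minimal sketch of the explore-next decision steve describes, on a toy 2d occupancy grid: free cells that border unseen space are "frontiers." this is a generic frontier search, not the cmu team's planner.)

```python
# frontier-based exploration: head for the nearest free cell that
# touches unknown space. 0 = free, 1 = wall, -1 = not yet seen.

from collections import deque

grid = [
    [ 0,  0,  1, -1],
    [ 1,  0,  1, -1],
    [-1,  0,  0,  0],
    [-1, -1,  1, -1],
]

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]):
            yield nr, nc

def nearest_frontier(start):
    """breadth-first search through free cells for one bordering unknowns."""
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if any(grid[nr][nc] == -1 for nr, nc in neighbors(r, c)):
            return (r, c)              # explore here next
        for nr, nc in neighbors(r, c):
            if grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return None                        # map complete -- move on

print(nearest_frontier((0, 0)))  # -> (2, 1), the free cell beside unseen space
```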
6:11 am
(bright dramatic music) as r-2 continues to map out the mine, oh my god, a human. it stumbles upon its intended target (laughs). - that is randy, rescue randy. - hello, rescue randy (laughs), you scared me. with the discovery of rescue randy, the r-2 robot can not only alert emergency personnel, but also give them a map of how to find him. that is incredible (laughs). it just knows what it's doing. these incredible rescue robots are clearly paving the path to the future of hyper intelligence. - [narrator] in the future, autonomous exploration vehicles perform search and rescue missions in every conceivable disaster zone, even in avalanches atop mount everest. incredibly, intelligent off-road vehicles are also mapping deep cave systems previously unknown to science, discovering a vast supply of rare earth elements essential
6:12 am
for modern technology. - [shivani] artificial intelligence will clearly save human lives in the future, but there's a lot of terrain on earth that's too difficult to navigate on wheels. how will intelligent robots make their way over rainforests, bodies of water, or even mountain tunnels? in philadelphia, jason derenick of exyn technologies is working to overcome this problem. - what we focus on is autonomous aerial robotics to enable drones to safely navigate in unknown or unexplored spaces. - [shivani] jason's team has built the first industrial drone that can fly itself anywhere. incredibly, these autonomous robots navigate without gps and map their environment as they go. - [jason] we focus on all aspects of autonomy, which includes perception, orientation during flight, motion planning, and then finally control. - [shivani] but going from two dimensions
6:13 am
to three dimensions requires an increase in artificial intelligence processes. the mission for their drone is to fly independently through a three-dimensional path from one end of the warehouse to the other. - starting mission: 3, 2, 1. (dramatic music) - [shivani] now, to mess with its computer mind, jason's team places new and unexpected obstacles in its path. will the drone recognize these unexpected changes? will it get lost? will it crash? - essentially, we have a gimbaled lidar system that allows the vehicle to paint a full 360 sphere around it in order to sense its environment. - [shivani] like the robot in the mine, this aerial robot uses lidar to see. - it actually generates a virtualized representation of the space, which you see here. and for each one of these cubes in the space, it's trying to determine whether that cube is occupied or whether it's free space. (drone engine humming)
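(a minimal sketch of the occupied-or-free cube bookkeeping jason describes: bucket each lidar return into a voxel, then let the planner fly only through empty ones. the points are hypothetical.)

```python
# voxel occupancy: cubes holding lidar returns are "occupied";
# everything else is treated as free space the drone may use.

VOXEL = 0.5  # cube edge length in meters

def voxel_of(point):
    """map a 3d point (x, y, z) to the integer index of its cube."""
    return tuple(int(c // VOXEL) for c in point)

occupied = set()
for p in [(2.1, 0.3, 1.0), (2.2, 0.4, 1.1), (5.0, -1.2, 0.2)]:  # lidar returns
    occupied.add(voxel_of(p))

def is_free(point):
    return voxel_of(point) not in occupied

print(is_free((2.2, 0.3, 1.2)))  # False: shares a cube with a return
print(is_free((0.0, 0.0, 1.0)))  # True: nothing sensed there
```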
6:14 am
- [shivani] the robot's onboard computer makes real-time decisions of where to go based on its visual input, kind of like a human does. (drone engine roars) incredibly, the drone recognizes the whiteboards and flies around them. (chiming music) - one of the things about this system that makes it particularly special is that it's actually being used in the real world to keep people out of harm's way. - [shivani] autonomous drones like these are already at work in hazardous industries like mining, construction, and oil exploration. they safely conduct inspections in dangerous locations and create detailed maps of rough terrain. - from a technological perspective, the fact that we're able to do everything that we're doing on board, self-contained, and enabling the system to make its own decisions, i don't even know where to begin with that. (dramatic music) - [shivani] self-flying robots like these will revolutionize search and rescue and disaster response.
6:15 am
they could also transform how packages are delivered, but there are limits to what large single drones can do. more complex tasks will require teams of small, nimble autonomous robots. dr. vijay kumar at the university of pennsylvania is working with swarms of drones to perform tasks cooperatively, like playing musical instruments. he's also developing technologies to tackle some very big problems, including world hunger. - in a couple of decades, we'll have over 9 billion people to feed on this planet. of course, that's a big challenge. - [shivani] to take on a task this big, he's building an army of small flying robots with the ability to synchronize. - we think about technologies that can be mounted on small flying robots that can then be directed in different ways, like a flock of birds reacting to a predator
6:16 am
or a school of fish. you have coordination, collaboration, and it all happens very organically. - [shivani] using ai to get robots to work as a coordinated collective group is a daunting task. - three to five years ago, most of our robots relied on gps-like sensors. today, we have the equivalent of smartphones embedded in our robots, and they sense how fast they're going by just looking at the world, integrating that with the inertial measurement unit information, and then getting an estimate of where they are in the world and how fast they're traveling. - [shivani] this i gotta see, and i'm gonna check it out virtually, as a robot. (upbeat music) i'm at upenn, remotely, in vijay kumar's lab. i can sample my surroundings. oh, i hit something. hello? - hi. - [shivani] vijay's apprentice dinesh thakur is my guide.
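(a minimal sketch of the sensing vijay describes: blending a drift-prone velocity integrated from the imu with a slower vision-based estimate. the gain and readings are hypothetical; real systems use far more sophisticated filters.)

```python
# complementary filter: the imu is fast but drifts; vision is slow but
# anchored to the world. blend them for a usable velocity estimate.

ALPHA = 0.98  # trust imu integration short-term, vision long-term

def fuse(v_prev, accel, dt, v_vision):
    v_imu = v_prev + accel * dt                     # integrate acceleration
    return ALPHA * v_imu + (1 - ALPHA) * v_vision   # vision corrects drift

v = 0.0
for accel, v_vision in [(0.5, 0.01), (0.5, 0.02), (0.0, 0.03)]:
    v = fuse(v, accel, dt=0.02, v_vision=v_vision)
    print(round(v, 4))
```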
6:17 am
- today, we are gonna show robots flying in a formation. - [shivani] great, can we see how that works? - sure, yeah. - [shivani] the first step dinesh takes in coordinating the drones is to provide them with a common point of reference, in this case a visual tag, similar to a basic qr code. using only the onboard camera, these drones reference the code on the tag and visualize where they are in space. using sophisticated bio-inspired algorithms, the drones can figure out where each other drone is within the collective swarm. these drones are communicating with one another, right? - [dinesh] yeah, right now they're communicating over wifi. - [shivani] so cool (giggles). future versions of these drones will create their own localized wireless network to communicate. but for now, this swarm is a proof of concept. you've defined a formation, and then they're assuming that formation? - yeah, i just say i want to form a line, and the drones themselves figure out where they should go.
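(a minimal sketch of what "form a line" can mean computationally: pick slots on the line, then match drones to slots so total travel is smallest. a brute-force toy, not the penn lab's algorithm.)

```python
# forming a line: assign each drone a slot, minimizing total flight distance.
# positions are hypothetical (x, y) meters.

import itertools, math

drones = {"d1": (0.0, 2.0), "d2": (3.0, 0.5), "d3": (1.5, 3.0)}
slots = [(0.0, 1.0), (1.5, 1.0), (3.0, 1.0)]  # a line at y = 1.0

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# try every assignment -- fine for a handful of drones; real swarms use
# the hungarian algorithm or a distributed auction instead
best = min(
    itertools.permutations(slots),
    key=lambda perm: sum(dist(p, s) for p, s in zip(drones.values(), perm)),
)
for name, slot in zip(drones, best):
    print(name, "-> fly to", slot)
```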
6:18 am
- [shivani] once they figure out where they are in relation to each other, they can then work together to accomplish a shared goal, like ants working as a collective entity. - once they can coordinate between each other, we can send them out to do specific missions. - [shivani] that's really cool (laughs). swarms of flying robots have their advantages. unlike a single drone, self-coordinating swarms can perform complex operations like mapping much faster by working in parallel and combining their data, and losing one drone in a swarm doesn't doom the whole operation. vijay imagines employing his advanced swarm technology to work on farms. this precision agriculture will help feed the world's growing population. - we'd like robots to be able to roam farms and be able to provide precise information about individual plants that then could be used to increase the efficiency of food production. that would be a huge impact on the world. this is our duty as responsible citizens
6:19 am
and as responsible engineers. - [shivani] this high-flying approach toward resolving the problems of the future is definitely a path i can get on. (whooshing music) - [narrator] in the future, artificial intelligence coordinates flocks of drones to protect the environment and boost the food supply. to counter the negative effects of climate change on agricultural crops, robotic bees assist with pollination in orchards and on farms, making them more sustainable and productive. fish-shaped underwater robots automatically deploy at the first sign of an oil spill. these drones create a barricade to rapidly contain spills, saving marine life and oceans across the world. - [shivani] modern society has a long history of building robots to do work that's dangerous, difficult, or too repetitive for humans. ai is poised to automate all kinds of human work, ranging from factory work, to taxi driving,
6:20 am
to customer service. while some are worried that smart robots will replace human labor, that's not necessarily the case. as a sector, artificial intelligence is expected to generate 58 million new types of jobs in just a few years. so what will the future of human-robot interaction mean for our work and livelihoods? i'm at the massachusetts institute of technology to meet dr. julie shah. she's leading groundbreaking research in human-robot collaboration. - my lab works to develop robots that are effective teammates with people. - julie and her team are creating software that helps robots learn from humans, even giving them insight into different human behaviors. by being aware of real people, robots can directly work and interact with them. how do you teach these robots or machines to do these human-like tasks? - the first step, as it would be for any person: the first thing they do is become immersed in the environment and observe,
6:21 am
and then we need an active learning process. the robot needs to be able to communicate, or show back to the person, what it's learned. we don't want the robot to learn the direct sequence of actions; we want the robot to learn this more general understanding. that's ultimately our challenge. - [shivani] but getting a robot to grasp the bigger-picture concept in order to understand the basics of its task in the first place requires a lot of observation and, well, handholding. - my research is focused on trying to make robot programming easier by teaching robots how to do tasks by demonstrating them. - julie's colleague, ankit shah, shows me how this robot is learning to set a table. so this is all the silverware and the plates, the bowls, the cups, and this is the table that it has to set. - yes, that is good. - okay. as any parent knows, the first step in helping a child to learn is to model the desired behavior. it's the same with machine learning. in this case, the ai robot recognizes the objects
6:22 am
with a visual tag similar to a qr code. and for two weeks, it observes ankit setting a table. so did you pick up an item and then place it on the dinner table? - that's basically what we did. and based on that, the robot learns what it means to set a dinner table. - [shivani] dynamic tasks like setting a table or doing laundry are easy for humans, but incredibly hard for robots. the software has difficulty with so many variables, and even subtle changes in the environment can throw it off. - one of the things which i like to do is to actually hide some of the objects. so it's not going to see the spoon. and the reason we do this is we want to show that the robot is robust to some of the disturbances in the task. - [shivani] the robot software has learned what each object is and where it goes. now, let's see if it's learned the concept and can think dynamically to set the table. - [ankit] you can just pick up the card. - here we go. i've revealed the spoon.
6:23 am
incredibly, the robot recognizes the spoon and instantly places it next to the bowl. this reveals that the robot has learned the concept and executes the right action dynamically. in the process, the software is continuously writing and revising its own computer code. basically, it's learning. (humming music) if, like humans, robots can grasp the bigger-picture context and not just the mathematical tasks, will ai-driven robots of the future spell the end of having to work? - the key aspect is not developing ai to replace or supplant part of the human work; we're really interested in how we fit them together as puzzle pieces. people work in teams to build cars, to build planes, and robots need to be an effective team member. - it's real teamwork, as if you're in a basketball game. you have your goal, right? and (chuckles) you have to think spatially: who am i gonna pass the ball to? and at what time do you do that, so that everything matches up.
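(a minimal sketch of the difference ankit demonstrates: the robot stores the learned goal, what a set table looks like, rather than a fixed action script, so a spoon that appears late still gets placed. object names are hypothetical stand-ins for what the demonstrations taught.)

```python
# goal, not script: the requirements hold regardless of action order,
# so hidden objects are handled whenever they become visible.

goal = {"plate": "table center", "bowl": "on plate", "spoon": "right of bowl"}

def next_action(visible, placed):
    """place any visible, not-yet-placed object; ignore hidden ones."""
    for obj in visible:
        if obj in goal and obj not in placed:
            return ("place", obj, goal[obj])
    return None  # nothing actionable right now

placed = set()
visible = {"plate", "bowl"}              # the spoon is hidden under a card
while (act := next_action(visible, placed)):
    placed.add(act[1])
    print(act)

visible.add("spoon")                     # the card is lifted mid-task
print(next_action(visible, placed))      # ('place', 'spoon', 'right of bowl')
```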
6:24 am
- the analogy of a basketball team is outstanding, because we actually need to know spatially where they're going to be, and the timing is of critical importance. and so we need to develop the ai for a robot to then work with us. (chiming music) - [shivani] one of the most difficult aspects of creating hyper intelligence is actually something that even we humans sometimes get wrong, and that is anticipation. anticipating what a teammate or coworker might do requires understanding contextual information on a much more sophisticated level and predicting what will happen next. can robots make predictions as accurately as we can? - abbey's our industrial robot. - [shivani] pem lasota is giving this abbey machine the intelligence necessary to help it anticipate a human coworker's actions. - this is our simulated manufacturing task that we have set up to simulate some sort of task that a person and robot could feasibly work on together.
6:25 am
- [shivani] for safety reasons, actual human-robot interaction is, at present, fairly minimal. - [pem] typically, in a factory, you would set these guys behind a metal cage, and you wouldn't have people working with them. so what we're trying to do is make something that a person could safely interact with. - what are the human and robot supposed to do together in this task? - in this task, a person's placing fasteners in some surface of a plane, and a robot's applying a sealant over it to seal it. - okay, can we see it happen? - [pem] sure. - in order to work together, the robot must first be able to see and recognize the actions of its human counterpart and adjust to the person's every move. ooh, feel like i'm in a superhero movie. so the cameras in the room can see these lights and track your hand, so that your hand doesn't get cut off by the robot. - that's right, yeah. so the cameras and the lights basically work as eyes for the robot. so that's how the robot knows where i am. - [shivani] the monitor shows the visual representation of the room that's inside the robot's mind.
6:26 am
- so this is what the robot might be doing if, you know, i'm not in its way, and the robot's just sealing, and i'm not supposed to be here. - [shivani] pem does something the robot has no way of expecting. - [pem] if i put my hands in the robot's way-- - oh wow. by quickly understanding this human action, the ai software reacts accordingly by stopping. - it's important to be able to share the workspace. - [shivani] building on this sense of teamwork, pem's next step is helping abbey anticipate where he will move next, based on subtle contextual cues. - so in this case, the robot will not only track which actions i've done so far, but also anticipate which portion of the space i'm going to be using. and when it's planning its own motions, it'll avoid those locations so that we can more seamlessly work together-- - together, oh, okay. - so what you'll see now is, after i place this bolt, the robot's gonna predict i'm gonna go to this one next. so what you'll see is it'll behave in a different way. so now that i place this bolt, the robot kinda takes a more roundabout path that allows me and the robot to work more closely together.
6:27 am
and i don't have to kind of worry about it crashing into me, because i can see that it's trying to avoid me. so similarly, on the other side, i place this bolt. i see the robot takes a more kinda roundabout path. - [shivani] yeah, because you're gonna go there next. - [pem] (indistinct) it gets slowed down, 'cause it's close to me. - [shivani] right. - we work together at the same time. so not only is the interaction more efficient, in that the robot's not spending too much time standing still, it's safer, because the robot's not constantly kinda almost hitting me, and it also feels nicer for the person working with the robot. - i really love this seamless teamwork. programming robots to coordinate with us and anticipate where we will move won't only revolutionize the workplace, but it will also change society at large. (chiming music)
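(a minimal sketch of the anticipation pem describes: infer which region the person needs next from their task sequence, and penalize robot paths through it so the planner prefers a roundabout route. a toy model, not mit's system.)

```python
# anticipatory planning: the person's known task order implies where
# they'll be next; robot paths through that region cost extra.

task_order = ["bolt_A", "bolt_B", "bolt_C"]   # hypothetical fastening sequence
region_of = {"bolt_A": "left", "bolt_B": "middle", "bolt_C": "right"}

def predicted_region(done):
    """expect the person at the first unfinished task's region."""
    for task in task_order:
        if task not in done:
            return region_of[task]
    return None

def path_cost(path, avoid):
    # base cost = length; each crossing of the person's region adds a penalty
    return len(path) + sum(10 for region in path if region == avoid)

done = {"bolt_A"}                              # the person just placed bolt a
avoid = predicted_region(done)                 # -> "middle"
direct = ["left", "middle", "right"]
roundabout = ["left", "back", "back", "right"]
print(min([direct, roundabout], key=lambda p: path_cost(p, avoid)))
# -> the roundabout path: slightly longer, but clear of the person
```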
6:28 am
- [narrator] in the future, the coordination of man and machine is so advanced that this collaboration increases productivity and accuracy in most industries. ai robots now accompany surgeons in hospitals across the globe. they anticipate the doctor's needs and hand them the appropriate medical tool just before it's needed. this dramatically reduces surgery times and human error. (dramatic music) - [shivani] as machines become smarter in their interactions with humans, will they ever develop consciousness? and will artificial intelligence actually surpass human intelligence? while some machines have exceeded human ability in games like trivia, - now we come to watson-- - [shivani] and chess, these ai systems were designed to master just a single skill. these programs use brute-force computer processing power and specially tailored software to beat their human opponents. to achieve the holy grail of hyper intelligence, scientists must develop systems with flexible, human-like abilities to both learn and think. this form of smarts is called artificial general intelligence.
6:29 am
i'm back in new york city, on my own campus at columbia university, to meet with dr. hod lipson. hod's lab is developing creative robots that paint original artworks, self-assembling robots, and even robots that learn about their world without human assistance. but his ultimate goal is even more ambitious. - can a machine think about itself? can it have free will? i believe that, in fact, machines can be sentient, can be self-aware in ways that we can't. - [shivani] as a neuroscientist, i know we've only scratched the surface of our scientific understanding of how consciousness works in humans. how could one possibly use computer code to put this transcendent feature into a robot? - our hypothesis is actually very simple. it is that self-awareness is nothing but the ability to simulate one's self, to model one's self, to have a self-image.
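(a minimal sketch of "simulating one's self": a toy one-joint robot that doesn't know its own arm length "flails" at random, observes where its hand lands, fits a self-model, and then plans with it. illustrative only, not hod lipson's actual method.)

```python
# self-modeling by motor babbling: random actions + observed outcomes
# -> a learned self-image the robot can plan with before acting.

import math, random

TRUE_LENGTH = 0.73  # the robot's real geometry -- unknown to the robot

def observe_hand(angle):
    """what the robot's camera reports after it moves to a joint angle."""
    return (TRUE_LENGTH * math.cos(angle), TRUE_LENGTH * math.sin(angle))

# babble: 50 random joint angles, record where the hand ended up
samples = [(a, observe_hand(a))
           for a in (random.uniform(-math.pi, math.pi) for _ in range(50))]

# fit the self-model: estimate arm length from its own observations
est_length = sum(math.hypot(x, y) for _, (x, y) in samples) / len(samples)

# the self-image answers questions without moving: can i reach this target?
target = (0.0, 0.7)
angle = math.atan2(target[1], target[0])
reachable = abs(math.hypot(*target) - est_length) < 0.05
print(f"learned length ~{est_length:.2f} m, angle {angle:.2f} rad, reachable={reachable}")
```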
6:30 am
- [shivani] the first step towards creating robotic consciousness is to teach the software to build an image of its physical, mechanical self inside its computer mind. we humans take consciousness like this for granted, even in simple moments, like understanding our own image reflected in a mirror. humans start to develop awareness of their own emotions and thoughts around the age of one. this helps babies understand their self-image in their minds, and it helps them learn about their environment and their role in it. - when a robot learns what it is, it can use that self-image to plan new tasks. - [shivani] in both humans and robots, awareness of the physical self is called proprioception. neuroscientists sometimes call this self-awareness of our bodies a sixth sense. - we use the same test that a baby does in its crib. a baby moves around, flails around, moves its arm in ways that look random to us,