Artificial Intelligence CSPAN June 27, 2018 8:09am-10:04am EDT
9:00 am
how would you define most economically valuable work? >> so i think that, again -- first of all, the question of agi is something that the whole field has been working towards really since the beginning of the field, 50 years ago. so the question of how to define it i think is something that is not entirely agreed upon. our definition is this: when we think of it, we think about things like starting companies or very high intellectual work like that, and also things like going and cleaning up waste sites, or things that humans would be unable to do very well today. >> okay. i noticed that in the report congressman lipinski referred to, you call them
9:01 am
silicon valley upstarts. at least you didn't call them young upstarts, so that's an advantage. thank you for doing that. but you're literally looking at a new industry. even though the shift -- bless you. even though the shift is going to be changing, you're actually creating jobs for another industry. and going back to dr. li's example with her mom in the icu, talking about how much the nurses do -- how do you train for those jobs if it's moving as fast as you think it is? >> yeah. so one thing i think is also very important is that i don't think we have much ability to change the timeline of this technology. there are a lot of stakeholders, a lot of different pieces of the ecosystem. and what we do is step back and look at the trends and say what's going to be possible when. and so the question of how to train -- again, we're not the only ones that are going to have to help answer that question. but i think the place to start really comes back to measurement. if we don't know what's coming, if we can't project well, then
9:02 am
we're going to be taken by surprise. and so i think there are going to be lots of jobs, and there already have been jobs created, that are surprising. think about autonomous vehicles: we need to label all this data, we need to make sure these systems are doing what we expect. there are going to be humans that are going to help make these systems. >> we would all agree, i hope -- and this question is for all three witnesses -- that the jobs they're going to create are well worth the transformation into all of that technology. dr. persons, would you agree with that? >> i would agree with that. let me give a quick example, if i may. speaking with a former secretary of transportation recently -- just a simple example of toll booth collectors. with e-z pass, you drive through, and you have less of a work force there. that could have had an impact at the time, for a short period, on the number or loss of jobs for toll booth collectors, yet it freed them up
9:03 am
and enabled them to do other things that are needed. >> you were shaking your head. you agree with that statement. >> absolutely. i think the purpose of technology is improving people's lives. >> dr. li, i see you shaking your head too. >> yeah, absolutely. in addition to the example mr. persons provided, i think deeply about the jobs that are currently dangerous and harmful for humans, from fighting fires to search and rescue to natural disaster recovery. not only should we not put humans in harm's way if we can avoid it, but also we don't have enough help in these situations. and this is where technology should be of tremendous help. >> very quickly, i'm out of time. just yes or no: if we lose dominance in ai, that puts us in a really bad spot in worldwide competitiveness, would you agree? >> yes. >> yes. >> yeah. >> thank you.
9:04 am
>> yes. >> madam chair, i yield back. >> thank you. good question. now i recognize mr. veasey for five minutes. >> thank you, madam chair. we have already heard from your testimony some of the advantages of ai and how it can help humankind, how it can help advance us as a nation. but as you know, there are also people that are worried about ai. there have been a lot of doomsday-like comparisons about ai and what the future of ai can actually mean. to what extent do you think this sort of worst-case scenario that a lot of people have pointed out about ai is actually something we should be concerned about? and if there is a legitimate concern, what can we do to help establish a more ethical, responsible way to develop ai? and this is for anybody on the
9:05 am
panel to answer. >> so i think thinking about artificial general intelligence today is a little bit like thinking about the internet in maybe the late '50s. if someone were to describe what the internet would be, how it would affect the world, and that you'd have this thing called uber, you would be very confused. it would be very hard to understand what that would look like. or the fact that, oh, we forgot to put security in there, and we would be paying for that with 30 years' worth of trying to fix things. now imagine that whole story, which played out over the course of the past 60, almost 70 years now, playing out on a much more compressed time scale. that's the perspective i have when it comes to artificial general intelligence: it can cause this rapid change, and it's already hard for us to cope with the changes the technology brings. so is it going to be malicious actors? is it going to be that the technology itself just wasn't built in a safe way? or is it just that the deployment -- who owns it, and the values it's given -- isn't something that we're all very happy with? all of those i think are real risks.
9:06 am
and, again, that's something we want to start thinking about today. >> thank you, sir. so i agree. i think the key thing is being clear about what the risks actually are, and not necessarily being driven by the entertaining and yet science-fiction-type narrative sometimes on these things, projecting or going to extremes and assuming far more than where we actually are in the technology. so there are risks; it's understanding the risks as they are. and they're always contextual risks. risks in automated vehicles are going to be different than risks in this technology in financial services, let's say. so it's really working, again, symbiotically with the community of practice and identifying: what are the things there? what are the opportunities? and there are going to be opportunities. and what undesirable things do we want to focus on, and then optimize from there on how to deal with them. thank you. >> mr. brockman, in your testimony, you referenced a report outlining some malicious actors in this area.
9:07 am
could you sort of elaborate on some of your findings? >> that's right. so as a collaborator on this research report, we were projecting not necessarily what people are doing today, but looking forward: what are some of the malicious activities that people could use ai for? and so that report -- let's see. i think maybe the most important things here, you start thinking about things around information and privacy. the question of how we actually ensure these systems do what the operator intends, despite potential hacking. think about autonomous systems that are taking action on behalf of humans being subverted -- again, this report focuses on active attacks. you think about autonomous vehicles, and if a human hacker can go and take control of a fleet of those, some of the bad things that could happen. and so i think that this report
9:08 am
should really be viewed as: we need to be thinking about these things today, before they are a problem. because a lot of these systems are going to be deployed at large scale, and if you're able to subvert them, then all of the problems we have seen to date are going to start having a very different flavor, where it's not just privacy any more; it's also systems deployed in the real world that are actually able to affect people's well-being. >> thank you. madam chair, i yield back. >> thank you. i now recognize -- let's see, mr. rohrabacher. >> thank you very much, madam chairman. this, as in all advances in technology, can be seen as the great hope for making things better, or the new idea that there might be new dangers involved, and/or the new technologies will help certain people but be very damaging to others. and i think that where that fear
9:09 am
would be most recognizable is in terms of employment, and how in a free society people earn a living. and are we talking here about the development of technology that will help get the tedious and menial or lower-skilled jobs done -- the jobs that can be done, you know, by machine? or are we talking about a loss of employment to machines that are designed to perform better than human beings perform in high-level jobs? what are we talking about here? >> okay. so i am still going to use health care as an example, because i'm familiar with that area of research. so if you look at recent studies
9:10 am
by mckinsey and other institutions on employment and ai, there is a recognition that we need to talk with a little more nuance about not just an entire job, but the tasks under each job. the technology has the potential to change the nature of different tasks. again, take the job of a nurse as an example. no matter how rapidly we develop this technology, it's hard to imagine that the entire profession of nursing would be replaced. yet within the nursing job, there are many opportunities for certain tasks to be assisted by technology. for example, a simple one that costs a lot of time and effort in nursing jobs is charting. our nurses in our icu rooms or
9:11 am
patient rooms spend a lot of time typing and charting into a system, into a computer, while that's time away from patients and other more critical care. so those are the kinds of tasks, under a bigger job description, that we can hope to use technology to assist and augment. >> so are we talking about robots here, or a box that thinks and is able to make decisions for us? >> so ai technology is a technology of many different aspects. it's not just robots. in this particular case, for example, natural language understanding and speech recognition, possibly in the form of a voice assistant, would help charting. but maybe delivering simple tools on the factory floor will be in the form of a small, simple delivery robot. so there are different forms
9:12 am
of -- >> i see. >> -- of machines.
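to make the charting scenario concrete: dictation-to-text is the piece a voice assistant would automate. below is a minimal python sketch, assuming the open-source speechrecognition package and a hypothetical audio file name; it illustrates the idea and is not a description of any production clinical system.

```python
import speech_recognition as sr  # open-source SpeechRecognition package

def transcribe_note(wav_path):
    """Transcribe a dictated nursing note so it can go into a chart.

    The Google Web Speech backend used here is just one of several
    engines the package supports; a real clinical deployment would use
    a medical-vocabulary model and handle privacy, errors, and live
    audio capture rather than a file.
    """
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the whole file
    return recognizer.recognize_google(audio)

# hypothetical usage: append the dictation to a patient chart entry
if __name__ == "__main__":
    note = transcribe_note("vitals_dictation.wav")  # hypothetical file name
    print("chart entry:", note)
```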
>> there are many dangerous jobs that i could see where we would prefer not having human life put at risk in order to accomplish the goal. for example, at nuclear power plants, it would be a wondrous thing to have a robotic response to something that could cause great damage to the overall community but would kill somebody if they actually went in to try to solve the problem. i understand that. and also possibly with communicable diseases, where people need to be treated, but you're putting people at great risk for doing that. however, with that said, when people are seeking profit in a free and open society, i would hate to think that we're putting out of work people with lower
9:13 am
skills. we need the dignity of work and of earning your own way. we know now that when you take that away, it really has a major negative impact on people's lives. so i want to thank you all for giving us a better understanding of what we're facing on this. and let's hope that we can develop this technology in a way that helps the widest variety of people, and not just perhaps a small group that keeps their jobs and keeps the money. so thank you very much. >> thank you, and i now recognize ms. bonamici for five minutes. >> we have the best scientists and researchers and engineers in the world. but without stronger investments in research and development, especially long-term foundational research, we risk falling behind, especially in this important area.
9:14 am
i hope the research continues to acknowledge the socioeconomic aspects of integrating ai technologies as well. in my home state, at the university of oregon, we have the urbanism next center. they're doing some great work, bringing together interdisciplinary perspectives, including planning and architecture and engineering and urban design and public administration, with the public, private and academic sectors, to discuss how leveraging technology will shape the future of our communities. the research has been looking at emerging technologies like autonomous vehicles, and the implications for equity, health, the economy, the environment and governance. dr. persons, can you discuss the value of establishing this type of partnership between industry, academia and the private sector to help especially identify and address some of the consequences, intended and unintended, of ai as it becomes more prevalent? and i do have a couple more questions. >> sure. i'll answer quickly.
9:15 am
the short answer is yes. from our experts, what we're seeing is the value in public/private partnerships, because, again, it would be a mistake to look at this technology in isolated stovepipes; it needs to be an integrated approach to things. the federal government has its various roles, but -- you mentioned the university of oregon -- on the economic and research side there are many things to research and questions to answer. and then, of course, industry, which has an incredible amount of innovation and thinking and power to drive things forward. >> terrific. thank you. dr. li, i have a couple questions. you discussed the labor disruption -- i know that's been brought up a couple of times -- and the need for retraining. we really have sort of a dual skills-gap issue here, because we want to make sure there are enough people who have the education needed for the ai industry, but we are also talking about the workers, like you mentioned the workers in toll booths, who will be displaced.
9:16 am
but what knowledge and skills are the most important for a work force capable of addressing the opportunities and the barriers to the development? i serve on the education and workforce committee, and this is a really important issue. how do we educate people to be prepared for such rapid changes? >> so ai is fundamentally a scientific and engineering discipline. and as an educator, i really believe in more investment in s.t.e.m. education from an early age on. in our experience with ai4all, when we invited high school students at the age of 14, 15, 16 to participate in ai research, their capabilities and potential just amazed me. we have high school students who have worked in my lab and won best paper awards at this country's best ai academic conferences. and so i believe passionately
9:17 am
that s.t.e.m. education is critical for the future, for preparing for ai. >> thank you. and as everyone on this committee knows, i always talk about steam, because i'm a believer in educating both parts of the brain. dr. li, in your testimony you talk about how ai engineers need to work with neuroscientists and cognitive scientists to develop a more human-centered ai. i know dr. carbonell is not here today, but he wrote that ai is the ability to create machines who perform tasks normally associated with human intelligence. i'm sure that was an intentional choice to humanize the machine -- he's not here to explain, but i have no doubt it was intentional. dr. li, in your testimony, you also talk about the laws that codify ethics. how is this going to be done? can you go into more depth about
9:18 am
how would these laws be done? who would determine what is ethical? would it be a combination of industry and government determining standards? how are we going to set the stage for an ethical development of ai? >> yeah. so thank you for the question. i think for a technology as impactful as ai is to human society, it's critical that we have ethical guidelines, and different institutions, from government to academia to industry, will have to participate in this dialogue, together and also by themselves. >> are they already doing that, though? you said they'll have to. but is somebody convening all of this? >> so there are efforts. in industry in silicon valley, we're seeing companies starting to roll out ai ethical principles and responsible ai practices. in academia,
9:19 am
we see ethicists and social scientists coming together with technologists, holding seminars, symposiums and classes to discuss the ethical impact of ai. and hopefully government will participate in this and support investing in these kinds of efforts. >> thank you. i see my time has expired. madam chair, i yield back. oh, mr. chairman, thank you. >> i thank the gentlelady. the gentlelady from arizona is recognized for five minutes. >> thank you, mr. chair. and i want to thank the witnesses today. very interesting subject, and something that kind of spurs the imagination about science fiction shows and those types of things. i do have a question on what countries are the major players in ai, and where the u.s. ranks in competition with them. and that's to any panelist or
9:20 am
all panelists. >> so today i think the u.s. actually ranks possibly at the top of the list. i think there are lots of other countries investing very heavily. china is investing heavily. lots of countries in europe are investing heavily -- one leading lab is a subsidiary of a u.s. company located in london. and i think it's very clear that ai is going to be something with global impacts. and i think the more we can understand what's happening everywhere, and figure out how we can coordinate on safety and ethics in particular, the better it's going to go. >> yes. thank you for the question. i think wherever there are large amounts of computing, there will be large amounts of data and a strong desire to innovate and continue to develop in this sort of fourth industrial revolution that we're moving into. that drives toward certainly china, and then our allies and colleagues in western europe and the developed world. thank you.
9:21 am
>> and is there -- did you want to answer? >> sorry, if i could just add: the most important thing to continue to lead in the field is really the talent. right now we're doing a great job at bringing the talent in. at openai, we have a mix of national backgrounds and origins. as long as we can keep that up, we'll be in very good shape. >> thank you. and mr. chair, i have one more question. and that is, what steps -- i think this has been asked in different ways before, but how are we guarding against espionage from -- let's say, you said china is involved in this. and that's basically my question: espionage, hacking, those types of things. what guidelines are currently in place, and who is preventing this? is it the private companies themselves? is government involved? >> one thing that's very
9:22 am
atypical about this field, because it really grew out of a small number of academic labs, is that the overarching ethos in the field is actually to publish. and so all of the core research and development is actually being shared pretty widely. and so i think that as we're starting to build these more powerful systems -- and this is one of the parts of our charter -- we need to start thinking about safety, and about things that should not be shared. this is a new muscle that's being built. it's right now kind of up to each company, and i think that's something that we're all starting to develop. but a dialogue around what's okay to share, and what things are kind of too powerful and should be kept private -- that dialogue is just starting now. >> and certainly ip -- intellectual property protection -- is a critical issue. i think of one former director of the national security agency, who at the time spoke of an unprecedented theft of u.s. intellectual
9:23 am
property. it's the blessing and curse of the internet: the blessing is it's open, and the curse is it's open. and ai, i think, is going to be in that category in terms of what's being done in cybersecurity. it is something our experts pointed out and said is an issue. as this committee well knows, it's easier said than done -- who has jurisdiction in the u.s. federal system over, particularly, a private company and the protection of that? the role of the federal government versus the company itself? in an era where, as i think mr. brockman pointed out, data are the new oil, yet we want to be open at the same time so that we can innovate. so managing that dialectical tension is going to be an issue, and there is no easy answer. >> thank you. mr. chair, i yield back. >> the chair recognizes ms. esty for five minutes. >> thank you, mr. chair, and i want to thank the witnesses for
9:24 am
this extremely informative and important conversation that we're having here today. i hail from the state of connecticut, where we see a lot of innovation at uconn, yale, lots of spinoffs on the narrow ai question. but i think for us, really, the issue is more about that general ai. and mr. brockman, your discussion of the advances, which make us look puny in comparison, is really where i want to take this conversation. and dr. li, your discussion of diversity, which i think is incredibly important. we saw what happened with lehman brothers by not being diverse. i am extremely concerned about what the implications are for teaching -- if it's garbage in, it's going to be garbage out. if it's a narrow set of parameters and thought patterns and life experiences that go into ai, we will get very narrow results out. so first i want to just talk --
9:25 am
get your thoughts on that. and the second is on this broader ethical question. we've faced this before -- when i was a young lawyer working on bioethical issues, the hastings center got created to look at those issues. this committee has been grappling with crispr and its implications, and i think ai is very similar in having many implications requiring ethical input. so if you can opine on both of those questions -- recognizing we've got two, three minutes left -- both the ethical question, whether we need centers to really bring in ethicists as well as technologists, and then the importance of diversity on the technology side, so that we get the full range of human experience represented as we make this exciting new entry into the fourth revolution. thanks. >> yes. in fact -- thank you for asking that question. just now, when someone was using the term doomsday scenario, to
9:26 am
me, i think if we wake up 20 years from now, whatever the year, and we see this lack of diversity in our technology and its leaders and practitioners, that would be my doomsday scenario. so it's so important and critical to have diversity, for the following three reasons, like you mentioned. one is the sheer number of jobs that we're talking about. this is a technology that has the potential to create jobs and improve quality of life, and we need all talents to participate in that. second is innovation and creativity, like you mentioned in connecticut and other places. we need that kind of broad talent to add to the force of ai development. and the third is really justice and moral values: if we do not have this wide representation of humanity in the development of this technology, we could have
9:27 am
face recognition algorithms that are more accurate in recognizing white male faces. we could have dangers of biased algorithms making unfair loan application decisions. there are many potential pitfalls of a technology that's biased and not diverse enough. which brings us to this conversation, this dialogue, of ethics and ethical ai. you're right: previous disciplines, like nuclear physics, like biology, have shown us the importance of this. i don't know if there is a single recipe, but i think centers, institutions, boards and government committees are all potential ways to create openness and this dialogue.
9:28 am
and we're starting to see that. but i think you're totally right. these are critical issues. >> so i agree completely with my fellow witness. diversity is crucial to success here. we have a program called openai scholars, where we brought people from a number of underrepresented backgrounds into the field, and they're working on projects and spinning up. one thing we found that i think is very encouraging is that it's actually very easy to take people who do not have any ai or machine learning background and make them into extremely productive, first-class researchers and engineers very quickly. and that's one benefit of this technology being so new and nascent: in some ways we're all discovering as we go along, too. so for becoming an expert, there isn't that high a bar. so for everyone putting effort in the places where the expertise is, i think it's on them to make sure they're also bringing in the rest of the world. on the ethical front, that's really core to my organization.
9:29 am
that's the reason we exist. we do think that, for example, when it comes to the benefits of this technology -- who owns it, where the dollars go -- it belongs to everyone. so one of the reasons i'm here is because i think this shouldn't be a decision made just in silicon valley. i don't think the question of the ethics and how this is going to work should be solely in the hands of people like me. i think it's really important to have a dialogue, and, again, i hope that will be one of the outcomes of this hearing. >> thank you very much. >> the chair now recognizes mr. mcnerney. >> i thank the chair and the ranking member for holding this hearing. and i think the witnesses have some really interesting testimony, diverse in its own right. one of the things i think is important here, with this committee, is how the government reacts to ai. do we need to create a specific agency? does that agency report to congress or to the administration? those sorts of things i think
9:30 am
are very important. mr. brockman, you said, i think, that one of the most important things is that we need a measure of ai progress. do you have a model or some description of what that would look like? >> yes, i do. thank you for the question. so first of all, i don't think we need to create new agencies for this. i think that existing agencies are well set up for this, and i was actually very encouraged here that people are talking about giving them remit to think about these problems. again, gao and diux are already starting to work on this. for example, diux had a satellite imagery data set and hosted a competition. the kind of thing we think would be great for government to do, as well, is to help standardize environments where academics and the private sector can test robotic approaches, setting up competitions toward specific problems that various agencies and departments want solved. all of those i think could be done without any new agency, and that's something where you can both get benefits
9:31 am
directly to the relevant agencies, also understand the field, and also start to build ties between the private sector and public sector.
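for context on standardized environments: mr. brockman's organization maintains one widely used example, the openai gym toolkit. below is a minimal python sketch of its interface, assuming the classic gym api in which step() returns a 4-tuple, with a random policy standing in as a placeholder agent:

```python
import gym  # openai's standardized environment toolkit

# cartpole is a classic control benchmark; every registered environment
# (including robotics tasks) exposes this same reset/step interface.
env = gym.make("CartPole-v1")

obs = env.reset()
total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()  # random policy as a placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward

env.close()
print("episode reward:", total_reward)
```

because every gym environment exposes the same reset/step interface, the same agent code can be benchmarked across many tasks, which is what makes standardized competitions and apples-to-apples comparisons possible.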
>> i'm one of the founders of the grid innovation caucus. what are the most likely areas where we'll see positive benefits to the grid, to electric stability and resiliency? who would be the best? mr. persons? >> sure. thank you for the question. gao has done a good deal of work on this issue, on protection of the electrical grid in the cybersecurity dimension. so in one of the scenarios or profiles we did in this report, what our experts and folks were saying -- and, again, based on the leadership of this committee and the importance of cyber -- is that cyber is the "without which nothing": ai is going to be a part of cyber moving forward. and so protection of the grid in the cyber dimension is
9:32 am
there. also i think as the chairman mentioned earlier, the word optimization. so how we optimize things and how algorithms might be able to compute and find optimums faster and better than humans is an opportunity for grid management and production. >> so ai is also going to be used as a cyber weapon against infrastructures or potentially used as a weapon, is that right? >> there are concerns now when you look at a broad definition of ai and you look at bots now that are attacking networks and doing scripted -- what are ddos or distributed denial of service attacks and things like that, that exists now. you could, unfortunately, in the black hat assumption, you're going to assume that as ai becomes more sophisticated in the white hat sense, so too, unfortunately, the black hat side of things, the bad guys, are going to also become more sophisticated. and so that's going to be the cat and mouse game moving forward. >> another question for you, dr.
9:33 am
persons. in your testimony, you mentioned that there's considerable uncertainty in the jobs impact of ai. what would you do to improve that situation? >> our experts were encouraging specific data collected on this. again, we have important federal agencies like bls, the bureau of labor statistics, that work on these issues -- what's going on in the labor market, for example. and it may just be an update to what we collect, what questions we ask as a government, how we provide that data, which is, of course, very important to our understanding of unemployment metrics and so on. so there are economists that have thoughts about this; we had some input on that. there's no easy answer at this time, but there is an existing agency doing that sort of thing. the key question is, how could we ask more or better questions on this particular issue of
9:34 am
artificial systems? >> thank you. dr. li, you gave three conditions for progress in ai being positive. do you see any general, wide acceptance of those conditions? how can we spread the word so that the industry is aware of them, and the government is aware of them, and they follow those sorts of guidelines? >> thank you for asking. yes, i would love to spread the word. so i do see the emergence of efforts on all three conditions. the first one is about a more interdisciplinary approach to ai, ranging from universities to industry. we see the recognition of neuroscience and cognitive science cross-pollinating with ai research. i want to add: we are all very excited by this technology, but as a scientist, i'm very humbled
9:35 am
by how nascent the science is. it's only a science 60 years old, compared to the traditional classic sciences that are making human lives better every day: physics, chemistry, biology. there is a long, long way to go for ai to realize its full potential to help people. so that recognition really is important, and we need to get more research, and cross-disciplinary research, into that. second is augmenting humans -- again, a lot of academic research and startup efforts are looking at assistive technology, from disability to, you know, helping humans. and the third, which many of us focus on today, is the social impact: from studying it, to having a dialogue, to working together through different industry and government agencies. so all three are elements of the human-centered ai approach, and i
9:36 am
see that happening more and more. >> thank you. i yield back. >> the chair now recognizes the gentleman from new york. >> nope. >> the chair now recognizes the gentleman that's not from new york, mr. palmer. >> thank you, mr. chairman. i'd like to know if ai can help people who are geography-challenged. [ laughter ] >> the gentleman's time has expired -- [ laughter ] >> i request that that question and response be removed from the record. i do have some questions. in my district, we have the national computer forensics institute, which deals with cyber crime. and what i wonder about, with the emergence and evolution of ai, is what are we putting in place? because of the potential that creates for committing crime and
9:37 am
for solving crime. dr. persons, do you have any thoughts on that? >> well, certainly. thank you for the question. one of the areas we did look at in general was criminal justice. so, the risks that are there in terms of the social risks -- making sure the scales are balanced exactly as they ought to be, that justice is blind, and so on -- were the focus of that. however, in terms of criminal forensics, ai could be a tool that helps us out in a retrospective sense, figuring out what happened. it's helping the forensic analysts, who would know what things look like; and the algorithms, in the machine-learning sense of things, would need to learn what the risks might be going forward, so that you perhaps could identify things more proactively, and perhaps at near or real time. so that's the opportunity for this. again, ai is a tool, and cyber was a key theme we heard moving forward. >> any thoughts on that?
9:38 am
>> so today we're already starting to see some of the security problems with the methods that we're creating. for example, there's a new class of attack called adversarial examples, where researchers are able to craft, like, a physical patch that you could print out and put on any object, and it will make a computer vision system think it's whatever object you want it to be. so you could put that on a stop sign and confuse a self-driving car, for example. these sorts of ways of subverting these powerful systems are something we're going to have to solve and work on, just like we've been working on computer security for more conventional systems.
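a note on the attack class mentioned here: the printed patches are a physically robust cousin of digital adversarial examples. the sketch below shows the simplest digital variant, the fast gradient sign method (fgsm), assuming tensorflow with eager execution and an already trained keras classifier; real patch attacks additionally optimize over scale, rotation and lighting.

```python
import tensorflow as tf

def fgsm_example(model, image, label, epsilon=0.01):
    """Nudge every pixel in the direction that most increases the
    classifier's loss, yielding an image that looks unchanged to a
    human but is misclassified by the model.

    image: batch of pixels scaled to [0, 1], shape (1, h, w, c)
    label: the true class index, shape (1,)
    """
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image)
        loss = tf.keras.losses.sparse_categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)           # d(loss)/d(pixels)
    adversarial = image + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)  # keep pixels valid
```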
and i think the way to think about it is: if you could successfully build and deploy an agi, what would that look like? it's like the internet, in terms of being deeply integrated in people's lives, but also having this increasing amount of autonomy and representation, taking action on people's behalf. and so you'll have this question of how you make sure -- first of all, that's something that could be great for security, if these systems
9:39 am
are well built, have safety at their core, and are very hard to subvert. but if it's possible for people to hack them, or to cause them to do things that are not aligned with the values of the operator, then i think you can start having very large-scale disruption. >> it also concerns me, in the context -- it was announced a couple weeks ago that the united states plans to form a space corps. we know that china has been aggressive in militarizing space. i wonder if you have any thoughts on how artificial intelligence will be used in regard to space. communications systems are highly vulnerable already; i think there's some additional vulnerability there that would be created. any thoughts on that? any one of the three panelists? >> yes, sir. so in terms of the risks in
9:40 am
space, obviously one of the key concerns for ai is weaponization, which i think is part of that, in the space domain as much as any other. and so i know our defense department has key leadership thinking on this, and working strategically on how we operate in an environment where we have to assume an adversary might not operate in the ethical framework that we do, and how to defeat that. but there is no simple answer at this time, other than that our defense department is thinking about it and working on it. >> and he's not here, obviously, to testify, but in dr. carbonell's testimony, he made a statement that we need to produce more ai researchers. and i think that kind of plays into that issue of how we deal with ai in space.
9:41 am
that's one of the reasons why i've been pushing for a college program, like an rotc program, to recruit people into the space corps in these areas -- start identifying students when they're maybe even in junior high, and scholarship them through college to get them into these positions. any thoughts on that? >> i'll just answer quickly and say, as dr. li has elegantly pointed out before, this is an interdisciplinary thing. i think there is going to be a need for the s.t.e.m. specialists that particularly focus on this, but any particular vocation is going to be impacted one way or another. imagine rewinding a few decades -- i'll date myself -- to the advent of the personal computer, the pc coming in, and how that affected things. today, walk into any vocation and someone is using a pc and it's not unusual. but at the time, you had to learn how to augment yourself or your task with it. and i think that's --
9:42 am
>> if i may, mr. chairman, just add this final thought. we've had to deal with some major hacks -- federal government systems that are hacked. and what we're faced with is competing with the private sector for the best and brightest in terms of cybersecurity. we're going to find ourselves in the same situation with ai experts, the truly skilled people. and that's why i'm suggesting we may need to start thinking about how we recruit these people and get them as employees of the federal government. that was my thought in setting up an rotc-type program, where we recruit people in, we scholarship them, whether it's for cybersecurity or for ai, with a four- or five-year commitment to work for the federal government. because there's going to be tremendous competition, and the federal government has a very difficult time competing
9:43 am
for those types of people. so with that, mr. chairman, i yield back. >> now the chair recognizes the gentleman from new york. >> it's okay. we're patient. i thank our respected chairs and ranking members for today's very informative hearing, and our witnesses as well. i'm proud to represent new york's 20th congressional district, where our universities are leading the way in artificial intelligence research and education initiatives. suny polytechnic institute is developing neuromorphic circuits which could be used for deep learning tasks such as pattern recognition, but are also useful for ai or machine learning more broadly. in addition, the institute has established an ongoing research program on resistive devices. rpi is pushing the boundaries of artificial intelligence in a few different areas. on the health care front, rpi is focusing on improving people's lives and patient outcomes by
9:44 am
collaborating with albany medical center to improve the performance of their emergency department, using ai and analytics to reduce the recurrence of costly er visits by patients. rpi researchers are also collaborating with ibm to use the watson computing platform to help people with prediabetes avoid developing the disease. in our fight to combat climate change and protect our environment, researchers at rpi in earth and environmental science are working with computer science and machine learning researchers to apply cutting-edge ai to climate issues. in the education space, rpi is exploring new ways to use ai to improve teaching, as well as new approaches to teaching ai and data science to every student at rpi. with all that being said, there are tremendous universities across our country that are excelling in ai research and education. what are some of the keys to helping ai institutions like
9:45 am
them excel? what do we need to do? what would be the most important? that's for any one of our panelists. >> so thank you for asking that question. just as we recognize that ai really is such a widespread technology, i think one thing to recognize is that it is still so critical to support basic science research and education in our universities. this technology is far from being done. of course, the industry is making tremendous investment and effort in ai, but it's a nascent science and technology. we have many unanswered questions, including in socially relevant ai, education, health care and many other areas. so one of the biggest things i see would be investment into the
9:46 am
basic science research in our universities, and encouraging more students to think in interdisciplinary terms, taking courses -- that can be s.t.e.m. students, s.t.e.a.m. students. ai is not just for engineers and scientists. it could be students with a policy-making mind, students with law interests, and so on. and so i hope to see universities participating in this in a tremendous way, like the many great schools in new york state. >> thank you. dr. persons or mr. brockman, either of you? >> so i agree with dr. li. but i would also point out that i think it is becoming increasingly hard to truly compete as an academic institution. if you look at what's happening, it's very different from most scientific fields, in that the salary disparity between what you can get at one of these industrial labs versus
9:47 am
what you can get in academia is very, very large. and there's a second piece, which is that in order to do the research, you need access to massive computational resources. for example, the work we just did with this recent breakthrough required a giant cluster of something around 10,000 machines, and in an academic setting, it's not clear how you can access those resources. so for the playing field to still be accessible, i think there needs to be some story for how people in academic institutions can get access to that. and the question of where the best research is going to be done, and where the best people are going to be -- that's something that's playing out right now, i think, in industry's favor. but it's not necessarily set in stone. >> thank you. dr. persons? >> thank you for the question. i would just add to my fellow panelists the fact that our experts said real-world test beds are important to this. you don't know what you don't
9:48 am
know. in addition to access to data, you need to be able to test and do things. one thing for sure -- and i learned this in fact from openai -- a lot of the time these things come out with surprising results. and so that's the whole reason for creating safe environments to try things out and de-risk those technologies. and that's something that i think is going to be important to enable that basic research: to have an avenue to perhaps move up the technology maturity scale, possibly into the market, and certainly, hopefully, to solve critical, complex, real-world problems. >> thank you. very informative. mr. chair, i yield back. >> the chair now recognizes the gentleman from illinois. >> thank you, mr. chairman. and thank you for coming to testify today. you know, i've been interested in artificial intelligence for quite a long time. back in the 1990s, working in particle physics, we were using neural network classifiers to try to classify particle physics
9:49 am
interactions. during the government shutdown, i went and downloaded tensorflow and worked through part of the tutorial on it. and the algorithms are not so different from what we were using back in the 1990s; the computing power difference is breathtaking. and i very much resonated with your comments on the huge increase in dedicated computing power for deep learning and similar work, and that it is likely to be transformative given the recent progress. we have to think through that: even with no new brilliant ideas on algorithms, there's going to be a huge leap forward. so thank you for that. that's a key observation.
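for reference, the tensorflow beginner tutorial builds a small feed-forward digit classifier, conceptually close to the multilayer networks used in 1990s particle physics. a minimal sketch, assuming tensorflow's bundled keras api and the mnist dataset it ships with:

```python
import tensorflow as tf

# mnist: 28x28 grayscale handwritten digits, bundled with tensorflow
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# a small feed-forward network: one hidden layer, softmax over 10 digits
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)  # trains in minutes on a laptop
model.evaluate(x_test, y_test)         # reports test accuracy (~98%)
```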
here in congress, i am the co-chair of the new democrats coalition's future of work task force, where we have been trying to think through what this means for the workplace of the future. and so, mr. chairman, i'd like to submit for the record a white paper
9:50 am
entitled "closing the skills and opportunity gaps," without objection. >> without objection, so ordered. >> thank you. and i will be asking, for the record, if you could have a look at this and see what value you think this document has for near-term policy responses. because this is coming at us, i think, faster than a lot of people in politics really understand. i will also be asking, for the record -- i guess you don't have to respond right now -- where the best sources of information are on how quickly this will be coming at us. there are conferences here and there, and you all attend a lot of them. i will be interested in where you think we could come together to get the economic experts, the labor economists, people like that, all in the same room. i think it's something we should be putting more effort into. on another topic, i have been very involved in trying to
9:51 am
resurrect something called the office of technology assessment. what the gao did here is very good, which is to bring together a conference of the experts. you brought in a good set of experts, and a year later now we are getting a report on this. we need more bandwidth in congress on that, on all technological issues. a year-old group of experts on a.i. -- those are opinions that are sort of dated a little bit, even a year in the past. the office of technology assessment for decades provided immediate, high-bandwidth advice to congress on all sorts of technological issues. we are coming closer and closer every year to getting it refunded, after it was defunded in the 1990s. and, so, i think -- well, to ask you a question here: is there anyone on the panel that thinks that congress has enough
9:52 am
technological capacity as it currently stands to deal with issues like this? >> so -- >> i can answer that. yeah. no, it's a huge problem. and it's been aggravated by the fact that people have decided, in their wisdom, to cut back the size of the staff. we talked about the difficulty of getting top-of-the-line professionals here, and we're seeing members of congress willing to do anything but give them the salaries that will be necessary to actually compete for those jobs. let's see. oh, mr. brockman, i would advocate everyone have a look at reference five, your report on malicious uses of a.i.,
9:53 am
which i stayed up late last night reading, and it is real. along the same lines, members of congress have access to the classified version of a national academies of sciences study on the implications of autonomous drones. and this is something that, i think, has to be understood by the military. we're about to mark up a military authorization bill, an appropriations bill, that is spending way too much money fighting the last war and not enough fighting the wars of the future. and then finally, dr. li, on the educational aspects of this -- one thing i struggle with: if you think through the bios of the people that are the heroes of artificial intelligence, they come from physics and math.
9:54 am
i was wondering, is a.i. like that? are there a small number of heroes that really do most of the work, and everyone else sort of fills things in? >> so, like i said, dr. foster, a.i. is a very nascent field, even though it's collecting a lot of enthusiasm worldwide and societally. as a young science, it starts with a few people. i was also trained as a physics major, and i think about the early days of newtonian physics. that was a smallish group of people as well. it would be too much to compare directly. but what i really do want to say is that we may be in the early, almost pre-newtonian days of
9:55 am
a.i. so the number of people is still small. having said that, there are many, many people who have contributed to a.i. their names may not have made it to the news, to the blogs, to the tweets, but we remember them. and i want to say many of them are members of underrepresented minority groups. there were many women in the first generation of a.i. experts. >> yeah. two or three clicks down in the references cited by your testimony, if you look at the papers there and the author lists, it is pretty clear that our dominance in a.i. is due to immigrants. okay? and dr. li, i suspect you might not have come to this country under the conditions being presented by the president. but it is important.
9:58 am
watch live thursday at 1:00 p.m. eastern, or listen with the free c-span radio app. this past week, with the help of our cable partners, the c-span bus traveled to juneau, alaska. we travel to our next stop in fairbanks. >> c-span programming is valuable for alaskans. gci is proud to carry c-span for a number of reasons, especially for their emphasis on education. from lesson plans to handouts to timely teachable videos and educator conferences, the c-span classroom program offers so many resources to teachers and adds a great deal of value to today's classrooms. >> thank you for being part of it, bringing your awesome bus to
9:59 am
fairbanks. i heard stories of the drive up from the folks who brought the bus up here, and the things they saw on the way coming to alaska. it was a nice trip, from what i heard, and i understand it -- i have driven it a few times myself. it is an awesome trip. we're so glad your bus came here, and that you're using it as a tool to bring fairbanks nationwide. >> c-span is 40 years old, much older than me. that's a joke, by the way. you can laugh. what i appreciate about c-span is that it is not partisan. you watch the sparring that takes place, you watch your delegations talk back and forth, and it is extremely informative and very educational. one of the best things is the bus -- and i'm a tech geek, so i hope they take me with them on their tour, because i would just spend hours on that bus. if you go in and look at the video screens, they're interactive. people can learn, and kids can learn, about
10:00 am
government. i mean, government doesn't have to be a bad word. >> be sure to join us july 21st and 22nd, when we'll feature our visit to alaska. watch alaska weekend on c-span, c-span.org, or listen with the free c-span radio app. >> friday, a conversation with the chief justice of the united states, john roberts, from the judicial conference of the fourth circuit, live friday at 3:30 p.m. eastern on c-span, c-span.org, or the free c-span radio app. c-span, where history unfolds daily. in 1979, c-span was created as a public service by america's cable television companies. and today we continue to bring you unfiltered coverage of congress, the white house, the
10:01 am
supreme court and public policy events in washington, d.c., and around the country. c-span is brought to you by your cable or satellite provider. we're live on capitol hill this morning, as housing and urban development secretary ben carson will be testifying before the house financial services committee as it looks at how hud is executing federal policy. members will also review the agency's successes and challenges since its creation in 1965, and the effectiveness of its programs. three programs accounted for most of hud's spending last year. the hearing is due to start any moment now. live coverage here on c-span3.
10:04 am