Discussion on Higher Education, C-SPAN, April 13, 2016, 5:04am-6:44am EDT
terms like return on investment and now the u.s. department of education is publishing information about how much graduates at individual colleges make. and from my perspective, that makes a lot of sense, but at the same time, i don't think anyone would assert that the value of a higher education can entirely or even substantially be encompassed only in a dollar figure. so, if you're going to measure the value of college in some way other than how much a diploma's worth in the labor market, it leaves you to wonder how to do that and how you can gather information about how much students learn. if you're dissatisfied with the accreditation system as many people here in washington, d.c., seem to be -- there have been hearings in congress over the last couple of years criticizing or denouncing the accreditors in different ways because of a dissatisfaction with the way they handle quality control and other aspects of higher education. again, it leads you to a question of, if not accreditation, then what? if you want to think about
innovation in higher education and maybe finding new ways to use the federal financial aid system to give entrepreneurs the ability to create new systems or new ways of approaching higher education, but at the same time you're mindful of the potential for fraud and abuse, it raises the question: how would you find out whether or not some new, innovative higher education operator is really doing a good job? and more broadly, even beyond the kind of policy-type questions that we like to debate here in washington, there is the central importance of our colleges and universities, the millions of students moving through them, the very high-stakes nature of that process, and the traditional lack of a really solid research base around college student learning. there is research out there,
there are studies out there, but surprisingly little, given how many people go to college and given the implicit value proposition and promise that is made for higher education. we don't know all that much, really, i would say, about how much students learn, particularly at the individual student level, the departmental level, or the institution level. so, for this and many reasons, new america invited fredrik deboer, who is with us today, to write a white paper, released last week, about the state of college assessment today: where it is, where it's going, and what it means. the paper is both an excellent synthesis of where things are now and a provocative look at where things could be going, and from reading it you'll find that there's more out there than people realize. there are actually some very smart people, some of them in the room today, who have been doing great work over the last decade and more, developing new ways of assessing college student learning. people have become very interested in these assessments.
the broader idea of using standardized assessments of learning is actually more familiar to us from the k-12 arena, where it has been and remains very controversial, yet very much a part of the fabric of our k-12 schools. there's a lot of trepidation, i think, some of it warranted and some of it not, in higher education about whether it's either possible or appropriate to assess students by more consistent or standardized means. there are a lot of very complicated technical questions about how to do that, and then there are, in some ways, broader philosophical questions about the meaning of assessment and how it ought to relate to higher education. so, that's what we're going to talk about this morning. we are grateful to you in the audience and to everyone watching out there on c-span, and we are going to start with a presentation from fredrik deboer, who's going to talk to us about his research and the paper that he's written. so, fredrik, thank you.
>> good morning. thank you for coming. i'd like to thank kevin and new america for bringing me here and for commissioning the paper. it's really a great opportunity. i try never to read too much when i present, but i do like to hear myself talk, so if i'm going on a little bit long, you guys can throw something at me. so, i guess the thing that i really want to talk to you all about is how i came to write a paper like this when i consider myself someone who's still within the liberal arts and the humanities, someone who opposes the sort of corporate turn in higher education. you know, why would i come to write a paper like this? it is the case that most of my research is now quantitative in nature, social scientific and empirical, but i grew up in the humanities and consider myself fundamentally a humanist. so, i want to tell you a little bit about where the research came from and how i try to synthesize those parts of myself. like most research, my interest in standardized testing in college came from my own local context and my own life.
when i was getting my doctorate at purdue, which i completed last may, a controversy erupted there over the proposed implementation of the collegiate learning assessment plus, which is one of the major standardized tests of college learning today. the mitch daniels administration -- mitch daniels is the former republican governor of indiana and is now the president of purdue -- wanted to implement this test at wide scale in the university. they wanted a very large portion of the incoming freshmen and the outgoing seniors to take this test in order to monitor undergraduate learning. the daniels administration has made value its sort of keystone word. indiana, for example, is now dotted with billboards that say "education at the highest proven value" to sort of sell purdue. this was bound to be controversial for a few reasons, the first of which is that mitch daniels' administration has been controversial at the school. this is for a variety of
reasons, maybe the biggest one being that he does not have an academic background, which made many of the faculty unhappy when he was hired on. the way in which he was selected was also controversial: when he was governor, he appointed the trustees who then appointed him as president. but assessment in particular became a linchpin of a lot of other issues that had been sort of bubbling along at the college since his appointment, and that's because assessment in a very deep and real way asks what we value in the university, and it is inevitable that assessment will to some degree control learning. i think that you can have minimally invasive assessment. that's part of why i wrote the white paper, and it's part of what i'll talk to you about today. but there is no doubt that assessment is always going to have some impact on how learning happens on college campuses. if it didn't, why would you do it? and so, the question is then: was
this, in the mind of the faculty, a way for the daniels administration to wrest control of undergraduate learning away from the faculty senate, where it had always been invested? that was sort of the proxy issue. the fight was about this test, but it was also about faculty control of learning and the gradual deprofessionalization of the professoriate that's happening nationwide. when i was doing my dissertation research, the point that came out again and again from faculty was: we want to assess, to make sure our students are doing a good job, but the question is who controls it. one of the biggest issues with standardized testing is that it is, in fact, standardized, and what many faculty members don't like is that it removes the local control and local definition of what success means. and yet, i still think that standardized tests can be useful, even though i myself believe in and understand the desire for faculty control and
for local context. for a little bit of background: i've spent the past five or six years becoming versed in educational measurement, statistics, psychometrics, and related topics within assessment. i would prefer you not to quiz me on that stuff right now, but i've done a lot of work, and i came up through a very conventional kind of liberal arts background. i got a ba in english and philosophy, for example. so, 10 or 12 years ago i was reading thomas hardy, and these days, for some reason, i'm staring at a spreadsheet all the time, which wouldn't ordinarily be what i enjoy. but i wanted to acquire at least a basic literacy in quantitative skills, because it seems clear to me that some people -- and i don't think everybody should be this way, not even a large proportion of us -- some of us within the humanities have to be able to speak the language of numbers and the social sciences, because that is the language of power. it is abundantly clear that the policy world speaks in a certain
kind of language, and that language is statistics; that language is validity and reliability. i became concerned that this was not a skill set in the hands of most humanists. one of the things i ended up finding in my dissertation research is that humanities professors were constantly complaining that they were cut out of major policy initiatives and think tanks and commissions, but then the question becomes, you know, what would you talk about if everyone else is talking about numbers? that's a bigger fight, but that is sort of why i'm here, and i still maintain a core belief that i'm pursuing what the humanities are all about. anyway, the story at purdue had kind of an anticlimactic resolution. it became a very big local controversy. the local paper actually ran a front-page story that said "daniels and faculty in battle of wills," which, of course, made that true. if it wasn't a battle of wills before, as soon as the paper said it was, both sides kind of dug in, because
now they had to save face. what ended up happening is that they delayed. they did what universities always do, which is they had committees with subcommittees, and those subcommittees had subcommittees, and there was kind of a delaying action. eventually, the students wouldn't sign up to take the test, so amid this big battle, what actually happened was that you couldn't get a sample of the size that the daniels administration wanted. they're still going forward with the cla at the school, and it's going to be interesting to see what happens, but in much smaller numbers than originally proposed. you know, 18-to-22-year-olds aren't exactly eager to take a standardized test they don't have to take, so that's one of the issues with this sort of thing. but i do think the controversy is really interesting and important, and i think we're going to see these debates play out in many schools across the united states, because assessment is not going away and
standardized assessment is not going away. we've had a succession of presidential administrations -- both the george w. bush administration and the barack obama administration -- whose education officials have made a strong call for more standardized assessment in college. this is an issue on which the political parties, although they disagree about who will do what and the dynamics of how it will happen, are in great agreement, and people within higher education cannot just close their ears to it. although this particular plan was eventually scuttled, the obama administration was going to try to tie performance on standardized assessments and college rankings to the availability of federal aid. that is a very big stick indeed. that is the kind of thing that no college, even the really elite colleges, can afford to ignore. so, if we're going to move forward as a system of higher education that can define what's going to happen to itself, rather than have assessment happen to it -- in
my field, composition, there's a guy named edward white, and white's law is "assess or be assessed," meaning that if you don't perform assessment yourself, if you aren't willing and eager to get involved in assessment, then you will end up finding assessment being done to you. that is the exact reason i'm so invested in this. i think that faculty can take an active role in assessment, that we can get out ahead of these problems, and that we can become a major force in shaping how assessment happens. if, on the other hand, faculties simply say we are not going to do this, then it's going to happen anyway, and it's going to happen in a way that does not reflect faculty interests. i think that's reality; some people see it as fatalism. i want to make a few things clear to people within the humanities world who disagree with me. to begin with, we're already assessing in many ways. there are already all kinds of assessments happening on campuses. the problem is that many of these assessments are ad hoc, they're idiosyncratic, they're lacking in validity and reliability evidence, and they can't talk to each other from one
campus to the other. the entire process of accreditation that kevin mentioned is supposed to, and has always been supposed to, have an assessment function. the idea of assessing colleges is not some new neoliberal enterprise; it's always been central to the process of accreditation. the fact that the accreditation process is seen as toothless is not, you know, really a good thing. if we're going to have an accreditation process, it should mean something, right? and the fact that so many schools have become used to a context where the accreditation process doesn't threaten them means we have to reform the system. but i'll name another form of assessment: rankings, like the u.s. news and world report rankings. in a real sense, that is an assessment. i think it's a very wrong-headed kind of assessment, and i think it's casually destructive to what we really value on college campuses, but in a very real way, they are saying, let's assess the quality of these schools. and they put out a list of rankings that many students and parents pay a lot of attention to.
and one of the things that means is that through a lack of assessment data, we simply perpetuate the elite. there are many things that elite colleges do well -- harvard, yale, stanford, the university of chicago. i don't mean to demean them; they do a lot of things well. but the notion that we know they teach undergraduates better has no evidentiary basis, right? the sense that harvard is the best university in the world has almost nothing to do with how well it teaches undergrads. that data does not exist. and because they skim off the top and take only the truly elite high school students, you could probably put those students into any particular university and see them excel. so when they report how well their students do after graduation, you can fairly ask, well, does that come from your undergraduate education? in my hometown, i went to public high school, and the local private high school always bragged about the standardized test scores, like s.a.t. scores, that its graduates got, while ignoring the fact that you had to do well on a standardized test to get in. so it's like, you know, having a height minimum and then bragging about
how tall your student body is, right? that is sort of the system that is perpetuated right now. a lack of assessment data allows elite institutions to maintain the fiction that they teach better without having to provide any evidentiary basis for it. and this is one of the reasons why assessment is a social justice issue: if we want to make colleges true vehicles for equality, we have to be able to generate data showing that what is perceived as an elite university may have no solid basis for that perception. i mean, every year there's a controversy about one school or another slipping or rising a little bit in the "u.s. news & world report" rankings, but it's not like harvard is going to show up tomorrow at number 50, right? that doesn't happen. so, we're just shuffling around these very elite institutions at the top. it's also the case that, in the simplest terms, higher education is under really profound threat right now. we have had massive amounts of
defunding at the state level. we have a lot of people in the policy world who think that physical colleges and the traditional liberal arts should be replaced altogether with online-only programs, certifications, things like that. i think it is profoundly naive to believe that online education is going to sweep in and that within a generation we'll see a 90% reduction in colleges, something i read a lot about. for one thing, that underestimates the persistence of institutions and the inertia involved -- how hard it is to change these large institutions. but i also think that we do a much better job of educating than online-only programs do, and what limited evidence we have now suggests that that's true. but if we're going to say that, we have to generate the data to say it. you know, the same people who tell me that we shouldn't be assessing hate the idea of online-only education. they hate the idea of the demise of the liberal university. well, if that's the case, then you need to be able to say to the rest of the world, we do
something very well. i think that we need to assess more in college, and i think this can have a lot of benevolent effects. the message to everyone involved should be: we take a lot of resources from society. college is very expensive. we are loading hundreds of thousands of dollars of debt onto the backs of young people who then graduate into an uncertain economic climate. colleges receive hundreds of millions of dollars from the federal government. and we have invested college with this function, somewhat unfairly, of being an absolutely key linchpin of a healthy economy. there's no enterprise in the world, besides the defense industry, where we pay hundreds of millions of dollars and no one bothers to ask how well it's doing. so, i think that we can do more assessment, but i also agree with skeptics that this kind of assessment is very hard, okay? and one of the things that i want to insist to everybody, and i think it's useful in a context like this in the policy world,
is that these problems are not just political or theoretical in nature. one of the frustrations of being a humanist talking about assessment is that when i'm in the other world -- the policy world, the educational testing world, the psychometrics world -- the assumption is that resistance to these instruments is always political resistance or self-interested resistance from the faculty; that when we say these are hard to do, it's really that we just don't want anyone to check our work. but from a pure social science standpoint, it is hard to run these large-scale assessments. i think there is a lot of data we can collect to guide decisions, but we can't underestimate those difficulties. one of the biggest, for example: we know that colleges have profoundly different populations coming in every year, right? part of the reason why elite colleges invest so many resources in their admissions
process to make them truly exclusive is that they know it works to pull the cream off the top, the students who are going to go on and excel. so, one of the things we see with these standardized tests in college is that the best predictor of how well both your freshman and your senior populations will do is their s.a.t. scores coming in. in other words, we know very well, with a great deal of certainty -- with some variation, of course -- how most colleges are going to stack up in rankings, simply based on how they stack up on their average incoming s.a.t. score. so, we need to use value-added models and things like that in order to correct for differences in incoming population.
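to make the idea concrete, here is a minimal python sketch of a value-added adjustment of the kind described here, with entirely hypothetical institution-level numbers rather than data from the cla or any real instrument: regress mean senior scores on mean incoming s.a.t. scores, and read the residual as rough value added.

```python
# illustrative value-added sketch: adjust institution-level outcomes
# for incoming student ability. all numbers below are hypothetical.
import numpy as np

# mean incoming s.a.t. score and mean senior assessment score,
# one entry per (made-up) institution
sat = np.array([1050.0, 1150.0, 1250.0, 1350.0, 1450.0])
senior = np.array([1020.0, 1130.0, 1190.0, 1330.0, 1400.0])

# fit senior = a + b * sat by ordinary least squares
b, a = np.polyfit(sat, senior, deg=1)
expected = a + b * sat

# value added: how far each institution lands above or below the
# outcome predicted from its incoming population alone
value_added = senior - expected
for s, va in zip(sat, value_added):
    print(f"incoming s.a.t. {s:.0f}: value added {va:+.1f} points")
```

real value-added models are considerably more elaborate (hierarchical, with student-level controls and matching), but the residual-from-expectation idea sketched above is the core of the correction.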
purdue is a fantastic public university with a great student body, and we still accept about 60% of our applicants, which, by the way, is actually quite elite in context. i don't think most people understand this: going to an exclusive college is extremely rare. this is one of the things that people in journalism and the policy world often underestimate. there are about 3,000 accredited four-year institutions in the united states. maybe 125 of them -- and that's a generous estimate -- reject more students than they accept, okay? so, the vast majority of colleges are taking a majority of the students who apply, and in fact, most colleges take essentially any student who applies. they need to, simply for pure economic reasons. so, if we're going to compare colleges, we need to bear in mind that they have very different populations, and we need a way to correct for that. there are also all kinds of issues with scaling and traditional testing concerns. we know how to address those things pretty well, but let's be clear that they are empirical problems. they are not just political problems; they're not just self-interested professors saying we don't want to be tested.
the major point i want to make here is that it's a false choice between invasive testing -- testing everyone all the time, constantly testing these students, giving them a teach-to-the-test kind of attitude that would be dramatic in college -- and no assessment at all. this is one of my great frustrations: the conversation so often boils down to either we enact something like no child left behind for college, or we continue to do almost nothing that's replicable. that's a false choice. i believe that we can have minimally invasive college standardized assessment that still provides a lot of good data. we don't need to test all the students all the time. one of the frustrations with the k-12 debate is that it is often premised on the idea that we need to do census-style testing. in other words, in k-12, almost all students are tested almost all the time, right? and this is part of what parents hate about it. but we have the power of inferential statistics. one of the things that we know how to do very well is to form stratified samples that are representative of a student body and to extrapolate from that sample. we know how to do that. we can take a sample of students from the average college, make sure it is adequately diverse in terms of racial makeup, gender, the majors involved, whatever you want, and have those students take these tests and understand very well how our student body is doing.
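as a minimal sketch of what census-free, stratified assessment could look like, here is a toy example in python; the roster, strata, and sampling fraction are all hypothetical, and a real design would stratify on the dimensions listed above (race, gender, major, and so on).

```python
# illustrative stratified sample of a student body. the strata and
# proportions here are made up for demonstration purposes.
import random
from collections import defaultdict

random.seed(7)

# toy roster: (student_id, major)
roster = [(i, random.choice(["nursing", "engineering", "english", "biology"]))
          for i in range(10_000)]

# group students by stratum
strata = defaultdict(list)
for student_id, major in roster:
    strata[major].append(student_id)

# draw the same fraction from every stratum so the sample mirrors
# the population's composition (proportional allocation)
SAMPLE_FRACTION = 0.05
sample = []
for major, students in strata.items():
    k = round(len(students) * SAMPLE_FRACTION)
    sample.extend(random.sample(students, k))

print(f"sampled {len(sample)} of {len(roster)} students")
```

the point of proportional allocation is that the sample mirrors the population's composition, so institution-level estimates can be extrapolated without testing everyone.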
that's an ability we have, and it is frustrating that it is so often ignored, both by people who push for more testing and by those who resist it. we don't have to test everybody all the time; we have the beauty of stratified sampling. i also want to say it's essential to involve faculty at every stage of the process, and that's true whether you like faculty or not. luckily, there is still some power invested in faculties, even at a lot of institutions where the higher administration has clawed back a lot of power. at purdue, for example, president daniels has a real strong mandate -- and i should say that although i'm critical of him on some things, i have agreed with some of his initiatives -- he has a strong mandate. he has the backing of the board of trustees.
some people say he's still the most powerful politician in indiana, even though he's not the governor anymore. but he still was unable to just implement what he wanted. you have to work with the faculty. in my paper i talk about this at length, and you can read my recommendation that you leave disciplinary assessment under faculty control. i think "academically adrift" is a deeply flawed text for a variety of reasons. one of the things that bothers me about it is that they didn't even really try to assess disciplinary knowledge. these tests that we're going to talk about today are tests of general ability, often defined as critical thinking or academic aptitude. they don't attempt to assess what you learned in your major classes. in other words, they are not even attempting to say how well a nursing major learned nursing, or how well a computer scientist learned to code. they have to be that way because, obviously, you can't have a standardized test that works across an institution while testing major knowledge for people who aren't in that major. but what that means is that when people say there is limited learning in college, often they're not looking at what most people consider the most important thing you learn in college: your major. that's the important way to involve faculty. say: you will always control disciplinary assessment. the computer science program gets to define success for computer science. we need that to be more standardized and more interoperable, so these assessments can talk to other kinds of assessment, but you are in control of it. that's a great way to involve faculty. finally, let me close by saying this while i maybe have the attention of the testing industry. i think we have to have greater access to the data, information, and mechanisms of standardized testing instruments if we're going to implement these at scale. we need to open up the books on these tests a little bit. to their credit, major test developers do a lot of internal research, and they have proven themselves willing to publish research that is critical of their own instruments.
so, ets, for example, is often a bogeyman in these conversations -- and i have a list of problems with them -- but they're very progressive in having independent researchers investigate their instruments and say, you know what, this is the problem. they're very willing to do that, and the testing industry in general is willing to do that. the fact of the matter is that internal research can never replace truly external review. even the most principled researchers can't audit themselves. so, some information is made available by testing companies. it's my opinion that not nearly enough is made available, and that what is made available frequently has requirements for access that are too onerous. i'll tell you a story. when i was still in coursework, i was taking a seminar in language testing, and i needed a data set. my professor said, well, you can use this data set i have from a testing company that will remain nameless. so, i did the research, wrote the paper, and turned it in. she thought it was very good. i said, well, i'd like to publish this paper. she said, well, here's the problem: that data set is a proprietary data set of this
educational testing company. so, you have to write up the paper and submit the draft to the testing company. the testing company then decides whether or not to review it for you. if they decide to review it, that could take six months. they would send it back and say this stuff has to change. i would change it and send it back for review again. if they thought the revisions were good enough, they'd send it back to me. then i could submit it to an academic journal, which would take three months or longer to referee the paper. the journal would send the paper back to me with recommended revisions. if i made those revisions, because the draft had changed, i would then need to submit it back to the testing company. the testing company could then sign off on the revisions or give me revisions to the revisions. once the testing company was satisfied, i could submit it back to the journal, but of course, i would have revised it again in ways the journal hadn't advised, so they could come back at me again. it can easily take three years or more for this process to play out when you're trying to
publish using this kind of data. for early-career academics, time is of the essence. if you are a grad student, you need to publish and get things on your cv to get a job. if you are pre-tenure, you need to get things in there before your tenure clock runs out. so, this poses a powerful disincentive for people to publish. there's got to be a better way to do that, and i think it's possible. now, the concerns of the testing industry, which i recognize, are, number one, test security -- the fear that giving people data will make it easier to game or cheat on the test -- and, number two, industry trade secrets. i do think that we can provide data and still address those things. i mean, people have been trying to game a test like the s.a.t. for years, and still it's remarkably resistant to cheating. so, whatever "princeton review" or another test prep company tells you, they have not been able to demonstrate that they can really game those tests. and i think that, you know, at the core of assessment, saying trust us is not enough, right?
i mean, what we're talking about here is universities saying, trust us, our students are learning, and policymakers and stakeholders replying that trust us is not good enough. if that's true for the faculty, it also has to be true for the testing companies. in other words, testing companies can't expect us to accept trust us either, okay? and i think there's all kinds of data that you can put out there. for multiple-choice tests, external researchers should be able to do traditional item analysis -- things like the facility index and discrimination indices, which test developers can tell you about much better than i can. (a small sketch of what such an analysis involves follows this passage.) for written responses, for essay tests, i think it's appropriate for test developers to provide a research corpus, a collection of real essays that have been generated and scored internally to the organization. give us those scores, give us the essays, give us the prompts that were used, and make them machine-readable so we can use corpus linguistics and things like that.
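as a rough illustration of the item analysis mentioned above, here is a small python sketch run on a simulated response matrix; the data are made up, and real item analyses use more refined statistics than these two classical indices.

```python
# illustrative classical item analysis of the kind external reviewers
# could run if response data were opened up. the response matrix is
# hypothetical: rows are test takers, columns are items, 1 = correct.
import numpy as np

rng = np.random.default_rng(0)
responses = (rng.random((200, 10)) < rng.uniform(0.3, 0.9, 10)).astype(int)

total = responses.sum(axis=1)  # each test taker's total score

for item in range(responses.shape[1]):
    right = responses[:, item]
    # facility index: proportion of test takers answering correctly
    facility = right.mean()
    # discrimination: point-biserial correlation between getting this
    # item right and the score on the rest of the test
    rest = total - right
    discrimination = np.corrcoef(right, rest)[0, 1]
    print(f"item {item}: facility {facility:.2f}, "
          f"discrimination {discrimination:+.2f}")
```

a facility index near 0 or 1 flags an item that is too hard or too easy, and a low or negative discrimination flags an item that doesn't track overall ability -- exactly the kind of check external reviewers could run if the books were opened a little.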
ultimately, what is appropriate and fair for test companies to reveal will be a matter of negotiation. they're going to tend to err on the side of giving less, and we'll ask for more, but i think we need more access to information than we have now. and that can help the test companies, too, because one of the major bones of contention with faculty is that they can't look behind the curtain. i'm an expert in educational testing, or i'm an expert in developmental psychology, or i'm an expert in statistics, a professor might say; why can't i look at your mechanism? i think we can serve the interests of test security and trade secrets and still open up the books a little bit. anyway, i'm going to stop talking now. but i want to close by saying, again, that the notion that we either have to have no expansion of assessment or a high-stakes, perpetual testing regime that dramatically changes the university is just wrong. there are many opportunities for us to gather data, to make that data publicly available, to better understand how our students are doing in college, and how well our institutions are serving students who are graduating, again, with hundreds of thousands of dollars of student loan debt. we can gather information and still keep all the good things about college, but there has to be external pressure on institutions, because the default institutional response of colleges and universities is to never change, and we're in a crisis, so we need to change. thank you. >> that was terrific. in your remarks, you mentioned both purdue university, where the cla plus was the instrument at issue, and the very well-known and controversial book "academically adrift," also based on the cla plus. we are very pleased to have roger benjamin, the president of the council for aid to education, the man who, more than anyone, is responsible for the cla plus,
joining us today. i also have to note that we invited alberto acereda from ets to speak today. he was on his way down from new jersey, there was a problem with amtrak, and so he's not going to be able to make it, but he did send me his remarks, and i am going to reflect some of those. i suppose we can be a little tougher on ets now, not that we wouldn't have been anyway. so, roger, i would love to start by hearing some of your thoughts on freddie's white paper and his presentation today. >> well, i enjoyed it. first let me say that i am the director of cae. i'm a political economist, and i was also an academic for a long time and still think the title professor is the one i like best. in that role, i was dean of arts and sciences,
and i view myself as fundamentally a product of the liberal arts. i'm married to an extremely distinguished art historian whose world is illuminated french manuscripts, and she wouldn't be bothered with this subject because she's totally focused on the data in that world. and let me just say something on behalf of ets. one of the problems we've got is a paucity of measurement scientists, psychometricians; there are only a handful each year being turned out by iowa, iowa state, illinois, minnesota, places like that. and yet, as kevin noted in his intro, assessment is increasingly
being recognized as a very important subject. but we just don't have the numbers, let alone the quality of people in that field, that we're going to need going forward. the importance of ets cannot be overstated. for example, the director of research for cae, steve klein, who just retired, worked as a young person under sam messick to design the naep and its matrix sampling approach. the naep is the gold standard for the kind of test that makes a lot of sense. (a toy sketch of matrix sampling appears below.) and richard shavelson was heavily involved with us. he is a professor at stanford who has ties to ets.
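for readers unfamiliar with matrix sampling, here is a toy python sketch of the basic idea, with hypothetical pool and block sizes: each sampled student takes only a couple of blocks of items, yet every item in the pool still accumulates enough responses for group-level estimates.

```python
# toy sketch of naep-style matrix sampling: split a large item pool
# into blocks and give each sampled student only a few blocks, so the
# full pool is covered without any one student taking the whole test.
# pool size, block size, and counts are all hypothetical.
import random

random.seed(1)

ITEMS = list(range(120))            # full item pool
BLOCK_SIZE = 12
blocks = [ITEMS[i:i + BLOCK_SIZE] for i in range(0, len(ITEMS), BLOCK_SIZE)]

# each student answers 2 of the 10 blocks (24 items, not 120)
assignments = {f"student_{n}": random.sample(range(len(blocks)), 2)
               for n in range(500)}

# every block still gets plenty of responses for item-level estimates
coverage = {b: 0 for b in range(len(blocks))}
for chosen in assignments.values():
    for b in chosen:
        coverage[b] += 1
print(coverage)
```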
ets is doing excellent work. lydia liu is a distinguished researcher there working on assessment in higher education. dan mccaffrey, who used to be a colleague of mine and steve klein's when we were at rand in the late '90s, now has a chair there; he is one of the most thoughtful people, a statistician who really knows a great deal about the issues that fredrik was talking about. the bottom line is that fredrik's paper is a serious critique, and he poses problems that are important. what i want to do is make a couple of comments stimulated by his argument, and then i want to tell you
a little bit about a new way we might think about framing research on higher education, leading perhaps with educational assessment but going beyond it a little. i think that accountability did have a lot to do with the ramp-up of assessment in higher education -- i'm talking about the spellings commission and so on. but i want to note that in the case of our group, reform, and not accountability, was the principal motivator. steve and i and some other colleagues at rand in the late '90s began to think about introducing assessment to higher education because we felt that it was a good idea.
steve had introduced performance assessment to the bar association via the clinical part of the bar exam, which was adopted in all the major states in the late '70s and early '80s. and we thought that in the knowledge economy it's very important for the next generation of students to be able to improve their critical thinking skills -- their ability to access, structure, and use information -- because you can simply google for facts, and you really need to be a stronger critical thinker. my colleagues and i set out with that premise, and we still hold it. the board, for example, led by richard atkinson, former president of the university of california, katharine lyall of the university of wisconsin system,
doug bennett, michael lomax, sarah tucker, who ran the national hispanic association, michael crow, and others, really believed that this was a worthwhile undertaking, and we still do. we focused on a no-stakes, value-added approach because we thought the improvement question was a good way to start for postsecondary education. we were starting to do research and development at a period in which no child left behind and test corruption in k-12 were an issue, and so we decided to really take this value-added approach to scale. and it is a worthwhile endeavor.
it turns out that there's about a 0.44 standard deviation of growth, at least based on the cla, across 1,200 to 1,400 test administrations, and that's very significant growth in social science terms, controlling for entering competencies through the s.a.t. or the a.c.t. and now through a matching model that we created. and the amount of growth across similarly situated institutions varies by as much as 1.0 standard deviations. there are about 20% of colleges that really do demonstrate best practices in important ways, those in the middle, and then 20% at the bottom that are somewhat problematic.
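as an illustration of where a figure like 0.44 comes from, here is a minimal python sketch computing standardized growth on simulated freshman and senior samples; the scores below are synthetic, not cla data.

```python
# illustrative computation of standardized growth: the difference in
# mean scores divided by a pooled standard deviation. both samples
# here are simulated for demonstration purposes.
import numpy as np

rng = np.random.default_rng(42)
freshmen = rng.normal(1000, 150, 400)   # hypothetical scale scores
seniors = rng.normal(1066, 150, 400)    # simulated ~0.44 sd higher

pooled_sd = np.sqrt((freshmen.var(ddof=1) + seniors.var(ddof=1)) / 2)
growth = (seniors.mean() - freshmen.mean()) / pooled_sd
print(f"standardized growth: {growth:.2f} sd")
```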
about four years ago we began to develop a version of the cla that was reliable and valid at the student level, not just the institutional level. the no-stakes approach did not cause motivational problems that affected the institutional-level results, but it did affect student motivation. so we've created badges and mastery levels for the cla plus results for students, and with cla plus career connect, employers are beginning to indicate that they'd be interested in seeing these results. so we're trying to solve the motivational issue that way.
i think it's true that we could have more, much more, reliability and validity data, but the truth of the matter is also that we have a paper on our website, "the case for critical thinking tests and performance assessment," and it cites about 100 external publications -- peer-reviewed papers on the cla, ets papers, and many dozens of others that several independent reviewers have taken a look at. that's on our website. now, the idea i want to leave you with, going forward, goes under the label of pasteur's
quadrant, to give you something to think about this morning: pasteur's quadrant in higher education. it's the name of a paper that i've just drafted. if you're interested, i'm going to leave my card somewhere, and i'll be glad to send you the draft. donald stokes wrote an intriguing book called "pasteur's quadrant," about the relationship between basic research and technological change. in it, he reverses the time-honored assumption that basic research drives applied research -- that that's the way it goes. in fact, he points out numerous cases in which practical problems drive research, and pasteur's story is kind of his major exhibit. pasteur was passionate about doing something about tainted milk that was killing millions of babies, and along the way in his professional career, he invented the building blocks for
microbiology. and stokes himself talks about the importance, in a democracy, of use-inspired research to help frame the way we think about public policy issues. i came up with three or four historical cases in which this approach focused interdisciplinary research, using the value system of science -- peer review, transparency, and the ability to replicate results -- as guiding principles. in the middle of the civil war, congress founded the land-grant university, and the purpose of the land-grant university was to make agriculture more of a science. over time we've done a very good job at that. we've had the green revolution, et cetera.
yes, there are complaints about genetically modified crops today, but i think it's a success story. in health, after the flexner report in 1910, there was a commitment to make medicine much more of a science, and that's been a steady climb; in recent decades, with molecular biology, the tide has basically turned. there are tensions between the clinicians, the practitioners, and the scientists, but again it's been a good story. finally, rand, where, as i've noted, i was for a decade: at the end of the war, when we got worried about the soviets, congress founded rand, and the goal was to task a group of researchers with coming up with better objective tools of decision making, so that national security decision makers could make better decisions.
and in the first decade they invented game theory, cost-benefit analysis, and a prototype for the internet. not bad. i would then just say that my argument in this paper is that higher ed is the next obvious candidate for this kind of focused approach. why? well, first of all, i think there is increasing agreement that human capital is our principal resource -- any nation's principal resource -- and the k-through-16 system is the venue for preserving and enhancing that capital. there is, therefore, reason to think that policymakers at every level are going to be more and more focused on how to make sure the system improves. and the higher education sector is very
important, because it's the apex: it's where the standards are set for parents, students, and teachers to move toward. that's why it's important. what evidence of problems makes this warranted, specifically? i won't really go into it, but it's the usual suspects. i would say that if you just look at the curve of the hepi, the higher education price index, and its growth over the last 40 years compared to the cpi, it shows baumol's cost disease abundantly in evidence. and therefore any increase in funds that comes into the sector, for public or private institutions, goes to that inflation, and i think the primary symptom that we read about -- student loan debt -- is a huge problem.
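as a back-of-the-envelope illustration of why that divergence matters, here is a small python sketch of compound growth over 40 years; the annual rates are assumed for illustration, not the actual published hepi or cpi figures.

```python
# back-of-the-envelope compounding sketch for the hepi-versus-cpi
# point: even a modest gap in annual growth rates compounds into a
# large divergence over 40 years. rates below are hypothetical.
HEPI_RATE = 0.045   # assumed average annual hepi growth
CPI_RATE = 0.030    # assumed average annual cpi growth
YEARS = 40

hepi_index = (1 + HEPI_RATE) ** YEARS
cpi_index = (1 + CPI_RATE) ** YEARS
print(f"hepi multiple over {YEARS} years: {hepi_index:.1f}x")
print(f"cpi multiple over {YEARS} years:  {cpi_index:.1f}x")
print(f"relative cost growth: {hepi_index / cpi_index:.2f}x")
```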