1999 Conference on Standards-Based K-12 Education

California State University Northridge



Transcript of Sandra Horn
(edited by the speaker)


Mr. Herr: Good morning. My name is Norm Herr; I'm a professor of science education at Cal State Northridge. It's my pleasure this morning to introduce three speakers on the implementation of standards. The first speaker will be Sandy Horn. Sandy just arrived from Tennessee and will be going back tomorrow for a triathlon, so we need to allow her time to catch her plane. She will be talking about value-added teaching in Tennessee. She has done research at the University of Tennessee and is also a high school teacher. Come on up, Sandy.

Ms. Horn: Hi. Yesterday I did a full day's work as Head Library Media Specialist at South Doyle High School in Knoxville, Tennessee, before catching my flight here. I say that because I want you to understand that the work I do at UT is part of a loop: it is informed by my experience in the classroom, which is informed, in turn, by my experience with value-added assessment. I don't know how many of you have heard of the Tennessee Value-Added Assessment System developed by William Sanders at the University of Tennessee. I have worked with Bill for about the last seven years, since TVAAS was put into the Education Improvement Act in Tennessee as part of the accountability measures that went along with a bunch of extra funding, because, as you know, these days when you get extra money you are expected to be accountable for it. Fortunately, in Tennessee they picked Bill's method.

I'll tell you a little bit about TVAAS, but first I'm going to state some things that may be obvious or self-evident to all of us. The first is "The measure of whether education is successful is whether a student learns." The second is "Parents have a right to expect that their child will progress at a normal rate, AT LEAST, every year." Thirdly, "It is not excessive or unreasonable to expect that a teacher will produce normal academic gain in his or her students over the course of a year."

It should go without saying that educational programs must be able to demonstrate that they produce gains in academic learning of students.

But teachers are evaluated on how they look. Programs are evaluated on how good they feel. Data are met with suspicion. Education cries, "Spare me the facts." And for these reasons it's been a very interesting seven years with TVAAS. TVAAS is a system for estimating the effects of schools, school systems, and teachers on the academic gains of students. It is a statistical mixed-model methodology fitted to a massive, longitudinally merged database of educational data. It's unique in the field of education. You are looking at a woman very much in love with this database. It's currently composed of about 6 million records, longitudinally merged, including annual test scores in five subjects administered to all Tennessee students in grades 2-8, as well as high school subject-area tests, some of which are now online and some of which are in developmental stages. The new tests will be brought online in the next few years.

We keep these data for several years, employing new data to revise past estimates as they become available. All of our estimates are based upon at least 3 years of data and sometimes as many as 5.

In TVAAS, each child serves as his or her own control, which makes it possible to partition educational, socioeconomic, and environmental effects that confounded prior attempts to use data for educational assessment. Conceptually, TVAAS models a growth curve for each student. Although we don't expect these curves to be smooth any more than we expect a child to grow the same amount in height each year, the deviations in a child's growth curve can tell us some things. The deviation in ONE child's growth curve can tell us very little. That child may have had a stomach virus the morning of the test. That child may have had a divorce in the family. That child may have had a wonderful event happen that made learning much easier for him that year. But if you aggregate the deviations in the growth curves of many students over several years, and if those deviations occur when those students happen to be in the classroom of a certain teacher or in a certain school, then we can say something about the educational effects of that particular teacher or school or system.
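
Below is a minimal sketch, in Python with invented data, of the gain-aggregation idea just described: each student's year-to-year gain is compared against an assumed statewide norm gain, and the deviations are averaged by teacher. The actual TVAAS estimates come from the mixed-model methodology fitted to the full longitudinal database, not from the simple averaging shown here; the records, scores, norm value, and field names are all hypothetical.

from collections import defaultdict

# Hypothetical records: one entry per student per year, with the scale score
# and the teacher that student had that year.
records = [
    {"student": "s1", "year": 1998, "score": 640, "teacher": "A"},
    {"student": "s1", "year": 1999, "score": 668, "teacher": "B"},
    {"student": "s2", "year": 1998, "score": 610, "teacher": "A"},
    {"student": "s2", "year": 1999, "score": 630, "teacher": "B"},
    {"student": "s3", "year": 1998, "score": 655, "teacher": "C"},
    {"student": "s3", "year": 1999, "score": 690, "teacher": "C"},
]

NORM_GAIN = 25  # assumed "normal" one-year gain, in scale-score points

# Each child serves as his or her own control: index each student's scores by year.
by_student = defaultdict(dict)
for r in records:
    by_student[r["student"]][r["year"]] = r

# Compute each student's year-to-year gain and its deviation from the norm,
# crediting the deviation to the teacher the student had in the later year.
deviations_by_teacher = defaultdict(list)
for student, years in by_student.items():
    for year in sorted(years):
        prev = years.get(year - 1)
        if prev is None:
            continue
        gain = years[year]["score"] - prev["score"]
        deviations_by_teacher[years[year]["teacher"]].append(gain - NORM_GAIN)

# One deviation says very little; the average over many students (and, in TVAAS,
# several years of data) is what speaks to a teacher's effect.
for teacher, deviations in sorted(deviations_by_teacher.items()):
    print(teacher, round(sum(deviations) / len(deviations), 1))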

Now, I only have a few minutes to talk to you, but I'm going to talk mostly about some of the things we have found out because we have this methodology. I'm going to tell you, first and foremost: it's the teacher. It doesn't matter about a lot of other things. We have done research on classroom heterogeneity--how homogeneous the academic level in the classroom is. We have looked at class size. We have looked at ethnicity and socioeconomic levels, and what matters is the effectiveness of the teacher. Period.

Everything else is trivial in comparison to the teacher effect. I'm not using that term statistically. Even when the effects of the other variables we have studied are not trivial statistically, we have found that they are, consistently, of far less importance than the teacher effect.

Here are things we thought we knew about education, and I'm speaking very broadly here when I use the word "we." The first: "Poor kids are poor students." We've looked at the distribution of gains in schools across the State of Tennessee, and based on this huge database, we do not see any correlation between the socioeconomic level of students and the likelihood that they will achieve normal, above-normal, or below-normal gains. The two are simply not correlated.

Secondly, we thought we knew that "minority kids can't learn." At least that's the unstated assumption of many people, including some teachers who think that if they have poor or minority kids they can't expect much from them. That is simply not the case. Minority status is not correlated with academic gain in Tennessee.

"Special education kids can't learn either." We've found that our special ed kids made gains quite comparable to the other children in the classroom. As a matter of fact, the only real deviation related to academic level we've found is that the higher performing kids don't score as large gains as the lower performing students. We've found that only the most effective teachers can bring those kids along.

We've found that teachers at the median of effectiveness can produce normal gains in most of their kids, but they still fall short in producing gains in their higher-level kids. Another thing we found is that the more effective the teacher, the less likely he or she is to be in a predominantly minority school.

What does this say? It says perhaps it's true what all of those deconstructionists were saying--that we do perpetuate this gap in education by assigning less-effective teachers to those schools. We have the data to back that up in Tennessee, at least in two of our largest metropolitan areas.

We thought we knew that if it feels good, it is good. Right now we are looking at the New American Schools models in effect in Memphis, Tennessee, which is our largest school district. There are 14 different models being tested now. Only one or two are actually effective. You wouldn't know that, and you wouldn't know which ones, if you weren't looking at the student outcomes.

"If it feels good, it is good." Maybe that's why they call it educational theory. (Laughter). I don't know, but a lot of what we learn from our colleges of education and a lot of what we were told when we were being trained taught us to look with suspicion at any kind of scientific data. I hope that that is changing now but I do not see that as fact. Teachers are not taught how to look at data. They don't know how to use student test scores and to find their own student gains. Some of them are puzzled by scatter graphs.

"If it looks like a good teacher, it is a good teacher." Well, in Tennessee many of us are trained by the Madeline Hunter model. That used to be how we were trained--how we provided the proper set and closure and what came in between. But we who deal with value-added assessment do not pretend to know what you should look like when you teach. Personally, I think if you want to stand on your head and whistle Dixie in the corner, if your students learn and consistently learn and produce wonderful gains, I'm going to come learn why from you.

We also thought that good teachers can make up for bad teachers. We know now, through research that Dr. Sanders and June Rivers have conducted, that that is not true. The effects of a bad teacher do not go away over time. Children who have two consistently ineffective teachers will never achieve the potential they would have reached had they been assigned to better teachers.

We also thought that bright kids would get it on their own. They don't. I already told you that. We also thought in the past that high test scores meant a good school. That is not true. In Tennessee we have brain-trust schools. We have Oak Ridge, Tennessee, where the rocket scientists are. In another section of the state, there are enclaves of highly educated professional people who work for other large companies. There was a large difference between the Oak Ridge gains and those of some of the other, comparable systems. In Oak Ridge, the students are producing excellent gains even though Oak Ridge is one of the highest-scoring systems in the state. In other, equally privileged systems, they are just now catching up because they were resting on their genetically endowed laurels. The kids were in the 98th percentile. They were born that way and were economically and educationally advantaged; the schools received them that way. Their kids are now making exceptional gains, too, at least in part because of the information they received from their value-added reports. We mustn't ignore those children. They do not get it on their own at all. It's the teachers.

These are things that you can learn if you want to know them. You can set standards all you want, but unless you actually look at the data, you don't even know whether those are the right standards. You have to find out whether you are making a difference. You need to look at the difference: first, what did you get before the standards? Then, what is the difference after the standards are implemented? There are lots of ways to do that, but one of them should involve objective, statistical measures of student outcomes.
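
As an illustration of that kind of before-and-after look at the data, here is a short Python sketch that compares the mean student gain in the years before an assumed standards-adoption year with the mean gain in the years after. The years, gain values, and cutoff are invented; a real comparison would rest on the kind of multi-year, statewide estimates discussed earlier.

# Hypothetical mean student gains, in scale-score points, by school year.
mean_gain_by_year = {
    1995: 22.0, 1996: 23.5, 1997: 21.8,  # before the standards (assumed)
    1998: 26.1, 1999: 27.4,              # after the standards (assumed)
}
STANDARDS_YEAR = 1998  # assumed first year the standards were in place

before = [g for y, g in mean_gain_by_year.items() if y < STANDARDS_YEAR]
after = [g for y, g in mean_gain_by_year.items() if y >= STANDARDS_YEAR]

print("mean gain before the standards:", round(sum(before) / len(before), 1))
print("mean gain after the standards: ", round(sum(after) / len(after), 1))
print("difference:", round(sum(after) / len(after) - sum(before) / len(before), 1))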

Now why do I say that? Because we don't need snapshots in education. I cannot tell you everything that a student knows on the basis of his or her scholastic test scores, any more than you can tell me everything a student can do based upon what is sometimes termed authentic assessment. We need all of those things. We have to take every bit of reliable and valid information we can gather to compose a hologram, so we can walk around this monster we are dealing with and try to make it into something that can be of service to all of us. Thank you for having me. (Applause)

Mr. Herr: We have a few minutes for some questions so please come up to the microphones up here.

Audience member: I have a question. It was so good to hear you. In our district, socioeconomic status is the last column on the right--an excuse for how the students perform. It's the parents' education level. You are telling us that doesn't matter.

Ms. Horn: Only if you are looking at raw test scores. If you look at how much the students learn, it has no bearing whatsoever.

Audience member: Thank you for saying that.

Audience member: We've had a lot of discussion over the past years in California about what kind of data we should be looking at. We went through norm-referenced tests and criterion-referenced tests. It's hard to get good longitudinal data. What I wanted to ask is: are your data and methodology available in some form--are they on the web, or can you tell us how to look in more detail at the work you've done? It sounds interesting and helpful.

Ms. Horn: We have published quite extensively in several places--there is one book in particular, Jason Millman's "Grading Teachers, Grading Schools," that details the methodology pretty clearly, I think. That's the first place I would send you. We have several articles in the Journal of Personnel Evaluation in Education. The Tennessee data, of course, link student data to teachers, to schools, to systems. We trace students as they move among teachers and schools and systems in Tennessee. We can share general data; we cannot give you specific data. Individual teacher data are entirely a matter of privacy in Tennessee. That's something you have to think about, because in some states all teacher records are public record; in Tennessee, by law, they are a private matter. All of the school and system data are public, and that's available to everyone. By the way, we are very, very open to research projects with responsible people who are interested in learning from the data, because we don't have the personnel or the time to do all of this research ourselves. I encourage you to contact us at the University of Tennessee Value-Added Research and Assessment Center.

Audience member: My question is very simple. I would like to know which data you use. Do you use percentile rankings? Grade equivalents? What data do you use?

Ms. Horn: In the past, our state testing data have come from the Tennessee Comprehensive Assessment Program, from which we used the CTBS/4 component. We now use TerraNova. We take the scale scores, but what we use are the gains that students make from one year to the next.
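
(For illustration only, with invented numbers: a student who scores 652 on the reading scale in fourth grade and 678 in fifth grade has a gain of 678 - 652 = 26 scale-score points, and it is that 26, not the 678 itself, that feeds the teacher, school, and system estimates.)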

Audience member: Thank you.

Audience member: I've been a teacher for 31 years, and what I have seen concerns me in light of what you say. I listened respectfully, and I buy three-quarters of it. But I have seen, because of the pressure put on principals, teachers pressured to manipulate test scores so that their schools look particularly good. There are many ways this is done, and I've seen it at a variety of schools. I'm concerned that when decisions are made simply on the basis of test scores, they do not take this little problem into account. I was wondering how you deal with that.

Ms. Horn: First of all, as specified in the law, Tennessee uses fresh, equivalent, non-redundant tests every year. You can't teach to the test, precisely because everything changes from year to year. We also have ways of detecting footprints; it just doesn't happen that often. Thirdly, we use gains, and because we use gains, everybody can be a winner. We don't have a median line where half the schools are below and half are above. What we have is a line that is the normed gain, and we think every school can achieve that normed gain. Some people have said that expecting teachers to achieve the normed gain year after year is a pretty low standard. Well, it's not. If your school achieved the normed gain every year, we wouldn't be sitting here worrying about standards.

Audience member: Thank you. (Applause)

Audience member: Tell me if you can't understand me; I have laryngitis. First, how do you determine whether your tests are equivalent? We have that issue in Texas and have good reason to believe ours are not equivalent. Number two, have you used this value-added system in a teacher incentive program based on merit? I'm always told there is no way to identify master teachers, and I don't believe that. Three, have you encountered--

Ms. Horn: Wait. Virginia, can we do these one at a time? I won't remember them all. The first, about the equivalency of the tests: when we were using the CTBS/4, that was part of the contract with McGraw-Hill. The original tests were correlated to the curricula in Tennessee with the assistance of teacher input. There were meetings across the state to develop questions, and then they were analyzed according to Item Response Theory by the testing company, along with whatever else they do to ensure validity and reliability. Your second question?

Audience member: I think this is what we did in Texas. There were certain test items that weren't common. That's not enough to determine whether the tests are equivalent. You would need a test-retest comparison of the tests from one year to the next.

Ms. Horn: Well, yes. Let me put it this way: our gain scores across the board show equivalency. As a matter of fact, at one time we did have a real problem with reading one year--a problem that was statewide. We went immediately back to McGraw-Hill and said there is a problem with this test in particular, because we had the data at a statewide level from all the kids, including special ed.

Audience member: It couldn't be true.

Ms. Horn: We knew it couldn't be true. We are keeping an eye on the tests for them.

Mr. Herr: I'm sorry I think we need to move on.

Audience member: That teacher question is very important here in California. Has this system been used to identify master teachers and, in essence, begin merit pay?

Ms. Horn: No, and I'll tell you why. The only people who have access to the teacher data are the teacher and the appropriate administrator. So if we ask for volunteer teachers, we can do that analysis, but we cannot get a sample that is not self-selected. We do have some people who have come into the state who are doing that research, but they are doing it with self-selected teachers. I still think it's something--



(Technical problem)



Contact the organizers

Postal and telephone information:

1999 Conference on Standards-Based K-12 Education

College of Science and Mathematics

California State University Northridge

18111 Nordhoff St.

Northridge CA 91330-8235

Telephone: (Dr. Klein: 818-677-7792)

FAX: 818-677-3634 (Attn: David Klein)

email: david.klein@csun.edu
