Ms. Metzenberg: It gives me great pleasure to introduce Bonnie Grossen.
Ms. Grossen: Thank you. The topic I have today is the Legacy of Project Follow Through.
There are important parallels between the problem Project Follow-Through was designed
to solve and the current problem that the standards are trying to solve. In Project
Follow Through, which began in the '60s, we had set a national goal and tried to
figure out what kind of instructional practices would help us achieve it. What happened
then and in the aftermath is helpful for us to understand where we are today.
In 1967, in President Johnson's administration, the government set a goal to raise
the performance of at-risk learners to mainstream levels of performance. These at-risk
learners, called culturally disadvantaged at that time, lived in pockets of poverty
throughout the country. The government noticed that those children living in poverty
usually achieved at around the 20th percentile and were much lower than mainstream
America. So the goal, and standard, of Project Follow Through was to see if we could
raise the academic performance of these children of poverty -- in the Rio Grande in Texas,
on the Native American reservations, in the inner cities of New York and Los Angeles,
all over the country -- to mainstream levels, to the 50th percentile. So Project
Follow Through was initiated, and sponsors were invited to submit their models, their
ideas for solving this problem, to the government. Through this process, 22 models,
22 designs, were submitted.
One of those designs was Direct Instruction. DI began with a highly effective teacher -- not
a person in political power or an educator in a university, but a preschool teacher
who became widely known because of the dramatic effects he had on the
learning of preschool and very young children. In fact he taught his twins, when
they were only 4 years old, how to do some basic algebraic operations in math. By
showing a videotape of his 4-year-old children doing algebra, he got his first job
working in a preschool project for children of poverty. Other people became associated
with him in figuring out how to get his method of teaching disseminated and replicated
to other teachers, so that others could also be as effective or at least approximate
his effectiveness as a teacher.
So this one dark-horse model entered the competition alongside all the other models,
which had a lot of support from the educational establishment, from the universities,
foundations, and so on.
And over the course of 10 years, from 1967 to 1977, one large-scale evaluation occurred.
This was, by the way, and still is, the largest educational experiment that has ever
been funded in history. And Project Follow Through continued to receive funding until
1995, with a total cost that amounted to about a billion dollars -- a billion real
dollars, counting those 1967 dollars along with the dollars from 1995. It was a huge undertaking.
The evaluation, which alone cost 59 million dollars, came out in 1977. That evaluation
included kids coming in at kindergarten, then going on to first, second, and third
grade. Only those third graders who had spent four years learning in these different
models were included in the evaluation. Most of the models used what today we would
call constructivist practices, which were also the mainstream practices in the '60s.
There was a model called Open Education, which is the earlier incarnation of the
Developmentally Appropriate Practices model promoted today by the National Association
for the Education of Young Children. There was another model developed by High Scope,
called Cognitive Curriculum. And there was a whole language model, in those days
called the language experience approach, where kids learned by hearing stories and
writing their own stories -- the whole language sorts of things we have been doing more recently.
An independent agency came into the schools and collected the data from all these
sites. There were approximately 10,000 kids in this sample. Only the nine most widely
implemented models were included in the final evaluation. Parents, by the way, got
to choose the model their school implemented; that is the only reason the Direct
Instruction model -- the model developed by this highly effective preschool teacher --
was the most widely implemented. The results came out in 1977 in the form
of a number of comparisons. First, they compared the schools that were using the
new Follow-Through-sponsored model with traditional schools -- schools that were
nearby and had similar demographics and historically a similar performance profile.
However, many model sponsors complained that the comparison schools were not as low
as the Follow Through sites. The poorest schools generally had some Follow Through
model in place, so that left only higher achieving schools for the comparison. So
many people criticized these comparisons because the schools were not equal.
This chart shows what percentage of comparisons were positive versus negative. The
evaluators looked at basic skills, cognitive skills (the ability to learn how to
learn) and self-esteem. The models on the right are the ones that actually had self-esteem
as a very important goal. And the fact that their bars all go below zero indicates
they had far more negative than positive outcomes compared with the other models.
It is ironic that self-esteem actually decreased in the schools
trying to develop self-esteem.
The Direct Instruction model had more positive outcomes than any of the other models.
You can see this in all the areas.
Go to the next slide now.
Here the evaluators compared the models with each other using a standard achievement
test. We are now comparing the models to a standard of performance, the 50th percentile,
as opposed to comparing them to other schools equal in demographics and so forth.
The goal was to achieve a 50th percentile with these children from low income neighborhoods.
The base line bar shows the 20th percentile, where kids normally have achieved in
these neighborhoods. You can see the Direct Instruction model raised their mean performance
to very close to the goal set by the federal government.
Some of our models on the right, our constructivist models, resulted in lower scores
than what had been achieved even with traditional instruction. What happened? We
spent more money to solve the problem of low achievement in low-income neighborhoods
than we have spent on any other problem in the history of education; we found an
answer, but you have not heard of these results. The results were swept under
the carpet, you might say. The educational establishment, of course, was not pleased
with the results. They had been promoting the models that had negative effects on
at-risk children. These folks carried out a number of their own reanalyses of the
data, but no matter who did the analysis, they always came out with a similar pattern
of results. No matter how they looked at the data, Direct Instruction came out
the best. The Ford Foundation hired a group of professors who published a critique
in Harvard Educational Review. And the essence of that critique was that our goal
in education was really not academic achievement. There are other goals that are
more important, and we should not disseminate this model to the field. Consequently,
results of this large-scale, very expensive evaluation have gone largely unnoticed
and unattended to.
We were basically comparing a very systematic, explicit direct approach to a large
number of approaches that used what we called child-centered methods -- letting the
children discover and develop their knowledge as they go. One of the things we should have
learned is that those approaches don't work, for those at-risk kids at least. And
if anything, we should put more effort into investigating other forms of direct systematic
explicit instruction to help us achieve these goals or possibly even do better.
The critics said that because this was a standardized measure, multiple choice and
so on, you cannot measure higher level thinking. You can only measure basic skills
and who cares about that? But the test contained some subscales in mathematics, which
are quite interesting.
The math scale was broken down into computation, problem solving, conceptual knowledge
(math vocabulary concepts) and total math score. And you can see, of course, the
Direct Instruction kids not only knew more math facts and were better at computing,
but they also had a higher problem solving score than any other group. Of course,
the whole language group was focused entirely on reading and didn't spend much time
teaching math. But the other groups got even lower scores with their child-centered
approaches than what was normally achieved in those schools.
The Follow Through results generally had no impact on education. If you remember
far back enough to the late 70s and early 80s, you will remember that we went through
a brief period of looking at effective teaching practices and effective schools,
describing the characteristics of those effective teachers and schools. And there
was talk about direct instruction as teacher-directed instruction in general. Then
in the late '80s and '90s we were suddenly swept into a new era of child-centered
practices by "new research on how children learn." What happened in the
late 80's and early 90's to bring us full circle back to the models here that showed
detrimental effects for kids? Our leading organizations --the National Council of
Teachers of English, the International Reading Association, the National Association
for the Education of Young Children, the National Council of Teachers of Mathematics
-- organized their agenda to promote the old constructivist practices far and wide
among all the teachers coming to their conferences and so forth. People need to understand
that these are not particularly data-based organizations. People think of organizations
that have the word "national" or "international" as the first
word in their name as organizations we can trust, much as we trust the American Medical
Association and so on, because we believe that they represent national interests,
presenting trustworthy, unbiased information about research and best practice. However,
education does not have a Food and Drug Administration or any kind of agency to check
the validity or truth of the recommendations for practice disseminated by these organizations.
And that has been a big problem for us in education.
So let's look now at the legacy of Project Follow Through and what happened to the
Direct Instruction model sponsored by the University of Oregon. We have expanded
Direct Instruction to the teaching of challenging higher level content, teaching
precise habits of thought in science and math, geometry and algebra, teaching even
inquiry skills with direct instruction. It never made sense to me to require students
to use their inquiry skills to figure out how to do inquiry when they didn't have
any inquiry skills yet. Using inquiry methods to teach the skills of inquiry leaves
out the children who don't have inquiry skills, the ones who most need to learn them.
We created videodisc programs so people would not even have to follow the script,
believing that the script was one of the major objections that people had to the
model. But even without the scripts, we found the videodisc programs were rejected.
It was the "precise habits of thinking" that the educational establishment
really objected to.
The model set up scripts so that it was possible to train and coach teachers. Originally
the script was thought of as a scaffold for teacher training. One of the most important
features of effectiveness in the model was to keep the pace moving quickly enough.
If you as a coach did not know what was going to happen next in the lesson, it would
be impossible for you to speed the pace of the lesson. The scripts were there to
help the teacher get the lesson moving and focus on the precise kinds of examples
and so forth that the kids needed to really understand the concepts and discriminations.
But people were offended by, or had a hard time understanding, the utility of those scripts.
I want to close with a specific example of the legacy of Project Follow Through in
California middle schools, which are now inheriting all those kids who don't read
or do math well. In the last 10 to 15 years, we at the University of Oregon have
been doing a lot of work with middle school interventions, anticipating that there
would be a crisis and a need. And so two years ago, with a federal grant, I had some
money to try to recruit a middle school to demonstrate what we could do for these
kids that are sinking. And in a very low-income neighborhood in Sacramento, we found
a school that was much like the Titanic, ready to go down. To continue that metaphor
from Janet Nicholas this morning, the staff were already singing "Nearer my
God to Thee." 60% of the kids were from families on welfare, and the population
was culturally very diverse: more than half of the students were English language
learners. I approached the teachers with the idea of implementing our middle school
interventions. We wanted to implement everything we had in one site to see the effect
that a combined treatment would have on these at-risk middle schoolers. Well, these
teachers were smart enough to realize that they were on the Titanic, and jumped into
my rescue boat.
We began the first year of implementation with an agreed-upon schoolwide focus on
reading and pilots in the other subject areas. Then at the last minute, the day before
school started, the math teachers decided that they wanted to do more than just a
pilot. For all the classes below prealgebra and algebra, they wanted to use the Direct
Instruction program, Connecting Math Concepts. So we implemented our DI programs
with only the lower-performing children. The teachers kept the algebra and prealgebra
classes in the regular instruction they had been using. Eight percent of the students
were placed in algebra and 20% in prealgebra. We had to place the remaining lower-performing
students in two levels. The kids who knew nothing about fractions, could not borrow,
did not know their multiplication facts, and for the most part could not even tell
time were placed in one group that comprised 40% of our school population. The other,
slightly higher level included kids who knew a little bit about fractions but didn't
know much about multiplication or long division; this group made up 22% of the population.
We looked at our SAT-9 scores from last spring, broken down by mean score for each
of the different levels. The algebra students had a mean percentile score of the
28th percentile on the SAT-9. Our kids in the bottom group, the ones who started out
unable to do fractions, borrowing, or carrying, and who could not even tell time,
also achieved a mean score at the 28th percentile, the same mean score the algebra
students achieved. The kids in our next level got a mean score of the 36th percentile,
better than the algebra students. The teachers of the algebra and prealgebra classes
were doing traditional teacher-directed instruction. This model of teacher-directed
instruction differs from the University of Oregon Direct Instruction model in several
important ways. The U of O DI model is highly engineered. All the examples are carefully
thought out to prevent misconceptions; they are carefully designed to clearly communicate
an important discrimination or concept. Everything the teacher and the kids need
to really learn the concept is specified and in place for the teacher to present
and for the kids to work through. The power of a well-designed sequence of
examples is something that is not yet well understood in education. The more popular
belief is that instruction must be delivered without such planning, with more spontaneity.
Why is it that everyone seems to believe that to be a good teacher, all you need
to do is love to teach, yet no one believes that to be a good surgeon, all you need
to do is love to cut?
We have recently started training teachers using the low-income Sacramento school
as a training center. Teachers come in to learn how to do the instruction by working
with our teachers in the classrooms, seeing how the kids respond. And I think this
has been one of the most positive experiences of my career. Teachers really do see
that highly planned lessons work, how much easier it is, and how much more rewarding
it is when the kids are successful. People don't believe that the kids will respond
to the highly planned lessons the way that they actually do respond. It is something
that people can't believe when they just look at the scripted lesson, or read or
hear about the positive effects on scores and learning.
One teacher, as she was leaving a training session, asked with concern, "How
long do you plan to keep this program?" I was surprised that she seemed to believe
that the rapid succession of reforms that come through education are something that
the schools plan for, that it is a school's intention to do something for just a
little while. She was assuming that the pendulum must swing and that everything will
go away in a short time, even effective practices. She is probably right. The only
way to stop the pendulum from swinging away from effective practices is to have standards
and assessments of those standards that let us know when we are doing better and
sound an alarm when we are doing worse. Thank you. (Applause).
Ms. Metzenberg: We have time for questions, thank you. Please come to the microphone.
Audience member: A question for you, Dr. Grossen. Wesley Elementary in Houston
has been an avid follower of Direct Instruction for almost 20 years now. The biggest
concern is that the TAAS has very low standards, and during the past year the district
introduced the Stanford 9 to provide a more challenging measure of competence.
Most of the children at Wesley score around the 85th or 87th percentile in first grade.
Yet when we look at similar test scores several years ago, by the time each student
gets to 4th or 5th grade they have dropped off in performance. What can we do to
maintain those higher levels of performance?
Ms. Grossen: Two things. First, we need to realize that good instruction is not like
an aspirin that you can take and then you're better. Good instruction is more like
a regimen of good diet and good exercise. You can't expect that if you stop exercising
and eating a good diet you will stay slim, trim, and fit. Good instruction
is necessary to sustain your learning growth over a long period of time. It's
not just a matter of phonemic awareness in kindergarten, after which you're fixed and
won't have to worry about anything related to reading.
And the second part I think has to do with characteristics of low-income family environments.
Wesley serves an at-risk, low-income community. A lot of more recent research shows
that these kids have a lot less exposure to vocabulary and language in their homes.
If not less vocabulary, then certainly less sophisticated vocabulary. Compensating
for the more limited exposure they have at home requires a much more intensive
instructional program in comprehension and vocabulary. We have to keep working to
find ways to somehow compensate in the curriculum for that disadvantage. The reason
students can score high in first grade and then lower in 4th grade has to do with
the change in the content of the tests. Fourth-grade-level tests are more sensitive
to deficient vocabularies than are first-grade-level tests. We have not solved all
the problems in education. At the University of Oregon we are constantly reevaluating
and fixing the design of the programs to serve all needs and we have done everything
we could to share our findings with others and to encourage others to help solve
these problems as well. We are pleased when other groups, publishers and so forth,
can design programs that compete with the quality of the ones that we've developed.
Audience member: I had heard about this program, but I had never seen the actual
graphs that you have up there. And I just wondered, what can we do to publicize it
substantially beyond the point that it has been publicized here? At least one of
the things we can do is be sure that it gets on the web.
Ms. Grossen: Yes, well, we have it on the web: http://darkwing.uoregon.edu/~adiep/
You will also find it if you search for "Project Follow Through."
Ms. Metzenberg: Yeah, but I think that's a good point. What else could be done? And
she mentioned sharing the findings with others.
Ms. Grossen: Yes. And I think a lot of people are helping -- more and more people
have heard about Project Follow-Through, now more so than 10 years ago. And even
though it is old news -- it is important news to help us judge the direction we're
going in today. Anyone can print the information off the web and share it with their
friends and colleagues. The Association for Direct Instruction has published it in
its journal, Effective School Practices; other organizations have picked up the
story and provided short news briefs and so forth -- in the Washington Post recently,
for example, and in other organizational newsletters. But it takes a lot to get everyone
to know about it. Anything people in this audience can do will help.
Ms. Metzenberg: Any other questions? Thank you so much. (Applause)