 

Writing Assessment:
A Position Statement


Prepared by the Conference on College Composition and Communication

March 1995

 

Background

In 1993, the CCCC Executive Committee charged the CCCC Committee on Assessment with developing an official position statement on assessment. Prior to that time, members of CCCC had expressed keen interest in having a document available that would help them explain writing assessment to colleagues and administrators and secure the best assessment options for students.

Beginning in 1990 at the NCTE Annual Convention in Atlanta, Georgia, open forums were held at both NCTE and CCCC conventions to discuss the possibility of a position statement: its nature, forms, and the philosophies and practices it might espouse. At these forums, at regular meetings, and through correspondence, over one hundred people helped develop the current document.

An initial draft of the statement was submitted to the CCCC Executive Committee at its March 1994 meeting, where it was approved in substance. The Executive Committee also reviewed a revised statement at its November 1994 meeting. An announcement in the February 1995 issue of College Composition and Communication invited all CCCC members to obtain a draft of the statement and to submit their responses to the Assessment Committee. Copies of the draft statement were mailed to all 1995 CCCC convention preregistrants, and the final draft was presented in a forum at the 1995 CCCC Convention in Washington, DC. Changes based on discussions at that session, and at a later workshop, were incorporated into the position statement, which was subsequently approved for publication by the CCCC Executive Committee.

 

Introduction

More than many issues within the field of composition studies, writing assessment evokes strong passions. It can be used for a variety of appropriate purposes, both inside the classroom and outside: providing assistance to students; awarding a grade; placing students in appropriate courses; allowing them to exit a course or sequence of courses; and certifying proficiency, to name some of the more obvious. But writing assessment can be abused as well: used to exploit graduate students, for instance, or to reward or punish faculty members. We begin our position statement, therefore, with a foundational claim upon which all else is built: it is axiomatic that in all situations calling for writing assessment in both two-year and four-year institutions, the primary purpose of the specific assessment should govern its design, its implementation, and the generation and dissemination of its results.

It is also axiomatic that in spite of the diverse uses to which writing assessment is put, the general principles undergirding writing assessment are similar:

Assessments of written literacy should be designed and evaluated by well-informed current or future teachers of the students being assessed, for purposes clearly understood by all the participants; should elicit from student writers a variety of pieces, preferably over a period of time; should encourage and reinforce good teaching practices; and should be solidly grounded in the latest research on language learning.

These assumptions are explained fully in the first section below; after that, we list the rights and responsibilities generated by these assumptions; and in the third section we provide selected references that furnish a point of departure for literature in the discipline.

 

Assumptions

All writing assessments--and thus all policy statements about writing assessment--make assumptions about the nature of what is being assessed. Our assumptions include the following.

FIRST, language is always learned and used most effectively in environments where it accomplishes something the user wants to accomplish for particular listeners or readers within that environment. The assessment of written literacy must strive to set up writing tasks, therefore, that identify purposes appropriate to and appealing to the particular students being tested. Additionally, assessment must be contextualized in terms of why, where, and for what purpose it is being undertaken; this context must also be clear to the students being assessed and to all others (i.e., stakeholders/participants) involved.

Accordingly, there is no single test that can be used in all environments for all purposes, and the best "test" for any group of students may well be locally designed. The definition of "local" is also contextual; schools with common goals, similar student populations, and shared teaching philosophies and outcomes might well form consortia for the design, implementation, and evaluation of assessment instruments even though the schools themselves are geographically separated from each other.

SECOND, language by definition is social. Assessment which isolates students and forbids discussion and feedback from others conflicts with current cognitive and psychological research about language use and the benefits of social interaction during the writing process; it also is out of step with much classroom practice.

THIRD, reading--and thus, evaluation, since it is a variety of reading--is as socially contextualized as all other forms of language use. What any reader draws out of a particular text and uses as a basis of evaluation is dependent upon how that reader's own language use has been shaped and what his or her specific purpose for reading is. It seems appropriate, therefore, to recognize the individual writing program, institution, consortium, and so forth as a community of interpreters who can function fairly--that is, assess fairly--with knowledge of that community.

FOURTH, any individual's writing "ability" is a sum of a variety of skills employed in a diversity of contexts, and individual ability fluctuates unevenly among these varieties. Consequently, one piece of writing--even if it is generated under the most desirable conditions--can never serve as an indicator of overall literacy, particularly for high stakes decisions. Ideally, such literacy must be assessed by more than one piece of writing, in more than one genre, written on different occasions, for different audiences, and evaluated by multiple readers. This realization has led many institutions and programs across the country to use portfolio assessment.

FIFTH, writing assessment is useful primarily as a means of improving learning. Both teachers and students must have access to the results in order to be able to use them to revise existing curricula and/or plan programs for individual students. And, obviously, if results are to be used to improve the teaching-learning environment, human and financial resources for the implementation of improvements must be in place in advance of the assessment. If resources are not available, institutions should postpone these types of assessment until they are. Furthermore, when assessment is being conducted solely for program evaluation, not all students need to be tested, since a representative group can provide the desired results. Neither should faculty merit increases hinge on their students' performance on any test.

SIXTH, assessment tends to drive pedagogy. Assessment thus must demonstrate "systemic validity": it must encourage classroom practices that harmonize with what practice and research have demonstrated to be effective ways of teaching writing and of becoming a writer. What is easiest to measure--often by means of a multiple choice test--may correspond least to good writing, and that in part is the point: choosing a correct response from a set of possible answers is not composing. Just as important, the mere fact that students are asked to write does not mean that the "assessment instrument" is a "good" one. Essay tests that ask students to form and articulate opinions about some important issue, for instance, without time to reflect, to talk to others, to read on the subject, to revise, and so forth--that is, without taking these conditions into account through either appropriate classroom practice or the assessment process itself--encourage distorted notions of what writing is. They also encourage poor teaching and little learning. Even teachers who recognize and employ the methods used by real writers in working with students can find their best efforts undercut by assessments such as these.

SEVENTH, standardized tests, usually developed by large testing organizations, tend to be used for accountability purposes and, when used to make statements about student learning, disproportionately misrepresent the skills and abilities of students of color. This imbalance tends to decrease when tests are directly related to specific contexts and purposes, in contrast to tests that purport to differentiate between "good" and "bad" writing in a general sense. Furthermore, standardized tests tend to focus on readily accessed features of the language--on grammatical correctness and stylistic choice--and on error, on what is wrong rather than on the appropriate rhetorical choices that have been made. Consequently, the outcome of such assessments is negative: students are said to demonstrate what they do "wrong" with language rather than what they do well.

EIGHTH, the means used to test students' writing ability shapes what they, too, consider writing to be. If students are asked to produce "good" writing within a given period of time, they often conclude that all good writing is generated within those constraints. If students are asked to select--in a multiple choice format--the best grammatical and stylistic choices, they will conclude that good writing is "correct" writing. They will see writing erroneously, as the avoidance of error; they will think that grammar and style exist apart from overall purpose and discourse design.

NINTH, financial resources available for designing and implementing assessment instruments should be used for that purpose and not to pay for assessment instruments outside the context within which they are used. Large amounts of money are currently spent on assessments that have little pedagogical value for students or teachers. However, money spent to compensate teachers for involvement in assessment is also money spent on faculty development and curriculum reform since inevitably both occur when teachers begin to discuss assessment which relates directly to their classrooms and to their students.

TENTH, and finally, there is a large and growing body of research on language learning, language use, and language assessment that must be used to improve assessment on a systematic and regular basis. Our assumptions are based on this scholarship. Anyone charged with the responsibility of designing an assessment program must be cognizant of this body of research and must stay abreast of developments in the field. Thus, assessment programs must always be under review and subject to change by well-informed faculty, administrators, and legislators.

 

Rights and Responsibilities

Students should:

1. demonstrate their accomplishment and/or development in writing by means of composing, preferably in more than one sample written on more than one occasion, with sufficient time to plan, draft, rewrite, and edit each product or performance;

2. write on prompts developed from the curriculum and grounded in "real-world" practice;

3. be informed about the purposes of the assessment they are writing for, the ways the results will be used, and avenues of appeal;

4. have their writing evaluated by more than one reader, particularly in "high stakes" situations (e.g., involving major institutional consequences such as getting credit for a course, moving from one context to another, or graduating from college); and

5. receive response from readers intended to help them improve as writers attempting to reach multiple kinds of audiences.

Faculty should:

1. play key roles in the design of writing assessments, including creating writing tasks and scoring guides, for which they should receive support in the form of honoraria and/or release time; and should appreciate and be responsive to the idea that assessment tasks and procedures must be sensitive to cultural, racial, class, and gender differences, and to disabilities, and must be valid for and not penalize any group of students;

2. participate in the readings and evaluations of student work, supported by honoraria and/or release time;

3. assure that assessment measures and supports what is taught in the classroom;

4. make themselves aware of the difficulty of constructing fair and motivating prompts for writing, the need for field testing and revising of prompts, the range of appropriate and inappropriate uses of various kinds of writing assessments, and the norming, reliability, and validity standards employed by internal and external test-makers, as well as share their understanding of these issues with administrators and legislators;

5. help students to prepare for writing assessments and to interpret assessment results in ways that are meaningful to students;

6. use results from writing assessments to review and (when necessary) to revise curriculum;

7. encourage policymakers to take a more qualitative view toward assessment, promoting the use of multiple measures, infrequent large-scale assessment, and large-scale assessment by sampling of a population rather than by individual work whenever appropriate; and

8. continue conducting research on writing assessment, particularly as it is used to help students learn and to understand what they have achieved.

Administrators and Higher Education Governing Boards should:

1. educate themselves, and consult with rhetoricians and composition specialists teaching at their own institutions, about the most recent research on teaching and assessing writing and how it relates to their particular environment and to already established programs and procedures, understanding that student learning is generally best demonstrated by performances assessed over time and sponsored by all faculty members, not just those in English;

2. announce to stakeholders the purposes of all assessments, the results to be obtained, and the ways that results will be used;

3. assure that the assessments serve the needs of students, not just the needs of an institution, and that resources for necessary courses linked to the assessments are therefore available before the assessments are mandated;

4. assure opportunities for teachers to come together to discuss all aspects of assessments: the design of the instruments; the standards to be employed; the interpretation of the results; possible changes in curriculum suggested by the process and results;

5. assure that all decisions are made by more than one reader; and

6. not use any assessment results as the primary basis for evaluating the performance of, or rewards due, a teacher; they should recognize that student learning is influenced by many factors--such as cognitive development, personality type, personal motivation, physical and psychological health, emotional upheavals, socioeconomic background, and family successes and difficulties--which are neither taught in the classroom nor appropriately measured by writing assessment.

Legislators should:

1. not mandate a specific instrument (test) for use in any assessment; although they may choose to answer their responsibility to the public by mandating assessment in general or at specific points in student careers, they should allow professional educators to choose the types and ranges of assessments that reflect the educational goals of their curricula and the nature of the student populations they serve;

2. understand that mandating assessments also means providing funding to underwrite those assessments, including resources to assist students and to bring teachers together to design and implement assessments, to review curriculum, and to amend the assessment and/or curriculum when necessary;

3. become knowledgeable about writing assessment issues, particularly by consulting with rhetoricians and composition specialists engaged in teaching, on the most recent research on the teaching of writing and assessment;

4. understand that different purposes require different assessments and that qualitative forms of assessment can be more powerful and meaningful for some purposes than quantitative measures are, and that assessment is a means to help students learn better, not a way of unfairly comparing student populations, teachers, or schools;

5. include teachers in the drafting of legislation concerning assessments; and

6. recognize that legislation needs to be reviewed continually for possible improvement in light of actual results and ongoing developments in writing assessment theory and research.

 

Assessment of Writing

Assessment of writing is a legitimate undertaking. But by its very nature it is a complex task, involving two competing tendencies: first, the impulse to measure writing as a general construct; and second, the impulse to measure writing as a contextualized, site- and genre-specific ability. There are times when the possibility of re-creating or simulating a context (as in the case of assessment for placement, for instance) is limited. Even in such cases, however, assessment--when conducted sensitively and purposefully--can have a positive impact on teaching, learning, curricular design, and student attitudes. Writing assessment can serve to inform both the individual and the public about the achievements of students and the effectiveness of teaching. On the other hand, poorly designed and poorly implemented assessments can be enormously harmful because of the power of language: personally, for our students as human beings; and academically, for our students as learners, since learning is mediated through language.

Students who take pleasure and pride in using written language effectively are increasingly valuable in a world in which communication across space and a variety of cultures has become routine. 

Writing assessment that alienates students from writing is counterproductive, and writing assessment that fails to take an accurate and valid measure of their writing is even more so. But writing assessment that encourages students to improve their facility with the written word, to appreciate their power with that word and the responsibilities that accompany such power, and that salutes students' achievements as well as guides them, should serve as a crucially important educational force.


This position statement may be printed, copied, and disseminated
without permission from NCTE.


 
 
 