PROVOST’S PRELIMINARY VIEWS ON ASSESSMENT

 

 

Background:

 

CSUN has a public history of commitment to assessment. The campus announced its Policy on Assessment in 1995, and the WASC Report of 2000 described how that policy was being implemented across the campus. Indeed, that Report was entitled Becoming a Learning Centered University: Achievement, Technology, and Assessment. The Report implied that becoming (more) learning-centered depended, in part, on the University's capacity to reflect on how well students learn what is taught. In fact, since the mid-1990s, divisions like Undergraduate Studies have sponsored workshops on related topics, and many departments have made assessment part of how they decide about changes.

 

Relevance:

 

All of this, of course, is not surprising. In the late 1990s WASC began to encourage universities to tailor their accreditation reports to their particular goals, missions, contexts, and needs rather than simply to meet formulaic standards. Since then, university-accreditation visitors have insisted on proof that local objectives are, indeed, assessed locally. This emphasis coincides with the increasingly common requirement by disciplinary accrediting agencies that programs build into themselves ways of evaluating the effectiveness of pedagogy, curriculum, and student services. Certainly, assessment has been part of the culture of teacher education and "waiver" programs for well over a decade, too. And the CSU featured a commitment to assessment in Cornerstones in the 1990s. While the CSU system has focused central reporting on numerical indicators of persistence, graduation, and the like, it still expects campus programs, and their faculty, to develop assessment tools and results. Finally, beyond all this, there is an intrinsic reason why assessment is important: it provides evidence for faculty to judge whether results align with goals.

 

Operationalizing Assessment:

 

The campus Policy calls for assessment to inform decisions about curriculum changes and program reviews. By 2005-06, we should make sure that, in fact, faculty and administrative decisions about both major curricular changes and program reviews use assessment results. Further, both curricula and programs depend on who the faculty are. As a result, beginning in 2006-07, requests to hire should include a summary of the pertinent goals, objectives, and assessment evidence.

 

Now, assessment is not the only thing that drives curriculum, program review, and hiring. Typically, an institution relies on scans of how disciplines and employment prospects change. Also, it is not uncommon for curricular changes to capitalize on the faculty's particular expertise in ways that assessment and scans cannot capture. But assessment must be a major driver of why we decide either to stay the course or to change. In preparation for the next WASC visit, we need to make sure that assessment is doing what, in 2000, we said it would do: affect strategic decisions.

 

Essential to an effective assessment program is local (that is, peer) control of the goals, processes, and reviews. The assessment coordinators (liaisons) from the departments can nonetheless function as an overall team of advisors. They can provide formative advice about the methods that a specific department develops. Once those processes are in place, the coordinators are useful guides to the logic and utility of the methods and results. If the coordinators are not called on to do these things, then the assessment methods themselves are reviewed only once every five years, with the entire program.

 

What to Assess and How to Assess:

 

The literature on assessment, nationally and in the CSU, is rich and diverse; it cannot be summarized here. But the following are some suggestions.

 

An assessment plan can include indirect measures, such as surveys of students' and/or employers' satisfaction with what is learned. But direct measures are more pertinent, albeit harder to develop. What are they? They are measures (tools other than grades and other indicators of individual students' success in class) that suggest how well a program itself is actualizing its goals, allowing for elements it minimally controls, like the budget and the building plan.

 

A program should have goals that the peers agree on and that reflect, in some way, the campus’s and the system’s goals. For instance, Cornerstones indicated thirteen goals for learning in the CSU; to these CSUN added several more, including good work habits and understanding the academic community. One would expect a department’s goals to cohere generally with these.

 

But how does one assess students' learning in such a way as to learn something about the program? There are many ways, and they do not have to yield quantifiable data. Ultimately, the results are not answers but rather evidence for the faculty's judgment. One can deploy portfolios that sample students' work over time in a consistent way; juried performances can work similarly. One can compare the knowledge base that students reveal in a gateway course with what they show in capstone experiences. Throughout courses in a curriculum, one can embed questions that, while pertinent to a course, yield answers that can be extracted and then weighed against the entire program's goals. Or, of course, one can use either standardized tests or locally developed ones that align with a program's objectives.

 

Finally, it is necessary that a department's faculty learn about and act on the results. The results belong to them, not to the dean, the chair, or the provost. An annual report need not be a tome; usually, a concise summary of the goals that were surveyed, the measures that were used, the results that were achieved, and the changes that were recommended suffices. Nor is it required that all goals and all measures be cycled through annually; departments, after all, house many programs. The cycle must be frequent enough, though, to match the timeline for significant changes, and, again, it must involve as many peers as possible in reviewing the results.