We receive the message, no doubt correct, that assessment is essential for understanding what our students are learning in our courses and programs. But the array of assessment activities is bewildering: it spans course, program, and institutional levels, and it draws on internal sources such as samples from freshman writing, majors' gateway and capstone courses, and the WPE, as well as on external sources such as the CLA and the NSSE, to name only a few. On top of this, assessment is central to the new streamlined Program Review guidelines and figures in new initiatives such as WRAD and The Learning Habits Study. Yet the relationship among these activities and initiatives is by no means clear. More than one person has pointed out that assessment activities appear highly redundant: many different activities, generating a great deal of work, yet all aimed at measuring the same thing.
Is there a way to make these diverse assessment activities fit together into a coherent conceptual whole? Can we synthesize the elements of this picture? And if so, will conceptual clarity let us streamline the assessment workload? I believe there is a common logical structure that runs through the types and levels of assessment, that connects the various assessment activities, and that can ultimately enable us to reduce redundancy. In this paper, I trace the logical thread connecting goals, student learning outcomes, and rubric design; show its connection to Program Review; and, finally, propose the specification of core competencies as a strategy for reducing redundancy.