
What is “Triangulation” in the Assessment Context?

Assessment continues to play an essential role in student learning. Yet it struggles to keep pace with a fast-changing world. The internet has placed knowledge at students’ fingertips, so the focus of curriculum, everywhere, is shifting from an emphasis on factual knowledge to a balance of essential skills, competencies, and conceptual understanding. These changes in the focus of learning demand a rethinking of assessment. It can no longer focus primarily on the memorization of content; instead, it must shift to an appropriate balance of what I call “write, do, and say evidence” — in other words, triangulation. Many policy documents now require that teachers “triangulate assessment” by including observations, conversations, and products in their assessment routines.

Figure: Triangle of Assessment of Learning (from VOCAL 101, Section 3, Lecture 1)

Yet for many teachers and parents, assessment is synonymous with gathering written evidence of student learning. Actually observing students while they demonstrate skills, engaging them in conversation to reveal their understanding, and then digitally capturing what the teacher sees and hears — these approaches to assessment are the exception rather than the rule in most of the schools I visit.

Think about how pervasive assessment through observation is in our lives outside of school. Maybe you play golf? If so, I’ll bet the lessons you’ve taken have involved video analysis of your swing.

Been to a hockey, football, or soccer game lately? Rest assured the coaches and players spent hours viewing video footage of previous games to help improve the team’s play. Even The Royal Conservatory of Music now makes extensive use of student video performance, both to instruct and to celebrate excellence. These examples highlight the importance of using video to observe and digitally capture a student’s performance, and then providing feedback and resources to improve subsequent performance.

And what about conversation? When we listen to students engaged in conversation related to their learning, we literally “see into their thinking.” We learn about their level of understanding, their misconceptions, their tolerance for other perspectives and points of view, their ability to listen to and build on the ideas of others, their ability to answer questions… I could go on. And if we videotape these conversations, then we have rich evidence of learning as the basis for providing feedback and self-assessment that will lead to improvement.

Gathering, analyzing and sharing digital evidence of observations and conversations is, I believe, the future of educational assessment. From my perspective, after almost forty years in education, I haven’t seen another innovation that holds greater promise for improving the learning of all students.

Understandably, a change of this magnitude causes anxiety. Here is a sample of the questions I hear most frequently from teachers:

  • What are the benefits of observation and conversation?
  • Aren’t observation and conversation more subjective than written assessment?
  • What do observation and conversation look like in subjects like math and science?
  • How do I involve students in these kinds of assessment?
  • How do I find the time to observe and talk to all my students?
  • How do I convince parents to accept assessment based on observation and conversation?
  • How many observations and conversations do I need for each student?

Let’s examine two of these questions:

“WHAT ARE THE BENEFITS OF USING CONVERSATION AND OBSERVATION TO ASSESS STUDENT LEARNING?”

There are two main benefits, and both relate to the quality of assessment evidence we seek to gather:

  1. First, many of the essential skills that we teach can only be assessed appropriately by either talking to students or observing them as they demonstrate their learning.
  2. Second, since many students struggle with written communication, if we rely on it as our primary method of assessment, we run the risk of drawing erroneous conclusions about student learning.

Both reasons are concerned with the concept of validity. Validity, at a simple level, answers the question, “Does this assessment task actually provide evidence of the learning I’m looking for?”

On to the first point. Suppose a learning outcome or expectation states:

“Students will investigate the interdependence of plants and animals within specific habitats and communities.”

A written test or report is of questionable validity. Instead, if the teacher creates an authentic task that provides students with a hands-on opportunity to investigate the interdependence of plants and animals within specific habitats and communities, she can then talk to them and observe them as they demonstrate, in the moment, skills of scientific investigation. And with today’s tablets and smartphones, she can record a sample of each child’s actions and comments, or indeed the student can capture it themselves.

The second point identifies a common assessment scenario:

Jack, a student in Grade 4, has just completed a written test about rocks and minerals. Jack failed the test. Upon consulting Jack’s Student Record, Mr. Brooks, his teacher, is reminded that Jack is reading at a grade 1 level. And yes, there were words on the test such as “igneous, sedimentary, and metamorphic”. So in reality, this wasn’t a Science test for Jack – it was a Reading test.

Concerned about the validity of the test score, Mr. Brooks sits down with Jack and says, “Jack, I’m pretty sure you know plenty about rocks and minerals, especially since you were so interested when we went on the field trip to the quarry. So let’s have a chat, and you can show and tell me what you know.”

Using rock and mineral samples, Mr. Brooks helps Jack recall his fascination during the field trip, as well as during class when students were examining the properties of similar samples. Throughout the chat, Mr. Brooks or Jack captures evidence of Jack’s knowledge and understanding with the video camera feature of his tablet. When they are finished, Mr. Brooks or Jack can upload the video to Jack’s FreshGrade portfolio and write comments about the experience. The notes can be private or shared with the student and parents.
By taking a balanced approach to assessment – triangulating evidence – Mr. Brooks becomes acutely aware of the problems and limitations of relying too heavily upon written evidence to assess Jack’s learning.

Here’s a second question from the earlier list:

“AREN’T OBSERVATION AND CONVERSATION MORE SUBJECTIVE THAN WRITTEN ASSESSMENT?”

First of all, it must be said that measurement error occurs every time we assess student learning. We can never have 100% confidence in the conclusions we reach about learning. Ruth Sutton put it best in 1991 when she wrote:

“It is worth noting, right from the start, that assessment is a human process, conducted by and with human beings, and subject inevitably to the frailties of human judgment. However crisp and objective we might try to make it, and however neatly quantifiable may be our results, assessment is closer to an art than a science. It is, after all, an exercise in human communication.” (Assessment: A Framework for Teachers, Ruth Sutton, 1991)

Ruth is talking about “reliability,” which is a measure of the confidence we have in the data we are gathering. So, as I’ve said, no assessment is 100% reliable. But reliability is not always of critical importance. The purpose of formative assessment (which includes assessment for learning and assessment as learning) is to improve learning, NOT to evaluate the quality of learning. For formative assessment to be truly effective, it needs to be responsive to the differing needs of students. And as soon as we begin to differentiate formative assessment to further the learning of students who have differing needs, reliability goes out the window. And that’s just fine!

There are times, of course, when reliability is important. When our assessment purpose is summative – end-of-unit, end-of-term, end-of-course — we need to be seriously concerned with reliability. Why? Because we must have confidence in the judgments we make about whether students are proficient with respect to essential knowledge, understanding, and skills.

Enter FreshGrade, along with access to a smartphone or tablet. Together they provide a solution to the problem that has been holding educators back for years: the poor reliability of their observations. Today, teachers (and their students) can quickly and easily create a permanent “in the moment” digital record of student learning that can just as simply be stored, reviewed, evaluated, and shared. A reading teacher, for example, can use her tablet to capture brief samples of each child reading at key points during the school year and store them in the student’s FreshGrade portfolio to track improvement, provide feedback, adjust instructional plans, and have a shared conversation with parents about their children’s progress.

Too many teachers gather too much assessment evidence in some areas and too little in others. In other words, their assessment plans are inefficient. Many teachers also believe that they are the only assessors in the classroom. By making effective use of today’s digital devices, teachers can empower students to become highly effective monitors of their own learning. Teachers need to learn how to help all students become reliable, autonomous assessors and independent adjusters of their own performance. Never before have teachers had the tools to accomplish these goals. But the increasing availability of handheld technologies, in the form of smartphones and tablets, coupled with ever greater access to high-quality software suites and apps, means today’s teachers have this missing piece of the assessment puzzle.

Summary of Key Messages

  1. As educators preparing students for life in the 21st century, we must be willing to examine how assessment practices need to change to provide students with the information they need to deepen learning, and to improve communication with parents about what their children are learning.
  2. A balanced approach to assessment includes gathering evidence of learning by observing students as they demonstrate skills, and engaging them in conversation to assess understanding, as well as collecting written work and other learning products.
  3. Validity is a measure of the extent to which an assessment provides evidence of the learning target it is intended to assess.
  4. Reliability is a measure of the confidence we have in the consistency of the information we have gathered through an assessment task. Reliability varies in importance, depending on whether my assessment purpose is diagnostic, formative or summative.

About Damian Cooper

FreshGrade Adviser, Damian Cooper is an independent education consultant who specializes in helping schools and school districts improve their instructional and assessment skills. In his varied career, Damian has been a secondary English, Special Education, and Drama teacher, a department head, a librarian, a school consultant and a curriculum developer. He has specialized in student assessment since 1986. Follow Damian on Twitter @cooperd1954
