Using Interim Assessments to Predict End of Year Results

 

Schools are taking more assessments than ever before. While the output and data from assessments vary – RIT scores and percentiles from NWEA MAP, or a percent score from Achievement Network – school leaders face common challenges regardless of the exam: creating meaning from student scores and putting interim results in context with end-of-year outcomes. This post identifies one method (of many) that helps leaders make sense of current assessment results and what they imply about end-of-year outcomes.

Analysis Method Main Objectives:

  • Identify the correlation between scores on a given exam and the probability of end-of-year success using historical data.

  • Use the relationship between scores to pinpoint what current exam results imply about “readiness” on a given end-of-year metric.

To illustrate this method, let’s assume your school takes NWEA MAP during the year, and your state exam provides students an achievement level of Below Basic, Basic, Proficient or Advanced (note: this method can be applied to any number of inputs and end outcomes). 

In this hypothetical scenario, your 6th grade ELA team would like an analysis of NWEA MAP Reading results from Fall 2019. Specifically, they have asked what the NWEA MAP results indicate about students’ progress towards the state exam. One effective route to make sense of this problem is to answer the question:

If students earned a certain score on NWEA MAP Reading, how likely were they to end the year Below Basic, Basic, Proficient or Advanced?

An effective method for completing this analysis is to do the following:

STEP 1: Align historical data from the previous year’s NWEA MAP Reading with actual outcomes on the state exam. This puts your data in one place and gives you a starting point for analysis.

[Table 1: Previous year’s NWEA MAP Reading results aligned with state exam achievement levels]
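To make this concrete, here is a minimal sketch of that alignment in Python with pandas. The file names and column names (student_id, map_percentile, state_level) are hypothetical placeholders; substitute whatever your data systems actually export.

```python
import pandas as pd

# Hypothetical exports: last year's Fall NWEA MAP Reading percentiles and
# the same cohort's end-of-year state exam achievement levels.
map_scores = pd.read_csv("map_reading_fall_2018.csv")   # student_id, map_percentile
state_exam = pd.read_csv("state_ela_spring_2019.csv")   # student_id, state_level

# Join on student ID so each row pairs an interim score with that
# student's actual end-of-year outcome.
history = map_scores.merge(state_exam, on="student_id", how="inner")
print(history.head())
```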

STEP 2: Identify a meaningful unit of measure from NWEA MAP that will reveal differences in likely state assessment achievement levels. In this case, it makes sense to pick a grouping such as “decile” that indicates meaningful differences in performance but is not so broad that it loses precision. In this example, organizing students into deciles means taking each student’s NWEA percentile and grouping students in the 90-100 range, the 80-90 range, and so on.
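Continuing the sketch above, the decile grouping is a single binning step (map_percentile is the same assumed column name as before):

```python
import pandas as pd

# Bin each NWEA percentile into a labeled decile: "0-10", "10-20", ..., "90-100".
bins = list(range(0, 101, 10))
labels = [f"{lo}-{lo + 10}" for lo in bins[:-1]]
history["decile"] = pd.cut(
    history["map_percentile"], bins=bins, labels=labels,
    right=True, include_lowest=True,   # e.g. a 90th-percentile score lands in "80-90"
)
```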

STEP 3: Using your organized data, calculate how often being in a certain category of inputs (NWEA deciles) translated to a specific output (Below Basic, Basic, Proficient, or Advanced). This puts you in a position to understand how likely a student is to achieve X if they are currently performing at Y.

[Table 2: Share of students reaching each state exam achievement level, by NWEA MAP decile]
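One way to produce a table like the one above is a row-normalized crosstab, continuing the running sketch (the achievement-level labels are assumed to match your state’s exactly):

```python
import pandas as pd

level_order = ["Below Basic", "Basic", "Proficient", "Advanced"]

# Each cell answers: of the students who tested in this NWEA decile,
# what share ended the year at this achievement level?
probs = pd.crosstab(history["decile"], history["state_level"], normalize="index")
probs = probs.reindex(columns=level_order)   # display levels in a readable order
print(probs.round(2))
```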

STEP 4: Review your data to find the level at which there is a high probability of reaching your desired outcome. To find a cut point where proficiency is highly likely, you might look at the 60-70 NWEA decile below and interpret it as: “we know students who performed at the 60th percentile or higher are highly likely to be Proficient or Advanced on our end-of-year exam… if they perform in the 50-60th percentile range, we are not yet confident they are tracking toward proficiency.”

[Table 3: Probability of scoring Proficient or Advanced, by NWEA MAP decile]
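To locate that cut point programmatically, you can sum the Proficient and Advanced shares and take the lowest decile that clears your team’s confidence bar. The 0.80 threshold below is an assumption for illustration, not a fixed rule:

```python
# Combined probability of ending the year Proficient or Advanced, by decile.
prof_or_above = probs["Proficient"] + probs["Advanced"]

THRESHOLD = 0.80   # assumed team-chosen bar for "highly likely"
on_track = prof_or_above[prof_or_above >= THRESHOLD]
if not on_track.empty:
    # Deciles are ordered low to high, so the first qualifying row is the cut point.
    print(f"Students in the {on_track.index[0]} decile or above are "
          f"highly likely to reach Proficient or Advanced.")
else:
    print("No decile clears the threshold; revisit the data before setting a cut.")
```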

STEP 5: Using these probabilities, incorporate your new understanding of assessment data into conversations for student goal setting, interventions, and other academic structures. The primary purpose of this exercise is to drive action! Use any newfound understanding of high or low performance to drive your structures forward in a more informed way, setting students up for success.

Determining When to Deploy This Method:

This method can go a long way toward understanding the relationship between interim exams and end-of-year outcomes.  

You should note, however, that some exams will show a clearer link with end-of-year results than others. This means judgment and interpretation are required to determine both when this method should be used and what the results tell you.

Generally speaking, norm-referenced exams with clear scale scores are the most likely to show a consistent relationship with your end-of-year outcomes. This means exams such as NWEA MAP – with its standardized RIT scores and percentiles – are strong candidates for this analysis. Exams like these are also often given under testing conditions similar to those of end-of-year exams, helping to ensure their output aligns with eventual performance.

Some interim exams are more mastery- and skill-based and produce results such as “% correct” throughout the year; exams from the Achievement Network are one example. In these instances, the exam performance may show a very clear connection with your end-of-year outcomes, but it may not. When this occurs, use this method to explore and define the relationship, but take caution: it is best acted upon when there is both a logical and a mathematical relationship between the input (an interim exam such as NWEA MAP) and the output (the end-of-year state exam). Let the probabilities and the relationship you observe be your guide: if you don’t find a clear connection after analyzing your results, don’t force it – scrutinize the data and, if warranted, elect not to move forward with this process.
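As one quick numerical check on whether the relationship is worth pursuing, a rank correlation between the interim result and the end-of-year score can help; this check is a supplement to the five steps above, and the column names here are hypothetical:

```python
# Values near 0 suggest the decile table will not be trustworthy for this exam.
corr = history["interim_pct_correct"].corr(
    history["state_scale_score"], method="spearman"
)
print(f"Spearman rank correlation: {corr:.2f}")
```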

[This post contributed by Justin Vavroch, Missouri Regional Data Manager.]
