Assemble

Once you have a very good question, you need to consider what data and other evidence will help you answer it.

Often the data you need will be obvious – but there may be more to it.

We want to know if our senior students are doing better in one area of NCEA biology than another.

So we need to look at NCEA results for our students.

But you need to look at the question from all angles, to ensure you have all the data needed to draw valid conclusions.

It could be that all biology students across the country do better in this area than in the other.

So we also need national data about differences across the two areas.
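As a concrete illustration, here is a minimal Python sketch of that comparison, using entirely hypothetical achievement rates; "genetics" and "ecology" simply stand in for any two NCEA biology areas. The point is to check the within-school gap against the national gap before concluding the difference is something local.

```python
# Hypothetical achievement rates for two NCEA biology areas.
school = {"genetics": 0.62, "ecology": 0.78}
national = {"genetics": 0.60, "ecology": 0.79}

# Gap between the two areas, within the school and nationally.
school_gap = school["ecology"] - school["genetics"]
national_gap = national["ecology"] - national["genetics"]

print(f"School gap (ecology - genetics):   {school_gap:.2f}")
print(f"National gap (ecology - genetics): {national_gap:.2f}")

if school_gap > national_gap:
    print("Our gap is wider than the national pattern - worth investigating locally.")
else:
    print("Our gap mirrors the national pattern - it may not be school-specific.")
```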

Ask whether your data are any good. Think critically about the available data and other evidence before you decide to analyse them.

If you find the results of a test a bit surprising, look closely at the test itself. Was it set at an appropriate level for that group of students? Ask questions about how tests were administered.

This might seem an unlikely scenario, but it apparently happened:

A school found that a set of asTTle scores indicated that almost all students were achieving at lower levels in November than they had been in March.

Then they discovered that the March test had been conducted in the morning, when students were fresh, but the November test had been held in the afternoon, soon after the students had sat a two-hour exam.

Always think critically about data.

  • Did the assessment that created this data measure exactly what you are looking for?
  • Was the assessment set at an appropriate level for this group of students?
  • Was the assessment properly administered?
  • If the data came from different sources, was there sufficient moderation?
  • Are you comparing data for matched groups?

These questions are really about validity and reliability.

But you can make commonsense judgments about whether data are valid for your purposes and whether they were created under conditions you can rely on.

Ask if there are data-related factors that might have a bearing on the issue in question.

You want to look at changes in a cohort’s asTTle writing levels over 12 months.

Was the assessment administered under the same conditions? Has there been high turnover in the cohort? If so, will it be valid to compare results?
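Both questions can be checked before any comparison is made. The sketch below, using invented data, measures cohort turnover and then compares levels only for students who sat both assessments.

```python
# Hypothetical asTTle writing levels: student ID -> level.
march = {"s01": 3, "s02": 4, "s03": 3, "s04": 5}
november = {"s01": 4, "s02": 4, "s04": 5, "s05": 2}  # s03 left, s05 arrived

# Students assessed on both occasions.
stayers = march.keys() & november.keys()
turnover = 1 - len(stayers) / len(march)
print(f"Turnover since March: {turnover:.0%}")

# Compare results only for the matched students.
changes = [november[s] - march[s] for s in stayers]
print(f"Mean change for matched students: {sum(changes) / len(changes):+.2f}")
```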

Take care when you think you have matched groups.

You have data that show two classes have comparable mathematics ability. But end-of-year assessments show one class achieved far better than the other.

What could have caused this?

Was the original data flawed? How did teaching methods differ? Was the timetable a factor? Did you survey student views? Are the classes comparable in terms of attendance and other factors?
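As a sketch of the last of those questions, the hypothetical figures below show how two classes "matched" on a prior mathematics score can still differ sharply on another factor, such as attendance.

```python
# Hypothetical per-student data for two classes "matched" on prior scores.
class_a = {"prior_score": [54, 61, 58, 65], "attendance": [0.94, 0.91, 0.97, 0.90]}
class_b = {"prior_score": [55, 60, 59, 64], "attendance": [0.78, 0.82, 0.75, 0.80]}

def mean(values):
    return sum(values) / len(values)

# Compare the classes factor by factor, not just on the matching variable.
for factor in ("prior_score", "attendance"):
    a, b = mean(class_a[factor]), mean(class_b[factor])
    print(f"{factor}: class A = {a:.2f}, class B = {b:.2f}, gap = {a - b:+.2f}")

# Similar prior scores but very different attendance suggests the classes were
# matched on ability alone, not on everything that affects achievement.
```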
