This is my own summary of the Assessment Literacy session given by Dr. Mark Griffiths, of Trinity College London, last Monday at CIFE Juan de Lanuza.
1. BACKGROUND TO TESTING
1.1. TYPES OF TEST AND TESTING PURPOSE
This used to be a classroom…
Is it really different from classrooms today?
Teachers tend to do most of the speaking, while students tend to do most of the listening.
We are still teaching languages through repetition of lists (I am, you are, he is, she is…): using memory, but memorising in the abstract, not in a meaningful context.
example: What day is it today? (Monday, Tuesday, Wednesday…) Thursday! (you need to go through the whole list to retrieve the information you need)
Tests are not made up of questions, but rather of items (an umbrella term covering tasks, questions, etc.).
Assessment is related to the quality of teaching and learning:
Students think that the curriculum is what is asked in the exam, so assessment is the engine that drives learning.
Types of assessment:
1. Summative assessment: to summarise what students have learned (example: exam at the end of the week, or Selectividad). There is a degree of predictability. Think of mock exams (practice exams)
2. Formative assessment: gives feedback to students. The results are not usually part of summative assessment.
3. Formal vs. informal assessment.
When writing a test:
- You need to know what you really want to test before you write the test
- Test knowledge in real life contexts (ex. don’t test the list of the days of the week, ask instead “When do you play football?”)
An exam is just a snapshot: a good test should allow you to generalise from the information you get from it (it should tell you enough about whether students would be able to use that skill in the real world).
1.2. WHAT DO WE WANT TO MEASURE? COMMUNICATIVE COMPETENCE (a cluster of competences)
- linguistic competence
- discourse competence: organising (signposting, discourse markers…)
- sociolinguistic competence
- strategic competence:
- planning ahead what you are going to say or write, what language do you need?
- knowing what to do if you don’t have the words (ex. using circumlocution, finding a way to convey the same message differently)
- coping with unpredictability
- pragmatic competence:
- being able to cooperate in communication
- recognising locutionary and illocutionary acts (understanding directness and indirectness, irony, sarcasm…)
ex. see my analysis of The Imitation Game (pragmatics and reported speech)
1.3. KEY PRINCIPLES OF GOOD DESIGN: principles that help us design or evaluate our tests
- Is the test doing what it claims to be doing?
- Does the assessment measure what we really want it to measure?
- If most people can’t answer a question, it’s not the students’ fault; the test was faulty (ex. there was too little time, the question was unclear…) → not valid.
- Solution: do a pre-test and see how it goes; then do the real one (pilot it beforehand).
- A test is valid if it tests what it says it’s testing
- Reliability: The extent to which test results are stable, consistent, and free from errors of measurement.
- Is the work marked to a consistent standard? (would all teachers give the same mark?)
- Is the test consistent over time? (is it repeatable with the same results?)
- Is the test relatively easy to administer?
- It shouldn’t be too time consuming
- It shouldn’t require too many resources
- For most students, assessment requirements define the curriculum.
- Assessment can have important effects and consequences (ex. if good students get a bad result, it will have a negative impact on them)
- The choice of your test has an impact on your classroom: it can boost their confidence (but it shouldn’t be too easy, either).
- Impact: transferable skills
- research / planning / organising
- self-evaluation / self-correction
- flexibility / problem solving
- learning how to learn
- Authenticity: give them real-world tasks and situations; cognitive activities in the real world
- Fairness: the test should be fair to all students: everyone should have the opportunity to do it.
- * Note: textbook tests are usually very badly designed.
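The reliability question above ("would all teachers give the same mark?") can be put in concrete terms as inter-rater agreement: the share of students to whom two markers give the same mark. A minimal sketch, using invented example marks (the function name and data are my own, not from the session):

```python
# Hypothetical illustration of marker consistency (inter-rater reliability):
# the fraction of students who receive identical marks from two teachers.

def percent_agreement(marks_a, marks_b):
    """Share of students receiving the same mark from both raters."""
    assert len(marks_a) == len(marks_b), "both raters must mark every student"
    same = sum(1 for a, b in zip(marks_a, marks_b) if a == b)
    return same / len(marks_a)

# Invented marks from two teachers for the same six scripts:
teacher_1 = ["B", "A", "C", "B", "B", "A"]
teacher_2 = ["B", "A", "B", "B", "C", "A"]

print(round(percent_agreement(teacher_1, teacher_2), 2))  # 4 of 6 agree → 0.67
```

A low agreement figure suggests the marking criteria are not being applied to a consistent standard, which is exactly the reliability problem the session points to.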
2. TIPS FOR WRITING TEST ITEMS:
- Learners’ reactions to assessments:
- Tests can be scary or tricky. Usually, students try to guess what the teacher wants (it shouldn’t be that way).
- Don’t say “There’ll be a test on Friday”; say “For Friday, I want you to be able to …” (summative assessment).
What we should tell our students about their tests:
- Why are they doing it?
- What should they learn / practise?
- What can they do to improve?
- What should they do next?
- How can we maximise the benefit of the testing process to learners (self-correction, peer correction)?
Have a look at pp. 2-6 from the Assessment Literacy handout.
Tips for writing multiple choice tests:
- Use 3 options: research shows it’s as reliable as giving four options, but much less time-consuming (for teachers to write, and for students to read all of the options).
- Also, be consistent within an exam: give 3 options in all the questions (or 4 in all), but don’t mix.
- Distractors: if you use them, don’t make them too obvious.
- Try to keep questions simple.
TIPS FOR WRITING GOOD TESTS:
- What do I want to measure?
- Does the task match the test specifications?
- Is it valid? (Is it really measuring what I want it to measure?)
- Does it test a representative range?
- How am I going to measure performance? External reference?
- Is it reliable? Will it get consistent results?
- Is it authentic? Does it reflect real life?
- Fair? Is it appropriate for all test takers?
- Has it got a clear purpose? Do the learners know why they are doing it?
- Impact: what impact will the test have in the classroom and on the syllabus?