Developing and assessing exam skills without exam conditions



Thanks to @miss_jayin who helped devise this.

Our school has long desired to free itself of the tyranny of exams and move towards authentic assessment. Given the freedom I had with my class, I clearly should have been doing more over these past few years. I plan to take advantage of the impetus and the drastic changes in this uncertain world of COVID-19.

I teach in a British school and so we follow the British curriculum and its proclivity for high stakes standardised examinations. A far cry from Finland’s singular national matriculation examination. I am not entirely anti-exams, as seems to be the trend nowadays. Nor do I believe in teaching to the test, save those final few weeks of crunch time. I do believe in regularly immersing the students in exam questions, though, to build up familiarity with their specific idiosyncrasies, presentation and language. I definitely do not do enough project based learning. I probably attempt one or two experimental project activities per class per year. This has led to many failures, though the few successes do get repeated, and I should eventually have a repertoire of one per topic and grade.

When our school went into lockdown, the uncertainty, the prioritisation of the exam classes, and slow and vague communication from the examining bodies left the question of remote assessments hanging. Eventually we attempted to deliver end of year exams online with some measures to simulate an in-class experience; however, the overall consensus was that it was not fit for purpose. Some students had connection issues. Others disliked the digital format and struggled to edit the documents satisfactorily. Some had their answers disappear because a parent reset a router mid-exam in an attempt to reestablish their connection to a work meeting. Others submitted too early, so I was required to release the exam back to them and then set them a separate end time. Then there was the very obvious problem of cheating.

So we do not want to repeat that. We could continue to experiment with refining our procedure, or invest in those incredibly controversial remote or algorithmic proctoring platforms that detect keystroke patterns and mouse movements among a host of other checks. That seems a little overboard for our situation even if the many issues were addressed. Knowledge can be developed and assessed in a myriad of fashions, so why continue to deliver a flawed process in unsuitable conditions under overbearing procedures that further stress already emotional and anxious students? This was the time to research, experiment and innovate.

What are exams and tests for?

NOTE: This was written before the Algorithmic grades fiasco in the UK emerged. My few thoughts about that are on Twitter and I will not rehash them here.

Exams have never been fit for purpose. In fact, they serve multiple purposes. They crudely map out the ability to reproduce knowledge or solve problems. Sadly, as the saying goes, “The map is not the territory.” Nevertheless, the strong correlation of examination with ability makes it difficult to ignore. Generally, students who know more or work harder do better at exams. The real issue is that we know traditional examination is not good enough to avoid false negatives or false positives: failing some brilliant students due to a bad day and crediting undetected fraud or luck. In a Chronicle article, Kevin Gannon compares it to “declaring that the winner of the Indianapolis 500 is whoever has the best final lap, as it certainly seems more important than the previous 199 circuits around the track.” In a similar vein, I contend in a yet to be published post that “a single instance of achievement under intense focus with a great deal of permissible oversights does not translate into continuous performance.”

There are many origins of, and reasons for, exams becoming widespread. Quite separate from the “problem with exams” is the “problem centralised exams are trying to solve”. Credentials serve as a way to communicate, validate and compare the candidacy of unfamiliar people to unfamiliar administrators, at scale, across time and location, without the need to trust personal recommendations. Formal centralised standardised exams serve as a way to affirm the value and integrity of the credential. A process that is familiar and easily understood by outside parties.

Despite the shortfalls, centralised standardised exams are seen as less vulnerable to manipulation and cheating than coursework. Absent other simpler solutions, exams have become ubiquitous. It should be noted that both exams and coursework have seen countless cheating scandals. Academic dishonesty is a vast topic, so I will leave it at that.

False results, while depressing, are only truly an issue if the suspect work is used for appraisals such as at thresholds, transfers and pay rises. Between these times it seems mostly in service of accountability, marking out progress so that we can panic that we are behind on some target rather than focus on development for development’s sake. When seen in the light of their purpose, it is easier to see that the process of testing and comparison to peers does not have a fundamental place in an individual’s journey of personal progress. The only assessment that matters is “Are you better than you were?”

I could go on, but I will not go deeper into the history or appropriateness of exams in this post since I am stuck with them for the IB and the IGCSE.

Ultimately many of these students will return to sitting an exam in a hall so we still wanted to assess and develop those crucial exam skills in a way that was meaningful and formative for the students. It also needed to be cheat resistant by design rather than by enforcement. Replace distrust of individuals with confidence in our process.

What are exam skills?

So on top of the subject knowledge, we also need to teach what has come to be known to teachers as exam skills: the interface between the contents of their brains and the desired credentials. Teaching exam skills is supposed to help students develop generalised problem solving skills. Sadly, the exam mill approach has in my experience resulted in tunnel vision. Drilling exam questions can produce students who are able to answer specific types of questions in specific topics rather than develop cross-domain aptitude. The focus becomes “how to answer this question” rather than “how to answer questions”.

Exam designers, for all their flaws, are genuinely trying to assess true competencies such as expertise, literacy, comprehension, creativity and reasoning. These have been simplified and encoded into familiar and repeated question templates. The complexity of a mind’s wondrous capability is reduced to discrete skills which are then atomised into specific words, known as “command words”, and used to elicit a precise family of responses.

This is not entirely a bad thing. A student should know the difference between describing an event and explaining it. They also need to recognise the appropriate degree of description or explanation. With project based learning, these two skills can be developed in parallel with neither impacting the other. Mistakes are built upon. Neither skill is ever undone. In an exam setting, a well polished answer with accurate information can be entirely wrong simply because of the describe/explain distinction. Zero marks. The student has made a mistake and will take on feedback on how to correct it. Even with clear communication, this can have a deleterious effect on the student’s capabilities. The skills and information contained in their answer are associated with failure and start to become undone in their minds.

Another separate concern is familiarity with the exact exam format. This turns out to be incredibly important. I personally experience it when I answer questions provided to me by colleagues in alternative education systems. The closest experience I can think of is cooking in a stranger’s kitchen: the unfamiliarity is frustratingly arresting despite competence. We as teachers are rightfully expected to acclimatise students to the specific exam formats ahead of time.

So mishandling exam skills can produce rote answering instead of lucid inquiry, deleterious rectification rather than continual refinement, and trends towards establishing familiarity rather than promoting adaptability.

The three “C”s

The assessment tasks I will outline are unremarkable. They are regular class activities that all good teachers will have integrated into their lessons at some point in time. We simply re-imagined them as graded assessments so that we could continue to develop skills and, as a side effect, reduce our students’ progress to a single grade for reports.

It did not begin with the goals and theories outlined at the end. We knew we had to come up with an alternative to exams that was appropriate to a temporarily online school and was fair to both the students and the teachers. My original idea was to push for open book exams. The reality is that at work people have access to the wealth of the internet and colleagues, so the restriction of the exam framework to an isolated individual’s memory and acumen only grows more absurd with time. This move against the structure of exams brought into focus the miasma of skills that we developed only for the niche activity of traditional exams. Moving too far away from it would only serve my protests rather than serve my students and prepare them for the system to which the school has subscribed. I could not ignore the exam.

These exam skills tasks are not high stakes. Each would only contribute a small percentage to the end of term grade. On top of the exam skills, students will also work on a lab report, some multiple choice quizzes, comprehension tasks and other regular homework tasks. I think of those sections as true subject development and treat exam skills as a chance to develop cross-discipline meta-cognition.

We began to consider all the different ways an exam question could be used in a classroom that did not involve actually answering the question as presented. We came up with many. Since this occurred within the science department, we also had to consider how boring it would be for students to carry out the same task three times. We decided that we should reduce the skills to three, with each science assessing one skill and rotating between the skills over the year.

The three we decided upon were:

  • Command Terms — unpack the language of the question
  • Create Questions — modify existing exam questions to focus on different skills or knowledge, complete with new answers
  • Correction — annotate an exam that was filled in with entirely incorrect answers
         Science 1   Science 2   Science 3
Term 1   Commands    Create      Correct
Term 2   Create      Correct     Commands
Term 3   Correct     Commands    Create

Table 1: Exam skill rotation for the Science Department (one column per science)

They were designed to engage with the three stages of a question: before answering, answering, and after answering. There is also a slight circularity to the design, where each skill draws on the previous one. We wanted to generalise the skills so that they could be applied to any question, whether open or closed, at any level of study.

We also considered how each task could be extended so that, as the year progressed, we could take away guidance or encourage deeper exploration.


Figure 1: Example of command question that will be given to students

I hope the instructions in the images are self-explanatory, so I will only talk about the learning we hoped would be achieved. Here we wanted to drill in the importance of identifying the key features of a question before answering it, including slowing down and explicitly identifying the topic in the syllabus before jumping into the question. I really dislike it when students refer to topics as “oh yeah.. that thing”. This is the shortest and simplest of the tasks.

It can be made more difficult by providing the answers but obscuring the command term, so that students have to infer the question class. Several answers can be provided, with each to be matched to a question variation.


Figure 2: Example of create question that will be given to students

This one is probably the most ambitious of the tasks. It challenges students not only to look at the structure of a question and how it relates to the topic being assessed, but also to deconstruct it, create new expectations, and then prescribe answers to the questions. In effect, they explore and potentially answer several variations of a question before they settle upon one and submit their task.

This question allows for incredibly divergent work. It can be simply differentiated by adding suggested changes or more difficult requirements.


Figure 3: Example of correction question that will be given to students

During post-exam reviews in class there is a tendency for students to argue over how their papers are marked. They simply see that a word appears in both their paper and the mark-scheme, even if they misused it or their answer is tangential, and stake their claim to the mark. This point grabbing really annoys me. I have had to expound on the virtue of dignity and self-reflection over a hurried scramble for points. Either it’s a borderline grade and students need to set a goal for a secure grade, or it’s in the middle and a mark or two changes nothing.

Here they have the chance to really see and explore subtle misconceptions without any defensiveness over their own mistakes.

This task can be made more difficult by beginning to weave in some correct responses. I could tell them the marks or leave it entirely a mystery. The obviousness of the mistakes can be reduced until they are near invisible.

The Results

The department reacted incredibly positively to this proposal and we have scheduled them to run in October so I will provide an update after we have conducted them.
