A long-standing limitation of Numbas has been the inability to offer “error-carried forward” marking: the principle that a student should be penalised only once for an error made in an early part of a question. When calculations build on each other, an error in the first part can make all the following answers incorrect, even if the student follows the correct methods from that point onwards.

Numbas now has a system of adaptive marking, enabled by replacing question variables with the student’s answers to question parts.

It’s hard to pin down exactly what “error-carried forward” marking means – what kinds of errors will you carry forward, and can you detect them? Once you’ve decided to take an error into consideration, how do you change the way you mark following question parts? Because I could never come up with a good way of thinking about these problems, I didn’t make any progress on designing a system to allow adaptive marking in Numbas.

A few months ago, I came up with a simple abstraction of a circumstance in which you would want to adapt marking based on an error in an earlier part:

- The student is presented with some data and asked to perform a calculation, whose result they enter in part **a**.
- The student is then asked to use that value in a further calculation, whose result they enter in part **b**.
- If the student’s value for **a** is wrong, then their value for **b** will probably also be wrong. But the method used to obtain **b** from **a** should be independent of **a** (or, for any value of **a**, there is a clear method of obtaining a value of **b**). So, if the student applies the method correctly and proves that by giving the appropriate value, they should be given full credit for part **b**.
- Part **b** is then assessing only the application of the method, and the student only loses credit in part **a**. It’s clear from the marks awarded where the student went wrong.

That abstraction doesn’t consider *what* error the student made: that’s a very hard problem! You could come up with a marking scheme where some errors are penalised more heavily than others (getting a sign wrong in a formula isn’t as bad as using the wrong formula entirely, for example), but I’ll solve the easier problem first: ensuring that it’s clear what each question part is assessing, and making the parts as independent as possible.

If part **b** is assessing a method, then we should be able to write down the result of that method as a function of the value assessed in part **a**, and any other information given to the student. (If we can’t write down this function, then this method can’t be assessed by a computer and we should go home!) At this point, we start looking at the question variables. If there’s a variable used in the definition of the answer to **b** which corresponds to the expected answer to part **a**, then we can carry forward the student’s error by replacing the value of that variable with the student’s answer, and recomputing the expected answer to part **b**.
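As a toy illustration of that idea (my own sketch, not Numbas code), suppose part **a** asks for a value *x* and the method assessed in part **b** computes 2*x* + 1:

```python
# Toy illustration of carrying an error forward; not Numbas internals.

def expected_b(x):
    """The method part b assesses, written as a function of part a's value.
    The formula 2x + 1 is just an example method."""
    return 2 * x + 1

def mark_part_b(student_a, student_b, tolerance=1e-9):
    """Replace the variable behind part a with the student's own answer,
    then recompute the expected answer to part b before comparing."""
    adapted_answer = expected_b(student_a)
    return abs(student_b - adapted_answer) <= tolerance
```

A student who gave 4 for part **a** when the intended value was 3, but then correctly computed 2 × 4 + 1 = 9, still earns the marks for part **b**.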

I wrote this down in our GitHub issue tracker. After a few months working on other stuff, I finally got round to implementing it a few weeks ago.

It works really well!

Each question part now has an “Adaptive marking” tab, where you can specify a list of variables to replace with the answers to other question parts. When the student submits an answer to the part, the values of the specified variables are replaced with the student’s answers to the corresponding parts, and any other question variables depending on those variables are recalculated automatically, before the new “correct answer” to the part is calculated. You can decide whether to also accept the original correct answer, or to always make the variable replacements to ensure that the student’s answers are consistent with each other.
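The two modes described above can be sketched like this (illustrative names, not the Numbas API):

```python
# Sketch of the two adaptive-marking modes: accept either the adapted or
# the original expected answer, or insist on the adapted one only.

def mark(student_answer, adapted_answer, original_answer,
         accept_original=True, tolerance=1e-9):
    """adapted_answer is the expected answer recomputed after replacing
    question variables with the student's earlier answers."""
    if abs(student_answer - adapted_answer) <= tolerance:
        return True
    if accept_original and abs(student_answer - original_answer) <= tolerance:
        return True
    return False
```

With `accept_original=False`, only an answer consistent with the student’s own earlier work is accepted, even if they happen to give the originally intended value.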

The variable replacement is very powerful – changing one variable can affect many others. This makes it very easy to add adaptive marking to a question – since Numbas does all the recalculation for you, if your question is structured well it’s a simple matter of matching up variables with parts, and then all of the other intelligent stuff Numbas does to mark the student’s answer carries on as normal.

Here’s a video showing how I added adaptive marking to a fairly complicated question on hypothesis testing:

This feature is now available in the public Numbas editor. I’ve created a small demo exam to show off a few ways you can use adaptive marking – among other things, it shows that variable replacements don’t just have to be numeric values. I’ve also written some documentation to explain how to use this feature in your own questions.

This system has its limitations: I’ve mentioned that it can’t determine *how* a student made an error, which would be very useful for feedback, and it assumes that the student will be consistent in what they write, which you might not always want to assume. Even so, I believe it will prove to be a very useful addition to Numbas.