Development update: November 2020

You might notice this update was published in December: November was a busy month!

The Numbas runtime and editor mainly got bug fixes this month. There’s a big new feature in the LTI provider: the ability to automatically remark a resource after you update the exam package. This has already become invaluable for us, with more lecturers than ever setting Numbas assessments and misconfigured marking becoming more common. The remarking feature should be considered experimental: we’ve used it on a few assessments, but I expect to uncover bugs and limitations as we use it more often.

Numbas runtime

I’ve tagged version 5.2 of the Numbas runtime on GitHub.

  • Enhancement: Custom part types check the data type of each of their input options, and automatically convert compatible types. (code)
  • Enhancement: There is now a diff: annotation for variable names, to render differentials; see the examples after this list. Thanks to Lorcan James for adding this. (issue, documentation)
  • Enhancement: The entire Numbas runtime (the scripts.js file included in exam packages) can be loaded in a headless JavaScript environment such as Node.js. (code)
  • Enhancement: The labels “Rows” and “Columns” on a matrix input are localised. (issue)
  • Enhancement: There’s a new display rule timesDot to insist that a centred dot ⋅ is always used as the multiplication symbol rather than a cross ×. (issue)
  • Enhancement: The string and latex functions, when applied to expression values, now take an optional list of display rules to configure the rendering; see the example after this list. (code, documentation)
  • Change: Pattern-matching: if “gather as a list” is turned off, named terms inside lists aren’t returned in lists unless they match more than once. (code)
  • Change: Rational numbers are reduced to lowest terms when displayed. (code for plain-text display, and in JME)
  • Bugfix: Custom functions written in JavaScript have the JME scope flattened, so that old functions written before scope inheritance was introduced still work. (code)
  • Bugfix: The range except X operation returns integers when appropriate. (code)
  • Bugfix: The default value of true for the setting “Allow pausing?” is correctly applied when an exam only has one question. (issue)
  • Bugfix: Leading minus signs are correctly extracted from nested expressions while pattern-matching. (code)
  • Bugfix: With luck, the last ever round of bug fixes related to Internet Explorer – I hope to drop support for it next year when Microsoft ends its own support. (code)
  • Bugfix: Variables defined through destructuring are restored correctly when resuming a session. (code)
  • Bugfix: When the scheduler halts, all signal boxes are halted too. (code)
  • Bugfix: The checkboxes answer widget for custom part types deals better with being sent an undefined value. (code)
  • Bugfix: The legend for matrix inputs isn’t shown for the expected answer. (code)
  • Bugfix: Cleaning an answer of undefined for a number entry part produces the empty string. (code)
  • Bugfix: Slightly improved the rendering of the brackets around the matrix input on Safari. (issue)
  • Bugfix: The mathematical expression part’s built-in marking algorithm has default value generators for decimal, integer and rational data types. (code)
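
To make a couple of these changes concrete, here are two illustrative JME snippets. These are my own sketches of the new syntax, not excerpts from the Numbas code or its tests, so treat the exact spellings as indicative. The new annotation marks a name as a differential, so

    diff:y / diff:x

renders as dy/dx. The extended latex function takes the optional list of display rules as its second argument:

    latex(expression("x*y"), ["timesDot"])

which, assuming I have the rule name right, renders a centred dot (x ⋅ y) rather than a cross (x × y).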

Numbas editor

I’ve tagged version 5.2 of the Numbas editor on GitHub.

  • Enhancement: When you create an exam, it’s automatically set to use your preferred language. (issue)
  • Enhancement: In the part marking algorithms tab, marking notes are shown even for invalid inputs. (issue)
  • Enhancement: The Extensions and scripts tab of the question editor has been split into separate nested tabs for each of extensions, functions, rulesets and preambles. (code)
  • Enhancement: The preview rendering of LaTeX while editing a content area uses display style when appropriate. (issue)
  • Bugfix: Fixed an error when browsing a project while not logged in. (code)
  • Bugfix: Access control is applied to the links to download the source of exams and questions. (code)
  • Bugfix: The preview and embed views for questions and exams scale properly on mobile devices. (code)
  • Bugfix: I made some more changes to colours and keyboard navigation, to improve accessibility. (code)

Numbas LTI provider

I’ve tagged version 2.9 of the Numbas LTI provider on GitHub.

Screenshot of the resource remarking interface.
  • Enhancement: Each resource now has a Remark tab, which provides an interface to rerun all or some of the attempts at the resource using the latest version of the exam package, and optionally save any changed scores to the database. This has been on the wish-list for a long time. At the moment, consider this experimental: I expect to encounter bugs as we use this on real data. (issue, documentation)
  • Enhancement: I started work on automatically testing an exam package as it’s uploaded, to check for some obvious errors: that it starts correctly, the expected answers for each part are marked as correct, and a paused attempt at the exam can be resumed. It works on my development machine but we need to do some upgrades on our production LTI server, so it’s switched off by default at the moment. (issue)
  • Enhancement: The attempt timeline view now groups items produced within one second of each other; the score column is only shown when the score changes; and the whole thing now runs much faster. (code)
  • Enhancement: There’s now a global search tool for administrators. You can search for users, resources or contexts. (documentation)
  • Enhancement: There is now a link to review an attempt from its timeline and SCORM data views. (issue)
  • Enhancement: The “Maximum attempts per user” setting for resources now has some help text explaining that a value of zero means no limit. (issue)
  • Enhancement: The colours used to represent incorrect, partially correct and correct answers to questions are the same for the attempts listing and the stats page, and a bit more readable. (issue)
  • Bugfix: Fixed a bug in OAuth authentication affecting D2L Brightspace. (code)
  • Bugfix: Fixed a couple of bugs in the stress tests. (code)
  • Bugfix: Broken attempts don’t count towards the limit on the number of attempts a student can make. (issue)

An Analysis of Computer-Based Assessment in the School of Mathematics and Statistics

By Dr. Nicholas Parker.

a. Introduction

Since 2008 the School of Mathematics and Statistics has incorporated computer-based assessments (CBAs) into its summative, continuous assessment of undergraduate courses, alongside conventional written assignments. These CBAs present mathematical questions, which usually feature equations with randomized coefficients, and then receive and assess a user-input answer, which may be in the form of a numerical or algebraic expression. Feedback in the form of a model solution is then provided to the student.
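
As a minimal illustration (the coefficient range and wording here are invented for this example, not drawn from an actual course question): such a question might define a randomized coefficient as a variable,

    a: random(2..9)

pose the statement "Differentiate $\var{a}x^2$ with respect to $x$", and then mark the student's input against the expected answer 2ax for whichever value of a that student received.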

From 2006 until the last academic year (2011/2012), the School employed the commercial i-assess CBA software. However, this year (2012/2013) the School rolled out a CBA package developed in-house, Numbas, to its stage 1 undergraduate cohort. This software offers greater control and flexibility than its predecessor to optimize student learning and assessment. As such, this was an opportune time to gather the first formal student feedback on CBAs within the School. This feedback, gathered from the stage 1 cohort over two consecutive years, would provide insight into the student experience and perception of CBAs, assess the introduction of the new Numbas package, and stimulate ideas for further improving this tool.

After an overview of CBAs in Section b and their role in mathematics pedagogy in Section c, their use in the School of Mathematics and Statistics is summarized in Section d. In Section e the gathering of feedback via questionnaire is outlined and the results presented. In Section f we proceed to analyze the results in terms of learning, student experience, and areas for further improvement. Finally, in Section g, some general conclusions are presented.

b. A Background to CBAs

Box 1: Capabilities of the current generation of mathematical CBA software.

  • Questions can be posed with randomized parameters such that each realization of the question is numerically different.
  • Model solutions can be presented for each specific set of parameters.
  • Algebraic answers can be input by the user (often via LaTeX commands), often supported by a previewer for visual checking.
  • Judged mathematical entry (JME) is employed to assess the correctness of algebraic answers; see the sketch after this box.
  • Questions can be broken into several parts, with a different answer for each part.
  • In addition to algebraic/numerical answers, more rudimentary multiple-choice, true/false and matching questions are available.
  • Automated entry of CBA marks into the module mark database.
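
To illustrate the fourth capability: software in this family typically judges two algebraic expressions to be equivalent by comparing their values at randomly sampled points, rather than by full symbolic manipulation. The following toy snippet, written in Numbas's own JME language, sketches the principle; it is an illustration only, not the production marking algorithm, and the tolerance and number of sample points are arbitrary choices.

    all(map(
      abs(eval(expression("2(x+1)"), ["x": x]) - eval(expression("2x+2"), ["x": x])) < 10^(-10),
      x,
      repeat(random(-5..5#0), 5)
    ))

This evaluates the submitted answer 2(x+1) and the expected answer 2x+2 at five random values of x, and accepts if they agree at every sample point.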

Computer-based assessment (CBA) is the use of a computer to present an exercise, receive responses from the student, collate outcomes/marks and present feedback [10]. The use of CBAs has grown rapidly in recent years, often as part of computer-based learning [3]. Possible question styles include multiple choice and true/false questions, multimedia-based questions, and algebraic and numerical “gap fill” questions. The merits of CBAs are that, once set up, they provide economical and efficient assessment, instant feedback to students, flexibility over location and timing, and impartial marking. But CBAs have many restrictions. Perhaps their overriding limitation is their lack of intelligence capable of assessing methodology (rather, CBAs simply assess whether the final answer is right or wrong). Other issues relating to CBAs are the high cost of set-up, difficulty in awarding method marks, and a requirement for computer literacy [4].

In the early 1990s, CBAs were pioneered in university mathematics education through the CALM [6] and Mathwise [7] computer-based learning projects. At a similar time, commercial CBA software became available, e.g. the Question Mark Designer software [8]. These early platforms featured rudimentary question types such as multiple choice, true/false and input of real-number answers. Motivated by the need to assess wider mathematical information, the facility to input and assess algebraic answers emerged by the mid 1990s via computer-algebra packages. First was Maple’s AIM system [514], followed by, e.g., CalMath [8], STACK [12], Maple T.A. [13], WeBWorK [14], and i-assess [15]. The current generation of mathematics CBA suites shares the same technical capabilities, summarized in Box 1.

Numbas in operation in 2012

Bill has written this report for a University committee about how Numbas is being used this year. I thought it was worth sharing.

Numbas will be used for all first year modules and service teaching in Maths & Stats in 2012/2013, in formative mode with an in-course assessment component. It will be extended to all second year modules in 2013/2014. Note that this is presently the most extensive use of formative e-assessment in UK HE, and is based on our original award-winning use of formative e-assessment.

Transfer to other Universities

Two universities are transferring the technology from Newcastle as part of an HE STEM project: Chemistry at Bradford and Maths at Kingston. This will be finished by August 2012.

Birmingham University are talking to us in early April about using Numbas in their foundation courses and in their maths support system.

Mathcentre

A new project to embed Numbas in mathcentre starts in April 2012.

OER and Numbas

On April 10 we are running an HEA/JISC workshop on preparing OER materials using Numbas. It is free and open to all.

Maths-Aid

We are developing new materials for Maths-Aid using Numbas as part of e-learning and support packages. We are also preparing materials and resources (e.g. DVDs) for revision.
