Here’s a development update, covering everything that’s changed since November.
Most of my work has been on adding LTI 1.3 support to the Numbas LTI provider. We hope to have that ready to use by the summer, in time for the next academic year.
The rest of the development work has been mainly bug fixes, with a couple of new features in the Numbas runtime.
Here’s a development update, covering everything that’s changed since July.
I spent a lot of time over the summer working on our other project, Chirun. I wrote a new LTI 1.3-compliant tool to make it easier to embed Chirun material in our virtual learning environment. That’s now in use at Newcastle, and I’m looking for other institutions to test it with virtual learning environments other than Canvas or Moodle. Our intention is to make our server available to everyone, since it won’t handle any personally identifying information.
So it’s been a while since I had time to do a Numbas development update. There have been quite a few bug fixes and an encouraging number of contributions from other people. The main news is that the Numbas runtime is now WCAG 2.1 AAA compliant.
A bug introduced in v7.0 of the Numbas runtime meant that variables whose names contain an uppercase letter were not correctly saved to the attempt suspend data. Exam packages downloaded from numbas.mathcentre.ac.uk between the 4th and 12th of December 2022 are affected. The values of these variables were not saved, so any attempts at affected packages cannot be recovered correctly.
This would affect any students who left an in-progress attempt and later resumed it: they would potentially see different variable values to those they saw on the first launch.
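The actual fix lives in the Numbas runtime, but as an illustration of this class of bug, here’s a minimal Python sketch (all names hypothetical): if variable names are normalised to lower case when writing suspend data, but the runtime looks them up by their original mixed-case names on resume, any variable with an uppercase letter in its name is silently lost and ends up being regenerated.

```python
def save_suspend_data(variables):
    # Buggy: lower-cases names on save, so "myVar" is stored as "myvar".
    return {name.lower(): value for name, value in variables.items()}

def resume(variables, suspend_data):
    # Looks up each variable by its original (mixed-case) name,
    # so "myVar" no longer matches the stored "myvar" key.
    return {name: suspend_data.get(name) for name in variables}

original = {"a": 1, "myVar": 2}
restored = resume(original, save_suspend_data(original))
# "a" survives, but "myVar" comes back as None,
# so the runtime would generate a fresh value for it on resume.
```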
Please check any packages that you downloaded between these dates and used with a SCORM player or the Numbas LTI tool. The bug is now fixed, so you should download a new copy of the exam package to replace the broken one.
We apologise for the inconvenience. Tests have been added to the automatic test suite to ensure this kind of problem doesn’t happen again.
It’s been a while since our last development update. The reason for that is that we’ve been working on some big changes to every bit of the Numbas software. We’ve released v7 of the Numbas app and editor, and a new lockdown app that integrates with the LTI provider to ensure students can only access assessments in a restricted environment. There’s also a Safe Exam Browser integration, for in-person invigilated exams.
Laura Midgley has joined the team, replacing George Stagg, who left for an exciting job with RStudio.
Sorry for the long gap since the last development update: the user meeting, EAMS, and work at Newcastle have consumed all of my time. Now it’s the summer, and I can take a moment to reflect on what I’ve done since March.
We had two student interns working for us for a couple of weeks in July: Will McNestry and Aleksas Bagdonas. They each contributed a few new features to the Numbas runtime, tackling some things that had been on the to-do list for a while.
I’ve mainly spent my time trudging through the ever-growing list of issues on GitHub, adding features and fixing long-standing bugs.
George Stagg left us at the end of June to work at RStudio. We’re interviewing his replacement in a couple of weeks – the new role will have a lot more time dedicated to Numbas development.
The first Numbas user meeting was a big success: over 50 Numbas users from around the world gathered to share their experiences and work together on plans for the future.
The recordings of the talks are now available to view on the Numbas user meeting Spring 2022 programme page. My intention was to record every session, but I had some technical problems which meant that the training sessions and a couple of the talks weren’t recorded properly. There’s still plenty of good stuff to watch, though!
Here’s an update on Numbas development, covering December 2021 to March 2022. Sorry about the long gap between posts – holidays, strikes, exams and finally catching covid didn’t leave me much time for blogging!
In February, I wrote about our new extension for assessing programming. In order to implement this, I added a framework for running asynchronous tasks before marking a question part, and for defining new kinds of input method. These should both open up all sorts of possibilities beyond the programming extension.
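The general pattern behind running asynchronous tasks before marking can be sketched like this. This is a minimal illustration in Python, not the Numbas API: every name here is hypothetical. The idea is that a part’s pre-marking tasks (such as running the student’s code) all run concurrently, and marking only happens once every one of them has produced a result.

```python
import asyncio

async def run_code(answer):
    # Stand-in for an asynchronous task, e.g. executing the
    # student's submitted code in a sandbox.
    await asyncio.sleep(0)
    return {"result": answer * 2}

async def mark_part(answer, pre_tasks):
    # Run all pre-marking tasks concurrently and wait for them all.
    results = await asyncio.gather(*(task(answer) for task in pre_tasks))
    # Marking runs only once every task has resolved.
    return all(r["result"] == answer * 2 for r in results)

credit = asyncio.run(mark_part(3, [run_code]))
```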
In the editor, I’ve been working on adding a “queues” feature to projects. The main motivation is to support the Open Resource Library, but I’ve already thought of a few other use cases for it.