Hi,
Sorry for my delay in getting back to this. I have a few things I’d like to say about this feature. Firstly, I do not believe that implementing this feature is what suddenly opens up the possibility of creating the extensions listed above. I believe the current implementation ties extensions to a core that becomes opinionated, and that is not what the Adapt OS project set out to do (I’ll explain this below). I remember chatting with the core developers at the start of this project and deciding that core should give enough but not too much, and that functionality like that listed above should live in extensions.
At Appitierre we’re developing extensions similar to those listed above, but without needing a “soft reset” to be implemented. I do, however, believe that being able to retake assessments and allow multiple attempts is a must in a learning system; I’m just not sure that a “soft reset” is the answer.
After looking through the code for the past few months and realising that a lot of “overwrites” are needed to implement this, I have a few concerns that I think impact Adapt as a product:
I gave a talk at the start of the rewrite of Adapt, when we decided to make it open source and give it a plugin architecture. The talk was to the core developers involved in bringing Adapt to the community, and it was about expectations in code: I believe we should keep the expectations the code sets valid for both users and programmers. With the “soft reset” functionality we can end up with a data structure on a component model like this:
{
    _isComplete: true,
    _attemptsLeft: 2,
    _attempts: 2,
    _isEnabled: true
}
To me this is lying: the question component is marked as complete, yet the user has had no attempts at the question. How can that be? I do not believe that the code underneath the UI should lie, but this data indicates that it does. If a tracking system other than SCORM were to track this data, it would suggest that the learner has completed a question without any attempts. The learning data then becomes invalid.
The diagram above also suggests that pageLevelProgress should lie about completion. On top of that, it’s bad UI design. Google and Apple, two giants of UI design, both talk about the expectations a user should be able to rely on during their time in a product (they’ve even written guidelines explaining them). pageLevelProgress showing that a page is complete when it isn’t, because it has been “soft reset”, is not fair to the user. The UI should always follow the underlying model state. If a user is to reattempt questions on a page, pageLevelProgress should not tell them that everything is complete.
Accessibility causes even more problems. Maybe a popup tells the user that something is about to be reset, but consider visually impaired users of Adapt. We should be exposing the model status on components to tell visually impaired users that a component is complete; after a “soft reset” on a question component, however, a visually impaired user is told it’s complete when it really isn’t, because their attempts are back at zero. To me, this doesn’t seem right. The whole design seems odd in that the UI and the model data are allowed to differ.
I’m a strong believer that _isComplete is a boolean: if you complete a component it’s complete, not somewhere in the middle where it might or might not be. Being able to reset a question to allow multiple attempts, however, is something an extension should handle. This is why I’d vote for an extension handling this:
1 - It enables more flexibility, as the behaviour isn’t reliant on core functionality; the extension picks the behaviour instead. Extensions = extra behaviour.
2 - An extension enabling multiple attempts would not need to lie about the model data: _isComplete gets reset to false, pageLevelProgress doesn’t need to be rewritten, and the extension holds the multiple-attempts data. This way you can record each attempt and the data within it, which in turn gives more power to the extension. Imagine an assessment-based extension that allows multiple attempts and, once the attempts are used up, takes the best score across the questions and pushes it to the tracking extension. That would be possible with the suggestion I’ve just made; “soft reset” doesn’t leave enough scope to build it. (There’s a rough sketch of this idea after the list below.)
3 - Changes of state for an individual model should be owned by that model; once you get into restoring model data, that should be an extension’s job.
4 - Learning designers and programmers can choose whether to add it. Forcing developers to use this functionality is not the way forward: wherever possible, Adapt should not be opinionated. Again, putting this functionality into an extension solves that.
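To make point 2 concrete, here’s a very rough sketch of the kind of extension I have in mind. The event and attribute names (questionView:submitted, _attemptScores, _score, multipleAttempts:bestScore, app:dataReady) are illustrative assumptions for this sketch, not a statement of existing Adapt APIs; the real wiring would differ.

    define(['coreJS/adapt'], function(Adapt) {

        // Sketch only: the event/attribute names below are illustrative.
        var MultipleAttempts = {

            initialize: function() {
                // Assumed event fired when a learner submits a question.
                Adapt.on('questionView:submitted', this.onQuestionSubmitted, this);
            },

            onQuestionSubmitted: function(view) {
                var model = view.model;

                // Record this attempt's score on the extension's own attribute,
                // so _isComplete and _attempts keep meaning what they say.
                var scores = model.get('_attemptScores') || [];
                scores.push(model.get('_score'));
                model.set('_attemptScores', scores);

                if (model.get('_attemptsLeft') > 0) {
                    // Reset honestly: the question really isn't complete
                    // while further attempts remain.
                    model.set({ _isComplete: false, _isEnabled: true });
                } else {
                    // Out of attempts: push the best score towards tracking.
                    Adapt.trigger('multipleAttempts:bestScore', Math.max.apply(null, scores));
                }
            }
        };

        Adapt.on('app:dataReady', MultipleAttempts.initialize, MultipleAttempts);

        return MultipleAttempts;
    });

The point is simply that all the attempt bookkeeping lives on the extension’s own attribute, and core never has to report a component as complete when it isn’t.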
Tracking data is the most important part of learning, yet it has been given the least amount of time. The Spoor extension hasn’t even been modularised; it’s still using the old code developed at the start of Adapt. Maybe that’s why _isInteractionsComplete has been created? At Appitierre we’ve spent a good deal of time working on tracking and enabling features like those listed in the original post. I believe the problem is not in the core but in the implementation, and that implementation should go into extensions to allow the most flexibility.
In summary, I don’t think the current implementation is the correct way to go about this. I believe in Adapt being less opinionated, with extensions providing the functional aspects. That way both users and programmers can pick and choose their system without it being forced upon them. Good programming practice is to have the UI mirror the model data; that’s why we’re starting to see frameworks like React, Ember and Polymer, which update the UI whenever the model updates. And finally, the expectations of users and learners should always be at the forefront of our development. We should not be jeopardising them because of an implementation.
Thanks,
Daryl