Hi Luisa,
Welcome to the forum.
Your post is auspiciously timed, as I've just posted here that we're in the process of organising some user trialling workshops (to accompany a literature review on scrolling navigation across devices over the coming months). We've already secured some budget, so it's a case of getting these workshops booked in.
We have done something similar before, although not via an academic institution. Back when Adapt was in its infancy we carried out A/B comparisons of like-for-like courses with users and gathered their feedback on their preferences between fixed and scrolling layouts, and on ease of use across multiple devices (scrolling won!), as well as recording issues with particular components, which we've since addressed. So much has happened since then that I felt it sensible to organise these user trialling workshops to make sure we're still delivering something that meets the needs of users.

In addition, Adapt has been used to deliver hundreds of courses, and of course we get lots of feedback from clients that influences design and functionality. This has led to improvements to the framework. There's an obvious example in the impending V2.0 release, where we will be offering improvements to the button clusters on questions, allowing learners to reopen previously closed feedback, see the overall pass/fail state (as well as option-specific marking) and so on.
On a practical level, one of the things we do at Kineo is create a build, let's call it a functional proof of concept, which contains the theme, the art direction and a single example of every plugin the course will include. This goes through an internal team review before undergoing a full QA on all target devices for both functionality and usability. We then share this with the client, review it together and check it runs on all target devices when launched from the client's intended hosting solution. This is obviously all done before the solution is delivered to the user, so it may not answer your question, which is how we evaluate how effective the course is when it's actually being used on different devices.
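Just to make that concrete, one way to picture that QA pass is as a matrix of plugin x device x aspect checks. To be clear, this is a made-up sketch rather than any actual Kineo or Adapt tooling, and the plugin and device names are invented examples:

```typescript
// Hypothetical QA checklist generator; not a real Kineo/Adapt tool.
type Check = {
  plugin: string;
  device: string;
  aspect: "functionality" | "usability";
  passed?: boolean; // filled in during the QA pass
};

// One example of each plugin in the proof-of-concept build (names invented).
const plugins = ["mcq", "narrative", "media", "accordion"];
// Target devices agreed with the client (names invented).
const devices = ["desktop-chrome", "ipad-safari", "android-phone"];

function buildChecklist(plugins: string[], devices: string[]): Check[] {
  const aspects: Check["aspect"][] = ["functionality", "usability"];
  return plugins.flatMap(plugin =>
    devices.flatMap(device =>
      aspects.map(aspect => ({ plugin, device, aspect }))
    )
  );
}

// 4 plugins x 3 devices x 2 aspects = 24 checks to sign off before client review.
console.log(buildChecklist(plugins, devices).length);
```

The point of including only a single example of each plugin is that the matrix stays small enough to test exhaustively before the full course build multiplies everything out.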
From my experience to date, most evaluation has gone into which devices are being used to launch content rather than the effectiveness of the solution on a given device. Incidentally, the answer from the clients I spoke to was that the vast majority of users accessed the content first from PC and then from tablet, with smartphone a distant third. I wonder how much of this is down to the mobile-friendly design and ease of access of the sites containing the content, the behaviour of learners whilst learning in the workplace, or the design of the courses. It's probably a combination of all three, but the third is the one we can most easily do something about. I'd love to hear other views, ideas and experiences.
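For anyone wanting to run that kind of launch count themselves, here's a rough sketch of bucketing launches by device class from user-agent strings, assuming your hosting platform's logs expose them. It's purely illustrative: the regexes are deliberately naive, the sample data is invented (and truncated), and a real analytics setup or your LMS's own reporting would be far more robust:

```typescript
// Naive device-class bucketing from user-agent strings (illustrative only).
type DeviceClass = "pc" | "tablet" | "smartphone";

function classify(userAgent: string): DeviceClass {
  // Check tablets first: iPad user agents also contain the token "Mobile",
  // so testing for phones first would misclassify them.
  if (/iPad|Tablet/i.test(userAgent)) return "tablet";
  if (/iPhone|Android.*Mobile/i.test(userAgent)) return "smartphone";
  return "pc";
}

// Invented, truncated sample launch records.
const launches = [
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
  "Mozilla/5.0 (iPad; CPU OS 9_0 like Mac OS X) ... Mobile ...",
  "Mozilla/5.0 (iPhone; CPU iPhone OS 9_0 like Mac OS X) ... Mobile ...",
];

const counts: Record<DeviceClass, number> = { pc: 0, tablet: 0, smartphone: 0 };
for (const ua of launches) counts[classify(ua)]++;
console.log(counts); // { pc: 1, tablet: 1, smartphone: 1 }
```

Even a crude count like this would at least let us compare the PC/tablet/smartphone split across courses with different designs, which gets us a little closer to the effectiveness question rather than just the access one.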
I think it would be great to share examples on the community, along with why we know (or, at least to get us started, think) a given solution works well across different devices. I appreciate there may be some blockers due to commercial sensitivities, but I'm sure there would be workarounds.
Thanks,
Paul