Diagnostics and Feedback
by Chris Jones - Thursday, 20 February 2014, 10:34 AM
 

I'd like to open up a conversation around the best ways we can improve the framework with regard to diagnostics and error reporting. Looking at people's feedback on the forum, it appears that there are a few areas that could be improved to give clearer feedback when something has gone wrong.

 

What specifically can we check for, and how might we report it back to the developer in a meaningful and useful way?

Re: Diagnostics and Feedback
by Chris Jones - Thursday, 20 February 2014, 10:47 AM
 

Further to my post, we currently have two places where we can give feedback to the developer.

1) In the course, when it fails to load. Feedback is through the console and Developer Tools, which require a very good understanding of the framework to dig in and diagnose the problem.

2) During the build. Feedback from grunt tasks like jsonlint can catch errors before deployment.

Where possible, I think we should be picking up errors in the build step before deployment, especially if you are running grunt watch, as feedback is nearly immediate; but we should also give meaningful feedback in the browser if things fail to load.

Re: Diagnostics and Feedback
by Brian Quinn - Thursday, 20 February 2014, 11:28 AM
 

Hi Chris,

To generate some discussion, my idea is to have a Grunt 'debug' switch or command to run a task which adds two-pronged validation in addition to JSON lint:

  • Full Adapt JSON validation (from article to component, and all in between)
  • Validation of access to all resources (images, video, etc.)

By "full JSON validation" let me clarify that I mean that each component (for example) has its own schema, detailing which attributes are required and which ones are optional,and what their potential values might be.  Following successful JSON-lint, at the point where the JSON files are read into Backbone, each component JSON should be validated against its own schema, and any errors flagged to the user, possibly in the front end and definitely in the browser console.  There is currently on-going work to define the schemas themselves for the UI project, but the work here could align nicely.  At the point the actual validation could be handled by a simple Grunt task.

Regarding access to resources, this would involve getting all the external content that components reference, e.g. images, sounds and videos, and making a request to check that they're accessible.  A simple check for anything other than an HTTP 200 code here would be an improvement on what is currently there.
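
The resource check could start out as something as simple as this (sketch only, using Node's built-in http module; the example URL is made up and in practice the list would be harvested from the course JSON):

var http = require('http');
var url = require('url');

function checkResource(resourceUrl, callback) {
    var parsed = url.parse(resourceUrl);
    var request = http.request({
        method: 'HEAD',
        hostname: parsed.hostname,
        path: parsed.path
    }, function(response) {
        // Anything other than an HTTP 200 gets reported back.
        callback(response.statusCode === 200 ? null : 'HTTP ' + response.statusCode);
    });
    request.on('error', function(err) { callback(err.message); });
    request.end();
}

checkResource('http://example.com/course/en/images/menu-item.jpg', function(problem) {
    if (problem) console.error('Resource unreachable (' + problem + ')');
});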

I think both these measures would provide great feedback, and not just for developers but also for content creators who might not be familiar with the browser's debugging tools.  Just thinking about it, we could also have the Adapt output (when in debug mode) link back to the wiki or some help system.

Regards,

Brian

 

Re: Diagnostics and Feedback
by Chris Jones - Thursday, 20 February 2014, 12:21 PM
 

If we have a full JSON check in the build process, then on loading we would only need a simpler 'did it load' type assertion. And we can build that into the framework even before the schemas exist.

In the situation where something does fail, we probably need some way to display help and diagnostic messages that doesn't rely on the framework.

I also think that friendly and helpful error messages should be shown to the end user even if debug mode is off. This should help with supporting technical issues when the end user may not be technically savvy.
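
As a very rough sketch of that fallback (assuming jQuery, which the framework already loads; the selector, message text and debug flag are purely illustrative):

// Minimal 'did it load' fallback -- illustrative only. Shows a friendly message
// to the end user if the course data fails to load, and fuller diagnostics in
// the console when a (hypothetical) debug flag is set.
$.getJSON('course/en/course.json')
    .fail(function(jqXHR, textStatus, error) {
        $('body').html(
            '<div class="load-error">' +
            '<p>Sorry, this course could not be loaded. Please contact your administrator.</p>' +
            '</div>'
        );
        if (window.ADAPT_DEBUG) {
            console.error('Failed to load course.json: ' + textStatus, error);
        }
    });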

 

Re: Diagnostics and Feedback
by Mathew Gancarz - Thursday, 20 February 2014, 4:05 PM
 
I definitely agree a full JSON check would be very useful. In my experimentation, it's very easy to miss a comma here or there, or make some other minor typo, which currently just leaves the course showing up as a blank page when testing.

I would also recommend possibly a pass where everything is changed to lower case and special characters in URLs (such as spaces) are encoded. This helps to minimize the issues when moving from a Windows box to Linux. This might make more sense in the web authoring tool though, since if we are specifying paths directly in the JSON files it might be messy to be re-writing those.
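
Purely as an illustration, the kind of normalisation pass I mean:

// Illustration only: lower-case a path and percent-encode each segment, so that
// e.g. "Images/My Picture.JPG" becomes "images/my%20picture.jpg". The files on
// disk would of course need renaming to match.
function normaliseAssetPath(assetPath) {
    return assetPath
        .toLowerCase()
        .split('/')
        .map(encodeURIComponent)
        .join('/');
}

console.log(normaliseAssetPath('Images/My Picture.JPG')); // images/my%20picture.jpg
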
Re: Diagnostics and Feedback
by Daryl Hedley - Friday, 21 February 2014, 7:13 AM
 

Hey,

I think validation is important, but what's more important to the end user is speed and how a course interacts. Validation inside the framework is more code, and we're at a point where we are comfortably ahead of our download speed/loading targets. This branch of Adapt is quick - and we need it to stay that way. So any validation should remain outside of the framework code and in a Grunt task. But I agree with more descriptive error messaging (I really like the way RequireJS gives you a link to its site to go to for more information), and maybe we have a task:

$ grunt debug

This way, when you're developing, you don't need to go through a massive compile just to see a little JavaScript change. However, I feel this debug step should be built straight into the command:

$ grunt build

Mathew - if you grab the latest version of Adapt master you will see that there's a new Grunt task that checks your JSON files.

thanks,

Daryl

Re: Diagnostics and Feedback
by Chris Jones - Friday, 21 February 2014, 9:46 AM
 

Okay, sounds good. To clarify our intended build commands then (sketched below):

debug: run jsonlint then the full schema check

build: debug, compile (without source maps) then copy to build folder

dev: compile (with source maps) then copy to build folder

watch: monitor for changes then dev
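
As a sketch, those could be wired up as Grunt task aliases along these lines (the sub-task names are placeholders for whatever the real tasks end up being called):

// Gruntfile.js sketch -- task names are placeholders, not the agreed names.
module.exports = function(grunt) {
    grunt.registerTask('debug', ['jsonlint', 'schema-check']);
    grunt.registerTask('build', ['debug', 'compile:release', 'copy:build']);
    grunt.registerTask('dev',   ['compile:sourcemaps', 'copy:build']);
    // 'watch' would be configured via grunt-contrib-watch to run 'dev' on change.
};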

 

@Daryl: would it be possible with the new schemas to identify all assets? Is there a data type of url or image? If yes, then we could look at running a validate and normalise pass over all the assets, as Mathew suggests.
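
Purely to illustrate the question: if the schemas marked asset properties with some kind of type hint, a build task could walk each schema alongside the component data and collect every asset path for validation/normalisation. The property names below are made up, not any agreed schema format.

// Hypothetical schema excerpt with an asset-type marker on the 'src' property.
var graphicSchemaExcerpt = {
    properties: {
        graphic: {
            properties: {
                src: { type: 'string', _assetType: 'image' }, // made-up marker
                alt: { type: 'string' }
            }
        }
    }
};

// Walks a schema alongside the matching data and collects every value whose
// schema property carries an _assetType marker.
function collectAssetPaths(schema, data, found) {
    found = found || [];
    Object.keys(schema.properties || {}).forEach(function(key) {
        var propSchema = schema.properties[key];
        if (propSchema._assetType && typeof data[key] === 'string') {
            found.push(data[key]);
        } else if (propSchema.properties && data[key]) {
            collectAssetPaths(propSchema, data[key], found);
        }
    });
    return found;
}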

Re: Diagnostics and Feedback
by Rafael Chaves - Tuesday, 11 March 2014, 3:50 PM
 

Hi all, is there any work happening around this at the moment, or planned?

From my brief experiment trying to build a simple "hello world" style course with Adapt, the direct editing of JSON-encoded content is a showstopper.

JSON syntax errors aside (those are easy to catch), I am more concerned with the lack of a tool that performs Adapt-aware semantic validation. Right now, I am struggling to get a simple single course-page-article-block-component setup to work. And the failure mode is an empty page, with no help whatsoever in determining what I am doing wrong.

An authoring tool will probably avoid such issues, but I don't think it replaces the need for build-time validation of course content.