What is the intended authoring workflow
by Rafael Chaves - Tuesday, 18 March 2014, 4:57 PM
 

What is the envisioned workflow for the authoring tool (when it eventually becomes available)?

How will content authors preview and verify their work? Will it require generating the JSON artifacts, deploying them to a server and loading them into a web browser?

Or are you guys aiming for something with a much shorter turnaround?

Is the authoring workflow going to involve (even if only behind the scenes) any of the tools currently part of the development stack (Node.js, npm, Grunt, etc.)?

Thanks,

Rafael

 

Re: What is the intended authoring workflow
by Brian Quinn - Tuesday, 18 March 2014, 5:47 PM
 

Hi Rafael,

We're aiming for something much more user-friendly than that. 

Sven has linked to some designs in this post which should give you an idea of where we are going with this.  As you can see in the link, authoring will be via a GUI, and there will be options to publish and preview work. 

We have made good progress with creating menus, pages, articles, and blocks.  Components will be a focus of the next sprint but these are a lot more difficult, and will impact the framework itself.

To answer your question, the experience gained in the current development stack (NodeJS, Grunt, Backbone, Handlebars, etc.) is being used in developing the authoring tool.  If you'd like to get involved and help out I'm sure that can be arranged.

Hope this helps. 

Regards,

Brian

Re: What is the intended authoring workflow
by Rafael Chaves - Tuesday, 18 March 2014, 6:27 PM
 

Thanks, Brian. 

I saw that thread and some of the mock-ups and requirements. I could not infer, though, what the authoring workflow is going to look like (not the editing itself, but the authoring-verifying-publishing cycle). Is that covered somewhere in that link?

EDIT: I guess this mockup gives a good idea. What happens when the user hits those preview and publish buttons? (I understand the high-level meaning of the actions, but I am interested in the technical details.)

Cheers,

Rafael

Re: What is the intended authoring workflow
by Dennis Heaney - Wednesday, 19 March 2014, 9:13 AM
 

Hi Rafael,

Presently, the plan is to use a build process similar to the one used when manually publishing your course today, meaning that Grunt will be used on the backend to build a course.

Naturally, as part of the build process, we want to include both a straightforward JSON lint and validation of the JSON output using the JSON schema of each Adapt component (the schemas are a work in progress at the moment). Other actions will include building components.json, config.json, etc. from the data in the database; collecting the assets used by the course and copying them into place; excluding components that are not used by the course; and so on.

The publish/preview action is executed by an 'output' plugin, and while we can't yet provide fine-grained detail on the process (we haven't written any output plugins for the tool yet!), the process for publish and preview will be much the same, the main difference being that 'preview' will build your course to a temporary location on the server and 'publish' will generate a compressed file which can then be downloaded.

EDIT: I should note that the above functionality is what is planned for the 0.1 release of the tool. For version 1.0 we plan to integrate workflow extensions that will allow multiple users to collaborate on course development, including the ability to preview, comment on and review a course. Beyond that, we expect plugins will be built to allow the user to deploy a built course directly to an LMS, removing the need to download it and re-upload it to your LMS manually.
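To give a rough idea of what the lint/validation step above could look like, here is a minimal sketch - it is not the tool's actual build code, and the file paths and the 'jsonschema' npm package are assumptions used purely for illustration:

```js
// Illustrative only - not the authoring tool's real build step.
var fs = require('fs');
var validate = require('jsonschema').validate; // assumed validator library

// 1. "json-lint": JSON.parse throws if the file is malformed.
var components = JSON.parse(fs.readFileSync('src/course/en/components.json', 'utf8'));

// 2. Validate each text component against that plugin's schema (path assumed).
var schema = JSON.parse(fs.readFileSync('src/components/adapt-contrib-text/properties.schema', 'utf8'));

components.filter(function (component) {
  return component._component === 'text';
}).forEach(function (component) {
  var result = validate(component, schema);
  if (!result.valid) {
    console.error('Invalid component ' + component._id + ':', result.errors);
  }
});
```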

Re: What is the intended authoring workflow
by Rafael Chaves - Thursday, 20 March 2014, 2:25 PM
 

Thanks, Dennis, that is really valuable information and makes things much clearer. But it also raises other questions.

I actually had no idea there was a database - I thought the JSON-encoded representation was the source representation, and the authoring tool would directly manipulate it.

So should I see the JSON-encoded content representation as effectively being just object code, meant only to be interpreted by the Adapt framework's rendering engine? Up to now, even though I had my concerns with the format, I thought having an open, text-based format was great: people could tweak it directly if needed, share content using a git repository, and use other tools to produce Adapt-compatible content. If it is just object code, how does one get content created elsewhere into the authoring tool? For instance, would there be an import feature?

Cheers,

Rafael

 

 

Re: What is the intended authoring workflow
by Brian Quinn - Thursday, 20 March 2014, 3:38 PM
 

Hi Rafael,

Yes, there is a database.  Currently we have it running on MongoDB, which uses a JSON document format, so we can map it pretty closely to the JSON output the Adapt Framework expects.  This also circumvents the limitation in the framework where you can only work on one 'course' folder at a time.

I think there are plans for an import feature in the pipeline, though I can't say if it's a priority for 0.1.  The next step for both the framework and the authoring tool is to implement JSON schemas which define the properties for components and extensions.  In theory you should only be able to 'tweak' the component and extension JSON within the defined parameters set in the schema.  We're very open to pull requests which implement new or missing functionality on both components and extensions.

If you think about the JSON as object code, the authoring tool will actually publish JSON files which are identical to those created by hand in your current editor of choice.  For instance, after you publish, you could still tweak your JSON if you want.  Or you could collaborate with your colleagues in the authoring tool before you publish.
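Purely as an illustration (the ids and fields below are made up, not the real schema), a block stored as a MongoDB document and the entry the tool would publish into blocks.json could end up looking almost identical:

```
// stored in a MongoDB 'blocks' collection (illustrative only)
{ "_id": "b-05", "_parentId": "a-05", "title": "Block 1" }

// published into blocks.json by the output plugin
[ { "_id": "b-05", "_parentId": "a-05", "title": "Block 1" } ]
```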

Regards,

Brian

Re: What is the intended authoring workflow
by Rafael Chaves - Thursday, 20 March 2014, 5:45 PM
 

Thanks again, Brian.

If you think about the JSON as object code, the authoring tool will actually publish JSON files which are identical to those created by hand in your current editor of choice.

Created by hand *today*, right? Once the tool becomes available, my feeling is that the team's intention is that no one should be manually authoring content using the JSON format. Correct?

For instance, after you publish, you could still tweak your JSON if you want.

Unless you are suggesting automated tweaks (a tool that massages the output produced by the authoring tool), I hope you will agree this is not a sustainable approach. If the source is in the database, that is the only place to make manual changes, right?

Or you could collaborate with your colleagues in the authoring tool before you publish.

Right, but that forces me to use the tool for collaboration - it would be valuable to be able to use something like git. I may also be working on my own content repository (inside my company's private network) and want to share it with some people working on a public server; I believe the existence of multiple disconnected authoring sites is part of the vision for Adapt, right?

It seems the approach you guys are taking is akin to what WordPress does. I have a feeling that is not a good model to follow for elearning content authoring. WordPress content is not meant to be shared in source form, only in published form, so the fact that the content is stuck inside a WordPress instance is not as much of an issue (provided it can be migrated to a new server, and the export/import mechanisms are sufficient for that scenario).

Cheers,

Rafael

Re: What is the intended authoring workflow
by Mathew Gancarz - Thursday, 20 March 2014, 6:38 PM
 

I agree about the value of sharing the 'source' of the courses. We've often developed courses and passed the source files to other related groups to make their own tweaks. Being able to version things in git outside of the authoring tool would also be very important and useful.

I think an import/export ability could handily take care of these things though.

Re: What is the intended authoring workflow
by Sven Laux - Thursday, 20 March 2014, 8:23 PM
 

Hi, just to avoid any misunderstanding. 

Our goal is that the Adapt Framework is, and stays, usable in isolation from the Adapt Authoring Tool. This means that you will be able to build courses in the future (as you are today) without touching the authoring tool.

Our mission is to establish the Adapt Framework as the industry standard for developers and hence we will keep it separate as it is just now. We realise that this is important so that a developer community builds around the Framework and helps maintain and enhance it. 

At the same time, we are building the Authoring Tool, which enables non-technical authors to structure, build and publish courses (by packaging content with the Framework without users having to touch JSON, CSS etc). We can only reach the non-technical audience with an easy to use authoring tool. 

The balance to achieve is in requiring the right amount of metadata for plug-ins and extensions, so that a plug-in which has been developed by Framework-only users will also work with the authoring tool. We believe this is achievable without any major obstacles.

When it comes to collaboration, this will only be built into the tool. If you wish to collaborate using the Framework alone, you'll likely have to use some sort of version control system (e.g. SVN or Git); for Framework-only users, collaboration will be entirely up to them to implement. With regards to supporting developers, we have built a powerful command line interface and will carry on enhancing it.

This explains the strategy and plans. There is no need to use the tool if you are technical enough. You won't have content or modules locked in a system with a database - even if you use the tool, you can always publish uncompressed and work without the tool. We also have import/export functionality listed in the requirements already. 

Hope this alleviates any concerns. 

Thanks,

Sven

 

Re: What is the intended authoring workflow
by Rafael Chaves - Thursday, 20 March 2014, 9:36 PM
 

Thanks, Sven, that too was very informative.

We can only reach the non-technical audience with an easy to use authoring tool. 

I am all for enabling non-technical users via a user-friendly UI like the one you guys are starting to work on.

There is no need to use the tool if you are technical enough.

My feedback here is that the JSON-based encoding is just too hard/awkward even for technical users. As it stands right now, it is not usable.

Some problems are with JSON itself: I may be mistaken, but it seems JSON strings cannot contain literal line breaks (they have to be escaped as \n), so you can't just grab text from somewhere and paste it into a JSON file; you need to deal with the line breaks first.

The fact that articles, blocks and components are all in their own flat lists complicates things further, as in order to make up the hierarchy you need to match parent ids. Also, it is too easy to get parent ids wrong when writing content.

Finally, when one makes a mistake authoring the content and something is wrong, the failure mode is often just a blank page or, if you are lucky, a JavaScript error you can try to look into.

I am not sure a generic JSON Schema-aware validator can help with all those problems. It seems an Adapt-aware tool is required to perform some additional checks.

And I do think a more hierarchical representation, even if still JSON, would work much better (with blocks physically containing components, and articles physically containing blocks). Are you open to changing the format in that way?
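To make the comparison concrete, here is roughly what I mean (ids invented and attributes trimmed down):

```
current flat layout, spread across files and tied together by _parentId:

  articles.json:    [ { "_id": "a-05", "_parentId": "co-05", "title": "Article 1" } ]
  blocks.json:      [ { "_id": "b-05", "_parentId": "a-05",  "title": "Block 1" } ]
  components.json:  [ { "_id": "c-05", "_parentId": "b-05",  "_component": "text" } ]

versus a hierarchical layout, where containment is physical rather than by id:

  [ { "_id": "a-05", "title": "Article 1",
      "blocks": [ { "_id": "b-05", "title": "Block 1",
                    "components": [ { "_id": "c-05", "_component": "text" } ] } ] } ]
```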

Re: What is the intended authoring workflow
by Sven Laux - Friday, 21 March 2014, 12:08 AM
 

Hi Rafael,

Thanks for your feedback. We discussed the structure of the content in detail in the core team a while back and ended up with the decision to store it in a non-hierarchical, collections-type structure. I'm afraid I'll have to ask the developers for the full details of this decision, and I am more than happy to get back to you on it.

Changing the structure may well make things easier for developers using the Framework but has to be considered in the light of performance and architecture of the authoring tool. From memory, I believe it is this particular aspect, which led us to choosing the current data structure.

As described above, we are balancing the requirements for the framework to stand alone with developing an easy-to-use and fully featured authoring tool for non-technical end users. We are trying to follow some very good advice about being very explicit that the current release (i.e. the Adapt Framework) requires technical skills. We, the collaborators, use the current format on a daily basis to create projects for our clients and find the JSON-based work quite usable (and even efficient!) - albeit we expect there may well be a period of getting used to working with Adapt.

Suffice to say that we are here to help the community with any questions and that we are keen to listen to good comments and advice. We also endeavour to respond very quickly and be as helpful as we can.

With regards to the issues and use cases you describe, it might be that what you really want is the Adapt Authoring Tool we are in the process of developing, as this would:

  • allow you to simply copy & paste content
  • manage parent / child IDs on your behalf
  • be the Adapt aware tool to help perform additional checks
  • etc. 


I hope this helps. Thanks again for your feedback.
Sven

Re: What is the intended authoring workflow
by Rafael Chaves - Friday, 21 March 2014, 7:10 PM
 

Thanks for the detailed response, Sven. 

I can totally see the value of the flattening as an optimisation and/or to make it easier to work with some JavaScript framework. That was my suspicion, as I have seen that approach come up in other client-server JavaScript applications.

At the same time, I think it is a mistake that performance or ease of coding is being deemed more important than the readability of the representation (and that is how it comes across to an external observer). By the way, I am a developer, not a content developer, and have no problem reading JSON; my issue is with having to reconstruct the content structure by mentally matching ids. It can be done, but it is much, much harder to read than a hierarchical layout, and it opens opportunities for mistakes (parent reference mismatches) that couldn't happen otherwise and that can often be difficult to track down.

So, if the JSON representation is truly intended to be seen as "source code" and to be read and written by people (and developers are people), one suggestion is that the flattening could be done automatically on the fly or at build time, where required.

Just my R$ 0,02,

Rafael

Re: What is the intended authoring workflow
by Daryl Hedley - Saturday, 22 March 2014, 10:08 AM
 

Hey Rafael,

It's been really interesting following this thread and hearing your ideas. I'd like to go over a few of the changes we've made and why we made them.

Firstly, we started with a course.json file. Our internal Adapt framework started like this - it was nested and had a hierarchical layout. However, the nested structure starts to become overwhelming - copying and pasting large chunks of JSON and making sure each fits in with the right closing bracket. Following the indent lines down through the 16th indent becomes unmanageable. We had modules, topics, pages, articles, blocks and components, plus all the other attributes of plugins. It became a JSON mess to maintain.

Then we had a performance problem at Adapt's startup - this was mainly seen on mobile devices and IE8, where iterating through the JSON took a long time. We were putting items into models and then into collections - all of this was chunky.

Now we let Backbone do the work of importing the single JSON files and putting them into collections. No iterating over them at start time = a fast and reliable framework that starts up within our speed-test times.

You've mentioned that you think it is a mistake that we've deemed performance more important than readability of the JSON. I see this the other way - performance is key. We strive to give the user (not the developer) the best experience, as what we're creating is a learning tool. On mobile devices we want near-native performance, on desktop we want it seamless between pages, and on touch devices we want users to feel like they are in apps, interacting with touch events. Right now we are smashing our download tests (https://github.com/adaptlearning/documentation/blob/master/01_cross_workstream/developer_requirements.md). When testing our in-house version we don't come anywhere near this.

I've edited both types of JSON structure, and my personal opinion is that single JSON files containing one type each are faster to create. The reason is that we know the course structure first - this is an important step (I can see how it becomes harder if there is no structure). We have Word documents that hold our course structure - by following this we're able to put all the pages into the course, then the articles, then the blocks and then the components. I have to admit it's a different workflow, but it's much quicker when you've set yourself up with JSON snippets (Sublime Text does an amazing job of this - if I know I need two text components, one graphic and one media component I can type text "tab", text "tab", graphic "tab", media "tab"). I cannot do this in a hierarchical structure, where I have to put in another article and block to place my components.

In terms of importing and exporting - I did a lot of research around the use cases of hierarchical data structures: who's using them and why? Almost every new web app or application is using smaller chunks of JSON data. The reasons are in line with why we write smaller chunks of code - if I want to change something, it's easier to change one attribute than multiple things. For example, say I want to move an article onto another page and move the first block inside it to another article on the previous page. In a nested data structure this is quite a task of making sure you get the two places correct - selecting the right end bracket (and if you've got a few question types in there you know this data is going to be long). In our current version I have to change two attributes - no copying and pasting. Change the _parentId on the article and then change the _parentId on the block.
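As a rough illustration (the ids here are invented), that whole restructure is just two small edits:

```
// in articles.json, re-parent the article onto the other page:
{ "_id": "a-10", "_parentId": "co-05", ... }   ->   { "_id": "a-10", "_parentId": "co-10", ... }

// in blocks.json, move its first block to an article on the previous page:
{ "_id": "b-10", "_parentId": "a-10", ... }    ->   { "_id": "b-10", "_parentId": "a-05", ... }
```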

I went on a course run by the MongoDB guys, and the way they describe data is: don't lock yourself into a structure. Structure can change - what happens if one day we decide to take out articles? It's a lot easier to do with this structure than with a nested one. Nested data in MongoDB is harder to work with, so it's best to keep these separate.

Whilst I'm saying all of this, I'm conscious that we've released the framework as a developers' release - it is intended to be harder to work with than the editor. I envisage 98% of people using the tool - it does all the hierarchical data visualisations for us. Where Sven has said that some developers will still hand code JSON is when developing new plugins or adding additional JSON attributes (at Kineo we do this a lot to add attributes into the menu items). On the note of developing plugins - it's important for developers to really understand where they can put their plugin data. With the separate files, plugin creators know they can put data in "config.json", "course.json", "contentObjects.json", "articles.json", "blocks.json" or "components.json". It then means there's a nice distinction between "where can I put my attributes?" and "I can put my attributes here".

Some other benefits are that the JSON data can be imported and exported more easily, and that you can leave in JSON data that might not be needed for one course but is needed for another - simply remove the _parentId. (However, this should probably not be seen as a positive once the editor is involved.)

One of the biggest new updates in Adapt is the multi-dimensional menu system we've built. It enables contentObjects (pages and menus) to form a never-ending system of menus. In the current in-house version we were limited to three levels, and although we hardly ever went over three, we did have cases where we wanted three menu items on the first screen, two of which went to pages whilst one went to a two-level menu system. To code this and put it into the JSON structure we needed redundant JSON data for no reason. Now we can point the _parentId at the correct contentObject and it builds the menu system for us.

I hope this makes sense as I've realised it's gone on for quite a while. If you have any more questions or suggestions post back and we'll try to answer them as best as possible.

Thanks,

Daryl

Re: What is the intended authoring workflow
by Mark Lynch - Saturday, 22 March 2014, 10:52 AM
 

Cracking post Daryl.

Thanks,

Mark.

Re: What is the intended authoring workflow
by Rafael Chaves - Saturday, 22 March 2014, 11:50 AM
 

Thanks for the detailed response, Daryl. This was the key bit for me:

 it is intended to be harder to work with than the editor. I envisage 98% of people using the tool - it does all the hierarchical data visualisations for us. Where Sven has said that some developers will still hand code JSON is when developing new plugins or adding additional JSON 

which confirms my original suspicion that it is not a source format - it is indeed object code. Content authors are not expected to manipulate the JSON-encoded representation (developers may want to fiddle with it when testing a new feature). In that case, all my concerns with readability can be dismissed.

Thanks again,

Rafael

 

Re: What is the intended authoring workflow
by Daryl Hedley - Monday, 24 March 2014, 8:46 AM
 

Hey Rafael,

I wonder if naming your '_id' and '_parentId' attributes with a prefix of the parent would help? If I were to manually set my '_id's, I would use a system like 'co05-a05-b05'. Using a system of 5s means I can add objects in between later. So this '_id' is the first block in the first article on the first page. It's the system we use at Kineo to make sure we know where we are.
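For example (purely illustrative):

```
co05                 the first page
co05-a05             the first article on that page
co05-a05-b05         the first block in that article
co05-a05-b07         a block added later, slotted in between b05 and b10
co05-a05-b10         the second block
```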

Thanks,

Daryl 

Re: What is the intended authoring workflow
by Adam Laird - Monday, 24 March 2014, 10:34 AM
 

Small comment on the naming convention: if you use underscores instead of dashes, it makes your double-click copy-pasting (oh, we all love C&V-induced RSI) a lot easier, e.g. knowledgeCheck_a01_b01_c01.

I find it a tiny but worthwhile improvement to the workflow. It's a darn shame most text editors (e.g. BBEdit, Brackets) don't do auto-suggest/auto-complete within quotation marks.

Re: What is the intended authoring workflow
by Daryl Hedley - Monday, 24 March 2014, 2:43 PM
 

Hey Adam,

Good shout - but remember that these IDs get pushed to the elements as class names. By convention and our coding standards our class names contain dashes. It is a mighty shame that dashes can't be selected by double clicking.

Thanks,

Daryl 

Re: What is the intended authoring workflow
by Adam Laird - Monday, 24 March 2014, 3:10 PM
 

Good comment, Daryl.

To adhere to the coding standards while keeping ease of use, I propose the use of selectable camel-case 'dashes', i.e.

c01dashA02dashB02dashC03

;)

Re: What is the intended authoring workflow
by Rafael Chaves - Monday, 24 March 2014, 5:08 PM
 

Daryl, I have been doing something like that, and it does help a bit, thanks (it reminds me of my days learning BASIC). Typos still occur, though, and each time I spend quite a while trying to find out where the mistake was introduced. If by the end of the build I could know whether my content has any dangling references, that would catch most issues.

Cheers,

 

Rafael

Re: What is the intended authoring workflow
by Adam Laird - Tuesday, 25 March 2014, 8:04 AM
 

You're thinking of something like a 'check the parentId exists' task? Yep, that would be useful.

Perhaps a simple Python script would do it. I use something similar on a different framework we built; it does that check on assets, flagging both unused and missing ones.

Re: What is the intended authoring workflow
by Rafael Chaves - Wednesday, 26 March 2014, 4:31 PM
 

Yes, Adam, that definitely would help. One should not have to go all the way to loading the course in a browser to find out there is a semantic problem with the content.

Re: What is the intended authoring workflow
by Sven Laux - Wednesday, 26 March 2014, 10:16 PM
 

Hi Adam,

Thanks - I agree, this is a great idea and would be very helpful.

Do you think the script you use could be adapted and shared? Maybe we can find a willing member in the community to help with this. Naturally, the code would need to be available under a suitable license...

No worries if not and I hope you don't mind me asking.

Thanks,
Sven

Re: What is the intended authoring workflow
by Adam Laird - Thursday, 27 March 2014, 12:06 PM
 

Hi Sven

Checking for assets

I had a quick look to see if it's adaptable (it wasn't created by me). It checks a defined XML file for a certain tag and matches against items in a folder. I haven't been able to work out whether, or how, it could be changed to work with the markup in the course .json files.

This is outside my ability to adapt or create. I'll ask the guy who made it for me if it's possible, but I know he's really busy on a project at the minute.

If it can be changed to read .json it would work fine for the components folder, but I'm not sure how you would create one to check that all _ids have _parentIds and vice versa. I have attached my attempt below.

It relies on BeautifulSoup to work.

Re: What is the intended authoring workflow
by Daryl Hedley - Thursday, 27 March 2014, 2:47 PM
 

Hey Adam,

I've just been speaking to Brian, and people seem to want this in Adapt, so we'll add it as a Grunt task. My initial feeling is to only ever call it when "$ grunt build" is run, as putting it into the watch command seems too much. However, if a developer is editing just the JSON data, maybe they can run another command like "$ grunt watchJSON".

I'd like to suggest we check two things:

1 - Every element has a parent that exists

2 - There are no duplicate _ids.

How does something like that sound? Any other features to check (without it becoming a long and processor-heavy task)?
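For illustration, a rough sketch of those two checks (just to show the idea - not the actual task, and the src/course/en layout is assumed) might be:

```js
var fs = require('fs');

var files = ['course.json', 'contentObjects.json', 'articles.json', 'blocks.json', 'components.json'];
var items = [];

files.forEach(function (file) {
  var data = JSON.parse(fs.readFileSync('src/course/en/' + file, 'utf8'));
  items = items.concat(data); // course.json is a single object, the others are arrays
});

// 2 - no duplicate _ids
var seen = {};
items.forEach(function (item) {
  if (seen[item._id]) console.error('Duplicate _id: ' + item._id);
  seen[item._id] = true;
});

// 1 - every element has a parent that exists
items.forEach(function (item) {
  if (item._parentId && !seen[item._parentId]) {
    console.error(item._id + ' has a _parentId (' + item._parentId + ') that does not exist');
  }
});
```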

Thanks,

Daryl

Re: What is the intended authoring workflow
by Rafael Chaves - Thursday, 27 March 2014, 3:44 PM
 

That would be great. I heard (read) talk of using JSON schema in the future; that should take care of most mistakes.

Re: What is the intended authoring workflow
by Brian Quinn - Thursday, 27 March 2014, 4:20 PM
 

Hi Daryl,

As we introduce JSON schema for the components we could potentially do the same thing for the framework itself, i.e. check that anything in components.json corresponds to the expected JSON format, and run the same type of validation against articles, blocks, etc.  I agree with you that this should be a Grunt task.

When the course is actually running, I'd really like to see the framework become a little more robust - for instance, if assets cannot be loaded. Right now, if an image file generates a 404, you're likely to be given a blank page and have to rely on the console to figure out why. It would be good if there was some feedback that this was the reason for the course not rendering as expected. Perhaps this could be output by a 'debug' version of one of the existing Grunt tasks too.

Brian

Re: What is the intended authoring workflow
by Adam Laird - Friday, 28 March 2014, 9:33 AM
 

I would recommend it not being part of the grunt build command but an extra optional one.

$ grunt check-ids (perhaps?)

The simple reason being: when I'm putting a structure in place I might run grunt build as I go along to check a section, but I will have unmatched parents and _ids from the unfinished, placeholder sections I will be creating later.

It would also help if the grunt task could check both ways, i.e. 1. a parent that doesn't exist, and 2. a parent with no children (it would obviously ignore components.json for that one).

The check-assets feature, even if it is a different task, is really, really useful for catching misnamed assets and for de-cluttering/reducing build size from unneeded images - if you're like me, you'll grab the contents of a similar module to form the base of a new module.

$ grunt check-assets (perhaps?)

The one I had for an XML file checked for both redundant and missing assets.
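For what it's worth, a rough sketch of that kind of two-way asset check (just an illustration - the folder layout is assumed, and it's not the script I mentioned) could be:

```js
var fs = require('fs');
var path = require('path');

// read all the course JSON as one big string (layout assumed)
var json = ['course.json', 'contentObjects.json', 'articles.json', 'blocks.json', 'components.json']
  .map(function (file) { return fs.readFileSync('src/course/en/' + file, 'utf8'); })
  .join('\n');

var onDisk = fs.readdirSync('src/course/en/assets');

// redundant: bundled in the assets folder but never referenced in the JSON
onDisk.forEach(function (file) {
  if (json.indexOf(file) === -1) console.warn('Unused asset: ' + file);
});

// missing: referenced in the JSON but not present in the folder
(json.match(/assets\/[\w.-]+/g) || []).forEach(function (ref) {
  if (onDisk.indexOf(path.basename(ref)) === -1) console.warn('Missing asset: ' + ref);
});
```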

Re: What is the intended authoring workflow
by Daryl Hedley - Friday, 28 March 2014, 9:50 AM
 

Hey Adam,

Thinking along the same lines. "$ grunt check-json" will do a few checks. Having this separation means we can also have things like "$ grunt check-assets". These can be triggered individually but will also be added when running "$ grunt build". So the final build step does a check too.

During grunt watch and grunt dev this will not be called, but I think we should have a "$ grunt watch-json" command too, so it checks a few things whilst you're editing.
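Just to sketch how the wiring could hang together (the task names other than the new check-* ones are placeholders, not the real Gruntfile):

```js
module.exports = function (grunt) {
  // 'validate-ids' and 'validate-assets' stand in for the checks discussed above
  grunt.registerTask('check-json', ['jsonlint', 'validate-ids']);
  grunt.registerTask('check-assets', ['validate-assets']);

  // the final build runs the checks too, but the normal watch cycle does not
  grunt.registerTask('build', ['check-json', 'check-assets', 'less', 'handlebars', 'copy', 'requirejs']);
  grunt.registerTask('watch-json', ['watch:json']);
};
```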

Thanks,

Daryl

Re: What is the intended authoring workflow
by Brian Quinn - Friday, 28 March 2014, 10:25 AM
 

Hi Adam,

I really like the idea of a grunt task to flag up bundled assets that you haven't referenced in your course. I think a lot of content developers do use similar modules as a basis for new ones, and this would be a great way to reduce file size.

Brian

Re: What is the intended authoring workflow
by Mark Lynch - Friday, 28 March 2014, 11:21 AM
 

Hi,

Implementing something like this https://nodejsmodules.org/pkg/grunt-smushit might be really useful. It would have to target the assets folders for the course.

Mark.

Re: What is the intended authoring workflow
by Daryl Hedley - Monday, 31 March 2014, 3:13 PM
 

Hey Mark,

I think it's important to add a PNG/image compressor. I'd like to add this quite soon, but it needs quite a lot of scoping. It would be ace if we could set up a mixin in our LESS that takes the original image size as a variable, then grabs the theme dimensions and creates an image per dimension, meaning we could create background images for articles and blocks really easily. But it could complicate quite a few things.

Also - I've just finished the grunt task for checking json data. It's going into the develop branch with a release expected really soon. We now have two commands that check our json.

``$ grunt check-json``

and

``$ grunt build`` now executes ``$ grunt check-json`` straight after json-linting.

Thanks,

Daryl 

Re: What is the intended authoring workflow
by Daryl Hedley - Wednesday, 2 April 2014, 2:38 PM
 

Hey everyone,

Just a heads up that I've just pushed a new extension to the registry to help visualise Adapt's layout.

This extension puts the _ids in little white transparent boxes over all the page objects (pages, articles, blocks and components). I can't claim the idea as my own, as the developers at Kineo thought of it to help with testing and getting client feedback on specific elements of a page.

Run "$ adapt install shadowId" to use. It doesn't need any thing else

However, before you build, please remove the extension from the extensions folder, otherwise you will be left with the _ids all over your course. This should help with the JSON file issue too.

Thanks,

Daryl

Re: What is the intended authoring workflow
by Mathew Gancarz - Wednesday, 2 April 2014, 3:22 PM
 

Hi Daryl, sounds like that would be really useful. Thank you for sharing it!

I'd suggest adding a "Useful tips for development" page on the wiki to gather tips such as these together. I haven't delved much into development beyond dabbling, but I think gathering some of the useful forum tips such as this one, as well as the other post with Ogg encoding options and other such things, would have a lot of value for potential developers.