An Academic Authoring Environment

OK, in a somewhat freaky set of circumstances, it seems I’ve been thinking through some ideas over the summer that a number of others have too. I had just finished sketching out some initial ideas yesterday, and today I read through Mike Caulfield’s post A Better Way to Build an EdTech Support Wiki (or, Doctor, Heal Thyself), which hints at quite a few of the same kinds of issues I’ve been mulling over. The idea around federation from Ward Cunningham is pretty close to some of the thinking I’ve been doing – although I have to admit I had no idea about Mike’s work in this space until I saw his prior post A Federated Approach Could Make OER More Numerous, Findable, and Attributable.

So I’m coming at this from the perspective of having spent a considerable amount of time last year working on the publishing phase of content, through our work on TADPOLE, The Adaptive Digital Publishing Engine. While we completed the proof of concept, the work is on ice at the moment, and in the downtime I started to rethink the system and ways we could simplify it, make it easier and make it better. In going through this reflection I started to think more about the authoring side of things – how could we make that better?

How can we write and develop content better?

The first clue was that writing works better with feedback. Getting others to collaborate on, read, edit and change your work usually produces a better result. The project report and two academic papers I worked on last year were collaborative, but the experience of working this way sucked. The tools we had access to made it hard to incorporate changes and edits (even finding them was a problem!), and this in turn created a huge number of version control issues. Even on my own, using different software applications meant copying and pasting from app to app – unsure of which was the “Master” copy.

At the same time my team was engaged in software development, and it’s through this that I became aware of Git. Git provides the backend technology that fosters collaboration between developers: through concepts like clone, fork and commit you can develop a custom workflow suited to you and your environment, but in a way that actually fosters connection and collaboration. It provides the practical tools for people to work together as well as go off independently.

That concept was missing from my experience of writing. What I was using was just a bunch of clunky tools that required a lot of effort from me to mesh together – particularly going from authoring through to publication. Rather than simplifying the process, the technology tended to get in the way. I quickly came to the realisation that what I needed was to work out how I could get Git working for me.
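
As a rough illustration of what that could look like, here’s a minimal sketch in Python using the GitPython package – the repository URL, file name and commit message are all hypothetical, and this is a sketch of the workflow rather than anything I’ve built:

```python
# A sketch of "getting Git working for me": treat a folder of Markdown
# files as a repository so every edit is versioned and shareable.
# Requires the GitPython package (pip install GitPython).
from git import Repo

# Clone a shared writing project (hypothetical URL and path).
repo = Repo.clone_from("https://example.org/papers/project-report.git",
                       "project-report")

# ... edit project-report/draft.md in any text editor ...

# Record the change with a meaningful message, then share it back.
repo.index.add(["draft.md"])
repo.index.commit("Incorporate co-author feedback on the methods section")
repo.remote("origin").push()
```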

This coincided with one other big change in my work – Markdown. Throughout 2013 I made the switch to writing everything in Markdown. Occasionally it’s Google Docs, but for anything I initiate and work through, it’s Markdown. Why? Well, it simplifies the publishing process. With a couple of keystrokes or mouse clicks I can publish my content to almost any format, any tool, any system, and do it with clean, simple code. No mess, no fuss. Add in a style sheet and I’m done – it’s that simple! This works great for me – but could this become more mainstream?
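
To make those “couple of keystrokes” concrete, here’s a hedged sketch of that publishing step in Python, using the third-party markdown package; the file names and stylesheet are assumptions for illustration only:

```python
# Convert a Markdown source file to a styled HTML page.
# Requires the markdown package (pip install markdown).
import markdown

with open("chapter.md", encoding="utf-8") as src:
    body = markdown.markdown(src.read())  # clean, simple HTML from the source

page = ('<!DOCTYPE html>\n<html>\n<head>'
        '<link rel="stylesheet" href="style.css"></head>\n'
        '<body>\n' + body + '\n</body>\n</html>')

with open("chapter.html", "w", encoding="utf-8") as out:
    out.write(page)
```

The same source file could just as easily be handed to a converter targeting EPUB, PDF or a CMS – the content never changes, only the output step does.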

The last clue in framing my thinking came from some discussions around OERs at the Ascilite conference. In a great symposium session we worked in groups on some of the issues in developing and implementing OERs within institutions. Our group focussed quite a bit on the lack of interoperability of systems, information and published content. Again, despite the best efforts of many smart and talented people, technology was getting in the way.

Looking over what was happening I started to base my thinking around:

  1. Git – its ability to version control, clone, fork and commit.
  2. Markdown – a simpler way of writing content that simplifies publishing.
  3. Open Practices – an understanding that technology needs to be more interoperable while still allowing for localisation.

Over the summer I’ve been mulling over what could be done about all this, and I came up with the following:

An Academic Authoring Environment

The concept here is an Academic Authoring Environment that supports the following core functions:

  • collaboration
  • multiple authors
  • multiple institutions
  • publishing as a temporal container
  • versioning, cloning, forking
  • customisation
  • localisation
  • open & federated access
  • a content first approach
  • multiple workflows
  • system interoperability (API connections)
  • distributed hosting (self and institutional)
  • self-publishing + institutional publishing

How could you do it?

Well, I haven’t really worked all of this out in detail (and as I’m not a programmer I’m unlikely to ever build it myself), but I have a few ideas that I’ve started to sketch out.

The main one comes with the arrival of the flat file CMS. Rather than being built as a monolithic system, the flat file CMS just works via files and folders – things like Ghost, Statamic & Kirby. They store your content as simple text files and use a folder structure for your other components – media, CSS, JavaScript. Then, using PHP (or similar), your website gets put together on the fly. This allows a level of independence from databases and applications, keeping things much simpler and more manageable.

So my initial idea is to separate the flat file CMS into 1. an authoring and 2. a publishing function. We borrow the simple folder structure from the flat file CMS to order content and manage assets, and set up a simple authoring standard. Writing in Markdown would also allow you to use Git to manage the content itself – to share, contribute and collaborate, with version control, cloning and forking available. The publishing arm could be a customised flat file CMS for display on the web, or an API pointing to the hosted content so that it can be incorporated directly into an LMS or CMS.
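
To give a feel for what that publishing arm might look like, here’s a toy sketch in Python using Flask; the folder layout, route and behaviour are assumptions rather than a design:

```python
# A toy "publishing arm as an API": serve any document in the content
# folder as rendered HTML so an LMS or CMS can pull it in directly.
# Requires Flask and the markdown package.
from pathlib import Path

import markdown
from flask import Flask, abort

app = Flask(__name__)
CONTENT = Path("content")  # the same files-and-folders structure used for authoring

@app.route("/api/<slug>")
def get_document(slug: str):
    page = CONTENT / f"{slug}.md"
    if not page.exists():
        abort(404)
    return markdown.markdown(page.read_text(encoding="utf-8"))
```

Because the authoring side is just files in folders under Git, the API never needs to know how the content was written – it only reads what’s there.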

Additions

Now, the ease of use of such a system – which at the moment looks like little more than a text editor – is pretty low, but there are ways it could be made better.

  • An editing and collaboration interface like that used by Drafts or Editorially could be really useful to ease people into this new method of working. They could actually be incorporated as an optional component of the authoring environment!
  • The authoring side can be kept relatively simple for the most part – just files and folders – with the complex publishing components abstracted away. This can mean publishing a huge tome of work is as simple as telling a system to look at a folder (see the sketch after this list). The technology, rather than the person, does the heavy lifting.
  • Ideas like the Adaptive Media Element (AME) used in our TADPOLE work last year could be incorporated into the system. This would mean that complex and adaptive content is possible. AMEs could also be used to circumvent some of the limitations inherent in Markdown, which tends to err on the simple side of things.
  • Once content is marked up properly it’s a pretty simple process to convert it to other formats. This means that such a system could be used to populate websites just as easily as it could be used to create printed books, mobile apps, eBooks and more! This is a huge selling point and broadens the use across an institution – rather than being a tool specifically for teaching, research, support or administration.
  • A system like this would be able to handle some of the issues that Mike highlighted around attribution – but also findability – a GitHub for academia would be a great idea!
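
As a minimal sketch of the folder-driven publishing idea above (the file layout and name-order convention are assumptions): every Markdown file in a folder becomes part of one published document.

```python
# "Tell the system to look at a folder": compile every Markdown file in
# a folder, in name order, into a single HTML document.
# Requires the markdown package.
from pathlib import Path

import markdown

def publish_folder(folder: str) -> str:
    chapters = sorted(Path(folder).glob("*.md"))
    html = [markdown.markdown(c.read_text(encoding="utf-8")) for c in chapters]
    return "\n<hr>\n".join(html)  # one long document from many simple files
```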

We need to start at digital

I came across Tom Johnson’s posts Structured Authoring By For And Or Nor With In the Web and Structured Authoring (like DITA) a Good Fit for Publishing on a Website? this weekend. They came at a good time for me, seeing as I’m thinking about publishing, authoring and content. I missed the publication of the original post – I only came across it thanks to the great work of the Zite recommendation engine – but I’m quite taken aback by the response to the post, and intrigued by many of the amendments since made to the original. It seems Tom is quite embedded in the documentation area, and that community has some pretty strong views on publishing.

This also coincided with a referral, via @CathStyles, to a post by Paul Rowe on the Create Once Publish Everywhere (COPE) concept. Paul gives a really great overview of COPE in a fair and balanced way. He also offered me a different perspective through his work with museums, and a better understanding of yet another specific context with specific needs.

Given my own work in education I feel I am getting a better picture of the state of play.

What I’ve picked up is that we are all attempting to deal with analogue systems and processes shoehorned into digital spaces. The current state of our systems, processes and software is tied to an analogue way of thinking, of developing and of working. There is nothing inherently wrong with those systems – the structure of DITA, the individual museum catalogues or the tome of the study guide – they all work in their own contexts, but they have reached their limits because they no longer exist in a single context. The culture, society and technology around them have changed substantially, but these systems hold onto legacy vestiges.

I agree we need something new, and Tom’s discussions around what the Web offers are really interesting. Why? Because the web is digital and always has been. It has evolved much faster than our other publishing systems and it has embraced its digital nature. The line from Tom’s article that resonated the most with me was this: “web platforms are built on a database model of dynamically pulling out the content you want and rendering it in a view”. This to me highlights the contrast with the analogue model, where content was just stored in the database – content was never developed with the database in mind, it’s just where the finished product ended up. It gives the illusion of being digital – but without any of the inherent benefits of being native.

While I didn’t come to my conclusion the same way, the Adaptive Media Element concept is about applying the content to the database. Using its richness and ability to apply logic, to call dynamically and to change as required is what makes the AME adaptable. It provides the strength of the digital object while retaining the detail and richness of the content itself.

I agree with Paul, and I’ve quoted Bill Hicks before, but the line about the need to evolve ideas rings true to me. Evolution doesn’t mean starting from scratch; in fact, in most cases it just means making small changes that make life easier, more adapted to the current climate. I think we need to take the ideas of structured authoring, illustrated so well by the COPE concept, and move them forward. We need to leave behind some of the analogue baggage. We need to start at digital.

Adaptive Digital Publishing – Current Work & Ideas

This is my first post in an attempt to “work out loud”, i.e. to be more open in my practice rather than just my output. It’s an attempt to log my current ideas and concepts around a topic to frame it, share it and reflect back at a later date.

One of the things I’ve been interested in over the last decade is the convergence of digital tools with traditional analogue publishing processes. Working in the design field I’ve been at that coalface and published my own fair share of both digital and analogue artefacts.

The last 5 years have seen a massive shift and a chance for digital to move into the analogue print space with viable and attractive alternatives. Smartphones and tablets (and everything in between) offer new opportunities and potential to fundamentally change publishing – how we see it and how we do it.

What’s become abundantly clear though is the lack of tools that can take digital publishing to the next level. Most of the popular tools have been co-opted from print and bring with them legacy concepts and constraints. While they may seem adequate, they tend to lack the ability to realise the potential of a true digital publishing model.

As part of the mLearn project I’ve been heading up, we investigated what options we have as a university to transition our content to a mobile platform. The results weren’t pretty. Nothing was easy. Conversion was never precise and required manual manipulation to massage it into something viable. This is fine in individual cases but not at the scale we require at Charles Sturt University. What we want as an institution is a way to author once and publish out to many channels, so that our students get to choose how and on what platform they consume our content. It’s a realisation that our students are diverse, with diverse needs and desires, and that a one-size-fits-all solution never really fits anyone.

So I’ve been thinking.

Beyond Text

The first big idea was the need to start thinking beyond text. Technically, text is easy – if you get a properly marked-up document you’re fine. Perhaps that ‘if’ is harder to come by in some cases, but what I’m trying to say is that text is not the problem. No, the sticking point is media. Media is all the other ‘stuff’ that’s possible to place in and around the text: the data tables, diagrams, images, video, audio, activities, quizzes – everything else that’s possible with a digital medium. This is where the problems lie, the challenges and the break points. Media brings to the fore the inherent differentiation between print and digital, and in many cases defines them as unique and different. However, the goal is not to wipe out print and replace everything with digital. I’d rather see cohabitation – working and publishing to both, to all forms, platforms and spaces – not adversarial but complementary, and certainly not the death of analogue.

Converging Ideas

My skill set and knowledge is something I consider quite unique. I straddle the multiverse, working on the fringes of many realities – analogue & digital, offline & online, web & print, commercial & public. So it’s within this divergent conversation I have been able to pick up on common strands. Twitter has been immeasurably important in this process – allowing me to tap into many new fountains of knowledge and expose my brain to new ideas.

I pick up on the content strategy discussions, particularly those from Karen McGrane, which resonate strongly. The thinking around responsive design from Ethan Marcotte. The emergence of new ideas from Brad Frost. The process of Mobile First from Luke Wroblewski. The backend work by guys like Dave Olsen on creating better tools for adaption. The discussions around Content Management Systems – their pros and cons and their place in the new age. Ideas from Jason Grigsby, struggling with responsive images that cope with retina and standard displays. The dynamic and active content concepts from Bret Victor.

What I’ve been working through is aligning those ideas – cherry-picking concepts that resonate with my work and trying to formulate them into something we can work on in the project – and I think I’ve got there.

Adaption Through Specialisation

A few weeks ago I had a light bulb moment. I was watching a documentary on metamorphosis, and as the narrator described the process and its genesis it sparked a Eureka moment! I got pen and paper and started scrawling notes:

Metamorphosis means to change form. It’s an evolutionary model whereby there is conspicuous and abrupt transformation accompanied by changes in habitat or behaviour.

That concept, that idea, that process – I could see vividly how it related to what we want our publishing systems to do. Not just simply respond, but adapt to a specialised form uniquely suited to the mode of delivery.

The next big question was how.

The Birth of the Adaptive Media Element

I struggled with this for some time, but the work done by those in the web field, particularly around images, was extraordinarily helpful. This is what I came up with:

An Adaptive Media Element (AME) is in essence a meta-object, which contains self-referential information. It is not a single file per se but instead contains more detailed and expressive information that allows logic to be applied. For example an AME might contain a file type reference, the file itself, a weblink to an external source or library, source information of where it came from, reference information, alternative files or metadata, a title, a caption and a description. In the diagram below is an example of what a Video AME could look like – a single element in the authoring environment that links to a library of connected files and information:

This diagram shows how a single AME appears in the editor as well as all the individual components that could make up an AME.

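As a way of pinning the idea down, here’s a rough sketch of an AME as a data structure in Python. The field names are assumptions drawn from the examples above, not a defined schema:

```python
# An Adaptive Media Element as a meta-object: not a single file, but a
# bundle of self-referential information that logic can be applied to.
from dataclasses import dataclass, field

@dataclass
class AME:
    kind: str                 # e.g. "video", "image", "activity"
    title: str
    caption: str = ""
    description: str = ""
    files: dict = field(default_factory=dict)   # alternative files, e.g. {"mp4": "...", "poster": "..."}
    link: str = ""            # weblink to an external source or library
    source: str = ""          # where it came from
    references: list = field(default_factory=list)
    metadata: dict = field(default_factory=dict)
```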

This extra-rich information allows logic to be applied and, through a predefined profile, only the relevant information to be pulled in and displayed in the final markup. In the example below I’ve used some HTML to define the kind of markup that would be created. In this case the DIV acts as a container for the elements inside it, written from the library:

The AME is replaced by the individual components relevant to the publishing output. In this example a video AME is converted into a DIV with the associated elements and attributes coming from the library.

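Continuing the hypothetical sketch above, this is roughly what applying a profile to a video AME could look like; the profile names and markup are illustrative only:

```python
# Apply a profile's logic to a video AME: the profile decides which
# components are pulled from the library and what markup gets written.
def render_video(ame: AME, profile: str) -> str:
    if profile == "web":
        return (
            '<div class="ame ame-video">\n'
            f'  <video src="{ame.files["mp4"]}" '
            f'poster="{ame.files.get("poster", "")}" controls></video>\n'
            f'  <p class="caption">{ame.caption}</p>\n'
            '</div>'
        )
    if profile == "print":
        # No playable media in print: fall back to caption and link.
        return f"{ame.caption} (video available at {ame.link})"
    raise ValueError(f"no profile defined for {profile!r}")
```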

To some extent this is possible through customised web solutions, but not for the mainstream, not directly linked to an authoring system, and not across the many channels and platforms outside the web environment.

… So back to some more thinking. How would this all work, how would it function, what would it look like? At the same time, as anyone who works in a large institution or on the web would know, you need a great acronym to get any traction. So I looked up words containing A, D and P (adaptive digital publishing). There weren’t many, but one stood out…

TADPOLE – The Adaptive Digital Publishing Engine

Developing a process based on metamorphosis and that’s a suggested word? C’mon that’s fate, amirite!

So TADPOLE is an attempt to envision a system that would allow you to author and publish content out to many formats using Adaptive Media Elements. It consists of three elements:

  1. The Authoring Environment
  2. The Adaptive Media Elements Library
  3. The Transformation/Compiler Engine

Describes the three main theoretical components of the system.

While I had this initial model, I needed some other voices and opinions and was able to bring in the team. Rob and Rod brought some structure and order to my very sketchy ideas. From there we were able to start to define the functionality of each component.

The authoring environment can look and function however one may see fit – but what it essentially creates is structured content. Text creates the narrative structure required for publishing.

The AME Library operates as a database of all the individual components required for each element. Many different types of AMEs can be defined, and each would have its own separate components listed. Adding content to the library would be similar to filling in a form with predefined fields. The authoring environment would contain a tool to insert an AME into the narrative so that all elements were contextualised and embedded, rather than hanging off the side.

The transformation engine is where the logic is applied (defined as “PHP magic” in our early discussions). It would scan through the document, find each of the AMEs and, using a predefined profile, insert the relevant components from the library into the markup. Profiles would be defined for each output required, so many could be developed and applied to one source to ensure specialised output. Our initial focus was on three – print, eBook and Web – but these profiles could be customised and subdivided much further. Profiles could be developed for specific media, platforms or content restrictions, e.g. at CSU we have to deal with separate copyright restrictions depending on whether delivery is via print or online. The compiler would then do the final render, attach the presentation layer and dump out the finished files and folders.

Illustrates how the profiled markup then goes through the transformation and compiling process to output the finished files.

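As a toy version of that scan-and-replace step (the {{ame:ID}} placeholder token is my own invention for illustration, not part of TADPOLE):

```python
# Scan the authored document for AME placeholders, look each one up in
# the library, and splice in whatever markup the chosen profile dictates.
import re

AME_TOKEN = re.compile(r"\{\{ame:([\w-]+)\}\}")

def transform(document: str, library: dict, profile: str) -> str:
    def replace(match: re.Match) -> str:
        ame = library[match.group(1)]      # fetch the element's components
        return render_video(ame, profile)  # apply the profile's logic (see the earlier sketch)
    return AME_TOKEN.sub(replace, document)
```

A separate compile step would then wrap the transformed markup in the presentation layer for the chosen output, dumping out the finished files and folders.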

So Why Do This?

What we tend to do at the moment is simply transcribe content from one format or file type to another. This is incredibly time-consuming and inefficient, and tends to focus resources on manual transcription of content rather than capitalising on the logic inherent in the machine. At the same time there is a drive to provide greater diversity in publishing options. We find ourselves in a quite untenable situation: in broad terms, the world has changed but the tools haven’t kept pace. We need better tools and better processes.

Everything outlined above is possible today, but what is available from vendors seems to be either an overly simple authoring tool that lacks the depth required for publishing, or an overly complex authoring tool that alienates everyday users in order to gain powerful publishing functions. What we are proposing is something that doesn’t need to compromise, for two reasons:

  1. Content is separate from presentation &
  2. Authoring is separate from publishing.

They co-exist within the same system, but there is a clear delineation to address quite different requirements. Content should be seen as liquid, but presentation is many-faceted and required to conform to certain constraints. In much the same way, authoring should be simple and intuitive, but publishing needs to be complex and extensible. Through this separation of form and function we can achieve a much smarter tool that capitalises on the inherent abilities of machine and human, rather than forcing one to compromise for the other.

What my team and I have tried to envision is an extremely adaptable system. One where:

  • content is developed neutral to the delivery method
  • shape and form come from the narrative
  • authoring is simple and intuitive
  • output options are complex and flexible
  • automation and logic do the heavy lifting, and
  • disparate production processes are consolidated

This creates a future-friendly publishing system that will remain adaptable as things change. New components and AMEs can be added as needed, and new profiles developed as new standards, formats and platforms are released. Instead of re-encoding, re-creating and translating content into each new format as it arises, the process can be automated and structured, based on logic that can be applied to all content automatically. Authors can look after the creation and developers can look after the backend.

From here

At the moment we are only looking at developing a proof of concept, so at this stage we would adopt a static publishing model – you would need to initiate the process rather than it being dynamic. This is to impose the rigour of form, using the act of publishing to provide the temporal constraint of a beginning and an end. It also minimises the complexity of the system, as we won’t be required to maintain versions or host live content. Instead we will leverage our existing systems – a digital repository and LMS – which are far more capable in these areas. Our needs within the university are also centred around the temporal constraints of sessions and semesters, so we want something to remain static for their duration. That said, we can envision that the model could be adapted further for use in a dynamic system.

So that’s where we are at the moment. I’ve started to sketch out some of the required AMEs, as well as scoping what products are out there that can do what we are proposing. My feeling is that customising an existing CMS might be the way to go, but I am open to ideas.

If you would be interested in collaboration or finding out more please feel free to contact me.

Separating Content from Presentation

At the moment I’m planning some work in the area of digital publishing. The premise is to develop a proof of concept for an adaptive digital publishing system. Central to this idea is the separation of content from presentation. This concept is perhaps best embodied on the web – HTML providing structure to content and CSS providing all the necessary visual styles. The model is surprisingly flexible, probably best showcased by the CSS Zen Garden site. What changes on the site isn’t the HTML – the content stays the same; just the CSS file is changed to create radically different page designs.

The idea that the two can be neatly divided was thrown into disarray by content strategist Karen McGrane in her recent post over on A List Apart – WYSIWTF. In the introduction she states:

The reality, of course, is that content and form, structure and style, can never be fully separated. Anyone who’s ever written a document and played around to see the impact of different fonts, heading weights, and whitespace on the way the writing flows knows this is true.

Now the WYSIWYG editor, and the faith we put in it, has to die, because it is simply no longer relevant. Content will end up on the device chosen by your viewer, not you. Did you know there’s a browser on the Kindle? Yes, your site and your content displayed in 256 glorious shades of grey. Did you design for that? Did your WYSIWYG editor show you that? But what can we use instead? Karen dismisses the inline editor for many of the same reasons as the need to ditch WYSIWYG, so what’s the alternative?

Well, the last line – “If we want true separation of content from form, it has to start in the CMS” – was a bit of a guiding light.

Content, Container and Presentation

I think what we are missing is the concept of a middle man. Content and presentation are too distinct – they are the two extremes in this case – and we need to find some common ground. This is the place for the container.

The container is a shapeless vessel, more liquid than solid, but it provides a flexible structure for the content. The container defines what the content is. It might be a chapter, an article, a post, a review – but what it does is create a defined space for the content to live in. The container is the tool to avoid “blobs” and faceless chunks of text; it defines where content belongs and provides context. Content without context is pointless, so we need containers to help us develop better systems, better tools and better presentation. Once we have a container in place we can start to develop better tools to preview, review and edit. We can provide more expansive “preview” modes so that we can render models on different devices, screen sizes and browsers, and even in different channels like an app.
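
As a loose sketch of the idea (every name here is illustrative, not a proposed schema), a container might look something like this:

```python
# A container: a typed vessel that gives a blob of content a defined
# shape and context before any presentation is applied.
from dataclasses import dataclass, field

@dataclass
class Container:
    kind: str      # "chapter", "article", "post", "review", ...
    title: str
    body: str      # the liquid content itself
    context: dict = field(default_factory=dict)  # where it belongs: course, module, sequence, ...
```

A preview tool could then render the same Container through different presentation profiles – phone, tablet, app – without ever touching the content.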

The container appears to represent the missing piece for my work and I can see the need to develop this idea further. So I’ll keep you posted on how things develop.

PS – wondering if I was channeling this great Aussie invention:

The Goon Bag