Riffing off Remix

I’m feeling a little inspired after reading David Wiley’s The Remix Hypothesis and Mike Caulfield’s Paper Thoughts and the Remix Hypothesis. That’s on top of putting together an application for a Shuttleworth Foundation Fellowship, where I’ve applied to carry on doing work around adaptive digital publishing. (The pitch video outlines a lot of what I’m going to describe in a pretty simple way – so if you want to know more, have a watch, and I’m happy to answer any questions.) One thing I’m particularly keen to explore in this space is how to improve sharing, collaboration, reuse and remixing – is it possible to build that kind of functionality into a system, so that it is built for and with open content at its heart?

Over the last couple of years I’ve been playing around with the concept of Adaptive Digital Publishing. A group of us wrote a paper and developed a proof of concept. We shopped it around for funding but other people had other priorities.

Conceptually I think it stands up as the most effective way to publish materials across multiple platforms. It brought together ideas that are only now starting to emerge into the mainstream – e.g. in srcset and picture in HTML – where content is adapted depending on attributes set by the device and browser. The Adaptive Media Element we worked on did that, but in more complex ways and for all types of media – from video, data and images to audio – and across print, web and eBooks.
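To make that concrete, here’s a minimal sketch of the srcset/picture pattern the AME generalises, written as PHP-generated markup – the file names and breakpoints are hypothetical:

```php
<?php
// A minimal sketch of the srcset/picture pattern the AME generalises:
// the browser, not the author, picks the best-fitting source for the
// device. File names and breakpoints here are hypothetical.
$sources = [
    '(min-width: 1200px)' => 'figure-large.jpg',
    '(min-width: 600px)'  => 'figure-medium.jpg',
];

echo "<picture>\n";
foreach ($sources as $media => $file) {
    echo "  <source media=\"$media\" srcset=\"$file\">\n";
}
// Fallback for browsers (or outputs) that don't understand <picture>.
echo "  <img src=\"figure-small.jpg\" alt=\"A hypothetical figure\">\n";
echo "</picture>\n";
```

The negotiation happens at display time rather than authoring time – the AME extends that same idea to every media type and every output channel.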

The proof of concept we developed was built on WordPress and used the PressBooks plugin to provide many of the features we required, an easy-to-use interface and a solid foundation to work from. The ideas were easier to execute within an existing framework, so rather than attempting to build everything from scratch we could focus on our innovations – the AME and the corresponding Publishing Profiles.

Ever since we built that initial proof-of-concept I’ve been toying with how to make it simpler. How can we make it easier to share, collaborate and remix content? Our initial concept didn’t really think about those areas, but they’ve been bugging me ever since.

How to Support Remixing?

One way would be to expose the WordPress system via JSON. This would allow other systems to pull content in to display, but also to commingle, re-contextualise and retool it. My experience over the summer with Federated Wiki has challenged many of my preconceptions about what content, and indeed publishing, can look like in a purely digital sense. I’m enthused by the concept of a JSON-based system, but there are plenty of dependencies and technicalities required to develop things this way.
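As a hedged sketch of what that exposure could look like, assuming the WordPress REST API (the /wp-json/ endpoints) is available on the source site – example.com is a placeholder:

```php
<?php
// A hedged sketch, assuming the WordPress REST API (the /wp-json/
// endpoints) is available on the source site; example.com is a
// placeholder.
$url   = 'https://example.com/wp-json/wp/v2/posts?per_page=5';
$posts = json_decode(file_get_contents($url), true);

foreach ($posts as $post) {
    // Each post arrives as structured data, free of presentation,
    // ready to be displayed, commingled or re-contextualised.
    echo $post['title']['rendered'] . "\n";
    echo $post['link'] . "\n\n";
}
```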

My other idea is to go simple: remove the need for a database by abstracting authoring into a simple files-and-folders structure, and then focus on developing a “generator” to handle the publishing. So rather than create a contained system we could build something that can be plugged into a file system and live separately, locally or online. This idea builds on those already in use in a range of static site generators that leverage markdown, scripting and something like Git to manage the whole workflow.
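A minimal sketch of that generator idea, assuming the Parsedown markdown parser and illustrative folder names:

```php
<?php
// A minimal sketch of the “generator” idea, assuming the Parsedown
// markdown parser (composer require erusev/parsedown); the content/
// and public/ folder names are illustrative.
require 'vendor/autoload.php';

$parser = new Parsedown();

foreach (glob('content/*.md') as $source) {
    // Each markdown file in the folder becomes one published page.
    $html = $parser->text(file_get_contents($source));
    file_put_contents('public/' . basename($source, '.md') . '.html', $html);
}
// Keeping content/ under Git then provides the fork, branch and
// merge-back workflow described below essentially for free.
```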

By simplifying the system down to the bare minimum, the potential is to make content more “forkable”. You reduce the need for specific software in the authoring, but also open the process to powerful versioning and management technology. In this way remixing is encouraged, and with the ability to merge changes back the potential is truly inspiring. This would ensure that a remix doesn’t become another standalone piece of content, but a connected component that might be co-opted back into the main branch. It enables localisation, translation and adaption to specific contexts not just to be made, but tracked, traced and attributed.

The other attraction of this more simplified model is that it also reduces the technical overheads required. It could be run locally or over a simple network. It could run offline and allow for asynchronous editing and collaborative authoring in a manageable format. I’m not sure if this will provide the simplicity or granularity that the federated wiki has, but it’s definitely a step in the right direction.

This flat file model also means that content can be openly hosted using repository sites like GitHub – but also in almost any online space – and for educational and research publishing this could be a huge boon. Being openly hosted means that access is greatly improved. The ways that Mike describes data models being accessed and modified could be achieved this way.

The final plus is that switching to a flat file generator model means there is less reliance on a single technology or system. While GitHub, WordPress and certain programming languages are the choices today, they are also dependencies in the long term. Not relying or depending on specific technologies means that we’re creating more sustainable content that is open to change and evolution as technology and trends change.

Publishing in the digital age needs to embrace the concept of remix as it’s the most significant affordance of being digital. I’m in a state now where I can see that the technology required is getting closer to realising that idea. Once it does we’re going to be in for a ride.


The Complexity of Zoom

I’m really glad I read this tweet this morning:

“Content” is like “art” or “weather” or “traffic” or “sex.” Useful words to describe large phenomena, but less useful the more you zoom in.
James Callan

After spending most of the day discussing digital publishing I’ve come out the other side feeling… well, underwhelmed.

Despite discussing an issue that has vast implications for the organisation, and specific value for the work I do, I don’t really think we achieved anything. The meeting started off quite well – quite a bit of disagreement as we established terminology and reached an eventual consensus that this is quite a difficult and complex area. After that initial back-and-forth, though, we fell into the kind of conversation that lulls you into a false sense of security. We discussed things in such a broad way that we kept glossing over the cracks, the inconsistencies, the half-truths and the half-facts. When things bordered on being difficult we backed away. When we met something that needed to be tackled, we sidestepped it. Maybe this is illustrative of groupthink, or symptomatic of coming together to have a conversation about something, rather than a conversation for something.

I’ve annoyed myself because, despite going into the meeting with a clear perspective, a vision and a concept of what we were dealing with, I let myself get caught up in the constant pinch-and-zoom between generalisations and stereotypes and tiny granular details. That way of discussing a topic has a way of disorienting everyone; there is no forced perspective, so eventually you just resort to discussing things at that zoomed-out level where everything seems simple.

The devil is in the detail. The problems and complexities inherent in something as generic as content are vitally important. Variations between context, system and workflows when zoomed out don’t seem like issues – but at the granular level they are deep and wide canyons that cannot be traversed let alone glossed over.

There is this human tendency to back away from what is difficult and threatening – and I think in some ways that’s what happened today. Things get put into the “too hard” basket to be dealt with later, and preferably by someone else.

Purity in the digital form

The announcement and showcasing of iOS 7 from Apple this week heralded a significant shift in thinking about interface design.

Key to all this was the ditching of the skeuomorphic elements that have littered iOS and MacOS since the dawn of the Graphical User Interface. This isn’t a trend that Apple started by any means, but I think it represents a shift in the mainstream conception of our digital environments.

Matt Gemmell wrote an insightful post and perfectly summarised this change:

The new iOS is designed for a different environment, and a different maturity of mobile user.

iOS 7 represents a homecoming – where device, user and interface are finally brought together. They have grown separately, creating new paradigms, overturning conventions, creating new opportunities – and now they finally meet and merge.

The other point I picked up from Matt’s piece was how iOS 7 is:

a shift away from artefact, and back to essence.

This is a far more refined (and beautiful) way of thinking, building and designing in the digital realm. It’s what I have been unable to articulate, but matches my thinking around developing digital content – it must be about the content and the user and no longer focused on the artefact. To me this is the central concept for working in the digital realm.

My last post was an attempt to outline bringing some of these ideas into publishing – dumping the traditional analogue ways of thinking that slow us down and impede our growth – and I believe that Apple are now proceeding down this path. For those working with content and developing products and systems in digital spaces, we can perhaps take a few points about the user experience of working in this environment:

In the field of user experience, there’s a huge and unhelpful overemphasis on similarity, familiarity, and the ability to formally reason about interfaces. People are more nuanced. We respond based not only on experience or reason, but also on emotion and intuition.

I think it’s time that purely digital concepts, ideas and methods begin to emerge from their cocoon. To remake their pudgy caterpillar selves into beautiful, delicate and specialised butterflies.

We need to start at digital

I came across Tom Johnson’s posts Structured Authoring By For And Or Nor With In the Web and Structured Authoring (like DITA) a Good Fit for Publishing on a Website? this weekend. They came at a good time for me, seeing as I’m thinking about publishing, authoring and content. I missed the publication of the original post – I only came across it because of the great work of the Zite recommendations engine – but I’m quite taken aback by the response to the post, and intrigued by many of the amendments since made to the original. It seems Tom is quite embedded in the documentation area, and they have some pretty strong views on publishing.

This also coincides with a referral to a post via @CathStyles by Paul Rowe on the Create Once Publish Everywhere (COPE) concept. Paul gives a great overview of COPE in a fair and balanced way. He also offered me a different perspective through his work with museums, and a better understanding of yet another specific context with specific needs.

Given my own work in education I feel I am getting a better picture of the state of play.

What I’ve picked up is that we are all attempting to deal with analogue systems and processes shoehorned into digital spaces. The current state of our systems, processes and software is tied to an analogue way of thinking, of developing and of working. There is nothing inherently wrong with those systems – the structure of DITA, the individual museum catalogues or the tome of the study guide – they all work in their own contexts, but they have reached their limits because they no longer exist in a single context. The culture, society and technology around them have changed substantially, but these systems hold onto legacy vestiges.

I agree we need something new, and Tom’s discussions around what the Web offers are really interesting. Why? Because the web is digital and always has been. It has evolved much faster than our other publishing systems and it has embraced its digital nature. The line from Tom’s article that resonated the most with me was this: “web platforms are built on a database model of dynamically pulling out the content you want and rendering it in a view”. This to me highlights the contrast with the analogue model where content was just stored in the database – content was never developed with the database in mind, it’s just where the finished product ended up. It gives the illusion of being digital – but without any of the inherent benefits of being native.
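For anyone who hasn’t seen that model up close, here’s a minimal sketch using WordPress’s own query API – it would need to run inside a theme or plugin, and the category name is hypothetical:

```php
<?php
// A minimal sketch of that database model, using WordPress's own
// query API; this would need to run inside a theme or plugin, and
// the category name is hypothetical.
$query = new WP_Query([
    'category_name'  => 'study-guides',
    'posts_per_page' => 3,
]);

while ($query->have_posts()) {
    $query->the_post();
    // The same stored content could be rendered into any view;
    // this one happens to be a simple HTML article.
    echo '<article><h2>' . get_the_title() . '</h2>';
    the_content();
    echo '</article>';
}
wp_reset_postdata();
```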

While I didn’t come to my conclusion the same way, the Adaptive Media Element concept is about applying the content to the database. Using its richness and its ability to apply logic, to call dynamically and to change as required makes the AME adaptable. It provides the strength of the digital object while retaining the detail and richness of the content itself.

I agree with Paul, and I’ve quoted Bill Hicks before, but the line about the need to evolve ideas rings true to me. Evolution doesn’t mean starting from scratch; in fact, in most cases it just means making small changes that make life easier, more adapted to the current climate. I think we need to take the ideas of structured authoring, illustrated so well by the COPE concept, and move them forward. We need to leave behind some of the analogue baggage. We need to start at digital.

Adaptive Digital Publishing – Current Work & Ideas

This is my first post in an attempt to “work out loud” i.e. to be more open in my practice rather than just my output. It’s an attempt to log my current ideas and concepts around a topic to frame it, share it and reflect back at a later date.

One of the things I’ve been interested in over the last decade is the convergence of digital tools with traditional analogue publishing processes. Working in the design field I’ve been at that coalface and published my own fair share of both digital and analogue artefacts.

The last 5 years have seen a massive shift and a chance for digital to move into the analogue print space with viable and attractive alternatives. Smartphones and tablets (and everything in between) offer new opportunities and potential to fundamentally change publishing – how we see it and how we do it.

What’s become abundantly clear though is the lack of tools that can take it to the next level. Most of the popular tools have been co-opted from print and bring with them legacy concepts and constraints. While they may seem adequate they tend to lack the ability to realise the potential of a true digital publishing model.

As part of the mLearn project I’ve been heading up, we investigated what options we have as a university to transition our content to a mobile platform. The results weren’t pretty. Nothing was easy. Conversion was never precise and required manual manipulation to massage it into something viable. This is fine in individual cases but not for the scale we require at Charles Sturt University. What we want as an institution is a way to author once and publish out to many channels, so that our students get to choose how and on what platform they can consume our content. It’s a realisation that our students are diverse, with diverse needs and desires and a one-size-fits-all solution never really fits anyone.

So I’ve been thinking.

Beyond Text

The first big idea was the need to start thinking beyond text. Technically text is easy – if you get a properly marked up document you’re fine. Perhaps that ‘if’ is harder to come by in some cases, but what I’m trying to say is that text is not the problem. No, the sticking point is media. Media is all the other ‘stuff’ that’s possible to place in and around the text. The data tables, diagrams, images, video, audio, activities, quizzes – it’s everything else that’s possible with a digital medium. This is where the problems lie, the challenges and the break points. Media brings to the fore the inherent differentiation between print and digital, and in many cases defines them as unique and different. However, the goal is not to wipe out print and replace everything with digital. I’d rather see cohabitation – working and publishing to both, to all forms, platforms and spaces – not adversarial but complementary, and certainly not the death of analogue.

Converging Ideas

My skill set and knowledge is something I consider quite unique. I straddle the multiverse, working on the fringes of many realities – analogue & digital, offline & online, web & print, commercial & public. So it’s within this divergent conversation that I’ve been able to pick up on common strands. Twitter has been immeasurably important in this process – allowing me to tap into many new fountains of knowledge and expose my brain to new ideas.

I pick up on the content strategy discussions – those from Karen McGrane resonate in particular. The thinking around responsive design from Ethan Marcotte. The emergence of new ideas from Brad Frost. The Mobile First process from Luke Wroblewski. The backend work by guys like Dave Olsen on creating better tools for adaption. The discussions around Content Management Systems – their pros and cons and their place in the new age. Ideas from Jason Grigsby, struggling with responsive images that cope with retina and standard displays. The dynamic and active content concepts from Bret Victor.

What I’ve been working through is aligning those ideas. Cherry picking concepts that resonate with my work and trying to formulate those into something we can work on in the project, and I think I’ve got there.

Adaption Through Specialisation

A few weeks ago I had a light bulb moment. I was watching a documentary on metamorphosis and, as the narrator described the process and its genesis, it sparked a Eureka moment! I got pen and paper and started scrawling notes:

Metamorphosis means to change form. It’s an evolutionary model whereby there is conspicuous and abrupt transformation accompanied by changes in habitat or behaviour.

That concept, that idea, that process – I could see vividly how it related to what we want our publishing systems to do. Not just simply respond, but adapt to a specialised form uniquely suited to the mode of delivery.

The next big question was how.

The Birth of the Adaptive Media Element

I struggled with this for some time, but the work done by those in the web field, particularly around images was extraordinarily helpful. This is what I came up with:

An Adaptive Media Element (AME) is in essence a meta-object, which contains self-referential information. It is not a single file per se but instead contains more detailed and expressive information that allows logic to be applied. For example an AME might contain a file type reference, the file itself, a weblink to an external source or library, source information of where it came from, reference information, alternative files or metadata, a title, a caption and a description. In the diagram below is an example of what a Video AME could look like – a single element in the authoring environment that links to a library of connected files and information:

This diagram shows how a single AME appears in the editor as well as all the individual components that could make up an AME.
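Alongside the diagram, here’s a hedged sketch of what a single video AME’s library record might hold as plain data – the field names are illustrative, not a fixed schema:

```php
<?php
// A hedged sketch of what a single video AME's library record might
// hold; the field names are illustrative rather than a fixed schema.
$ame = [
    'id'          => 'ame-042',
    'type'        => 'video',
    'title'       => 'Cell Division',
    'caption'     => 'Timelapse of mitosis in an onion root tip.',
    'description' => 'A three-minute narrated clip.',
    'source'      => 'University media library',
    'weblink'     => 'https://example.edu/media/mitosis',
    'files'       => [
        'web'   => 'mitosis-720p.mp4',
        'ebook' => 'mitosis-still.jpg',   // a still-frame alternative
        'print' => 'mitosis-frames.pdf',  // key frames plus the URL
    ],
    'metadata'    => ['duration' => '03:12', 'licence' => 'CC BY'],
];
```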

This extra, richer information allows logic to be applied and, through a predefined profile, only the relevant information to be pulled in and displayed in the final markup. In the case below I’ve used some HTML to define the kind of markup that would be created. Here the DIV would act as a container for the elements being written in from the library:

The AME is replaced by the individual components relevant to the publishing output. In this example a video AME is converted into a DIV with the associated elements and attributes coming from the library.
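As a sketch of how a publishing profile might resolve the AME above into that container DIV – render_ame() is a hypothetical helper, not part of any existing system:

```php
<?php
// A sketch of how a publishing profile might resolve the AME above
// into the container DIV; render_ame() is a hypothetical helper,
// not part of any existing system.
function render_ame(array $ame, string $profile): string
{
    if ($profile === 'web' && $ame['type'] === 'video') {
        // The web profile pulls the playable file and caption from
        // the library and wraps them in the container DIV.
        return '<div class="ame ame-video" id="' . $ame['id'] . '">'
             . '<video src="' . $ame['files']['web'] . '" controls></video>'
             . '<p class="caption">' . $ame['caption'] . '</p>'
             . '</div>';
    }

    if ($profile === 'print' && $ame['type'] === 'video') {
        // Print can't play video, so this profile substitutes the
        // key-frame document and the weblink instead.
        return '<p>See ' . $ame['files']['print'] . ' or '
             . $ame['weblink'] . ': ' . $ame['caption'] . '</p>';
    }

    return '';
}
```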

To some extent this is possible through customised web solutions, but not to the mainstream, not directly linked to an authoring system and not across many channels and platforms outside the web environment.

… So back to some more thinking. How would this all work, how would it function, what would it look like? At the same time, as anyone who works in a large institution or on the web would know, you need a great acronym to get any traction. So in looking for one I looked up words containing A, D and P (adaptive digital publishing). Not many words but one stood out…

TADPOLE – The Adaptive Digital Publishing Engine

Developing a process based on metamorphosis and that’s a suggested word? C’mon that’s fate, amirite!

So TADPOLE is an attempt to envision a system that would allow you to author and publish content out to many formats using Adaptive Media Elements. It consists of three elements:

  1. The Authoring Environment
  2. The Adaptive Media Elements Library
  3. The Transformation/Compiler Engine
Describes the three main theoretical components of the system.

While I had this initial model, I needed some other voices and opinions and was able to bring in the team. Rob and Rod brought some structure and order to my very sketchy ideas. From there we were able to start to define the functionality of each component.

The authoring environment can look and function however one may see fit – but what it essentially creates is structured content. Text creates the narrative structure required for publishing.

The AME Library operates as a database of all the individual components required for each element. Many different types of AMEs can be defined, and each would have their own separate components listed. Adding content to the library would be similar to filling in a form with predefined fields. The authoring environment would contain a tool to insert an AME into the narrative so that all elements were contextualised and embedded, rather than hanging off the side.

The transformation engine is where the logic is applied (defined as “PHP magic” in our early discussions). It would scan through the document, find each of the AMEs and, using a predefined profile, insert the relevant components from the library into the markup. Profiles would be defined for each output required, so many could be developed and applied to one source to ensure specialised output. Our initial focus was on three – print, eBook and Web – but these profiles could be customised and subdivided much further. Profiles could be developed for specific media, platforms or content restrictions, e.g. at CSU we have to deal with separate copyright restrictions depending on delivery via print or online. The compiler would then do the final render and attach the presentation layer, dumping out the finished files and folders.

Illustrates how the profiled markup then goes through the transformation and compiling process to output the finished files.
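Pulling the earlier sketches together, the engine itself could be as simple as a scan-and-splice pass – the [ame id="…"] shortcode and file names here are hypothetical, and it relies on the $ame record and render_ame() helper sketched above:

```php
<?php
// A sketch of the "PHP magic": scan the narrative for a hypothetical
// [ame id="..."] shortcode and splice in the profile-specific markup
// from the library. Relies on the $ame record and render_ame() helper
// from the earlier sketches; the file names are illustrative.
function transform(string $narrative, array $library, string $profile): string
{
    return preg_replace_callback(
        '/\[ame id="([^"]+)"\]/',
        function ($match) use ($library, $profile) {
            $ame = $library[$match[1]] ?? null;
            return $ame ? render_ame($ame, $profile) : '';
        },
        $narrative
    );
}

// One authored source, three specialised outputs.
$narrative = file_get_contents('chapter-1.src.html');
$library   = ['ame-042' => $ame];

foreach (['print', 'ebook', 'web'] as $profile) {
    file_put_contents(
        "output/chapter-1.$profile.html",
        transform($narrative, $library, $profile)
    );
}
```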

So Why Do This?

What we tend to do at the moment is simply transcribe content from one format or file type to another. This is incredibly time-consuming and inefficient, and tends to focus resources on the manual transcription of content rather than capitalising on the logic inherent in the machine. At the same time there is a drive to provide greater diversity in publishing options. We find ourselves in a quite untenable situation: in quite broad terms, the world has changed but the tools haven’t kept pace. We need better tools and better processes.

Everything outlined above is possible today, but what is available from vendors seems to be either an overly simple authoring tool that lacks the depth required for publishing, or an overly complex authoring tool that alienates everyday users to gain powerful publishing functions. What we are proposing is something that doesn’t need to compromise, for two reasons:

  1. Content is separate from presentation &
  2. Authoring is separate from publishing.

They co-exist within the same system, but there is a clear delineation to address quite different requirements. Content should be seen as liquid, but presentation is many-faceted and required to conform to certain constraints. In much the same way, authoring should be simple and intuitive, but publishing needs to be complex and extensible. Through the separation of form and function we can achieve a much smarter tool that capitalises on the inherent abilities of machine and human, rather than forcing one to compromise for the other.

What my team and I have tried to envision is an extremely adaptable system. One where:

  • content is developed neutral to the delivery method
  • shape and form come from the narrative
  • authoring is simple and intuitive
  • complex output options are provided
  • automation and logic do the heavy lifting, and
  • disparate production processes are consolidated

This creates a future-friendly system for publishing that will remain adaptable. New components and AMEs can be added as needed, and new profiles developed as new standards, formats and platforms are released. Instead of re-encoding, re-creating and translating content into each new format as it arises, the process can be automated and structured, based on logic that can be applied to all content automatically. Authors can look after the creation and developers can look after the backend.

From here

At the moment we are only looking at developing a proof of concept, so at this stage we would adopt a static publishing model – you would need to initiate the process rather than it being dynamic. This is to impose the rigour of form, using the act of publishing to provide the temporal constraint of a beginning and an end. It minimises the complexity of the system, as we won’t be required to maintain versions or host live content. Instead we will leverage our existing systems – a digital repository and LMS – which are far more capable in these areas. Our needs within the university are also centred around the temporal constraints of sessions and semesters, so we want something that remains static for their duration. That said, we can envision that the model could be adapted further for use in a dynamic system.

So that’s where we are at the moment. I’ve started to sketch out some of the required AMEs, as well as looking at and scoping what products are out there that can do what we are proposing. My feeling is that customising an existing CMS might be the way to go, but I am open to ideas.

If you would be interested in collaboration or finding out more please feel free to contact me.

Separating Content from Presentation

At the moment I’m planning some work in the area of digital publishing. The premise is to develop a proof of concept for an adaptive digital publishing system. Central to this idea is the concept of the separation of content from presentation. This concept is perhaps best embodied on the web – HTML providing structure to content and CSS providing all the necessary visual styles. This model provides a surprisingly flexible system, probably best showcased by the CSS Zen Garden site. What changes on the site isn’t the HTML – the content stays the same – it’s the CSS file, swapped out to create radically different page designs.
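As a minimal sketch of that principle – the HTML never changes, only the stylesheet reference does; the theme names and query parameter here are hypothetical:

```php
<?php
// A minimal sketch of the Zen Garden principle: the HTML (content)
// never changes, only the stylesheet reference does. The theme names
// and query parameter are hypothetical.
$theme = basename($_GET['theme'] ?? 'default'); // e.g. ?theme=steel

echo '<!DOCTYPE html><html><head>';
echo '<link rel="stylesheet" href="styles/' . $theme . '.css">';
echo '</head><body>';
// The content below is identical under every theme.
echo '<article><h1>The same content</h1>';
echo '<p>Radically different page designs.</p></article>';
echo '</body></html>';
```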

The idea that the two can be neatly divided was thrown into disarray by content strategist Karen McGrane in her recent post over on A List Apart – WYSIWTF. In the introduction she states:

The reality, of course, is that content and form, structure and style, can never be fully separated. Anyone who’s ever written a document and played around to see the impact of different fonts, heading weights, and whitespace on the way the writing flows knows this is true.

Now the WYSIWYG editor, and the faith we put in it, has to die, because it is simply no longer relevant. Content will end up on the device chosen by your viewer, not you. Did you know there’s a browser on the Kindle? Yes, your site and your content displayed in 256 glorious shades of grey. Did you design for that? Did your WYSIWYG editor show you that? But what can we use instead? Karen dismisses the inline editor for many of the same reasons as the need to ditch WYSIWYG, so what’s the alternative?

Well the last line “If we want true separation of content from form, it has to start in the CMS” was a bit of a guiding light.

Content, Container and Presentation

I think what we are missing is the concept of a middle man. Content and presentation are too distinct; they are the two extremes in this case, and we need to find some common ground. This is the place for the container.

The container is a shapeless vessel, more liquid than solid, but it provides a flexible structure for the content. The container defines what the content is. It might be a chapter, an article, a post, a review – but what it does is create a defined space for the content to live. The container is the tool that avoids “blobs” and faceless chunks of text; it defines where content belongs and provides context. Content without context is pointless, so we need containers to help us develop better systems, better tools and better presentation. Once we have a container in place we can start to develop better tools to preview, review and edit. We can provide more expansive “preview” modes so that we can render models on different devices, screen sizes and browsers, and even in different channels like an app.
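As a hedged sketch of that middle layer as plain data – the field names are illustrative only:

```php
<?php
// A hedged sketch of the container as plain data: the content stays
// liquid and presentation-free, while the container names what it is
// and where it belongs. Field names are illustrative only.
$container = [
    'type'    => 'chapter',            // what this content is
    'context' => [                     // where it belongs
        'publication' => 'Example Study Guide',
        'position'    => 3,
    ],
    'content' => [                     // the liquid content itself
        'title' => 'Separating Content from Presentation',
        'body'  => '…structured text and media elements…',
    ],
];

// A preview tool could now push $container through any number of
// presentation profiles without touching the content inside it.
```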

The container appears to represent the missing piece for my work and I can see the need to develop this idea further. So I’ll keep you posted on how things develop.

PS – wondering if I was channeling this great Aussie invention:

The Goon Bag