Listening to the Web

Ever since I started seriously studying and working with the web (way back in 2000), I’ve always struggled with the terminology used to describe the way we USE the web and what we DO with the web.

At that time it was all about “interactivity”, a term I have always struggled with and come to loathe. One of my earliest university essays was an attempt to define a taxonomy of different types of interaction in order to extract something meaningful from the word. It was messy and in the end I was never happy with the term nor the ideas I’d had to construct meaning out of it. Interactive did, and still does, grate on my nerves as it’s used so flippantly and with no consistency. Interactive is applied to a “Next” button just as easily as to multi-person video chat. It’s applied to the navigation of a page, the transactions carried out and the reading of text or watching of a video. Interaction became a catch-all, a terrible term with which to define or discuss your work.

But I’ve always struggled with an alternative. There needs to be more nuance and clarity – particularly in the area of “consumption” (another term I’d rather not use).

So what do we call the way we use the web?

Despite so much of the web being text, there’s always been an orality attached to it. In many ways text on the web has sought to replicate speech and dialogue rather than print. The immediacy and connected nature allowed text to become more transient and ephemeral within its own context. The shorthand and slang, even emojis, developed as ways of replicating the traditional context of speech – embedding place, culture and emotion.

For the “reader” too, the experience of text on the web never functioned like the printed word. Physically it required a “workstation” far removed from the portability of the book. The low fidelity screen and limited colour palette are really only now starting to disappear as a constraint and limitation on the reading experience. There is also the fragmented, distracting and infinite possibility of the web. Rather than being a library of closed, sorted and stacked books, the web is every page of those books open and on display. Hypermedia created a non-linear, non-sequential labyrinth of information that simply cannot be “read” in the traditional sense. And the text in chat, forums and comments – is that “read” in the same way as a book? Is reading text an interaction on the web?

During one of the recent Future Tense podcasts Tanja Dreher notes the work of Kate Crawford and the role of listening online.

When we think about particularly the social media environment, the online environment, it’s obviously a sort of proliferation of voices, stories, speaking, exclamations. Lots and lots of expression can seem incredibly overwhelming.

But if we take a step back and think about what we actually do, most of us still spend most of our online time listening rather than speaking. We might post a couple of Facebook updates, we might send a couple of tweets, but there’s also an awful lot of paying attention, listening in the background that’s going on.

So there’s a wonderful academic Kate Crawford who has made the argument that listening actually provides a better concept for thinking about our online participation, even though normally we focus on speaking. And she says part of the problem is that we have really undervalued the importance of the listening that we do.

A lightbulb went off at that moment. Listening! Yes!

Reading through some of Kate’s work the issues she highlights are often when the concept of “interaction” falls down or fails to capture what exactly is happening. So instead of a distinct act on the web it’s labelled as something passive like “consume”. But we’re not simply consuming, shovelling it down or burning it up, we are thinking, pondering, questioning, absorbing, agreeing, disagreeing and everything in between. We are not consumers of the web, we are listening. We listen to people tell us about their day on Facebook, not simply read or consume their posts. We listen to the discussion on Twitter, the chatter and dialogue passing us by. We don’t lurk, we listen.

As the introduction to one of her papers suggests:

much online media research has focused on ‘having a voice’, be it in blogs, wikis, social media, or discussion lists. The metaphor of listening can offer a productive way to analyse the forms of online engagement that have previously been overlooked, while also allowing a deeper consideration of the emerging disciplines of online attention.

Listening is participating. It’s not necessarily interaction, but it’s a conscious act, not a passive one. You choose to read, you choose to listen; it can’t happen by mistake or by accident. It requires effort. Listening is an act that goes to the heart of the web and why it actually works. Not because it gives people a voice, but because it provides a way for more people to listen. That’s a powerful thing.


The UX of Telecommunications

UPDATE: So magically the internet popped into action 6 hours after posting this. Doesn’t change the experience, but I’m glad to be back online 🙂

I’m putting this post together for a couple of reasons:

  1. Because I am so frustrated at the moment that I need to get it off my chest.
  2. Large organisations seem to have no idea what customers actually experience with the systems they’ve developed.
  3. I’m 100% sure I am not alone (just replace company names with pretty much any telco, cable company or mobile provider).
  4. Maybe someone might want to do something about it.

So on the 1st of November I ordered a new ADSL connection for the house we are now living in due to the fire. A simple enough process it would seem. Things started off well despite the odd experience of many of my first interactions with the company being via an Automated Menu System. A technician was sent out and he did his thing. When asked, he let me know that once he’d logged the job as complete (that afternoon), the connection would be activated within about 2 days and I’d get a text and email letting me know.

Wednesday comes around and I’ve heard nothing. I log on to my mobile to chat and end up having a rep call me. After back and forth about my order I am told that everything should be live on Friday. Friday comes around, nothing. I call again and spend waaaayyy too long on hold. Registering my dissatisfaction that this is now late, I am told that tomorrow it will be fixed. Saturday comes and nothing. I call again. I get a more thorough explanation – ie more than “computer says no” – and am told that it is a “back of house” issue which has now been forwarded on to another team. When I prompted for an ETA I was given the answer of Monday, as the team it was forwarded to don’t work on Sundays. I asked that I be notified when the job is completed. Monday comes, nothing. I try to use the handy link the service person gave me to get in touch – which was supposed to guarantee I don’t have to negotiate the chat service or call centre again. I get an error on the website. Frustration is pretty much at peak now.

Again I go into the chat room. 40 minutes later I have a person on the other end. A person who can’t deal with my request so must forward me to another team. I get a call, am told I’m being transferred to that team, “shouldn’t be a minute”, and am then placed on hold. 40 minutes later someone picks up at the other end. I’m told that the due date for my activation is midnight tonight, and that it will automatically happen. I express my frustration – this was the same response I got last Wednesday, Friday and Saturday. I am sceptical that this will be fixed. I ask to have this information emailed to me. I ask to be contacted tomorrow with an update. I get another direct line contact email and a phone number. Monday morning rolls around. Nothing. I try the direct link – it doesn’t work. I try the phone number – wrong number. I am done. This is the most pointless system that I have ever encountered.

I tried calling the helpline and was offered a callback option. I took that and gave them my number to save my ears from another onslaught of “hold music”. About 10 minutes later I got an automated call back and was placed on hold. Then I was told there was a problem and was hung up on. This happened 3 more times before the process promptly ended without my actually having spoken to anyone.

My experience has become completely cyclical:

  1. Place order
  2. Technician Installs line
  3. Nothing happens
  4. I call Telstra
  5. I sit on hold for 30 minutes
  6. Responder informs me that it will be fixed in 24-48 hours.
  7. Repeat from 3.

I’ve even made a diagram.

[Diagram: the mapped process of dealing with Telstra, as listed above]

As a side process to all this I’ve been tweeting my frustrations to @Telstra. But these seem to have little to no bearing on the outcome. In fact this is how that process works:

  1. I whinge on Twitter.
  2. Someone from @Telstra provides sympathy but no solution.
  3. Nothing happens.
  4. Repeat at each stage of above process.

What I’ve done here is give Telstra a pretty good “User Journey”, and if you map that user journey you find that there isn’t a point in it where the goal is achieved – particularly if this is “make customer happy”. Like some layer of Dante’s Inferno I just keep going round.

Later on I got a call from an unknown number. It was the voice prompt lady again. But this time she wanted me to press “1” to connect the call – an entirely different user action – and one that I was unable to perform because I was driving. Because I didn’t press “1” in the allotted time I was then bombarded with a phone number to call and a 10 digit reference number to cite… not much use when you’re a) driving, b) not warned in advance, and c) without a pen or paper because YOU WERE THE ONE “RECEIVING” A CALL! You can’t force your expectations on someone, nor should you flip the expected modes of interaction. Using a phone is oral interaction, using a chat is text; if you want to change those you have to ask permission first or at least offer options or alternatives. When I got out of the car I called back the mystery number. I spoke to voice prompt lady, who then connected me to a human – I only had to be on hold for 5 minutes this time.

The responder informs me that it will be fixed in 24-48 hours.

The circle continues.

PS – If anyone from Telstra actually wants to talk to me – feel free to contact me – I’m on Twitter and am using my real name on this blog. I’d be happy to update this post with any news, changes or outcomes.

PPS – I’d estimate that I’ve wasted about 4-5 hours of my own time trying to sort this thing out. There is also the inconvenience that this has caused – not being able to work from home a big one – which has resulted in at least another 5 hours of lost productivity. It’s not how I want to spend my time, nor should I have to. If I measure this at my current hourly rate we’re looking at the process equating to about 6 months of broadband. I’ve also had to purchase multiple data packs for my phone and a 4G modem that was supposed to just be a stop gap. There’s another month there.

#dLRN: The Cynefin of Conferences

So I’m flying home to Australia after a challenging week here in the US. Challenging is good, but damn, it’s hard work!

I got some time to myself, to be with my thoughts and be away from the situation at home, which was a bit of a relief. I was also away from those dearest to me for too long and there was a definite sense of isolation there. I’m beginning to really understand how close knit we are as a family, and any extended time away from each other is hard for everyone. Can’t wait to see them soon!

But the conference … How do you describe #dLRN15? It’s complex.

This was not your usual conference. While some of the structure was familiar, some was new. The conversation was different. The themes were different. The people were familiar but new. The discussion was broad and inclusive. There was respect and balance and care evident by everyone who spoke.

There were a lot of firsts for me:

  • first time seeing a lot of this community in the flesh, I only knew them as an avatar and an @ handle before;

  • first time interacting with many of these people outside the digital, so ditching the blogs and tweets and actually having a dialogue;

  • first time discussing topics outside of text, pushing language and the limits of oral thought processes;

  • first time discussing actual issues that are having a real impact on people;

  • and first time in Palo Alto, Silicon Valley and an elite institution like Stanford.

That’s a pretty heady mix, and a brew shared by many of the attendees – which is why I think it’ll require some time before we’re able to really process the conference and what we might do next.

So to be honest I don’t think I’m ready to unpack the themes yet, but I do want to make an observation.
This is the first conference I’ve ever been to that dealt with education in the complex and chaotic domains.

I’m referring here to the Cynefin Framework, which I’ve found to be an incredibly useful way to frame what’s going on in the world and the solutions and approaches required. As Wikipedia describes it:

The framework provides a typology of contexts that guides what sort of explanations or solutions might apply.

If you’re new to Cynefin I suggest you check out this video – the best description I’ve found.

If I were to think about the conferences I’ve been to historically, I’d suggest that they fit the various domains:

Simple – Vendor/Commercial Conference

Well, to be honest, the relationship between cause and effect is obvious – it’s the vendor’s product – and the content is usually about applying best practice.

Complicated – Society or Professional Conference

While taking it up a notch, these conferences focus on analysis of the relationship between cause and effect. Content is focussed on investigation and the application of expert knowledge. Through this we get a sense of what good practice is.

Complex – dLRN15

This conference really did focus on discussing the relationship between cause and effect in retrospect. There were a lot of presentations that referenced the past – successes and failures – and attempted to place what was happening in a historic context. We discussed a lot of what had happened but there were not a lot of predictions of what will happen. There was an acceptance that the environment is a complex mix of social, political, cross-cultural and economic issues that are locked in step. There was acceptance of complexity for the first time ever – in particular that there is no single solution. As such the presentations and discussion for the most part were very much focussed on the emergent practices of digital learning. It also attempted to place them in a much broader and connected context.

Chaos – Bits of dLRN

To complement the complex there were definitely elements of the chaotic too. These were more like fleeting moments where the relationship between cause and effect was left out of the equation and the focus was instead on novel practice. Mike’s discussion of the federated wiki and garden metaphor, while grounded in the history of information, was very much a novel approach and alternative to the current model.

Disorder 

It’s important to point to the final state of Cynefin – disorder – which is very much the state of higher education and edtech.

The fifth domain is Disorder, which is the state of not knowing what type of causality exists, in which state people will revert to their own comfort zone in making a decision. 

That to me sounds very much like the system (and the battle) that everyone at dLRN is involved in.

The California Effect

There was something so refreshing about this conference and its ability to move into that complex and chaotic space. Maybe it was the location, the weather or the people, but it felt like something significant happened on the Stanford campus. Maybe it’s the Californian Ideology at play – but judging from the blog posts so far (from @acroom & @googleguacamole), whatever it is, it’s left a mark on many of us who were there.

Ditch the Duality

This presentation was developed for a series of Think Pieces at Charles Sturt University. I’ve nominated to do these for the last few years, mainly because it gives me an opportunity to explore issues relating to education and technology in a slightly more expansive (and sometimes provocative) way. My take on these think pieces is not for me to do all the thinking – but open up a channel to explore some different ideas.

I developed the topic for this presentation about 6 months ago – having a notion of what I wanted to discuss. What I’ve ended up with is probably not what I originally intended but actually more cohesive. It brings together a number of ideas I’ve previously blogged about (interaction, abstraction & mediation) and ties in with some interesting pieces I’ve been reading recently – most notably this post from Nathan Jurgenson. Nathan’s post appeared at the perfect time – one where I had the ideas but not the taxonomy laid out – so I’ve borrowed quite a bit of his post.

Let me know what you think!

You Are Not In Control

Tonight I’m giving a presentation for INF537 Digital Futures Colloquium, a subject that is part of the Master of Education (Knowledge Networks and Digital Innovation).

While the title slide is a little ominous it’s aimed at being a provocation to the class to stimulate discussion rather than a lecture. I really want to hear what the students have to say – even if they think I’m way off.

Hopefully the seminar does what the subject aims to do and “provides the stimulus to identify and reflect critically on topics that have implications for a student’s own professional development, professional practice and scholarly interest”.

The Quiet Page & Linking the Web

A number of recent posts and articles I’ve read discuss the concept of linking – Will Deep Links Ever Truly Be Deep?, Beyond Conversation, Follow-up: Reader as Link Author, How we might link and The Web We Have to Save.

Each in their own way has resurfaced an idea that I had a number of years ago. The year was 2011 and I’d spent about 3 weeks in the US as part of a professional experience program. I’d spent a lot of time in the company of some great thinkers and innovators. At some point there was a discussion about books – the supposed death of print, the inadequacy of ebooks, but also the potential that digital technology has for rethinking what makes a “book”.

Out of those discussions and over some long days driving I started to flesh out some ideas about what could be, where could the concept of the book go once it had been made digital? I wrote it down, drew it up on paper and left it there. Knowing the idea wasn’t ready. I couldn’t see how it could be done. Not yet anyway. But I pulled that paper out over the summer and read through it. Rethought it and started to rework it. And the big idea?

The Quiet Page.

At that original point in time most discussion was around what digital could add to the reading experience. Media, interaction, social media, video, analytics, data metrics – the list was endless. I was actually drawn to the simplified, the unadulterated text. To be able to experience words and language without distraction. Without embellishments. Without blue underlines, embedded video, high definition graphics, interactive elements or embedded social media – the quiet page.

Text delivered to my liking. My font, my size in my colour or screen setup. Quiet. Relaxed. Readable.

And from the quiet page we can add the ability to turn on functions. To add to the quiet page layers of functionality. To view the text in different ways. To move beyond the navigation of our magic ink, and to embed the text with additional contextual information.

[Image: the quiet page]

  • To see it linked to other resources to show its research and context. The internal and external connections of the text itself. (Author)
  • To add richness by adding media, visual and auditory elements that help enhance the message. (Publisher)
  • To annotate it myself. To highlight, underline and note. To visualise and add my experience with the text. (Personal)
  • To view others experiences of the text. To see their notes and discussions. To see their highlights and to experience the text in a social and shared way. (Social)
  • To create trails. To connect the text to other content, ideas and resources myself. To place the text in my context, my experience and my knowledge. (Synthesis)
  • And then to share those trails. To let others see how I’ve contextualised the text. To see my experience but to then be able to add to it and expand it. (Connected)

From the Quiet Page you can do all these things – because the page doesn’t change. Each layer is an enhancement, an addition to the text rather than part of it. The Quiet Page allows the text to be adapted for other functions and purposes. To become non-linear, lived, felt, experienced and shared. To map and chart the interactions with the text. To go far beyond the “book”.
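The layered idea above can be sketched in code. This is a minimal, illustrative model – the class and layer names are my own invention, not part of any real system – where the base text is immutable and every enhancement lives in a layer that can be switched on or off without ever touching the page itself:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class QuietPage:
    """The unchanging base text -- no embellishments, no links."""
    text: str

@dataclass
class Layer:
    """An enhancement applied over the page, never part of it."""
    name: str      # e.g. "author", "publisher", "personal", "social"
    notes: list = field(default_factory=list)

class Reading:
    """A reader's view: the quiet page plus whichever layers are switched on."""
    def __init__(self, page):
        self.page = page
        self.layers = {}

    def enable(self, layer):
        self.layers[layer.name] = layer

    def disable(self, name):
        self.layers.pop(name, None)

    def view(self):
        # The base text is always returned untouched; layers ride on top.
        return {"text": self.page.text, "layers": sorted(self.layers)}

page = QuietPage("To be, or not to be...")
reading = Reading(page)
reading.enable(Layer("personal", notes=["highlight: 'to be'"]))
reading.enable(Layer("social"))
reading.disable("social")
print(reading.view())  # the text itself never changed
```

The point of the sketch is the `frozen=True`: turning a layer on or off can never alter the quiet page underneath.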

The point was to link the text. Not just in one way, but many. Internally and externally. Personal and social. Private and shared. And to cross between those states. To make the external internal, the personal social and the private shared. To link the text to life.

This discussion around linking – in particular Mike’s contribution – has made the importance of linking clear. It is one of the key differentiators of the digital – not just the linking itself, but what the linking enables. It allows connections to be formed – not just between data, ideas or information, but people too. Links provide a way to express, to visualise and map connections. To share, create and communicate with humanity beyond our physical and temporal constraints.

The link is unique and powerful. It drives the potential of the digital medium and needs to be enhanced rather than killed off or replaced.

Otherwise all that’s left is the Quiet Page.

Make Your Own Slogan: MYOS and the Networked Future

When I started this post it was only a week since I submitted an abstract for the dLRN15 Conference, but it’s taken much longer to pull this post together than I originally thought. The title of the talk that I submitted was Empowering the Node & Avoiding Enclosure and in this post I want to begin the process of sketching out some of the core motivations and ideas I’ve been having in regards to the technology for living and working in a networked world.

This has been a process of attempting to bring together some of the ideas I’ve been dwelling on for the last year and a half about what is happening online, particularly in the ed-tech space, and alternative ways that we could do things. The ideas are very much tied into the notion of networks, in particular the concept of distributed systems. I put it down on my “year ahead” post back in January as a topic that I really wanted to explore this year, so when the call for papers, and the list of speakers/organisers, came out – I figured this was as good a time as any.

In the meantime Jim Groom has published a couple of posts, one & two, that share similar ideas, particularly around the architectures around how to build alternatives. Yesterday Michael Feldstein also put together this great post on the EDUCAUSE NGDLE and an API of One’s Own. Both share commonalities with what I’ve been thinking, in particular around APIs and an “operating system” of sorts. It’s kind of why I decided to get this post out even though in some areas it’s still only half-baked.

So what’s the problem?

The big issue that I have with the current raft of technology is centralisation. Some of the big players are working desperately towards concentrating all your data, profiles, media and personal information into their own systems (see Facebook has officially declared it wants to own every single thing you do on the internet). Commercial social media tools have given life to the idea that networks are things that can be created, manipulated, bought and sold. However,

a network isn’t a thing, but an expression of individual nodes, how they interact with each other and the relationships they develop.
– The Network & Me

These enterprises do not operate as networks, but as containers. They are an explicit attempt to seize and monetise our digital endeavour by controlling the vectors through which they flow. They are closed, controlled and centralised systems that are attempting to enclose the web, the notion of commons and the ability to connect and share. Yes, connecting and sharing will still be possible, but on their terms and in their space. As the importance of digital networks grows, the tools we currently rely on are undermining their ability to function. They are becoming not a medium where networks grow and thrive, but silos in which they become stunted and curtailed by a simple binary choice – accept or decline.

Technologies in which digital networks can thrive don’t look like the tools available to us today, or those planned for tomorrow. Not the learning management system, Facebook, LinkedIn, Twitter or Medium.

So what’s the alternative?

I’ve been a huge fan of Jim Groom & Tim Owens’ work on developing the literature and architecture for a Domain of One’s Own. I think that idea – a space owned and controlled by the user – is paramount in this networked age. It forms a solid foundation from which to build networks in a distributed way, rather than the centralised silos that are currently available.

I’ve been eating up information relating to Domain of One’s Own projects and the related technologies and concepts like Known, APIs, Docker & Containers, Federated Wiki, WordPress, JSON, GIT, node.js, Open Badges, xAPI, Blockchain – because to me they all work towards developing an idea of how a domain of one’s own can be transformed into an operating system of one’s own. An operating system that can drive us forward into the networked age by changing the current technological paradigm to one that seeks to empower the node rather than enclose it. “Nodeware” rather than explicit software or hardware.

This platform would aim to improve the ability for each individual to connect and share with others in truly negotiated and social ways. A platform that allows us to rethink the ways in which we learn and engage with digital networks – distributed, negotiated, social, interactive and sovereign.

Genesis

The genesis of this was an attempt to rethink the Learning Management System in a distributed rather than a centralised way. I was over bemoaning what the LMS is and was, and so took it upon myself to think through what a viable alternative might actually look like. If we simply reinvented the LMS we’d end up with something like the Learning Management Operating System that Feldstein and co developed. The central idea I was working on, however, was to provide students, rather than the institution, with a way of creating content, recording learning, developing a portfolio and managing their online identity. The challenging component of this was to think beyond the standard institutional IT infrastructure and beyond a better centralised system to one that was truly distributed. Domain of One’s Own showed that there was a viable alternative, and coupled with concepts embedded in the indie web movement such as POSSE (Publish (on your) Own Site, Syndicate Elsewhere) and the growing momentum behind APIs, ideas started to form around a way to manage, mind and make your own learning:

[Image: mind your own learning]

That image was from about a year ago – the kernel of an idea was there but not necessarily the means to take it forward.
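The POSSE pattern mentioned above is simple enough to sketch. This is a toy illustration – the site and silo names are made up, and nothing here touches a real API – but it captures the core rule: the canonical copy always lands on your own domain first, and every syndicated copy links back to it.

```python
def publish(post, own_site, syndication_targets):
    """POSSE: Publish (on your) Own Site, Syndicate Elsewhere.

    The canonical copy lands on the author's own site first;
    syndicated copies carry a link back to that original.
    """
    canonical_url = own_site.store(post)            # your domain holds the original
    for target in syndication_targets:
        target.push(post, canonical=canonical_url)  # copies point back home
    return canonical_url

# Hypothetical stand-ins for a personal site and two silos.
class OwnSite:
    def __init__(self, domain):
        self.domain, self.posts = domain, []
    def store(self, post):
        self.posts.append(post)
        return f"https://{self.domain}/posts/{len(self.posts)}"

class Silo:
    def __init__(self, name):
        self.name, self.copies = name, []
    def push(self, post, canonical):
        self.copies.append((post, canonical))

site = OwnSite("example.org")
silos = [Silo("twitter"), Silo("facebook")]
url = publish("MYOS: empowering the node", site, silos)
print(url)  # https://example.org/posts/1
```

Whatever happens to the silos, the original stays on the domain the author controls – which is the whole point.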

Over the new year I participated in the first Federated Wiki Happening and the experience of not only using, but embracing, a federated, socially constructed, non-linear and cooperative environment was fantastic. It opened my eyes to what could be possible if we re-thought not only the applications but the underlying technologies we use too. I loved the open nature of the federated wiki, but what I fell in love with was the concept of being an “empowered node”. The system worked in a way that empowered the individual. It provided tools and methods to create an individual identity while at the same time allowing others to connect socially and professionally.

Last year I also worked on our university’s Badges project, and have been thinking about the potential of xAPI to capture a more nuanced and broader spectrum of learning, and so have been broadening my concept of what’s possible technically and culturally.

A fortnight ago we held a workshop on how, as an institution, we could support Learning Technology Innovation. One of the key areas I wanted to explore with the group was APIs, so in the process of planning and putting together a presentation for the event I’ve been engaged in that space too. Just follow Kin Lane and have a play with IFTTT and you will quickly understand the power and potential that APIs offer. (PS this video offers a neat explanation of what the hell APIs are.)

Welcome to MYOS

MYOS is the name I’ve given to the concept of developing a personal and social software system that provides not only the tools and technology to empower the individual in the networked age but some guiding principles about how it should enable, enhance and empower the user.

The name came from a bit of a play around with various combinations of words to describe what it would encapsulate:

  • make your own stuff
  • mind your own stuff
  • manage your own stuff
  • my online self
  • my operating system

MYOS could simply be – Make Your Own Slogan 🙂

MYOS is very much the model that Jon Udell laid out as “hosted life bits” – a number of interconnected services that provide specific functionality, access and affordances across a variety of contexts. Each fits together in a way that allows data to be controlled, managed, connected, shared, published and syndicated. The idea isn’t new, Jon wrote about life bits in 2007, but I think the technology has finally caught up to the idea and it’s now possible to make this a reality in a very practical way.

Technology Foundations

There are two key technical components to MYOS – Containers and APIs.

Containers are a relatively new phenomenon that arose as part of Docker. They allow individual applications and services to be packaged in a way that can be deployed on a single server. Apps can be written in any language and utilise a variety of databases because they are contained in their own package. At the same time they can talk to each other and share common layers that allow for greater integration. Containers provide a way for a variety of “life bits” to be co-located and packaged in re-deployable ways.

APIs (Application Programming Interfaces) at their most basic level allow applications to talk and interact with other applications. APIs are the vectors through which information travels between systems. For many years they were primarily used internally within large and complex systems, but they are now emerging into the public space. They provide you the ability to cross-post between Twitter, Facebook, Google and Instagram. They allow you to push files to and from Dropbox from a multitude of applications. APIs are increasingly accessible not just to developers but to users too. Services like IFTTT allow almost anyone to harness APIs to create useful “recipes” that link their own data and interactions in ways that increase effectiveness and impact.
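The IFTTT-style “recipe” idea can be sketched as a tiny trigger-and-action dispatcher. This is purely illustrative – the service names are placeholders and no real APIs are called – but it shows the shape of a recipe: a condition on an incoming event, wired to an action.

```python
class Recipe:
    """IF This Then That: bind a trigger predicate to an action."""
    def __init__(self, name, trigger, action):
        self.name, self.trigger, self.action = name, trigger, action

def dispatch(event, recipes):
    """Run every recipe whose trigger matches the incoming event."""
    return [r.action(event) for r in recipes if r.trigger(event)]

# Recipes in the spirit of "when I post a photo, back it up":
recipes = [
    Recipe(
        name="photo-backup",
        trigger=lambda e: e["service"] == "instagram" and e["type"] == "photo",
        action=lambda e: f"saved {e['payload']} to dropbox",
    ),
    Recipe(
        name="cross-post",
        trigger=lambda e: e["type"] == "status",
        action=lambda e: f"posted '{e['payload']}' to twitter",
    ),
]

event = {"service": "instagram", "type": "photo", "payload": "sunset.jpg"}
print(dispatch(event, recipes))  # ['saved sunset.jpg to dropbox']
```

Each recipe only fires when its trigger matches, so one stream of events can drive many independent integrations.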

Founding Principles

On top of those technical foundations MYOS aims to embed a number of key principles, common to the Indie Web movement, that help define what the system aims to do – Empower the Node:

  1. You are in control
  2. Data is yours
  3. Connections are negotiated
  4. Enhance and enable diversity

You are in control

The focus of MYOS is to empower the individual rather than reinforce the network. Empowered nodes provide a stronger and more resilient network that is able to not only cope with change but thrive on it. An empowered individual is not locked in or enclosed within a single system but is free to move between them.

Data is Yours

You should always be in control of your own data. You should be able to decide who accesses that data and how it is viewed and shared. Data sovereignty is now more important than ever as we see how state surveillance and commercial enterprise have transformed private data into a commodity that is bought, sold and exploited. MYOS should ensure that any data is ultimately controlled and managed by the individual.

Connections are negotiated

In a world that relies on the network we need to ensure that democratic values are not lost. Individual choice has increasingly been eroded by the binary – Accept or Decline. We need to move beyond the autocratic rules that have come to define much of our digital lives. Connections need to be negotiated, and a key way of achieving that is building in a handshake mechanism that ensures transparency and encourages users to negotiate terms that suit them. This would include being able to decide what information is shared, how it is shared, what is hidden, what is private, what is relevant and what is preferred, as well as negotiating a period of renewal. This handshake could include a “data lifetime” clause to ensure that data isn’t kept in perpetuity, but can be removed or forgotten without the deletion or removal of the user or service.
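To make the idea concrete, here is a minimal Python sketch of what a negotiated handshake might record. All of the field names, and the split between a renewal prompt and a hard expiry, are my own illustrative assumptions rather than any defined MYOS specification:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Handshake:
    """Negotiated terms of a connection between two nodes (sketch)."""
    shared_fields: list   # what information is shared
    hidden_fields: list   # what stays private
    granted_on: date      # when the terms were agreed
    lifetime_days: int = 365   # the "data lifetime" clause
    renewal_days: int = 330    # when renegotiation is prompted

    def expired(self, today: date) -> bool:
        """Data can be forgotten once the negotiated lifetime lapses."""
        return today > self.granted_on + timedelta(days=self.lifetime_days)

    def due_for_renewal(self, today: date) -> bool:
        """Prompt both parties to renegotiate before expiry."""
        return today >= self.granted_on + timedelta(days=self.renewal_days)

hs = Handshake(
    shared_fields=["name", "avatar"],
    hidden_fields=["email", "location"],
    granted_on=date(2015, 1, 1),
)
```

The design choice that matters is the expiry: forgetting is the default outcome unless the connection is actively renewed, rather than data persisting in perpetuity.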

Enhance and enable diversity

Rather than enforce a monoculture, MYOS aims to promote diversity. While there is a need for a stable core, MYOS should promote a diverse ecosystem of applications. At a technical level a containerised approach enables different applications built with different languages, foundations and data structures.

Making it Work

MYOS hinges on a number of cultural concepts:

Owners not Consumers

I’ve written before about my notion that society is transitioning from passive consumerism to active ownership. The current model of networks is very much built on consumerist conventions, which is why much of the potential inherent in the technology has devolved into manipulative and exploitative marketing. As an alternative, ownership requires a personal investment and active participation in order to receive a reward. An owner understands that there is always risk and a cost involved, but rather than be manipulated into supporting a venture, they wish to be informed. Value needs to be demonstrated and transparent.

Openness

In a cultural capacity openness is still fairly new, and it continues to challenge and disrupt existing cultural modes, models and practices. Many aspects of Western culture are built on practices that install and maintain rigid hierarchies of power and exploitation, achieved by ensuring knowledge is limited through secrets, lies and division. Openness destroys those notions and instead requires trust to be created, managed and maintained through transparency and a shared experience. Openness seeks alignment rather than consensus, cooperation rather than collaboration – which tends to turn all processes into a “consensus engine”. Openness encourages federation rather than centralisation, a key tenet of MYOS.

Community

For MYOS to ever function it requires a community, but communities don’t just happen. They require encouragement and nurturing as well as a level of active participation and contribution. Rather than being an emergent outcome of a social environment, they are the result of careful fostering and cultivation. Community is the outcome of contribution, not participation. MYOS needs to be something that works with people, not for or to them, and its strength lies in the process of reclamation and liberation.

Agnostic Appropriation

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while.
– Steve Jobs

MYOS isn’t a new thing. It’s an attempt to draw a line that connects a number of concepts that relate to our digital lives and the way we are increasingly living and working in this connected space. Movements (like the IndieWeb) and software (like Known) already provide aspects of the kinds of functions I see MYOS fulfilling. MYOS is an attempt to create a map of a networked idea.

Nodeware

In developing a set of features for MYOS I started thinking about the idea of “Nodeware” – a combination of software applications, hardware and devices that don’t just provide a service to users, they empower them. They provide a rich set of tools to create, manage and maintain their online selves. Names are purely illustrative, but below is a quick list of starting features:

Identity Management – profiles and memberships
Cards – identities and personas
Keys – authorised access
Records Management – quantified self
Sash – badge display
Qualifications – certification, diplomas & degrees
Shelf – web and print publications
Gallery – photos and graphics collections
Cinema – video collections
Radio – audio collections
Portfolio – assembled artefacts
Notes – ideas, notes and fragments of thought
Scrapbook – collection of the curated and salvaged

Expanded not replaced

The idea I’ve been working from is not an attempt to reinvent or recreate existing applications and services but to expand their features and connect them together. Open source projects make perfect candidates for this expansion – so rather than replace Known or WordPress, they can be developed in ways that integrate them into MYOS. One way this could work is by rethinking something like cPanel and turning it into an OS-level application that provides an underlying data structure and tools to connect and deploy various applications via their containers.

More to come…

I’ve felt a little rushed to put this post out, but I wanted to join in the conversation rather than sit outside it. I’ll admit to not having everything fleshed out, or even properly specced – it’s still very much an alternative way of thinking, designing and working with systems online. There are a couple of posts I can see already that need to be written, in particular what the LMS and other institutional systems might evolve into when students are using MYOS. Until then I’d love to hear your thoughts and ideas.

Featured Image: flickr photo by rrruuubbb http://flickr.com/photos/rubodewig/5161937181 shared under a Creative Commons (BY-NC-SA) license

Moving Beyond The Default

Default. According to Homer Simpson, the two sweetest words in the English language.

To me though, default is more insidious. It represents choices denied and the removal of control by eliminating the opportunity for discussion to occur at the place and time it should – before decisions are made.

This post was triggered by some fairly innocuous tweets from Rolin Moe but they struck something that had been sitting there for some time.

While on the surface these are small fry complaints they point to something big:

What are the consequences of the default?

I’ve been doing some work on designing spaces over the last year looking at spaces that promote creativity and group work. One of the key issues we are facing is that space is at a premium, so a “feature” of these designs is that they are required to have multiple configurations. They need to be able to be re-designed and re-configured to suit a range of purposes and activities.

The work has involved visiting a range of spaces across our campuses but also looking more broadly at other universities and places which enable the kinds of work we are seeking to promote.

I’ve taken a few key things from this:

  • Furniture is too often bolted to the floor, which actually inhibits true flexibility. Furniture needs to return to its roots and once again become mobile rather than structural.
  • Technology is still fixed. The reality is that it still requires wiring, connections, setup, support and central control, and these fixtures limit the flexibility that’s possible. Wires and cables are still the reality when it comes to technology – wireless just isn’t there yet.

But perhaps the biggest lesson was this:

The Default is what defines the space. No matter how flexible the room and the furniture in it are, it has to have a default position – a starting point, a point zero it can return to. It’s this default that defines what the space is, how it is perceived, how it is understood and inevitably how it will be used.

The simple reason is that people rarely move beyond the default.

Yes, the room may have a million-and-one configurations, but the reality is people stick with what’s there. They won’t move anything because they are used to the notion that the choice has already been made. That the default isn’t a starting point but the end of a designed process. That someone else with more skills has looked at all this and made decisions on our behalf – whether this is true or not.

I get the reasoning behind the default. It’s necessary because decisions can’t be made all the time. There’s a cognitive load to making decisions, often at the expense of focussing on what really matters. Yes, configurations are important, but at what cost and for what benefit?

Should we simply accept the default or be actively working to change it?

Defaults aren’t bad, and they can actually be sweet, but we have to start questioning the consequence of them:

  • What do they entrench?
  • What do they avoid?
  • What do they hide?
  • What do they improve?
  • What do they enhance?
  • What do they leave behind?

And more importantly WHO?

  • Who do they entrench?
  • Who do they avoid?
  • Who do they hide?
  • Who do they improve?
  • Who do they enhance?
  • Who do they leave behind?

Questioning the default gets really interesting when applied to opt-in/opt-out scenarios. Take organ donation. It’s an area where the default has a significant effect on the outcome (it’s also one of the few occasions where I can mention the work of my brother!). Changing the default organ donation setting from opt-in to opt-out increases the number of transplants. You don’t remove or deny choice – you just switch the default position. It speaks to the power of The Default. It sets the agenda, it defines the space, it changes the argument and resets the tone. It’s the kind of trigger needed to move beyond the ‘gift of life’.

So perhaps we just need better defaults?

It’s important to note that the default often hides difficult and complex decisions. Those PowerPoint templates? Well, they hide a huge range of design choices about fonts, line heights, placement, styles, colours, look, tone and feel. The problem is that PowerPoint hides all those decisions by not exposing you to them. There is just the default. You don’t find out about them until you actually sit down to develop your own template and realise how messed up the system is. The Default is the choice because there are few alternatives. Customisation is a chore – or more realistically something closer to a circle of Dante’s hell – and what are the consequences of changing the default?

But take that lack of customisation into something like an LMS and the stakes get a lot higher. The consequences rack up quickly when you’re talking about the cost of a course and the potential impact on a life! Bad design when it comes to learning has real and definite impact. There are consequences. Big ones.

Better defaults, better modifications

I think we need to start questioning the default. Yes, defaults are necessary, but we need to better understand their impact. Simple defaults in PowerPoint affect the look and feel, but how consequential are they? Complex defaults, like those employed in an LMS or a course design, can and do affect lives. We need to question the assumptions they make and the impacts they have.

The other area that needs considerable work is the tools that allow us to customise. At the moment they tend to suck, badly. They’re either too lightweight or just too complex. This points to a design problem, one built on assumptions about the consequences (or inconsequence) of the default. Making customisation not only accessible but transparent as well is vital in enabling accountability, but also in encouraging learning and improvement. It provides a way for us to not just accept the default, but to move beyond it.

One way I’ve been thinking about this, particularly in the educational context, is through the development of patterns and blueprints.

Patterns & Blueprints

Patterns are ways of defining components relating to structure, tone, material and activity. They are abstracted so that they do not define the entirety of a design, but make up the pieces through which it is constructed. They are multifaceted which allows them to be reconfigured in a variety of ways to suit specific applications.

Blueprints on the other hand provide a way of sharing a design. They show how various patterns fit together. They highlight areas where adjustments need to be made, but essentially what they allow is for design to be communicated and shared. They bring transparency to the process by providing insight into the design. You can see how the default was constructed, what decisions were made and what areas could be changed.

In many ways Patterns are like Lego pieces and Blueprints are the instructions.
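A toy sketch in Python makes the distinction clearer: patterns are small, self-contained components, while a blueprint is just an ordered list naming which patterns to use. All the pattern names and attributes below are invented for illustration:

```python
# Patterns: small, reusable components of a design (illustrative only).
PATTERNS = {
    "intro-video":   {"activity": "watch",   "duration_mins": 10},
    "group-discuss": {"activity": "discuss", "duration_mins": 30},
    "reflect-blog":  {"activity": "write",   "duration_mins": 20},
}

# A blueprint: the "instructions" saying which patterns fit together, in order.
BLUEPRINT = ["intro-video", "group-discuss", "reflect-blog"]

def assemble(blueprint, patterns):
    """Build a concrete design from a blueprint. Because the blueprint
    is just a list of pattern names, it can be reordered, forked or
    remixed without touching the patterns themselves."""
    return [dict(name=name, **patterns[name]) for name in blueprint]

design = assemble(BLUEPRINT, PATTERNS)
```

The design choice worth noting is the separation: remixing happens at the blueprint level, so the patterns stay intact and reusable across many designs.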

Watching Amy Collier’s videos at the end of her awesome blog post Not-yetness was an interesting way of thinking about this analogy. Blueprints can suck the creative joy out, but at the same time they provide a default. They specify the patterns required, and usually inside the box are multiple variations of the blueprint shown on the front. The Blueprints provide a marketable and packageable default, but the underlying point is that the Patterns they contain are able to be re-formed and re-constructed.

Remixed.

I’ve used the terminology patterns and blueprints very specifically. I don’t want to talk about templates, learning objects, learning designs, OERs, LAMS etc – because they don’t do what I think they need to do.

They lack a form that enables remix. They are like wooden blocks rather than Lego. Yes you can build similar structures, but you lack the ability for those components to be integrated. Blocks tend to sit on top rather than connect and integrate into the structure. They’re often too big and cumbersome to be shaped into exactly what you want. This leads to a compromised, rather than customised design.

What we need are ways of working that not only embrace the remix, but enhance it.

The Enclosure of the Web

It’s been a dark time in Australia when it comes to our lives in digital spaces. Both sides of politics voted to enact draconian, opaque and dangerous new legislation to increase surveillance. They have traded the people’s freedom and right to privacy for “increased national security” – a term I am yet to understand. Now we can be watched, monitored and investigated at any time without our consent and with no impartial oversight.

So ridiculous are these measures that members of government have been spruiking apps, tools and practices to circumvent the legislation they were working to implement. I kid you fucking not!

Australia however is not alone in its pursuit of greater surveillance. Similar efforts are underway in Canada and the UK, perhaps trying to replicate the truly horrifying efforts of the US. Despite these efforts little has been discussed by the general public and even less about the implications of these measures. It’s complex but vital, as John Oliver pointed out vividly in his recent interview with Edward Snowden:

So what happens when we are forced into trading the open web for something that needs to be encrypted, secure, private and hidden simply to avoid someone watching over your shoulder noting your every move? Is the concept of “security” actually cannibalising itself to the point where safety and privacy are eliminated rather than upheld?

At the same time, one area that really hasn’t been discussed at all is how we as a people are being forced behind a firewall, surrendering the distributed commons that is the web.

Want email? Just get inside Gmail or Outlook – just don’t use a local service because Australian big brother is watching that. Don’t worry though because the NSA is watching the others.

Want to communicate with friends and family? Just use this app that has built-in encryption. Don’t worry that now you’re being surveilled by corporate vultures who on-sell your data to the highest bidder.

Want to read the news? Just do it inside Facebook!

App this and app that. The Web is Dead. Access is no longer free.

The vectors of information have been taken over, monetised and passage is paid by surrendering our data. The commons has been taken away and eroded by corporate interests and government surveillance and all of this has happened before.

During the agricultural revolution this process was known as “enclosure”:

the term is also used for the process that ended the ancient system of arable farming in open fields. Under enclosure, such land is fenced (enclosed) and deeded or entitled to one or more owners.

I’d say the web as it was, an open commons of information, is being enclosed. The Information Revolution, or whatever you want to call it, is following the same script.

Just as before, the process is being accomplished in two ways:

  1. “by buying the ground rights and all common rights to accomplish exclusive rights of use, which increased the value”. Hello Silicon Valley and startup culture, where the aim is not to contribute to the commons but to get bought out by someone bigger. Data is the asset, and the value point is not in what your app can do – but in how many users and how much data you can get!
  2. “by passing laws causing or forcing enclosure, such as Parliamentary enclosure”. Hello Australian government! Your actions – implicit or explicit, intended or not – will have the same effect. “Come inside our walled garden, it’s safe in here!” they’ll say. Government surveillance destroys the commons and forces people to seek safety and privacy somewhere else.

For labour the fallout of enclosure was considered a net positive, but that requires you to completely disregard the hunger, suffering and displacement that occurred. Sure, eventually displaced workers found jobs and their labour fuelled the industrial revolution, but many died and many lost centuries of knowledge, wisdom and connection. They lost their identity and cultural heritage as they were forced off the land. This process was repeated as part of global colonisation, not because it was good, but because it worked. It worked to establish a new ruling class and elite. It effectively dispossessed the people of all they had, so that they had to trade their agrarian subsistence for the exploitation of the workhouse. It reduced skilled and knowledgeable agronomists to cogs in the machine.

So what looms ahead in our revolution? What do we lose as we’re slowly being enclosed?

Let’s not forget that there is value in the commons.

It’s not in efficiency or profitability – it’s in building social cohesion. It becomes a place to share, to cooperate and collaborate. It becomes a place to dance and feast and celebrate, as well as to mourn and cry and grieve. The commons is the heart of a community, something that urban planners are finally starting to understand. You don’t achieve social cohesion without the commons, and housing projects around the world provide all the evidence you need to understand that. By focussing efforts on building housing and not a community, the commons was left off the plans and what ensued was complete social chaos.

So when I look at what’s happening on the web I wonder what is to come…

What if we lose the commons? What happens if the web is enclosed?

Image used https://flic.kr/p/nZotpM

Riffing off Remix

I’m feeling a little inspired after reading David Wiley’s The Remix Hypothesis and Mike Caulfield’s Paper Thoughts and the Remix Hypothesis. That’s on top of putting together an application for a Shuttleworth Foundation Fellowship, where I’ve applied to carry on doing work around adaptive digital publishing. (The pitch video outlines a lot of what I’m going to describe in a pretty simple way – so if you want to know more have a watch and I’m happy to answer any questions.) One thing I’m particularly keen to explore in this space is how to improve sharing, collaboration, reuse and remixing – is it possible to build that kind of functionality into a system so that it is built for and with open content at its heart?

Over the last couple of years I’ve been playing around with the concept of Adaptive Digital Publishing. A group of us wrote a paper and developed a proof of concept. We shopped it around for funding but other people had other priorities.

Conceptually I think it stands up as the most effective way to publish materials across multiple platforms. It brought together ideas that are only now starting to emerge into the mainstream – e.g., srcset and the picture element in HTML – where content is adapted depending on attributes set by the device and browser. The Adaptive Media Element we worked on did that, but in more complex ways and for all types of media – from video, data and images to audio – and across print, web and eBooks.
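To sketch the idea, here is a minimal Python model of how an Adaptive Media Element might resolve to a concrete asset for a given publishing profile. The profile names and rendition attributes are illustrative assumptions, not the actual AME design:

```python
# An Adaptive Media Element as a data structure: one logical media item,
# several renditions, each targeted at a publishing profile (sketch only).
AME = {
    "caption": "Figure 1: Enrolment trends",
    "renditions": [
        {"profile": "print", "type": "image",       "src": "chart-300dpi.tiff"},
        {"profile": "web",   "type": "interactive", "src": "chart.html"},
        {"profile": "ebook", "type": "image",       "src": "chart-72dpi.png"},
    ],
    "fallback": {"type": "image", "src": "chart-72dpi.png"},
}

def resolve(ame, profile):
    """Pick the rendition matching the publishing profile, falling back
    to a default asset -- analogous to srcset/picture in HTML, but
    applied to any media type and any output (print, web, eBook)."""
    for rendition in ame["renditions"]:
        if rendition["profile"] == profile:
            return rendition
    return ame["fallback"]
```

The author describes the media once; the publishing profile decides which concrete asset each output actually gets.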

The proof of concept we developed was built on WordPress and used the PressBooks plugin to provide many of the features we required, an easy-to-use interface and a solid foundation to work from. The ideas were more easily executed within an existing framework, so rather than attempting to build everything from scratch we could focus on our innovations – the AME and the corresponding Publishing Profiles.

Ever since we built that initial proof-of-concept I’ve been toying with how to make it simpler. How can we make it easier to share, collaborate and remix content? Our initial concept didn’t really think about those areas, but they’ve been bugging me ever since.

How to Support Remixing?

One way would be to expose the WordPress system via JSON. This would allow other systems to pull content in to display, but also to commingle, re-contextualise and retool it. My experience over the summer with Federated Wiki has challenged many of my preconceptions about what content, and indeed publishing, can look like in a purely digital sense. I’m enthused by the concept of a JSON-based system, but there are plenty of dependencies and technicalities required to develop things this way.
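As a rough sketch of what this looks like in practice, the WP REST API (available as a plugin at the time of writing) already exposes posts as JSON under /wp-json/wp/v2/, so pulling content into another system for remixing can be as simple as:

```python
import json
import urllib.request

def extract(raw_json):
    """Reduce a WP REST API response body to the bits a remixing
    system needs: rendered titles and content."""
    return [
        {"title": p["title"]["rendered"], "content": p["content"]["rendered"]}
        for p in json.loads(raw_json)
    ]

def fetch_posts(site):
    """Pull posts from a WordPress site via the REST API route.
    (Requires the REST API to be enabled on that site.)"""
    with urllib.request.urlopen(f"{site}/wp-json/wp/v2/posts") as resp:
        return extract(resp.read().decode("utf-8"))
```

Because extract() works on any response body, another system could pull this content in, re-contextualise it and republish it however it likes.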

My other idea is to go simple: remove the need for a database by abstracting authoring into a simple files-and-folders structure, then focus on developing a “generator” to handle the publishing. So rather than create a contained system, we could build something that can be plugged into a file system and live separately, locally or online. This idea builds on those already in use in a range of static site generators that leverage markdown, scripting and something like Git to manage the whole workflow.
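A minimal sketch of that generator idea in Python – plain text files in folders in, HTML pages out. A real generator would layer on markdown, templates and Git, but the shape of the workflow is the same:

```python
import pathlib

# The site "template" -- in a real generator this would live in its own
# theme file rather than a string constant.
TEMPLATE = "<html><head><title>{title}</title></head><body>{body}</body></html>"

def render_page(title, body):
    """Wrap raw page content in the site template."""
    return TEMPLATE.format(title=title, body=body)

def generate(source_dir, output_dir):
    """Walk the source folder and emit one HTML file per text file.
    No database: the file system *is* the content store, so the whole
    site can be forked, versioned and merged like any other folder."""
    src, out = pathlib.Path(source_dir), pathlib.Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for page in src.glob("*.txt"):
        html = render_page(page.stem, page.read_text())
        (out / f"{page.stem}.html").write_text(html)
```

The point of the design is that authoring needs nothing more than a text editor, while versioning and collaboration come for free from whatever manages the folder.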

By simplifying the system down to the bare minimum, the potential is to make content more “forkable”. You reduce the need for specific software in the authoring, but also open the process to powerful versioning and management technology. In this way remixing is encouraged, and with the ability to merge back, the potential is truly inspiring. It would ensure that a remix doesn’t become another standalone piece of content, but a connected component that might be co-opted back into the main branch. It enables localisation, translation and adaptation to specific contexts not just to be made, but tracked, traced and attributed.

The other attraction of this more simplified model is that it reduces the technical overheads required. It could be run locally or over a simple network. It could run offline and allow for asynchronous editing and collaborative authoring in a manageable format. I’m not sure if this will provide the simplicity or granularity that the federated wiki has, but it’s definitely a step in the right direction.

This flat-file model also means that content can be openly hosted using repository sites like GitHub, but also almost any online space – and for educational and research publishing this could be a huge boon. Being openly hosted means that access is greatly improved. The ways that Mike describes data models being accessed and modified could be achieved this way.

The final plus is that switching to a flat-file generator model means there is less reliance on a single technology or system. While GitHub, WordPress and certain programming languages are the choices today, they are also dependencies in the long term. Not relying or depending on certain technologies means we’re creating more sustainable content that is open to change and evolution as technology and trends change.

Publishing in the digital age needs to embrace the concept of remix as it’s the most significant affordance of being digital. I’m in a state now where I can see that the technology required is getting closer to realising that idea. Once it does we’re going to be in for a ride.