Swipe to lock-in

Categories Open-source, Technology

What follows is a slightly rambling stream of consciousness orbiting around the subjects of the patent system, technological development, the open-source community and user interfaces. The whole situation is far from simple, with arguments on all sides. I guess my thinking comes from a recognition of what I see as the inevitability of product transparency and of how to channel that into economic growth. I’d love to hear other people’s thoughts on these subjects, so feel free to leave a comment or email me.

/ / /

 

We need to copy, right?

 

Ok.

 

The copyright and patent system can be a messy business – and I mean business in the literal sense.

 

Right now, research and development teams are setting out like early colonialists to stake claims on ideas, theories and methodologies. This would be fine if the driving factor for such activity was truly innovation – but I don’t believe that it is. Instead, we find patents being created and filed away with the manic fervour of an arms race. In fact, recently we’ve seen how this impacts the smartphone industry in the Apple v. Samsung cases.

 

This means that the incentive for coining, describing and protecting an “original” idea, system or method has as much to do with the implementation or progression of a company’s own product or service as it does with crippling their financial opponent’s ability to progress. The implications of such a dynamic are that our technology suffers and its feature set becomes porous – a collection of innovative features and stunted features, mutually exclusive between brands – or else is obfuscated for no reason other than to avoid legal penalties.

 

 

The oft-talked-about example is Apple’s ownership of “slide to unlock” – this is the reason Google Android’s lock screen is rendered as “swipe to unlock”. The distaste I have for the ownership of gestural actions in an interactive environment doubles when those gestural actions are informed by a borderline skeuomorphic interface. How is sliding a virtual slider to unlock a device a new, patentable idea? I was sliding my cassette player into operation from “sleep” to “stand-by” 10 years prior to the iPhone. I was sliding latches to unlock gates that were made by people long before I was even born. I realise that the iPhone created new territory, and that “unlocking” in that context is a slightly different concept to the gate I mentioned above, but the gesture must surely belong to a collective lexicon outside of ownership. At some point an idea is so integral – and I daresay basic – that to deem it only usable by a single company in a massive, emerging market sector is a flavour of lunacy that is detrimental to the progression of technology as a whole. The danger lies in this attitude being applied somewhere truly integral. How would our technology look if the rotating volume knob were patented? If one brand owned the push-button?

 

But then, perhaps my opinion is skewed by hindsight.

 

Imagine three children seated at a round table with a piece of paper in front of each of them. There is a box of crayons in the centre and each child has exclusive rights to one colour that no other child can use – one child owns red, another blue and another yellow. All other colours are shared. The children then attempt to draw the same thing. The resulting drawings are all subpar for different reasons, probably stemming from an illogical lack of colour, respectively. Or maybe they all make innovative use of a limited palette, in which case, touché, imaginary children.

 

I don’t know anyone with blue skin. Damn patent system.

 

Everything you can do, I can do different

 

The work-around, of course, is subtle deviation or obfuscation or inversion or “different”, which often leads to users being presented with what I can only describe as counter-interfaces. Counter-interfaces are equally devious.

 

 

The new Wii U Pro Controller and the Xbox controller feature inverted button arrays, creating two interfaces that directly clash in learned operation. Is this the result of trying to avoid legal hassle, or of trying to lock users in to a system through some sort of familiarity-loyalty?

 

What do I mean by familiarity-loyalty? A loyalty borne out of familiarity with a product’s idiosyncrasies. But further than that, it refers to the idea of creating these idiosyncrasies for the purpose of locking users in to your system. This is perhaps doubly apparent with video game consoles because they share mutual software in the form of video game titles. You can play the same game on multiple consoles, being forced to interact with each one differently only in a schematic sense. You still have all the same functions. You just have different buttons. Inevitably, your option of multiple consoles only exists in a theoretical world, because in reality you will only ever take the time to master and operate one of them. This doubtlessly helps to fuel fanboyism, and what brand would be complete without adversaries to go into consumer combat with?

 

I’m pretty sure every chainsaw I’ve ever operated had roughly the same interface. My loyalties are influenced by the quality of the chainsaw, not my reluctance to learn a new interface structure.

 

Take a moment to imagine what Nintendo would offer.

 

Ok, so the chainsaw scenario is probably an unfair example. I remember my first mobile phone: it was a Nokia 3310. In fact, my next three mobile phones after that were all Nokia models. Why? Because it was common knowledge that the GUIs across their products varied only minutely. I was free to choose any (Nokia) phone that I wanted with minimal hassle, but to deviate from their brand meant I would be met with a jarring user experience. My fourth device, a Samsung, provided me with just that – and a few embarrassing, unintentionally sent messages, given that the BACK and SEND buttons were reversed on my new device.

 

The interesting thing about this idea of familiarity-loyalty is that the opposite directive – attempting design convergence of your interface with that of your competitors – makes it easier for consumers to switch to your product, whilst at the same time making it easier for them to leave your customer base.

 

Hide your source, hide your errors, hide your potentials

 

There’s a similar dichotomy between open- and closed-source mentalities: the same sort of benefits at the expense of “control”. Closed-source means that you do not share your production methods – code, manufacturing details, CAD files and so on – with the general public, meaning that competitors and DIY “makers” can’t replicate your product and modify it to suit their needs or material access, or fuse it with new hardware. This seems like an obvious choice of conduct from a typical business perspective. Why would you give away information that you spent plenty of money and time researching and developing?

 

The answer is this: Open information can lead to free, highly-dispersed, communal development. Open information and modification fuels product longevity.

 

Here’s the second answer: We’re going to steal the information anyway. We’re going to modify it anyway. We’re inquisitive creatures. We do it for fun.

 

Hammer Editor: An in-house production tool released to the community.

 

By allowing your users access to legitimate information regarding your product, marvellous things can happen. Companies are slowly starting to realise the potential benefits of letting users modify their products. When Microsoft first released the Kinect, they were of the anti-mod, anti-hack mindset. But in my opinion the hackers and modders and makers and academics were doing much more exciting things with the Kinect than Microsoft was. The mindset has definitely changed:

 

The enthusiasm we are seeing in the scientific community – specifically the research and academic communities – around potential applications of Kinect, is exciting to see… It’s an exciting time for Microsoft, our customers and partners as we explore the possibilities [Natural User Interfaces] has to offer and how we can make them a reality – Kinect for Xbox 360 is just a first step.

Steve Clayton, Microsoft Blog

 

It seems Microsoft realises that free research and development by minds outside their organisation – minds excited enough about the concepts being explored to explore them despite not being on the payroll – is a good thing.

 

Videogame company Valve learnt this lesson long ago – they are a thriving example of how understanding your craft, your technology and your audience leads to healthy financial success. One of their most popular titles, Counter-Strike, was originally a community-made modification of one of their existing products. Valve knew early on of the advantage of community created content, releasing their in-house level editor Worldcraft (later Valve Hammer Editor) to the public. Basically, you release one product that spawns more products and relates to a wider audience or keeps your existing audience captive for a lot longer (people still play Counter-Strike 1.6, which is now ~10 years old). As long as you are still selling the seeds, who cares what the fruit is made into?

 

Don’t be afraid of the culture, leverage it, help create it

 

Again, it’s going to happen anyway.

 

Young kids at a “maker faire” (source: boingboing.net)

 

 

 

Embracing the Digital Landscape

Categories Technology

Lately it seems as though every other article I read online about software interfaces is in some way related to the concept of skeuomorphic design, with the prevailing opinion amongst young digital natives being that it is often an unnecessary and dishonest factor of interface design. I tend to agree.

A skeuomorph is a physical ornament or design on an object copied from a form of the object when made from another material or by other techniques. For example, pottery embellished with imitation rivets because the object was once made of metal, or a calendar application which displays the days organised on animated month pages in imitation of a paper wall calendar.

Wikipedia

In my opinion, in regards to its use in digital landscapes, skeuomorphism is simply a transitional device from one medium to another. It’s a design direction capable of interfacing with a wider population bracket (inclusive of the non-tech-savvy segments) because of its perceived familiarity, achieved through the appropriation of the visual cues belonging to common cultural objects with analogous functions to their new digital replacements.

 

To the casual user, the command prompt (left) is much less “orienting” than the skeuomorphic world of OS X (right). Both represent differing levels of abstraction.

 

This has clear business benefits if user tests support that it does indeed broaden the spectrum of usability and thus increase potential market share. In fact, I feel this may be why Apple saw value in it as a tool.

 

But, here’s the thing: The transition period is almost over.

 

Mainstream consumers are already fully exposed to smart devices, tablets, netbooks, touchscreen kiosks and interactive surfaces. I daresay that through intermittent frustration with the disconnect between visual appearance and interface behaviour, they have also learnt that though elements of the various interfaces may look like tangible things, they don’t behave like them. Mainstream audiences understand the digital landscape a little better than they did a few years ago, so it might be time to dial down the abstraction in order to facilitate complex interactions and open up opportunities for developers to let users solve a broader, more complex set of tasks.

 

This. We kind of want it.

 

I’ve yet to use Windows 8. I’ve only heard bad things about it. Yet I’m finding that a lot of what I think they were trying to achieve lines up with things I’ve also considered valuable for the furtherance of interface design that is honest, mature and incredibly usable.

 

The lack of “chrome” means a higher Content:Interface ratio

 

I was inspired to read that Metro (the design language created for Windows 8) sought to abolish “chrome”, which got me thinking about the relationship between content and interface. The less chrome, the more real estate for content. This is an almost self-evident notion, but when you consider and make note of the pixels used for non-content AND non-function purposes in any given application, it’s actually still rather novel. But, as with all philosophical inclinations that laud simplicity, “less is more” is always harder to achieve than first thought. Regardless of its success, Windows 8’s entry into the mainstream will certainly answer some questions that I have about user interfaces and public readiness.

“The new user interface is less of a problem than it would have been 10 years ago because people have got used to mobile interfaces”

Forrester Research’s senior analyst David Johnson, UCstrategies.com

 

So, how ready are we? And as a designer I also have to ask, what familiar visual languages can I use to fabricate the desired interactions?

 

Luckily, video games exist. Video games have led to something almost as over-mentioned as skeuomorphism: the gamification of things. The concept involves all sorts of behavioural theory and incentivisation practices, and even lends exposure to augmented reality. In fact, I believe that gamification will be (if it isn’t already) an integral component of modern interface design and a vehicle for positive behaviour change.

 

 

No one seems to mind the info-dense blend of 2D and 3D elements, HUDs or real-time data overlays. In fact, it helps the user achieve a goal.

Pioneer’s augmented reality HUD for in-car GPS (read more here)

 

Icons, progress bars, real-time overlaid data, gestural inputs, rewards, social ranking, menus, inventories, micro-trading, experience points, contextual hints. There are 2D elements hanging out in 3D space and inexplicable notification sounds coming from a nearby omnipresent source. It’s not an acid trip, but it’s certainly not reality, and we’re somehow OK with it. I like to think of the history of video games as a huge chunk of free research and development that can be applied to interactive products and user experience design, especially in regard to user enjoyment and return patronage. But essentially, video games have helped in the creation of a shared language for a new digital frontier.

 

Then you throw mobile into the mix.

  Your experience will definitely vary.

 

Mobile has its own demands, most of which are data-based and outside my area of expertise. But it’s important as a designer to try to understand the medium.

 

Data is expensive, both to your wallet and collective global consumption.

 

New interfaces should deliver a small packet from the server and then do the rendering client-side. Basically, send the schematics, not the building.
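As a rough sketch of the idea (the endpoint, payload shape and rendering code below are all hypothetical, not any particular framework’s API), a client might fetch a few hundred bytes of JSON describing a chart and build the vector graphics itself:

```typescript
// Hypothetical sketch: fetch a small JSON "schematic" from the server and
// render it client-side as vector graphics, instead of downloading a
// pre-rendered bitmap. The endpoint and data shape are made up.

interface BarChartSchematic {
  title: string;
  values: number[]; // raw data only - a few hundred bytes, not an image
}

async function renderChart(container: HTMLElement): Promise<void> {
  const response = await fetch("/api/chart-schematic"); // assumed endpoint
  const schematic: BarChartSchematic = await response.json();

  const width = 300;
  const height = 150;
  const barWidth = width / schematic.values.length;
  const maxValue = Math.max(...schematic.values);

  // Build the "building" locally from the "schematic".
  const bars = schematic.values
    .map((value, i) => {
      const barHeight = (value / maxValue) * height;
      return `<rect x="${i * barWidth}" y="${height - barHeight}" width="${barWidth - 2}" height="${barHeight}" />`;
    })
    .join("");

  container.innerHTML =
    `<svg viewBox="0 0 ${width} ${height}" role="img" aria-label="${schematic.title}">${bars}</svg>`;
}
```

The payload stays the same size no matter how large or dense the screen that finally draws it, which is exactly the point.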

 

For me, it’s all about vector graphics and algorithmic art. Contextually aware content zones and dynamic text. Things should be static insofar as they are contextually appropriate. Data should only be sent as contextually appropriate. Devices need to become more aware of their context of use.

 

Mobile also means variegated delivery points. I have an Android phone, the person next to me has an iPad. Let’s say we both decide to use the same web app. The design issue here is evident because the screen sizes between the two devices vary greatly – so the same web app has to have enough fluidity to expand or contract into the various “frames” that it is pulled into (be that an iPad, laptop, smartphone or whatever else is commercially well-dispersed). The current solution – in regards to creating a consistent user experience between these frames – is what is referred to as responsive design. Responsive design literally responds to its frame according to a set of rules. There are many ways of going about responsive design: contextually dropping elements, hiding them, conforming to a grid, setting up ratio relationships between elements, or all of the above and more. It’s an automated style guide.
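In practice most of these rules live in CSS media queries, but as a minimal sketch of what one such rule looks like expressed as code (the breakpoint and class names are assumptions for illustration):

```typescript
// Minimal sketch of one responsive "rule": when the frame is narrow, hide a
// secondary panel and fall back to a single column. The breakpoint and class
// names are hypothetical; in practice most of this lives in CSS media queries.

const narrowFrame = window.matchMedia("(max-width: 600px)");

function applyLayoutRule(query: MediaQueryList | MediaQueryListEvent): void {
  const panel = document.querySelector(".secondary-panel");
  const main = document.querySelector(".main-content");
  if (!panel || !main) return;

  if (query.matches) {
    panel.classList.add("hidden");        // contextually drop an element
    main.classList.add("single-column");  // conform to a one-column grid
  } else {
    panel.classList.remove("hidden");
    main.classList.remove("single-column");
  }
}

applyLayoutRule(narrowFrame);                             // apply once on load
narrowFrame.addEventListener("change", applyLayoutRule);  // re-apply on resize or rotation
```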

 

It’s all a lot to consider, but all these convergences between technology and culture are exciting to me as a designer because they signal opportunities.

 

I believe that the average consumer market is fertile soil for innovative advances in interface design and that we will have to embrace the digital landscape in order to facilitate that advance.

 

What can Interactive Design achieve?

Categories Technology

In 1996, Xerox PARC’s Mark Weiser and John Seely Brown wrote an article entitled “The Coming Age of Calm Technology” (Weiser & Brown 1996) that outlined a future directive for the design of product environments in the age of ubiquitous computing. Ubiquitous computing is described as the next phase of the human-computer usage relationship, post-internet, suggested to emerge in the years between 2005 and 2020.

 

 

Weiser and Brown also propose a new mannerism for technology, foreseeing the creation of what they describe as calm technology. Calm technology is designed with our attentional needs in mind, with consideration to its permanent presence. Or, as Weiser and Brown point out, “…if computers are everywhere, they better stay out of the way.” This requires an understanding of when and how users access information. It also calls for an exploration of how that information can be delivered in new, appropriate ways. For instance, I feel that if consumption of information is increased whilst relying solely upon digital visual displays, a point of information congestion is sure to be reached (if not already apparent). This opens up the exciting search for alternative methods of information “ingestion”.

 

A project undertaken in 2005 at the Institute of Cognitive Science at the University of Osnabrück, Germany, dubbed “the feelSpace project”, involved alternative methods of information ingestion. Research from the project found “…data support[ing] the hypothesis that new sensorimotor contingencies can be learned and integrated into behaviour and affect perceptual experience.” (Nagel, Carl, Kringe, Märtin & König 2005). To simplify, the project involved the use of a wearable computing belt containing vibrotactile motors – the same components that allow a mobile phone to vibrate – in order to let the wearer feel where magnetic north was (using compass data). This opens up exciting opportunities for new human-computer interfaces.
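As an illustration only – this is a hypothetical sketch of the core mapping, not code from the feelSpace project – the belt’s behaviour comes down to choosing which of N motors currently points toward magnetic north for a given compass heading:

```typescript
// Illustrative sketch only - not code from the feelSpace project. Map a
// compass heading to one of N vibrotactile motors spaced evenly around a
// belt, so the motor currently facing magnetic north is the one to vibrate.

const MOTOR_COUNT = 12; // assumed number of motors, evenly spaced around the waist

// headingDegrees: the direction the wearer is facing, relative to magnetic
// north (0 = facing north, 90 = facing east), as read from a compass sensor.
function motorFacingNorth(headingDegrees: number): number {
  // Relative to the wearer's front, north sits at (360 - heading) degrees clockwise.
  const northDirection = (360 - (headingDegrees % 360)) % 360;
  const degreesPerMotor = 360 / MOTOR_COUNT;
  return Math.round(northDirection / degreesPerMotor) % MOTOR_COUNT;
}

// Example: facing east (90 degrees), north is over the wearer's left shoulder,
// so the motor three quarters of the way around the belt should vibrate.
console.log(motorFacingNorth(90)); // -> 9 (of 12 motors, i.e. 270 degrees around)
```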

 

There is a great deal of debate surrounding the concept of interfaces themselves. User interfaces (UIs) are, through necessity, abstractions. They are virtual worlds and entities symbolically represented for the purpose of manipulation in a (hopefully) intuitive way. Interfaces are highly useful, if they are appropriate. I believe that the goal of an interface should be to make that degree of abstraction as small as possible (in the context of this discussion this means endeavouring to connect virtual actions with meaningful physical counterparts). Others go even further:

There is a better path: No UI. A design methodology that aims to produce a radically simple technological future without digital interfaces. Following three simple principles, we can design smarter, more useful systems that make our lives better.

(Krishna 2012)

A few examples of “No UI” are given in Krishna’s “The best interface is no interface” posted online in the Cooper Design Journal, but all of them contain some semblance of an interface – even if only initially – though I do enjoy the thought.

 

The notion of ubiquitous calm technologies in conjunction with wearable computers employing sensorial substitution leads me to the question: what are the applications? Or rather, who should this technology be applied to?

 

I have two main directive ideas that I am by no means the first to arrive at. Firstly, the idea of Design for Inclusivity: technological design used to re-equip people with previously lost abilities. This could be as fundamental as the loss of any of the senses or, perhaps more abstractly, involve the treatment of social disorders and deficiencies. Secondly, the idea of Design for Periphery: design heavily influenced by Weiser and Brown’s vision, whereby technology manages to engage and straddle our centre of attention as needed. The implications of this sort of design are increased efficiency of information processing (by humans) and an attempt at “humanisation” of the oftentimes harsh interactions we have with the digital worlds we have created – a process of “encalming”, as Weiser and Brown call it.

Of the two main ideas, the former is more democratic and altruistic, the latter perhaps informed by a common good. Normalisation and Augmentation, respectively. The merit of either direction could be argued and I don’t believe that a foray into one suggests any sort of prioritisation. In all honesty, though I feel that Design for Inclusivity is the more worthwhile, fulfilling directive, I realise that in an extreme light it could also be dangerous territory, as promises of various “healings” always are. It’s important that the technology itself has some say in its applications via its effectiveness. If sensorial substitution methods or alternative interfaces are not adequate to enrich the life of someone with a disability, then perhaps it’s best to wait until the technology is sufficient to do so.

 

Ultimately, ubiquitous computing creates a mostly untouched environment to design within, the ramifications of which include the potential for true innovation.

 

 

KRISHNA, G. 2012. The best interface is no interface [Online]. Cooper Journal. Available: http://www.cooper.com/journal/2012/08/the-best-interface-is-no-interface.html/ [Accessed 5 September 2012].
NAGEL, S. K., CARL, C., KRINGE, T., MÄRTIN, R. & KÖNIG, P. 2005. Beyond sensory substitution—learning the sixth sense. Journal of Neural Engineering, 2, R13.
WEISER, M. & BROWN, J. S. 1996. The Coming Age of Calm Technology. Xerox PARC.