
Intentional Engineering: Behaviour change through the lens of technology.

“It’s a great technology.” ― Sarah Slocum

Over the weekend of 22nd February 2014, Sarah Slocum, a social media consultant and Google Glass owner, was assaulted by strangers in a bar in San Francisco for wearing said tech item. Sarah was caught by surprise; after all, she believes Glass is a great technology.

Apparently, this isn’t an isolated incident. Why, then, do people react in this extreme manner? Is the animosity actually directed at the human being, in this case Sarah Slocum, or did she become a victim of assault simply because she was handling a certain type of technology? Assuming that aggression is inherently questionable, how could this be prevented in the future – particularly in the context of human interaction with technology?

This essay investigates these questions from a designer’s point of view, and eventually turns to software developers for advice and clues as to how this story might be continued.

Brain 2.0 – language, the human software update.

While history teaches us that history teaches us nothing, looking at the relationship between tools and human beings over the course of time reveals some of the underlying power dynamics and how they evolved. Accordingly, this essay is built on observations and examples that demonstrate how technology has influenced human behaviour in the past.

According to scientist Richard Klein, “[t]he creation of language was the first singularity for humans. It changed everything.” He is not the only scientist (Ian Tattersall, William Calvin) who is certain that 50,000 years ago this fundamental addition to the human operating system set one species of our prehistoric ancestors apart from all others: Homo sapiens, who went on to conquer the world in the blink of history’s eye.

Language enabled Sapiens to rapidly communicate ideas amongst each other, thus throwing off the shackles of fortune, manifested until then in the solitary discovery of tools. However, the most important advantage, as pointed out by Kevin Kelly in the essay The World without Technology, is “[…] not communication, but auto-generation. Language is a trick which allows the mind to question itself. […] Without the cerebral structure of language, we can’t access our own mental activity. We certainly can’t think the way we do. Try it yourself. If our minds can’t tell stories, we can’t consciously create; we can only create by accident. Until we tame the mind with an organization tool capable of communicating to itself, we have stray thoughts without a narrative. We have a feral mind. We have smartness without a tool.”

The application of words helped humans move away from spontaneous actions towards behaviour based on considered decisions. Yet it is still up to the individual to use language that way, which adds an element of (uncertain) autonomy.

One great example.

The Oxford Dictionary provides an extensive list of use cases (adjective, noun, adverb) for the term 'great' in various contexts, from a great big grin to Greater Manchester. Great! With the term being greatly versatile and people making full use of this situation – it holds a top spot amongst the 1,000 most commonly used English words – could the term be suffering from semantic satiation? That is the term coined by psychologists Leon James and Wallace E. Lambert in 1961 for the way words temporarily lose their meaning through continuous repetition.

Language presents itself as a complex example of technology. Its possible uses (really) are endless. These considerations illustrate that it is difficult to predict in what way a certain technology might influence behaviour. In the case of Sarah Slocum, great technology meant misunderstanding and trouble.

Technology, pleased to meet you.

To date, a mass of definitions has been generated to pin down the term technology. Most of them name an underlying procedure, for instance “the application of scientific knowledge”, to describe its realm. With the second industrial revolution in the 20th century, technology became this paramount super-word, referring to simple stone tools, living-tissue printers and everything in between. Apart from tangible – analogue or digital – objects, conceptual inventions such as the calendar or software are understood by it as well. At the same time, the notion of technology as a human monopoly, even nature’s adversary, is inadequate. The construction of a beaver dam, a bird’s nest or a beehive is certainly an act of applied knowledge in the animal kingdom. Therefore, the definition of choice (as far as this essay is concerned) goes as follows: technology is anything designed by a mind.

In the book Technopoly: The Surrender of Culture to Technology, author Neil Postman takes the reader on a journey from tool-using cultures via technocracies through to what he refers to as Technopoly – a place where “culture seeks its authorisation in technology, finds its satisfactions in technology, and takes its orders from technology.”

Building on early philosophical thoughts from Plato’s Phaedrus and Freud’s Civilization and Its Discontents, he compares the perception of technology to friendship. “It makes life easier, cleaner, and longer. Can anyone ask more of a friend?” And Postman is not alone in this view; philosopher Daniel Callahan puts it as follows: “Most new technologies are introduced not as tyrants who will make us their slaves, but as choice-increasing, society-enhancing developments that we as individuals are free to take or leave.”

One straightforward example.

From when I was a child, I remember a conversation between my father and brother about tools, in particular the axe. They were most enthusiastic about the fact that using it (to chop wood) delivers immediate gratification (chopped wood). At first glance, the axe is self-explanatory. It makes life easier by being a reliable and sharp extension of the human arm, with the person wielding it completely in charge. However, there are multiple occasions where an axe has been used for destructive purposes unintended by the manufacturer or distributor. Please consult your nearest police station for further details.

While an unattended axe remains passive and it is up to the human to use it for good or ill, technology should not, in general, be considered neutral.

One could argue that Sarah Slocum was not handling Glass correctly by bringing it into a bar. However, her behaviour is not the only force at work here. It is joined by Melvin Kranzberg’s First Law of Technology:

“Technology is neither good nor bad; nor is it neutral.” – Melvin Kranzberg

It implies that technical devices and practices have the capacity to actively produce social, environmental or cultural conditions. Furthermore, the results depend heavily on the different contexts and circumstances in which devices are introduced. In modern times, this is an increasingly complex field.

No time o’clock.

Being friends with technology does not pose an enormous problem by itself. It is the combination with a lack of time that creates tension. When the telegraph was invented in the first half of the 19th century, humans were given a hundred-year break before the computer was introduced. By then, we had explored and understood a concept of information that was free from the constraints of location. Today the pace of technological progress has clearly picked up speed and the intervals in which companies release new products to the market are shrinking. Whether this is fueled by a profit-oriented industry or an entertainment-demanding consumer probably comes down to a causality dilemma.

Artist and philosopher Koert van Mensvoort is part of the editorial team of the Next Nature Network, which investigates the relationship between the technological and the human world. In his essay Innovative Nostalgia he comments on the speed of technological progression:

“Every person on the planet, not only in the western world, has to deal with radical technological changes throughout her or his life. Sometimes these are delightful and liberating, but also often confusing, uncomfortable and alienating. For many of us the pace of change is in fact too rapid.”

Van Mensvoort’s contemplation implies that people can’t help but live by the changes technology brings upon them. In fact, the Next Nature Network operates on the notion that mechanical tools have evolved into omnipresent and autonomous cultural artifacts. They blend into the living environment – in other words, they become part of nature itself. Both the ubiquity of things and their interconnectedness with humans, nature and other things are causes of the increasing complexity of the technological landscape.

One example from the archives.

On 4th March 2014, Patently Apple reported that the US Patent and Trademark Office granted Apple (alongside 35 other patents) a “Patent for a Portable Multifunction Device, Method, and GUI (Graphical User Interface) for Interpreting a Finger Gesture”. According to the blog, Apple successfully patented the technology that enables iPhone owners to operate their devices with gestures such as touch, slide and swipe.

While Apple’s patent covers only the technology, not the actual behaviour of the user, this separation is virtually meaningless from a conceptual point of view. In effect, Apple managed to declare an action performed by a human hand as one of its own inventions. I take this as official proof that behaviour can be engineered.
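
How might software interpret such a gesture? As a minimal sketch – with thresholds and names that are entirely my own assumptions, nothing from Apple’s actual patent claims – a recogniser could classify a finger’s journey by how far and how fast it travelled:

```cpp
#include <cmath>
#include <iostream>
#include <string>

// Toy model of a single-finger touch: where the finger lands,
// where it lifts, and how long the contact lasts.
struct Touch {
    float startX, startY;
    float endX, endY;
    int   durationMs;
};

// Classify the gesture by how far and how fast the finger moved.
// The threshold values are illustrative guesses, not Apple's.
std::string classify(const Touch& t) {
    float dx = t.endX - t.startX;
    float dy = t.endY - t.startY;
    float distance = std::sqrt(dx * dx + dy * dy);

    if (distance < 10.0f)   return "touch"; // barely moved: a tap
    if (t.durationMs < 250) return "swipe"; // quick flick across the screen
    return "slide";                         // slower, deliberate drag
}

int main() {
    Touch flick{0.0f, 0.0f, 120.0f, 0.0f, 90}; // 120 px in 90 ms
    std::cout << classify(flick) << "\n";      // prints "swipe"
}
```

Crude as it is, the sketch makes the point: the boundary between a technology and a human behaviour can come down to a couple of threshold values.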

Design with Intent to change behaviour.

In the course of his PhD, Dan Lockton developed the Design with Intent Toolkit. It gives designers a range of possible starting points for the creation of products and services that aim to influence people to improve their social and environmental behaviour. So far, this essay has demonstrated that products and technologies affect human behaviour unintentionally. Lockton argues that behaviour change can be achieved through the considered application of design techniques. In his paper The Design with Intent Method: A design tool for influencing user behaviour, he states that designers from different disciplines, such as product design, service design or human-computer interaction, stick to practising within the sphere of their respective discipline, but that the applied techniques overlap. Against this backdrop, Lockton draws the conclusion that “[i]t is therefore possible to abstract certain techniques from examples in one field, and use them in others […].”

The Design with Intent Toolkit starts with existing products or services. Ultimately, the goal of reconstructing an object is “to influence users’ behaviour towards a particular target behaviour” through a proposed redesign process. In this regard, getting feedback from actual users seems essential. At the same time, involving people usually makes for surprises. Individuals act upon personal experience, plus their intuition. Often enough, this results in unexpected rather than anticipated ways of handling objects. Therefore, Lockton suggests that “new artefacts will coevolve with behaviours (Walker et al. 2009).”

A sound example.

Alexis Lloyd, Creative Director of the New York Times R&D Lab, discussed the phenomenon of emergent behaviour in a recent blog article, In The Loop: Designing Conversations With Algorithms. She unpacks the adaptation methods of humans by looking at an example from voice recognition services: people deliberately mispronounce a friend’s name so that their phone will call the right person.

When it comes to voice recognition services, it’s fair to assume that companies intend to create universally smart interfaces rather than a series of new linguistic accents. In the example at hand, the intended communication between human and service – more precisely, with the embedded algorithm – went wrong. Moreover, Lloyd’s account of adapting to the needs of the voice recognition service tells a story of apparent insufficiencies within the system. It also reveals that people are well aware of them and able to figure out new, inventive ways to make the system work nevertheless.
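
Why does the trick work? Real recognisers operate on acoustic models, but a toy model illustrates the mechanism. Suppose – purely an assumption for illustration – that the phone transcribes the utterance and dials the contact whose stored name is closest by edit distance. A caller who adopts the machine’s accent produces a transcription that lands nearer the stored spelling:

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Levenshtein edit distance: the number of single-character insertions,
// deletions and substitutions needed to turn one string into another.
int editDistance(const std::string& a, const std::string& b) {
    std::vector<std::vector<int>> d(a.size() + 1, std::vector<int>(b.size() + 1));
    for (size_t i = 0; i <= a.size(); ++i) d[i][0] = static_cast<int>(i);
    for (size_t j = 0; j <= b.size(); ++j) d[0][j] = static_cast<int>(j);
    for (size_t i = 1; i <= a.size(); ++i)
        for (size_t j = 1; j <= b.size(); ++j)
            d[i][j] = std::min({d[i - 1][j] + 1,                            // deletion
                                d[i][j - 1] + 1,                            // insertion
                                d[i - 1][j - 1] + (a[i - 1] != b[j - 1])}); // substitution
    return d[a.size()][b.size()];
}

// Dial the contact whose stored name is closest to what was transcribed.
std::string bestMatch(const std::string& heard,
                      const std::vector<std::string>& contacts) {
    return *std::min_element(contacts.begin(), contacts.end(),
        [&](const std::string& x, const std::string& y) {
            return editDistance(heard, x) < editDistance(heard, y);
        });
}

int main() {
    std::vector<std::string> contacts = {"Siobhan", "Susan", "Sean"};
    // Saying "See-ob-han" might be transcribed as "Seobhan", which is only
    // one edit away from the stored spelling: the caller adapts to the machine.
    std::cout << bestMatch("Seobhan", contacts) << "\n"; // prints "Siobhan"
}
```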

Design with Intent to change attitudes to change behaviour.

While adjusting one’s linguistic accent seems innocent, other circumstances are not as trivial, says Lloyd. “Algorithmic systems record and influence an ever-increasing number of facets of our lives: the media we consume, through recommendation algorithms and personalized search; what my health insurance knows about my physical status, the kinds of places I’m exposed to (or not exposed to) as I navigate through the world; whether I’m approved for loans or hired for jobs; and whom I may date or marry.”

Mark Zuckerberg, the founder of Facebook, experimented with predictions about the status of his friends’ relationships himself and claims that 33% of them came true within a week’s time. Such relationships can be accurately guessed “[…] not by number or cluster of mutual contacts, but dispersion of friends. If you know someone’s coworkers, friends from middle school, and people in her yoga class, it appears likely you are in a relationship. If you share your life with someone, you see that person in many contexts”, explains Joanne McNeil at the Lift Conference 2014. McNeil researches and writes about the ways technologies are shaping art, politics, and society.
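
The dispersion idea McNeil describes can be sketched in a few lines of code. The published research defines dispersion more carefully; the toy score below – my own simplification – merely counts the pairs of mutual friends who do not know each other, i.e. friends drawn from separate contexts rather than one cluster:

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// A symmetric friendship graph: each person maps to their set of friends.
using Graph = std::map<std::string, std::set<std::string>>;

// Toy "dispersion" of the tie between u and v: among their mutual friends,
// count the pairs who are not friends with each other. A high score means
// the mutual friends come from many separate contexts, not one cluster.
int dispersion(const Graph& g, const std::string& u, const std::string& v) {
    std::vector<std::string> mutuals;
    for (const auto& f : g.at(u))
        if (g.at(v).count(f)) mutuals.push_back(f);

    int score = 0;
    for (size_t i = 0; i < mutuals.size(); ++i)
        for (size_t j = i + 1; j < mutuals.size(); ++j)
            if (!g.at(mutuals[i]).count(mutuals[j])) ++score; // strangers to each other
    return score;
}

int main() {
    Graph g;
    auto link = [&](const std::string& a, const std::string& b) {
        g[a].insert(b);
        g[b].insert(a);
    };
    link("Ann", "Ben");
    link("Ann", "Coworker");   link("Ben", "Coworker");
    link("Ann", "Schoolmate"); link("Ben", "Schoolmate");
    link("Ann", "YogaFriend"); link("Ben", "YogaFriend");
    std::cout << dispersion(g, "Ann", "Ben") << "\n"; // prints 3
}
```

Three mutual friends from one tight office clique would score zero; a coworker, a schoolmate and a yoga friend score three.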

Facebook’s interface appears to be the visual result of a user’s engagement with the platform, suggesting that they have full control over their timeline and profile. Uploading pictures, adding or blocking people and following or liking pages has an immediate effect on what is displayed. The other (hidden) half of the truth is that the content on a timeline is constructed from calculated assumptions about what is most relevant and interesting to the user, thus keeping them engaged with the service. In numbers, that’s 1.79 billion monthly active users (November 2016) and a multi-billion-dollar yearly revenue.
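
Facebook’s actual ranking system is proprietary and far more elaborate, but its publicly described early heuristic, EdgeRank, captured the principle: each story is scored by the viewer’s affinity with its author, a weight for the content type, and a recency decay. A toy version – all numbers and field names are made up – shows why a timeline is not simply chronological:

```cpp
#include <algorithm>
#include <cmath>
#include <iostream>
#include <string>
#include <vector>

// A candidate story for the timeline. All fields and numbers below are
// illustrative assumptions, not Facebook's actual ranking signals.
struct Story {
    std::string author;
    double affinity;   // how often the viewer interacts with this author (0..1)
    double typeWeight; // e.g. a photo might weigh more than a status update
    double ageHours;   // hours since the story was posted
};

// EdgeRank-style score: affinity x content-type weight x recency decay.
double score(const Story& s) {
    return s.affinity * s.typeWeight * std::exp(-s.ageHours / 24.0);
}

int main() {
    std::vector<Story> feed = {
        {"close friend", 0.9, 1.0, 20.0}, // older post from a strong tie
        {"acquaintance", 0.2, 1.5,  1.0}, // fresh photo from a weak tie
    };
    // The timeline shows the highest-scoring stories first, not the newest.
    std::sort(feed.begin(), feed.end(),
              [](const Story& a, const Story& b) { return score(a) > score(b); });
    for (const auto& s : feed)
        std::cout << s.author << " scores " << score(s) << "\n";
}
```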

Scientist B. J. Fogg refers to this concept as persuasive technology. “[It] applies elements of rhetoric (Kjaer Christensen & Hasle 2007) and conditioning to influence behaviour, mainly (so far) in the context of social networking websites and mobile computing”, writes Dan Lockton, who acknowledges this to be “[…] one of the main bodies of current work pertinent to the DwI concept […].”

Reflecting on the Design with Intent method, there are two different angles to the art of influencing someone’s behaviour. On the one hand, design elements play a leading role. On the other, rhetoric and persuasion can be necessary to change a person’s attitude a priori. Designers are faced with the responsibility of balancing these two approaches. In some cases, it is critical that users comply with the regulations of an object or system, especially in the fields of security, health and safety. Someone working in air traffic control should not be able to make decisions based on his or her attitude – as this could severely compromise the safety of passengers sitting in a jetliner on its landing approach. In this example, there is a relatively high cost of failure.

Designing smart consumer products is equally challenging, although the cost of failure appears to be comparatively low. Looking at Sarah Slocum’s experience with Glass underscores the responsibility to get it right.

A close-up of the elephant on the face.

Glass is a wearable computer for the human face. It basically works like a contemporary smartphone, except that it is mounted onto a frame, sits above the right eye and can be controlled by voice commands, i.e. hands-free. It was first presented, alongside a live demonstration, on 28th June 2012 at the annual, developer-focussed conference Google I/O.

Google determined two successive cycles for the release of Glass prototypes. Phase one started in February 2013. In order to get hold of Glass, prospective users had to be either selected by Google or invited by one of the initial Explorers. Today, anyone can put their name down to become part of The Glass Explorer Program – albeit on a waiting list. Furthermore, eligibility is restricted by age (18+), residency (US) and sufficient funds ($1,500).

In December 2013, Mat Honan wrote a retrospective report for WIRED about his experience wearing the tech item in question. It was tellingly titled: I, Glasshole: My Year With Google Glass. Honan refers to Glass as a class divide on the face. Given the conditions of ownership, that does not seem far-fetched. Being able to wear Glass automatically puts someone into a select, elite group of people.

At the same time, it is a very visible object. Brian Bergstein, editor of the MIT Technology Review, refers to it as the elephant on your face and thus throws the intentionally unobtrusive design right into the uncanny valley. Whenever a person decides to use the device, everyone close by will necessarily know. It follows that one cannot wear Glass incognito.

“[P]eople get angry at Glass,” Mat Honan recalls, “[t]hey get angry at you for wearing Glass.” He was openly called an asshole, even by co-workers (who passed by his standing treadmill desk). Surprisingly, they did not feel the need to justify this behaviour at all. In the course of Honan’s report, he explains that wearing Glass is not inherently awkward. It made the people he interacted with feel uneasy, and that in turn made him feel uneasy. This illustrates a vicious circle of failed human-to-human interaction.

Steve Mann is a computer scientist and professor at the University of Toronto. In 1999 he became known as the world’s first cyborg after attaching a computer to his body while simultaneously wearing a camera and monitor over one of his eyes. Sounds familiar.

He argued that this was the best way to interact with a computer. Brian Bergstein, after meeting Mann for a day in Toronto, replied that “[…] it was certainly a suboptimal way to interact with a fellow human being.” It struck Bergstein “[…] as fundamentally rude to put something in front of your eye but not let the other person see it—the equivalent of whispering a secret in front of someone else.”

Glass seems to fuel separation between people in a number of ways. For one, through exclusive access to the club of Explorers: owning the physical object constitutes a mark of distinction. Furthermore, using it puts individuals into a visual as well as an emotional bubble. Another reason for the aggression against Glass can be found by reading through the comments below the KRON 4 article about the Slocum incident: privacy. People are put off by the prospect of being recorded in public by Glass (or Sarah) against their will. However, since humans display an otherwise carefree interaction with invisible (e.g. street cameras) and visible (e.g. smartphones) technologies that compromise personal privacy, this explanation seems insufficient.

Two classified examples.

Inhabitants of contemporary cities live with the presence of certain pervasive technologies, such as smartphones or tablets. When walking over Westminster Bridge across the Thames in London, the chances of being captured in a random tourist’s picture are quite high. From my own observation, this leaves most of the involuntary subjects passive. If anything, people halt so as not to interrupt others taking a picture.

In June 2013, Edward Snowden, a former system administrator for the United States’ NSA (National Security Agency), disclosed classified documents about the global surveillance programs of his past employer to the public. Amongst other things, these documents shed light on the fact that online and mobile communications between humans are being recorded and that the location data of mobile devices are being tracked, non-stop, worldwide. One would imagine that people know they are living a transparent life and that there is no such thing as privacy in modern Western society.

While the first example of privacy infringement comes down to the visible activity of a human with a camera, the second is based on a largely undisclosed infrastructure. Glass combines both traits. As mentioned before, it is an object that catches a lot of attention based on its visual design features; its functions, however, are obscure. There are elements of uncertainty – for instance, what or whom it is directed at, and whether it is on or not. Uncertainty is an unsettling aspect of Glass.

Considering all of the above, and assuming that Glass was the result of a time-intensive product development process involving researchers, engineers, designers et al. – how did they get it so wrong? Or did they?

Rumor has it that Glass is actually a great tool for health professionals, e.g. allowing doctors to call up information while keeping their hands free for other tasks. Why not bring those use cases to the fore? Instead, Google promotes Glass through lifestyle-enhancing features.

In a Forbes magazine article from April 2014, Tarun Wadhwa called Glass a social experiment and wondered about Google’s curious marketing decisions.

“Augmented reality and wearables can provide significant benefits to our lives in certain situations. What’s strange is Google chose not to focus on this; instead of showing the many ways Glass can enrich the work of doctors, fire fighters, or extreme athletes, the company chose to let the technology loose into the world to see what happened.”

Wadhwa proceeds on the assumption that the company was aware of the risk but accepted the chance of a backlash from society.

Intentional engineering.

I would not agree that Glass is a social experiment. Google did not just put it out there to see what happens. I believe it was a deliberate decision and an inventive way to get feedback, which was supposed to be part of the production cycle from the outset.

“[T]here’s something really offensive about a super-rich company asking people to pay $1500 to beta-test their product.” – Dave Winer

However, from a purely economic point of view, that’s a brilliant plan. Glass is a prototype, yet Google released it nevertheless. To be fair, it is a prototype in the form of a shiny, new tech gadget, with a high but reasonable price tag – reasonable in the sense that it worked as an essential selection criterion to find the very first flight of Explorers. Google was neither looking for experienced professionals nor for the everyday user. The company was looking for enthusiasts.

Ed Sanders, the head of marketing for Glass, explains: “The high price point isn’t just about the cost of the device. We want people who are going to be passionate about it.” Simply giving it away to beta-testers wouldn’t have produced the same kind of self-selection effect: “[w]e wanted people who really wanted it.”

So, who really wanted it?

Jon Gottfried, for instance. He is a Twilio alumnus, StartupBus Conductor, co-founder of Major League Hacking and, as far as I’m concerned (after meeting him), an intriguing human being. Unsurprisingly, Glass wanted him too. He became one of the first independent developers to build applications for Glass, which also meant early access to every bit of related software and hardware that keeps the developer heart beating.

In Jon’s own words, developers love technology for the sake of technology. That’s not all: there is also the prospect of being first. New technologies such as Glass give developers the opportunity to enter unknown territory and, as cheesy as it sounds, make history.

“As developers, we have the unique opportunity to quite literally define the experiences that consumers have with technology.” – Jon Gottfried

This statement hits the nail on the head. Calling the Design with Intent method back to mind, its primary goal is to redesign an existing product or service so that it changes user behaviour into a target user behaviour. Google skipped the redesign loop by releasing a prototype of Glass and motivating an independent group of enthusiasts to define its applications and the corresponding behaviour themselves. I’m calling this approach intentional engineering. Another really interesting aspect of intentional engineering is the transfer of power over a company’s output to a broad, independent community – in this case, a community that can code.

Jon Gottfried goes as far as to say that developers have the capacity to “define the success or failure of an entire product line. If they innovate and create amazing experiences, it can pave the way for mass consumer adoption of a product, and if they fail or are mistreated by their platform providers, they can create a product wasteland. It is a symbiotic relationship.”

It is safe to assume that Google is well aware of this relationship. As a consequence, Glass was introduced in the manner described. With that in mind, this essay is a tale of a company that risked social backlash in favor of an empowered and likewise excited developer community that is capable of creating attractive applications.

One conclusion and a non-exhaustive list.

As previously mentioned in the discussion of the Design with Intent method, some procedures and applied techniques from varied design disciplines overlap. Practitioners benefit from leaping over the fences to exchange knowledge and have an open dialogue. For the most part, software developers have been missing from these conversations.

To conclude this essay, the next section introduces a compilation of concepts and characteristics from the habitat of software developers that could be valuable for professionals working in the field of design (hint: sustainable design).

Strong communities.

Software developers build strong networks to socialise but, more importantly, to help each other solve specific problems, as well as to work collaboratively on code-related mysteries – just for fun. From the early days of the internet, they have met in online forums to ask questions, exchange code or put their most recent software up for discussion or applause.

Today, while significantly greater in number, they continue to do so with nobody left behind. “[W]e now have the bright, bold, user-friendly colors of the social web, where the current generation of coding wizards can connect with seasoned veterans to brainstorm the future of the Internet”, writes Matt Silverman for Mashable.

Amongst others, GitHub is an online platform that knows how to harness the natural need of coders to congregate. It is the largest hosting service for code in the world. While the commercial half of the company offers private repositories, the other half consists of free online space for open source projects. In GitHub’s own words, they have been able to attract over six million users to build amazing things together. Major League Hacking is another on- and offline platform of interest. It has set itself up to create a system that brings value to (collegiate) coders and benefits them beyond their mere participation in hackathons around the globe. It fulfils this intention by transforming the landscape of scattered hackathon events into the well-known format of sports leagues. Continuing along the lines of this metaphor, hackathons would be the sports grounds of coding.

Creating spaces.

Warsaw, Poland saw the first Makerland open its doors in March 2014. It was a hybrid event – half conference, half workshop. Makerland was unique in the way it created space for people to meet and build relationships. Often enough, attendees of conferences find themselves in a dark lecture theatre, hiding behind the screens of their laptops, and instead of engaging physically or mentally, resort to answering work emails.

Makerland achieved the opposite. Talks were scheduled for the morning hours, leaving the afternoons free for collaboration, tinkering or (in my case) spontaneous encounters and extensive coffee-machine conversations. Kuba Kucharski, one of the organisers, sums it up by saying that “Makerland is a community. When I think Makerland, I think all those people here together.” To this day, attendees are still in touch with each other. Even though they are based in different corners of the world, they still find a way and make time to communicate.

Makerversity in London’s Somerset House is a permanent physical institution that fosters encounters. Based on the concept of co-working spaces, Makerversity challenges and transforms the concept of conventional work environments. It manages to attract a diverse crowd. Thus, designers might find themselves sitting at a desk next to a software developer while someone else in the corner prints a thing on a 3D printer. The sheer potential for synergy and the chance for collaboration are created by the space itself, not by deliberate planning.

Providing tools.

As mentioned before, GitHub is a hosting service; however, the website can also be used for knowledge exchange and collaborative work on open source projects. Stack Overflow is another online platform and tool. It aims to establish a library of answers for every possible programming-related question. Codecademy gives people the opportunity to “[l]earn to code, interactively, for free.”

While these institutions, platforms and tools help people build and develop their programming skills, there are other physical objects that serve to create more tangible outputs. Arduino is a tool (as well as an open-source electronics platform) that simplifies the way people can communicate with a microcontroller. Arduino performs tasks such as making an LED blink, but can also be used to develop complex prototypes that interact with input from environmental sensors.
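
To make that simplicity tangible, here is a version of Blink, the canonical first Arduino sketch that ships as an example with the Arduino IDE – the hello world of physical computing:

```cpp
// Blink: the canonical first Arduino sketch, as shipped with the Arduino IDE.
// LED_BUILTIN refers to the small LED soldered onto most Arduino boards.

void setup() {
    pinMode(LED_BUILTIN, OUTPUT);    // configure the LED pin as an output
}

void loop() {
    digitalWrite(LED_BUILTIN, HIGH); // switch the LED on
    delay(1000);                     // wait one second
    digitalWrite(LED_BUILTIN, LOW);  // switch it off
    delay(1000);                     // wait again; loop() then repeats
}
```

A dozen lines, immediate gratification – the axe and the chopped wood all over again.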

For one, the success of these tools can be measured by the extent to which people make use of them. Furthermore, people use programming tools to create new tools that someone else might want to use – and buy, for instance from Tindie, a marketplace for all sorts of hardware built by independent innovators. These tools create dynamics that are in many ways more exciting and self-sustaining than a block of post-it notes and a pen.

Having fun.

In September 2013, just before starting my Master’s degree in Sustainable Design, I attended an event called FutureFest. One quote from Jaan Tallinn, programmer and co-founder of Skype, stuck with me in particular:

“It is in everyone’s interest to have the world shaped by people who enjoy and appreciate life.”