How design can make tech products less addictive


It’s the summer of 2018, the summer of Fortnite, and we all know we are addicted. Addicted to email, Snapchat, Instagram, Fortnite, Facebook. We swap outdoor time on the trail for indoor time around the console. Our kids log into Snapchat every day on vacation to keep their streaks alive and then get lost in the stream.

We move less and watch more. The rise of the smartphone, in particular, tipped the balance. It is now our omnipresent companion, to the point that in research studies, subjects prefer electric shocks to being left, deviceless, to their own devices. Needless notifications flood us on date nights, at family time, and at sports events, invariably when we are supposed to be in the moment. And then there's Netflix, guiding us into insomnia and sleep deprivation as we blissfully binge watch, willfully ignoring the fact that even small losses of sleep can worsen depression and significantly impair cognitive functioning. A growing pile of evidence suggests that our obsessive use of tech products is diminishing the most important parts of our lives: our relationships with family and friends, our work lives, and our physical and mental health.

For the technology companies, of course, dependency has been at the core of product design. These companies have knowingly used techniques from cognitive science to capture and hold our attention. That's not entirely negative: the point of any product design is to make the product easy and compelling to use. But with soft drinks, cigarettes, and gambling, for example, there is at least some acknowledgement of the negative impacts; the makers of our devices and apps have offered no comparable admission. Perhaps more responsibility should rest with them, and they should make design changes that help people live more healthfully with their tech.

How can we redesign technology to better respect choice, reduce technostress, and foster creative and social fulfillment? The ideal solution would be easy to implement and customize, and easy to apply across devices and platforms. It would have a centralized user account that lets you customize all your interactions and notifications, and to which all applications would refer for guidance and permission. It would be, in other words, a true user agent: an intermediary that brokers our attention and enforces our rules about when and how it may be elicited. The concept of such a user agent has been discussed repeatedly in the industry but has never been instituted. Given our growing collective discontent, our epidemic loneliness, and our declining productivity, the time may have come when such a solution is no longer simply ideal, but essential.
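To make the idea of a user agent concrete, here is a minimal sketch in Python of an attention broker that every app would consult before interrupting us. Everything here, the class name, the rule shapes, the quiet-hours and daily-budget mechanics, is our own illustrative assumption, not an existing system:

```python
from datetime import datetime

# A hypothetical "user agent" that brokers attention: every app asks it
# for permission before interrupting, and it answers from one central
# set of user-defined rules.
class AttentionBroker:
    def __init__(self):
        self.rules = {}        # per-app: (quiet hours, daily interruption budget)
        self._used_today = {}  # interruptions already spent per app

    def set_rule(self, app: str, quiet_hours: range, daily_budget: int) -> None:
        self.rules[app] = (quiet_hours, daily_budget)
        self._used_today[app] = 0

    def may_notify(self, app: str, now: datetime) -> bool:
        """Apps call this before showing any notification."""
        quiet_hours, budget = self.rules.get(app, (range(0, 0), 0))
        if now.hour in quiet_hours or self._used_today.get(app, 0) >= budget:
            return False
        self._used_today[app] += 1
        return True

broker = AttentionBroker()
broker.set_rule("chat_app", quiet_hours=range(21, 24), daily_budget=5)
print(broker.may_notify("chat_app", datetime(2018, 7, 2, 14, 30)))  # True
print(broker.may_notify("chat_app", datetime(2018, 7, 2, 22, 0)))   # False: quiet hours
```

The design choice that matters is the single point of control: apps receive a yes or no, while the rules, and the data behind them, stay with the user.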

Ultimately, such an agent will have to be habit-forming technology. It will have to take all the techniques that Silicon Valley’s “user-experience designers,” say, at Facebook and Netflix, have used in forming destructive habits and invert them. We need good magic. We need technology to enhance chronic focus rather than bombard us with chronic distraction; to encourage beneficial habits rather than motivate us to pursue pathological addictions; to promote productivity, connectedness, creativity, spontaneity, and engagement rather than cheap facsimiles of those qualities. The well-lived life, which has never been further from our reach, is one that good technology design could and should make more straightforwardly and universally attainable than ever before.

Applications like Moment, Siempo, Unglue, Calendly, and SaneBox are aiming to deliver that kind of beneficial magic and focus enhancement. They seek to reduce the frictions that stand between us and focus by batching notifications, setting limits on phone usage, and otherwise helping us control our relationships with our devices. Most of the mechanisms that inhibit or destroy our focus create stress, unhappiness, or regret once they become too interruptive.

In sympathy with Tristan Harris's user-rights manifesto, we have a vision of a technology world that works for humans rather than against them, one in which every company treats the long-term health and benefit of its users as an imperative design consideration. Even if it meant less profit in the short term, companies would restrain themselves from inducing patterns of destructive overconsumption. We propose that this would work as follows.

First, technology makers would define patterns that suggest problem use (preferably without identifying problem users as individuals). Such patterns might include spending an inordinate amount of time with the product, spending too much money, or regularly exhibiting unhealthy behaviors such as binge watching. When a user's behavior matched such a pattern, the product would respond differently, offering help in altering it. This may seem like a patronizing approach, but we would wager that, given the option, many people would welcome the help.
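As a concrete illustration, here is a minimal sketch in Python of how such pattern detection might work. The thresholds, field names, and the UsageStats structure are hypothetical assumptions for illustration, not anything a particular company has published:

```python
from dataclasses import dataclass

# Hypothetical per-user usage summary; a real product would aggregate
# this without singling out individuals.
@dataclass
class UsageStats:
    minutes_per_day: float          # average daily time in the product
    dollars_per_month: float        # in-app or in-service spending
    longest_session_minutes: float  # proxy for binge sessions

# Illustrative thresholds only; real values would come from research.
TIME_LIMIT = 120
SPEND_LIMIT = 100
BINGE_LIMIT = 180

def suggests_problem_use(stats: UsageStats) -> bool:
    """Return True if usage matches any problem-use signal."""
    return (
        stats.minutes_per_day > TIME_LIMIT
        or stats.dollars_per_month > SPEND_LIMIT
        or stats.longest_session_minutes > BINGE_LIMIT
    )

if suggests_problem_use(UsageStats(150, 20, 45)):
    # Here the product would switch modes: surface help, offer limits,
    # or quiet its own notifications, rather than pushing for more use.
    print("Offer the user help in altering this pattern.")
```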

In a work context, we might see Slack warning heavy users not only that they need not keep desktop notifications enabled, but also that they are among the heaviest senders of GIFs or messages. Email providers might offer batch receiving of messages to their users who tend to respond the most quickly (which could indicate a compulsion to check for and respond to messages). Or every email client could offer batch receiving as its default mode, or simply ask us each morning how many times we want to check email that day (in a Siri-like voice, of course).
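To make the batch-receiving idea concrete, here is a minimal sketch, assuming a hypothetical client that holds incoming mail and releases it at a few fixed times a day. The class, schedule, and method names are illustrative, not any real provider's API:

```python
from datetime import datetime, time

# Hypothetical delivery schedule: three email "deliveries" a day,
# closer to the cadence of postal mail than to a constant drip.
BATCH_TIMES = [time(9, 0), time(13, 0), time(17, 0)]

class BatchingInbox:
    """Holds messages and releases them only at scheduled batch times."""

    def __init__(self):
        self._pending = []

    def receive(self, message: str) -> None:
        # New mail is queued silently; no notification fires here.
        self._pending.append(message)

    def deliver_if_due(self, now: datetime) -> list:
        # Release the queue only when the clock matches a batch slot.
        if any(now.hour == t.hour and now.minute == t.minute for t in BATCH_TIMES):
            batch, self._pending = self._pending, []
            return batch
        return []

inbox = BatchingInbox()
inbox.receive("Quarterly report attached")
print(inbox.deliver_if_due(datetime(2018, 7, 2, 10, 15)))  # [] - held until 13:00
print(inbox.deliver_if_due(datetime(2018, 7, 2, 13, 0)))   # delivers the batch
```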

For consumers of video, product designers at Netflix and YouTube, for example, would make auto-play an opt-in function. In fact, opt-in would become the standard product design rather than opt-out. And when designers did choose to deploy opt-out, they would let people opt out easily whenever the feature was active: every auto-playing video, for example, would display a "Stop Auto-Play" button alongside it. That might slow consumption, but then again it might help all of us feel more in control, be more productive, and be more loyal customers.
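As a sketch of what opt-in-by-default could look like in a player's settings, with names and defaults invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PlayerSettings:
    # Opt-in by default: auto-play stays off until the user turns it on.
    autoplay_enabled: bool = False
    # The escape hatch stays visible whenever auto-play is active.
    show_stop_autoplay_button: bool = True

def on_episode_end(settings: PlayerSettings) -> str:
    if settings.autoplay_enabled and settings.show_stop_autoplay_button:
        return "queue next episode, with a visible Stop Auto-Play button"
    if settings.autoplay_enabled:
        return "queue next episode"
    return "show a 'Play next episode?' prompt and wait"

print(on_episode_end(PlayerSettings()))  # waits for an explicit choice
```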

But how can initiatives such as these be given teeth and a profit motive? We are hopeful that, in some cases, the profit motive will take care of itself. Both Netflix and LinkedIn have cracked that nut, as have Spotify and numerous other subscription-based technology businesses. With a subscription model, inducing massive consumption beyond a certain point becomes counterproductive to customer satisfaction; we suspect that these businesses know exactly where that threshold lies.

And, yes, those platforms are now just as guilty of the same attention-grabbing offenses as the free platforms. But they have the benefit of paid users and a willingness to put a value on attention, participation, and services rendered.

The challenge is to price attention, participation, and customer satisfaction and loyalty in the attention economy.

So how might this work? Imagine that a Facebook bill looked like a regular mobile-phone bill, with a set of à la carte services. We could opt in or out of those services (no ads in our feed, say, or a "Focus" button on our homepage that blocks all notifications) and pay for them as features.
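A minimal sketch of such an à la carte menu, with feature names and monthly prices that are purely invented:

```python
# Hypothetical feature menu; names and monthly prices are invented.
FEATURES = {
    "ad_free_feed": 4.99,         # no ads in the feed
    "focus_mode": 1.99,           # one-tap blocking of all notifications
    "batch_notifications": 0.99,  # notifications delivered in batches
}

def monthly_bill(selected: set) -> float:
    """Sum the monthly prices of the features the user opted into."""
    return round(sum(FEATURES[name] for name in selected), 2)

print(monthly_bill({"ad_free_feed", "focus_mode"}))  # 6.98
```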

We realize that charging users is exceptionally difficult and is probably not going to happen with Facebook or Google; it will more likely be the next entrant that cracks this model. But we can point to one example in corporate America where businesses are showing exceptional ability to put a price on such fuzzy costs: benefit corporations (B Corps), whose ranks are growing quickly. Some extremely profitable and successful brands, such as Patagonia, Athleta, and Allbirds, have become B Corps, and technology companies don't even run factories full of low-paid workers. The tech elites would find it at least as simple to enact a similar ethos of ensuring that the products and services on offer do no harm and are in the best interests of society.

The B Corp validation and rating process could easily incorporate a set of values and measurements specifically designed for technology companies. For example, to be rated as a B Corp, a tech company would have to let users unsubscribe from its service in no more than three clicks, without sending an email or making a phone call. (California just passed a state law mandating precisely this.) The government of China requires game companies to warn users beyond a certain number of hours of play; B Corp tech companies would likewise have to warn users that their usage might be unhealthy after they averaged more than, say, two hours of use per day over a week.
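As a sketch of that last rule, here is how the two-hours-a-day-over-a-week check might be computed; the function name, threshold, and data shape are our own assumptions:

```python
# Hypothetical weekly check against the two-hours-per-day rule above.
DAILY_LIMIT_MINUTES = 120

def should_warn(daily_minutes: list) -> bool:
    """Warn when average daily use over the past week exceeds the limit."""
    if len(daily_minutes) != 7:
        raise ValueError("expected one entry per day of the week")
    return sum(daily_minutes) / 7 > DAILY_LIMIT_MINUTES

week = [95, 180, 140, 210, 75, 160, 130]  # minutes per day, Mon-Sun
if should_warn(week):  # the average is about 141 minutes, so this fires
    print("Heads up: you averaged over two hours a day this week.")
```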

We know now that the major technology companies are considering how to make it easier for users to control the way they interact with those products. Both Apple and Google announced new sets of features for the iOS and Android operating systems, respectively, designed to allow users to better control their experience (and Apple is finally adding robust tools for parents to better monitor and control their children’s technology consumption). Facebook is planning to release a new feature that will help users monitor their own usage of the network.

Whether the tech giants can truly use their product-design superpowers to help users build healthy long-term relationships with technology remains to be seen. A core tenet of behavior design is reducing barriers to the desired behavior as a means of maximizing that behavior. For a long time, the desired behavior has explicitly been bingeing and mindless consumption; the economics of these companies, driven by the attention economy, made it so. No, we're not going to stop using search engines and social media or streaming movies. Nor should we. But maybe, just maybe, the companies that can more closely align with user needs (those that sell products to users, like Apple and Netflix, rather than to advertisers) can lead the way. There is no free lunch, ever: free services are paid for with our attention. The same goes for seemingly lucrative lines of business built on behaviors that, frankly, the creators of these same technologies would prefer not to encourage to excess in their own families and friends.

This is an extract from my new book, Your Happiness Was Hacked: Why Tech Is Winning the Battle to Control Your Brain—and How to Fight Back, coauthored with Alex Salkever.
