Lifestyle Computing: VisibleNation and the Machine Quest to Improve Human Lives

Why taking advice from a machine is probably a bad idea.


VisibleNation's new service offers advice via consensus. There's just one problem: machines don't have feelings.

Written by Michael Thomsen (@mike_thomsen)


Before you can deceive someone, you need to convince them they need something from you. This is best accomplished by insinuating some general inadequacy in them and suggesting you know how to fix it, a kind of charlatanism for which the Internet is ideally suited. It presents a user with a mass of consensus opinions; any counterargument a person might make about their own adequacy is subsumed in an avalanche of supporting data to the contrary.

The recently launched VisibleNation opens with a simple but broad claim: its service has something you'll need. The web application ties into a user's Facebook or Twitter account and presents a series of questionnaires organized into categories like Finance, Career, Health, Education, and Travel. After answering all the questions in a category, you're shown a series of charts that compare your answers to those of all other users. The stated purpose of VisibleNation seems entirely philanthropic: "to fundamentally change the way people look at the world, each other and make decisions."

The service claims to be "growing and maintaining the world's largest continuous personal lifestyle tracking service," which it says will empower users to make better-informed decisions about their lifestyles. VisibleNation is not a tool for questioning the validity of the institutions that delimit what we think of as possible when imagining our ideal lives. Instead, its limited pull-down menus, sliding-scale indicators, and multiple-choice questions guide users through a loosely modeled life in which the only choices for meaningful change concern individual lifestyle. All sources of discomfort or unhappiness come from a statistical disparity between one's choices and those of everyone else.
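To make the mechanics concrete, here is a minimal sketch of the comparison those charts imply: each answer is reduced to a percentile within the pool of everyone else's answers. The function and data below are hypothetical illustrations, not anything taken from VisibleNation's actual code.

```python
# Hypothetical sketch: "how am I doing?" becomes "where do I sit
# in the distribution of everyone else's answers?"

def percentile(user_answer, pool):
    """Percentage of the pool whose answer falls below the user's."""
    below = sum(1 for other in pool if other < user_answer)
    return 100.0 * below / len(pool)

# Toy pool of other users' monthly savings, in dollars.
pool = [0, 50, 120, 200, 310, 450, 800, 1500]
print(f"You save more than {percentile(250, pool):.0f}% of users")
# -> You save more than 50% of users
```

Whatever nuance sat behind the original answer, the output is a single rank against the crowd.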

It reduces advice-taking to a matter of personal reflection before an anonymous body of statistics. Instead of taking input from a person whose suggestions can be put in the context of an intimate relationship and judged against an understanding of that person's personality and habits, decision-making powered by collective data becomes a process of aligning with consensus. The lifestyle shortfalls are your own, the objective data says, but you can fix yourself with the right kind of empowerment. And though there is no acknowledged plan for making a profit, it seems clear that VisibleNation is designed around first creating an incentive for users to contribute to a data pool that will eventually become a sellable asset.

It's hard not to wonder whether this move toward asking depersonalized mechanisms to guide us will seem like the mesmerism of our era in a few years. A group of researchers at Microsoft Research Asia this week revealed an algorithm that helps people navigate the awkward task of trying to friend someone they have a professional interest in but who isn't in their immediate circle of friends. The algorithm, which the group calls STINA (Selective Invitation with Tree and In-Node Aggregation), takes a target and charts out the chain of friend requests that gives the requester the greatest chance of being accepted. The group claims the algorithm has so far produced significantly better results than test subjects operating on their own. It would be easy to take these findings as proof that humans need machine intervention to lead better lives, but that would be to forget that the test medium is itself a mechanical platform.
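The mechanics are worth spelling out, because they are so well suited to a machine. Below is a minimal sketch of the kind of search involved; the graph, names, and probabilities are invented for illustration, and the actual STINA method aggregates acceptance probabilities over invitation trees rather than single chains.

```python
import heapq
import math

# Toy social graph: each directed edge (a -> b) carries an estimated
# probability that b accepts a friend request from a. Finding the chain
# with the highest joint acceptance probability becomes a shortest-path
# problem once each probability is converted to a -log(p) edge weight.

def best_invitation_chain(graph, source, target):
    """Return (chain, joint_probability) for the most promising chain."""
    dist = {source: 0.0}   # smallest accumulated -log(p) seen so far
    prev = {}              # back-pointers for path reconstruction
    heap = [(0.0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if node == target:
            break
        if cost > dist.get(node, math.inf):
            continue       # stale heap entry
        for neighbor, accept_prob in graph.get(node, {}).items():
            new_cost = cost - math.log(accept_prob)
            if new_cost < dist.get(neighbor, math.inf):
                dist[neighbor] = new_cost
                prev[neighbor] = node
                heapq.heappush(heap, (new_cost, neighbor))
    if target not in dist:
        return None, 0.0
    chain, node = [target], target
    while node != source:
        node = prev[node]
        chain.append(node)
    chain.reverse()
    return chain, math.exp(-dist[target])

graph = {
    "you": {"coworker": 0.9, "acquaintance": 0.5},
    "coworker": {"manager": 0.7},
    "acquaintance": {"executive": 0.3},
    "manager": {"executive": 0.6},
}
print(best_invitation_chain(graph, "you", "executive"))
# -> (['you', 'coworker', 'manager', 'executive'], 0.378...)
```

Maximizing a product of probabilities by minimizing a sum of negative logs is a textbook reduction, and that is the point: the task is easy for a machine precisely because the social network has already flattened people into nodes and edge weights.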


Machines are more efficient at understanding one another's systemic limits and branching possibilities. When the roles are reversed and machines are asked to operate with the flexibility and nuance expected of humans, the results can be embarrassing. Apple's autocorrect software has spawned its own subculture of comic misunderstanding, and Google Voice's voicemail transcriptions have produced a similar flourishing of computerized nonsense, like "Hi. This is Fedex hi ken says, okay i can move it to you because this phone number from the mother of."

When computers flounder in the human realm, we excuse them, acknowledging that human language and social exchange are overwhelmingly complex and nuanced. When humans flounder within the limitations of computers, we assume the human is to blame and should be rerouted onto some path of self-improvement. Bizarrely, many see the inefficiencies of human behavior in synthetic systems as cause to develop yet another synthetic add-on to help the human operate the machine a little more smoothly.

There is something incredibly frightening about the conditions underlying Microsoft's research: the most important people in our lives are increasingly distant figures, often strangers, whose approval we seek for work, advancement, and mentorship. The corollary is that there are fewer and fewer people in our buildings, neighborhoods, and immediate circles of friends whom we rely on to validate us, help us advance, and find us meaningful work. We see the irreducibly complex people who love us and live with us as increasingly useless, while the data apparitions of distant strangers suggest models for how we might one day be better. Now a brood of digital opportunists is eagerly building technical intermediaries to help make us better, after the old intermediaries helped convince us we weren't good enough. This is problem-solving through the creation of new problems, a process that will eventually leave us back where we started.
