
Tuesday, August 21, 2018

BadTech Is the Next New Thing

Forget about Martech, Adtech, or even Madtech. The next big thing is BadTech.

I’m referring to the backlash against big tech firms – Google, Amazon, Apple, and above all Facebook – that have relentlessly expanded their influence on everyday life. Until recently, these firms were mostly seen as positive, or at least benignly neutral, forces that made consumers’ lives easier. But something snapped after the Cambridge Analytica scandal last March. Scattered concerns became a flood of hostility. Enthusiasm curdled into skepticism and fear. The world recognized a new avatar of evil: BadTech.

As a long-standing skeptic (see this from 2016), I’m generally pleased with this development. The past month alone offers plenty of news to alarm consumers:
There's more bad news for marketers and other business people:
Not surprisingly, consumers, businesses, and governments have reacted with new skepticism, concern, and even some action:
But all is not perfect.
  • BadTech firms still plunge ahead with dangerous projects. For example, despite the clear and increasing dangers from poorly controlled AI, it’s being distributed more broadly by eBay, Salesforce, Google, and Oracle.
  • Other institutions merrily pursue their own questionable ideas. Here we have General Motors and Shell opening new risks by connecting cars to gas pumps. Here – this is not a joke – a university is putting school-controlled Amazon Echo listening devices in every dorm room.
  • The press continues to get it wrong. This New York Times Magazine piece presents California’s privacy law as a triumph for its citizen-activist sponsor, when he in fact traded a nearly-impossible-to-change referendum for a law that will surely be gutted before it takes effect in 2020.
  • Proponents will overreach. This opinion piece argues the term “privacy policy” should be banned because consumers think the label means a company keeps their data private. This is a side issue at best; at worst, it tries to protect people from being lazy. Balancing privacy against other legitimate concerns will be hard enough without silly distractions.
So welcome to our latest brave new world, where BadTech is one more villain to fear. It's progress that people recognize the issues, but we can't let emotion overwhelm considered solutions. Let’s use the moment to address the real problems without creating new ones or throwing away what’s genuinely good. We can't afford to fail.

Monday, May 07, 2018

The Black Mirror Episode You'll Never

I’m no fan of the TV show Black Mirror – the plots are obvious and the pace is excruciatingly slow. Nevertheless, here’s a story for consideration.

Our tale begins in a world where all data is stored in the cloud. This means people don’t have their own computers but can instead log into whatever machine is handy wherever they go.

All is lovely until our hero one day notices a slight error in some data. This is supposed to be impossible because the system breaks every file into pieces that are replicated millions of times and stored separately, blockchain-style. Any corruption is noted and outvoted until it’s repaired.

As he investigates, our hero finds that changes are in fact happening constantly. The system is infected with worms – we’ll call them snakes, which has nice Biblical overtones about corruption and knowledge – that move from node to node, selectively changing particular items until a new version becomes dominant. Of course, no one believes him and he is increasingly ignored because the system uses a reputation score to depreciate people who post information that varies from the accepted truth. Another security mechanism hides “disputed” items when they have conflicting values, making it harder to notice any changes.
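For fun, here’s a toy sketch of these two mechanisms at work (my own illustration, in Python – nothing from the show): majority-vote repair heals an isolated corruption, but a snake that flips replicas one at a time eventually owns the majority, and the repair logic then entrenches the forgery.

```python
from collections import Counter

def repair(replicas):
    """Majority-vote repair: the most common value wins and
    dissenting replicas are overwritten to match it."""
    majority, _ = Counter(replicas).most_common(1)[0]
    return [majority] * len(replicas)

# An isolated corruption is noted and outvoted, as designed...
shard = ["alpha", "alpha", "omega", "alpha", "alpha"]
print(repair(shard))   # five copies of "alpha"

# ...but a snake that quietly flips replicas one node at a time
# eventually holds the majority, and the repair mechanism itself
# then propagates the new "truth" to every copy.
for i in range(3):
    shard[i] = "omega"
print(repair(shard))   # five copies of "omega"
```

The point of the sketch is that the system’s defense and its vulnerability are the same code path: once the snakes control more than half the replicas, the repair step does their work for them.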

I’m not sure how this all ends. Maybe the snakes are controlled by a master authority that is altering reality for its own purposes, which might be benevolent or not. The most likely result for our hero is that he’s increasingly shunned and ultimately institutionalized as a madman. Intuitively, I feel the better ending is that he ends up in a dreary-but-reality-based society of people who live outside the cloud-data bubble. Or perhaps he himself has been sharded and small bits begin to change as the snakes revise his own history. I can see a sequence of split-second images that illustrate alternate versions of his story co-existing. Perhaps the best ending is one that implies the controllers have decided the episode itself reveals a truth they want to keep hidden, so they cut it off in mid

Monday, January 09, 2017

Artificial Intelligence, Virtual Reality, and Government Control: Perfect World or Perfect Storm?

If it weren’t the print edition, I would have sworn today’s New York Times business section had been personalized for me: there were articles on self-driving cars, virtual reality, and how “Data Could Be the Next Tech Hot Button”. That precisely matches my current set of obsessions. It’s especially apt because the article on data makes a point that’s been much on my mind: government regulation may be the only factor that prevents AI-powered virtual reality from taking over the world, and governments may feel impelled to create such regulation in self-defense of their authority. The Times didn’t make that connection among its three articles.  But the fact that all three were top of mind for its editors and, presumably, readers was enough to illustrate their importance.

I’m doubly glad that these articles appeared together because they reinforced my intent to revisit these issues in a more concise fashion than my rambling post on RoseColoredGlasses.Me. I suspect the thread of that post got lost in self-indulgent exposition. Succinctly, the key points were:

- Virtual reality and augmented reality will increasingly create divergent “personal realities” that distance people from each other and the real world.

- The artificial intelligence needed to manage personal reality will be beyond human control.

- Governments may recognize the dangers and step in to prevent them. 

Maybe these points sound simplistic when stated so plainly. I’m taking that risk because I want to be clear. But some depth may add credibility, so let me expand on each point just a bit.

- Personal reality. I covered this pretty well in the original post, and current concerns about “fake news” and “fact bubbles” make it pretty familiar anyway. One point that I think does need more discussion is how companies like Facebook, Google, Apple, and Amazon have a natural tendency to take over more and more of each consumer’s experience. It's a sort of “individual network effect”: the more data one entity has about an individual, the better job it can do of giving that person the consistent experience they want. This in turn makes it easier to convince individuals to give those companies control over still more experiences and data. I’ll stress again that no coercion is involved; the companies will just be giving people what they want. It’s pitifully easy to imagine a world where people live Apple- or Facebook-branded lives that are totally controlled by those organizations. The cheesy science fiction stories pretty much write themselves (or the computers can write them for us). Unrelated observation: it's weird that the discussions Descartes and others had about the nature of reality – which sound so silly to modern ears – are suddenly very practical concerns.

- Artificial intelligence. Many people are skeptical that AI can really take control of our lives. For example, they’ll argue that machines will always need people to design, build, and repair them. But self-programming computers are here or very close (it depends on definitions), and essential machines will be designed to be self-repairing and self-improving.  Note that machines taking control doesn't require malevolent artificial intelligence, or artificial consciousness of any sort. Machines will take control simply because people let them make choices they can’t predict or understand. The problem is that unintended consequences are inevitable and for the first – and quite possibly the last – time in history, there will be no natural constraints to limit the impact of those consequences. Random example: maybe the machines will gently deter humans from breeding, something that could maximize the happiness of everyone alive while still eliminating the human race. Oops. 

- Government intervention. Will governments decide that some shared reality is needed for their countries to function properly? How closely will they require personal reality to match actual reality (if they even admit such a thing exists)? Will they allow private business to manage the personal reality of their citizens? Will they limit how much personal reality can be delivered by artificial intelligence? These issues all relate to questions of control. Although there’s an interesting theory* that the Internet has made it impossible for any authority to maintain itself, I think that governments will ultimately impose whatever constraints they need to survive on individuals, companies, and the Internet. This probably means governments will enforce some shared reality, although it surely won't match actual facts in every detail. It’s less certain that governments will control artificial intelligence, simply because the benefits of letting AI run things are probably irresistible despite the known dangers.

So, is the choice between having your reality managed by an authoritarian government or by an AI? Let's hope not.  I prefer a world where people control their own lives and base them on actual reality.  That’s still possible but it will take coordinated hard work to make it happen.


___________________________________________________________________________________
*For example, Martin Gurri’s The Revolt of the Public












