I just ran across the California writer and journalist Scott Allan Morrison (who is apparently just launching his new book, entitled “Terms of Use”), and this statement really hit home with me: “There would be nothing inherently wrong with this if we could be absolutely certain the companies that control this technology will act only in our best interests. But if not, we could all be susceptible to manipulation by powerful systems we couldn’t possibly understand. Some academics have even raised the specter of techno-social engineering and questioned whether we are moving into an age in which ‘humans become machine-like and pervasively programmable.’”
This meme really struck a chord with me: what if we do become programmable? Read on: Techno-social engineering is freaking insiders out (via Boing Boing)
What if one of the big social networks started offering background checks that predicted and ranked the suitability of job applicants based on each candidate’s data set – regardless of whether the information was “public” or not?

Many of us are starting to use wearable computers on our wrists. What if your insurance company could marry your biometric data with your health history and genetic profile and was able to, for example, predict you were 10 times more likely than average to suffer a heart attack? Might you one day be required by your insurer to live a certain lifestyle in order to minimize its financial risk?

Another contact, who did classified work for one government agency (he couldn’t possibly say which one), offered a different but equally chilling twist. Sooner or later, he predicted, we will all come to fully understand that we won’t be able to say, search, browse, buy, like, watch or listen to anything without our actions and thoughts being sliced, diced, and churned through powerful analytical systems. And then what? Will we, creeped out and perhaps a little afraid, start to second-guess our every move? Will we self-censor our speech and behavior to avoid being labeled?

The profit-driven companies that dominate the Internet insist the trust of their users is of paramount importance to them. And yet, these are often the same companies that keep moving privacy goalposts and rewriting their terms of use (or service) to ensure they enjoy wide latitude and broad legal protection to use our data as they see fit.

Yes, some of these scenarios seem pretty far out there. But not to some of the Silicon Valley insiders I count as friends and contacts. They understand the consequences – certainly better than I do – should these powerful technologies be misused. And I couldn’t have written Terms of Use without them.
What IF AI gets exponentially better at understanding language, visuality and unstructured data?
What IF Google's quantum computing experiments become reality in the near future?