Technology Seeks Society’s Forgiveness, Not Permission
A key difference between today’s and past transformations is that technological evolution has become much faster than the existing regulatory, legal, and political framework’s ability to assimilate and respond to it. It’s a Moore’s Law world; we just live in it.
Disruptive technology isn’t entirely new. Back in the days of the robber barons, the ruthless capitalists of the early United States built railroads without seeking political permission. And, more recently, in the personal-computer revolution, company employees brought their own computers to work without telling their I.T. departments. What is new is the degree of regulatory and systemic disruption that the savviest companies in this technology revolution are causing by taking advantage of the technology triad of data connectivity, cheap handheld computers, and powerful software to grab customers and build momentum before anyone can tell them to stop what they are doing.
In 2010, Uber had no market share in providing rides to the U.S. Congress and their staffs. By 2014, despite the power of these political constituencies, Uber’s market share among Congress was a stunning 60 percent. Talk about regulatory capture. Companies such as Uber, Airbnb, and Skype play a bottom-up game to make it nearly impossible for legacy-entrenched interests and players to dislodge or outlaw newer ways of doing things.
In fact, most of the smartphone-based healthcare applications and attachments that are on the market today are, in some manner, circumventing the U.S. Food and Drug Administration’s cumbersome approval process. As long as an application and sensor are sold as a patient’s reference tool rather than for a doctor’s use, they don’t need approval. But these applications and attachments are increasingly replacing real medical opinions and tests.
Innovators’ paths to market aren’t entirely obstacle-free. The FDA was able to quickly and easily ban the upstart company 23andMe from selling its home genetics test kits to the public, though it later partly revised its decision. Uber has been fighting regulatory battles in Germany and elsewhere, largely at the behest of the taxi industry. But the services these two companies provide are nearly inevitable now due to the huge public support they have received as a result of the tremendous benefits they offer in their specific realms.
Ingeniously, companies have used the skills they gained by generating exponential user growth to initiate grassroots political campaigns that even entrenched political actors have trouble resisting. In Washington, D.C., when the City Council sought to ban Uber, the company asked its users to speak up. Almost immediately, tens of thousands of phone calls and e-mails clogged switchboards and servers, giving a clear message to the politicians that banning Uber might have a severe political cost.
What these companies did was educate and mobilize their users to tell their political leaders what they wanted. And that is how the process is supposed to work.
“That is how it must be, because law is, at its best and most legitimate— in the words of Gandhi— ‘codified ethics,’” says Preeta Bansal, a former general counsel in the White House. Laws and standards of ethics are guidelines accepted by members of a society, and these require the development of a social consensus.
Take the development of copyright laws, which followed the creation of the printing press. When first introduced in the 1400s, the printing press was disruptive to political and religious elites because it allowed knowledge to spread and experiments to be shared. It helped spur the decline of the Holy Roman Empire, through the spread of Protestant writings; the rise of nationalism and nation-states, due to rising cultural self-awareness; and eventually the Renaissance. Debates about the ownership of ideas raged for about three hundred years before the first statutes were enacted by Great Britain.
Similarly, the steam engine, the mass production of steel, and the building of railroads in the eighteenth and nineteenth centuries led to the development of intangible property rights and contract law. These were based on cases involving property over track, tort liability for damage to cattle and employees, and eminent domain (the power of the state to forcibly acquire land for public utility).
Our laws and ethical practices have evolved over centuries. Today, technology is on an exponential curve and is touching practically everyone— everywhere. Changes of a magnitude that once took centuries now happen in decades, sometimes in years. Not long ago, Facebook was a dorm-room dating site, mobile phones were for the ultra-rich, drones were multimillion-dollar war machines, and supercomputers were for secret government research. Today, hobbyists can build drones, and poor villagers in India access Facebook accounts on smartphones that have more computing power than the supercomputers of yesteryear.
This is why you need to step in. It is the power of the collective, the coming together of great minds, that will help our lawmakers develop sensible policies for directing change. There are many ways of framing the problems and solutions. I am going to suggest three questions that you can ask to help you judge the technologies that are going to change our lives.
Three Questions to Ask
When I was teaching an innovation workshop at Tecnológico de Monterrey in Chihuahua, Mexico, a couple of years back, I asked the attendees whether they thought that it was moral to allow doctors to alter the DNA of children to make them faster runners or improve their memory. The class unanimously told me no. Then I asked whether it would be OK for doctors to alter the DNA of a terminally ill child to eliminate the disease. The vast majority of the class said that this would be a good thing to do. In fact, both cases were the same in act, even if different in intent.
I taught this lesson to underscore that advanced technology invariably has the potential both for uses we support and for uses we find morally reprehensible. The challenge is figuring out whether the potential for good outweighs the potential for bad, and whether the benefit is worth the risks. Much thought and discussion with friends and experts I trust led me to formulate a lens or filter through which to view these newer technologies when assessing their value to society and mankind.
This boils down to three questions relating to equality, risks, and autonomy:
- Does the technology have the potential to benefit everyone equally?
- What are the risks and the rewards?
- Does the technology more strongly promote autonomy or dependence?
This thought exercise certainly does not cover all aspects that should be considered in weighing the benefits and risks of new technologies. But, as drivers in a car that’s driverless— as all of our cars will soon be— if we are to rise above the data overload and see clearly, we need to limit and simplify the amount of information we consider in making our decisions and shaping our perceptions.
Why these three questions? To start with, note the anger of the electorates of countries such as the United States, Britain, and Germany, as I discussed earlier. And then look ahead at the jobless future that technology is creating. If the needs and wants of every human being are met, we can deal with the social and psychological issues of joblessness. This won’t be easy, by any means, but at least people won’t be acting out of dire need and desperation. We can build a society with new values, perhaps one in which social gratification comes from teaching and helping others and from accomplishment in fields such as music and the arts.
And then there are the risks of technologies. As in the question I asked my students at Tecnológico de Monterrey, eliminating debilitating hereditary diseases is a no-brainer; most of us will agree that this would be a constructive use of gene-editing technology. But what about enhancing humans to provide them with higher intelligence, better looks, and greater strength? Why stop at one enhancement, when you can, for the same cost, do multiple upgrades? We won’t know where to draw the line and will exponentially increase the risks. The technology is, after all, new, and we don’t know its side effects and long-term consequences. What if we mess up and create monsters, or edit out the imperfections that make us human?
And then there is the question of autonomy. We really don’t want our technologies to become like recreational drugs that we grow dependent on. We want greater autonomy—the freedom to live our lives the way we wish to and to fulfill our potential.
These three questions are tightly interlinked. There is no black and white; it is all shades of gray. We must all understand the issues and have our say. Are you ready?
Vivek Wadhwa is an entrepreneur, academic, author, and keynote speaker. His research focuses on critical advances in robotics, artificial intelligence, computing, synthetic biology, 3D printing, medicine, genetic science, and nanomaterials, and on how these advances are creating disruptive changes for companies, industries, governments, and the culture at large.