One of the most mysterious things about new technology is that over time we forget what it was like to live without it.
Ask people born in this millennium to imagine living without the Internet. What would they struggle with? Finding facts? Keeping in touch? Or, if you are slightly older, think of what it was like before the invention of photography. Can you imagine slowly forgetting what people looked like?
We become the new technology with no memory of the way we did things in the past. This shapes the way we see and make sense of the world. Things that were impossible are becoming quite real. But we also lose the parts of ourselves that we cease to rely on. For example, I don’t remember the last time I actually had to memorise a phone number… these days when I make a new acquaintance we ring each other and add the unanswered call to our contacts.
New technologies give birth to new possibilities, but they make it hard to go back to the way things were. I was reading a book yesterday. A real physical book. And I caught myself pressing my finger on the page, wanting to highlight a phrase. We can call it progress.
We not only become the technology we use, we also look for ourselves in it. We create new technologies because they hold a promise of the kind of future we want. Technology is a reflection of our dreams.
How does that work?
Dreams come alive first through language. To make them real, we turn our dreams into ideas and later into arguments for how things should be. That’s how they enter the social sphere — through rhetoric. We make things because they represent the best arguments we could agree on for how the world should be. And we buy things because they resemble our ideas of the kind of world we want to live in. Of course sometimes we buy things out of necessity. But once our basic needs are met, our consumption is more about opinion than necessity.
Businesses make arguments in the form of their products and services. A standing desk is an argument for being healthy at work. A car and a bicycle are two very different arguments for how to commute. McDonald’s is an argument for convenience, whereas the Michelin Guide is an argument for indulgence. Email and Facebook are two very different arguments for how people should stay in touch, and so on.
As we use these products, we become influenced by them. Consciously or unconsciously we follow the arguments they represent. And so our reality changes. We start doing things differently, enabled by new products and technologies. Our aspirations and dreams change. Our ideology shifts, and gives birth to new arguments.
From dreams to ideas. From ideas to new technologies. From new technologies to new dreams. And so forth. Technology and society are two wheels that have spun together since the dawn of humanity. The technology we create is the embodiment of the desires of our social systems. The social systems we design are reflections of the nature of our technology. All nice and neat, except that recently it’s getting harder to tell where “we” end and where “things” begin.
Today, as digital technologies permeate our lives more deeply, this entanglement is becoming much more complex. The driver is the connective property of digital technologies. Thanks to this property we can connect and keep in touch with people from all over the world. But more is happening: things are connecting as well. We can now create a web of connections not only between people but between anything, really.
From the Age of Enlightenment to the Age of Entanglement?
This socio-technical process of wrapping ourselves in technology has been accelerating. In many areas we are surrounded more by human-made objects than by nature itself. Technology is now even concerned with our mental states: Apple’s App of the Year for 2017 was Calm. It is an app to help you… well, be calm. It works by offering pictures and sounds of nature accompanied by stories and meditations. It represents an argument that we can rely on technology to influence our mood and control our anxiety. That we may need technology to get in touch with nature.
Another dimension of this progressing entanglement is the rise of machine learning and AI technology. This may be the ultimate attempt to replicate ourselves in nature, and perhaps one of the most revealing of our dreams. AI is a meta-argument that technology should augment or even replace us in certain contexts. There are many misconceptions about what machine learning algorithms can and cannot do, but the possibilities have certainly captured our imagination.
At the beginning of this essay I gave a very innocent example of relying more on my phone’s superior memory than on my own. It’s convenient, and I am lazy about such things. I am giving up a part of me, my cognitive muscle, and outsourcing it to the machine. So, let’s look carefully at what we could potentially be outsourcing when we start using machine learning technologies in everyday life. Here’s an interesting example of Google Assistant innocently performing a menial task like booking a service on our behalf:
There are two problems I see with this use case. The first is not merely the degradation of a potential human connection into a machine-to-human transaction; it is that the transaction is enabled by deception. This technology was designed to make the other person believe they are talking with a real human being. This is the most obvious issue, and the one that attracted most of the media attention.
The second problem is more nuanced and its impact is longer-term. It is best illustrated by this question: What part of ourselves would we be slowly giving up as we rely on this tech?
Obviously it’s not the ability to speak to another human being. It’s the potential to connect with strangers and relate to them over simple tasks. Those innocent and banal connections build up to something much more significant. All human conversations, even the most menial ones, affect the value system we constantly evolve. So it may sound strange, but by delegating this conversation to a machine, I am outsourcing any ethical dilemmas that may arise unexpectedly in what would otherwise be a human conversation. Perhaps booking a restaurant isn’t as perilous as autonomous vehicles on pedestrian crossings, but it’s the promise this algorithm holds, the argument it stands for, that is disturbing. And the argument this application is making is that simple things in life are for algorithms, not people.
So, it’s not hard to predict a reality in which two things are happening simultaneously. One is the increasing connectivity of everything (that’s already happening). The other is increasing reliance on machine learning to manage these connections on our behalf. While relying on a phone to memorise a number was a cognitive event, outsourcing a conversation with another human being is that and also an ethical event. Using machine learning poses a “risk [of] abstraction of accountability and the production of ‘thoughtlessness’”. It is amazingly banal in its complexity. It’s not really a learning system, because it cannot set its own goals, as it doesn’t have a value system (yet?). That won’t stop us from forming emotional attachments to these objects, especially as they become more autonomous and life-like.
What does this mean for leaders and the practice of leadership?
The above examples show that products today are more than just objects or interactions. They are an amalgamation of environments, services, people, relationships, interactions and computational power. In a way, they are more like human-machine “ecologies”.
An organisation cannot design products like these the way you would design an old phone. This kind of “product” is no longer a stand-alone object. It is a socio-technical system, one you don’t merely interact with; rather, you become an active participant in it, even its co-designer. In such a context we are designing systems of relationships, rather than just interactions or objects.
This reality calls for a new kind of design and a new kind of leadership. There is a range of practical considerations around that, which I will go into on another occasion. For now I want to focus on the longer-term perspective.
As business leaders we are responsible not only for the performance of the organisation, but also its purpose and direction. That is not merely a strategic question. It is also a question of politics and ethos.
What is the leadership toolkit for the longest time horizon? It’s not strategic analysis or operational effectiveness. Rather it’s values and value judgements. There’s a lot to be said for being agile and making fast decisions. But there’s one problem with that. If we do not have a value compass guiding this agility, then all we are doing is spinning a hamster wheel faster. We entangle in an agile way. It looks great for a short while.
To design and lead more consciously in this entangled context, we cannot frame the dilemma we face as nature vs. technology. That is a naive stance and a false dichotomy. We need to accept that technology is not a separate aspect of who we are. Rather than reject technology, we need to embrace it and love it. Love it not in an idealistic, Silicon Valley kind of way, but with care and attention to what it is telling us about ourselves.
All real love requires sacrifice. What are we willing to sacrifice for this love of technology? Our ability to make value judgements? That will lead to disaster. How about we sacrifice our idealistic obsession with technology as a reflection of the perfect world we want to create? Someone will always pay the price for technocratic idealism. That’s not a dream, that’s a nightmare. And it always starts oh so innocently… “Hmm [insert AI name], I would like to book a dinner for two”…
We now have access to some of the most potent technologies ever available. We need to make more conscious choices about what we are making and what we are surrounding ourselves with.
After all, what we make and what we use is what we become.