Originally published on LinkedIn
Have you ever thought about the future of human thought?
I know, it seems kind of silly. It’s like wondering if the sun will come up tomorrow or if trees falling in the forest make sound if no one is around.
Except I don’t think it is silly; I think it may become the key question many of us will be working on over the next few decades.
That’s the conclusion I came to after spending the last few years studying the impact of technology on the human brain. Looking back millions of years at the impact various “technological innovations” have had on the evolutionary arc of human cognition, I learned our story is not a simple one. We did not advance to our current capabilities in thought because of one or two big breakthroughs. It wasn’t just discovering how to use tools, harnessing fire, or developing language.
In my opinion, based on extensive research, our cognition developed over millions of years through the evolution of several dozen “innovations” that, taken together, gave us a chance to develop our current thinking abilities. Yes, that includes the standard ones, like bipedalism, tool use, fire, agriculture (and its direct corollary – warfare) and language. But it also includes many less well-known or less-considered innovations like cooking, throwing, menopause (and its direct corollary – the role of the grandmother), cooperation, hunting, culture, “home”, monogamy and other factors.
Many of these breakthroughs themselves required physical evolution of our bodies, which went through big changes in size, shape, diet, and physical relationship to our ecological niches. This included small evolutions, like developing the whites of our eyes to better convey the focus of our visual attention to others. Our brains underwent gross structural changes to facilitate not just complex conscious thought, but sophisticated sensory, emotional and unconscious adaptations. They also evolved microscopic changes, in the form of new types of neurons, changing cytoarchitecture and evolving brain metabolism. All of these, then, co-evolved in a changing social landscape, in which our culture adapted to take advantage of these and other changes. Each of these changes came online individually at different times, spread thousands or millions of years apart. Over time they ratcheted up, bootstrapping us, and collectively they combined to create an emergent property. I call this the emergent brain, the one we use today.
Or, rather, the emergent brain of several decades ago. My research leads me to conclude that rapid human technological progress over the last few decades has caused dramatic change to nearly all of the conditioning factors that shaped us over millions of years. If you change any one of these factors, you are likely to see a change in human cognition, or intelligence, over time. My hypothesis is that if you change most of them, all at once, you are likely to see major changes in human intelligence, soon.
Call me crazy, that’s OK. However, recently I presented my thoughts to a small group of leading technologists, neuroscientists, medical experts and others, and surprisingly, they agreed. So, if I’m crazy, I’m not alone.
Where does that leave us?
Most of us are steeped in the status quo of today. We teach our kids how to behave, learn and act based on how we were trained and on our own experience. But if everything is going to change (hypothetically), what should we be teaching our kids, now, for their future, and how should we be adapting ourselves? We don’t want to be driving into the future looking into the rear-view mirror, as Marshall McLuhan said.
Obviously a key factor here is technology, and many of us are busy creating or shaping it in different ways. The cumulative effects of Moore’s Law, Metcalfe’s Law, the Internet of Things and others have been talked about extensively, from the general predictions of Alvin Toffler’s Third Wave to the very recent and specific work of Klaus Schwab in his Fourth Industrial Revolution. Many of us are working to perpetuate this technology, developing new cloud platforms, artificial intelligence, and connected devices. But most of us don’t think too much about where this is leading us.
Those who do think about the future spend a lot of time thinking about artificial intelligence (AI). They look forward to the point in time at which computers smarter than humans are ubiquitous, and they work towards different visions of this Singularity. There are pundits here, like Kurzweil and others, who talk about greatly expanded human lifespan and increased potential for the human race. And there are critics, who fear a huge rise in unemployment as people’s work is replaced by AI. Technological unemployment has now become a concern and focus of even mainstream economic institutions and thought leaders.
Within these forward-looking groups, there are many who insist that the focus should be less on AI and more on intelligence augmentation (IA), where AI is harnessed to improve humanity. For example, maybe you shouldn’t be bothered to check whether there is enough milk in your fridge for your kid’s breakfast, when your IA-enabled fridge can take care of this for you, freeing your brain up to do better things, argues Pattie Maes. People are debating the philosophical distinctions, nuances, and semantics between AI and IA in different pockets and forums around the world.
Lost in this debate, however, is the future of human thought.
What will we think about when we have extensive AI at our beck and call, and our IA allows us to do more than ever before? Let’s be explicit – augmented towards what? How will we use our brains to create value, and how will we entertain, prepare, and basically leverage our brains to participate in a better human society?
A few weeks ago I had a chance to talk with the founder of a major tech company, who now helps lead a couple of the world’s leading organizations working on the future impacts of AI. I asked him who else in the world was working on the future of human thought. He told me that it was an interesting question, one that most people aren’t really thinking about. Those who are focus on AI. “Pat, human thought is just a short-term problem, when you consider the vast implications of AI,” he told me.
That “short-term problem” may have been what Elon Musk was thinking about when he said a couple of days ago that humans must become cyborgs if they are to stay relevant in a future dominated by artificial intelligence.
In a couple of months I’ll be giving a TED talk on this topic, and I’d like to hear your thoughts on the subject. I’d like to harness the technological and social innovations of social networking to augment my thinking on the situation 😉
My focus right now is on standing up a new multi-disciplinary consortium of experts from different fields to look at our evolving human context. I’m not a luddite – I can see us using virtual collaboration tools to work together to understand situations that are too complex for any one of us to grasp, and with those insights develop new solutions in anticipation of likely outcomes, leaving us better off. The focus will be a humanist one: what are the prospects for human thought in the coming years, and how can we improve those prospects?
AI may require us all to become cyborgs, facilitated, no doubt, by IA. But how do we carry human intelligence, or human thought, forward into this future? How do we manage through that transition, if it is, in fact, going to come about?
That’s what I’m thinking about, along with the next steps that I think make sense on this topic. What do you think?
Feel free to respond to this post on LinkedIn, or reach out to me directly if you have thoughts, insights or feelings on this topic. You can also read the introduction to my book on this project, on Medium, here.