
Originally posted on Medium

“Your experiences today will influence the molecular composition of your body for the next two to three months, or, perhaps, for the rest of your life. Plan your day accordingly.”

Steve Cole



Chapter 1: Introduction

Your brain is different, now that you have read this sentence.

Really.

Light has reached cones on the back of your eye, triggering a reaction, and that reaction traveled as an impulse at 423 miles per hour into pattern recognition processing centers at the middle of your brain, and then to the visual cortex at the back of your brain, on to your temporal lobes on the sides of your brain, and then on to your frontal lobes where it is first perceived by your brain as words that make sense.[1] There, other neural networks have been stimulated which have brought about certain associations with other connections in your brain. These associations are developed by the patterns of the neurons firing back and forth across different parts of your brain’s conscious and unconscious processors, memory and emotional centers.2 All of this happens in a few milliseconds — picture these patterns lighting up a glowing path that begins to fade almost immediately, but in a way that leaves your brain definitely different than before you started reading this page.

Now that you have been thinking about that nugget for a few seconds, that difference in your brain is even larger. Those pathways have been burned in just a little more, and may now include other associations — perhaps of echoes of some college anatomy class, cocktail party discussion, blog post or a TV show you remember seeing about how the brain works.

What is the effect on your brain, then, of the 11 hours you spend with technology each day?[2]

If you are the average American, you spend an hour a day on your computer surfing the Internet, 5 hours a day watching TV, and you check your cell phone almost two hundred times a day. What is THAT doing to your brain?

It seems that everywhere you go these days there’s a flat screen TV hanging on the wall, and the majority of the people are, if not actively staring at the TV, then certainly glancing at it regularly, regardless of what they are doing or who they are with. And the stations on those TVs are not playing calm, peaceful, unobstructed scenes of water lapping up onto beaches — even with the sound off, the TVs are notifying us of “breaking news!”, and they have scrolling headlines juxtaposed next to “news updates” and other graphical and text pop-ups. And it’s not just the TVs that people are focused on in restaurants, DMVs, airports and work lobbies; everyone is continually checking their phones. It’s now unusual to pull up to a stop sign or stop light without seeing someone in a car next to you staring down at their phone. Go to any school event, and you are likely to see as many phone and tablet screens glowing back at you as backs of heads.

This is just a high-level description of how much time we spend in a mostly visual experience with certain devices or screens, of course. We haven’t yet gotten into how much time you spend with the applications, which often span multiple devices. Applications can be visual, like social media or gaming, or audible, such as when the Internet of Things (IoT) devices around you both talk and listen to you: “Alexa, order more dish soap” (you say to your Amazon Echo), or “You need to leave now to arrive on time to your next appointment, because there is a traffic jam on I-95 North,” says your BMW, paired with your iPhone’s calendar.

And what about technology that we don’t look at — like the Fitbit we carry around on us to measure our steps, which feeds its information into our phones, Facebook and lives? Or other technology we’ve created that increasingly monitors us and helps us plan our lives and interact with those we care about? Or the car that parks itself, maintains its place in the middle of the lane, and watches our blind spots?

This line of thinking — about how you interact with applications, people and the things around you increasingly through technology — quickly leads to conversations about your changing roles within your social units, evolving daily routines, and even changes to your health and lifestyles.

Which brings us to my area of interest: what is the overall effect of this technology on us? Why do we interact with technology the way we do, and how will all of this likely affect us in the future?

Those are the questions that this book set out to answer.

The irony is that I’m partly to blame for how the average American spends their time with technology.

You see, for the last 20 years, I, like so many others, had a hand in the development of the technology around us. I had the opportunity to help create and shape the early years of the Internet during the early nineties, then help with the birth of mobile data and messaging. Then I worked with the big technology companies, like Dell, HP, and Sony, who sought to use mobile data to transform their laptops and other electronic devices by allowing them to stay “connected” anytime, anywhere. Eventually my work evolved into what is now known as the Internet of Things. This included working with companies like Google, Cisco, Amazon, and the cellphone and cable companies, as I helped to create new platforms like interactive digital advertising, mobile “TV anywhere” services, mobile video conferencing apps, new consumer electronic device categories like e-readers and “connected cars”, and augmented reality platforms. Even today, as I write this, I am hard at work on several projects that revolve around advanced technologies that promise to change the life of the everyday person.

Mind you — I’m sharing this background not to impress you; I provide it to give you some context bearing on the question at hand. I am not some super-technologist driving decades of change; I am just a cog in the wheel of the technology industry, like so many millions of others. Sometimes I led, and sometimes I followed. Some of those projects were big successes; many never got off the drawing board.

So no, I do not write to you claiming to be the father of all the technology that is all around us today. But here’s the thing — I was lucky enough to help match-make some of the parents, support the pregnancies, and be in the delivery room of many of the technology platforms that I find myself questioning now.

This breadth of experience gives me a somewhat different perspective on technology than the end-user or consumer has. I’ve come to believe there is a responsibility that comes from having a unique perspective. As Albert Einstein is reputed to have said, “Those that have the privilege to know have the duty to act.” Included in that responsibility, I think, is the obligation to step back every so often and examine the impact of what you are working on, in the big picture.

The consumers of this technology didn’t write the business case, develop the product requirements, or pitch the “strategic synergies” or use cases enabled by this new technology, so they can be forgiven for sometimes being overwhelmed by it. When lots of people start to feel overcome by technology, across many parts of their lives, critics begin to talk about the downside of technology. Eventually, books come out with titles like “Information Overload” and “The Shallows: What the Internet Is Doing to Our Brains”. And new medical conditions are proposed, like “Information Fatigue Syndrome”.

For every book that is critical of technology, there are others that show that technology is helping drive the economy, connecting people in new and better ways.

But — people wouldn’t be using technology this much if they weren’t getting some benefit from it, right?

And people are adapting to technology. Young kids show grandparents how to configure iPads for FaceTime sessions; no one gives directions anymore — they give you an address and assume that Google Maps will direct you in the most efficient way, with real-time updates on the traffic on the way, to boot. 60% of people under 30 don’t know the phone number of their significant other. Like Einstein said about knowing his own phone number — they don’t need to know it. Their phone knows it. Need to remember that gift you bought for Aunt Suzie? — check your order history with Amazon. Your second cousin’s daughter’s name? — Facebook it. Who you worked with 10 years ago? — LinkedIn.

But across this landscape of change, I rarely came across people asking why. Yes, for each technology project we launched, we had market surveys, and financial ROI models, and we communicated areas of “strategic synergy”, and of course — customer benefit.

When I first set out to find the answer to the questions of “why?”, and “what purpose does technology serve for people?”, I assumed there were already plenty of books out there on this topic, and so I originally set out to read, not write, a book. Reading is fun, and when you find what you are looking for, your brain feels good. Writing is hard, and the only reason to write is that you didn’t find what you were looking for, which means your brain didn’t feel good. I wanted my brain to feel good.

“The broadest pattern in human history”, according to Jared Diamond, “is the disparate results different populations of humans have had as they’ve developed under differing environmental conditions.”

If that is true, then perhaps the deepest pattern in human history is the cumulative effect that technology has had on the human brain, and it does not appear to be much of a topic of discussion.

Yes, there were a lot of books out there that appeared to speak to many of the parts of the question I had. In fact, I learned that the debate around emerging technology and its effect on people was a time-honored tradition, summed up by Adam Thierer: the “techno-pessimists” predict the death of the old order (which, ironically, is often a previous generation’s hotly-debated technology that others wanted slowed or stopped), while “The pollyannas, by contrast, look out at the unfolding landscape and see mostly rainbows in the air.”[3]

Writing in 2010, Thierer established the camp of techno-optimists to include Nicholas Negroponte, James Surowiecki, Clay Shirky, Chris Anderson, Kevin Kelly, Jeff Howe and others, and one would guess that today he’d add Ray Kurzweil and Miguel Nicolelis. Though each of these authors presents a unique perspective and argument, taken as a whole this camp makes one feel that technology has had a very positive effect on humans and will continue to do so: it is going to make us all live forever, and it is what makes us, as humans, so superior to every other species that has ever lived.

On the other hand, the techno-pessimist camp could be summed up (when I reduce it to an absurd degree) as “technology is ruining your brain”, and this camp includes writers such as Neil Postman, Nick Carr, Mark Bauerlein and Jaron Lanier; Thierer would probably add in Susan Greenfield, Clay Johnson and several others who have written since 2010.

I found valid points in both camps, and have spent extensive time with all of these books and so many others which examine our current times. They can be very interesting, and useful (I cite and rely on dozens of those books in this one), but ultimately I found them unsatisfying.

That unfulfilled feeling may have been because, having made up their minds on the outcome of humans and technology, these writers were displaying a certain bias that shaped their argument and helped them to make a cogent case to address the questions they were looking at, but clouded the question I had in mind. Clay Johnson, in The Information Diet, explains why writers tend to do this: because readers tend to make media decisions that reinforce decisions they themselves have already made. Writers, too, make a case for the decision they have already concluded, and they do it in a way that is compelling (and marketable) to readers who spend their money to reinforce their decisions.

The other issue is — let’s face it — this problem is so darn complex. Each author tackles the problem from their perspective, and provides their take on just one facet of the problem. I know it’s a cliché, but it’s a classic case of the three blind men and the elephant: each describes what they can feel — a snake, a rope, or a tree — while the broader truth only emerges once they move around the elephant and talk to each other. Market forces prevent today’s authors and thinkers from tackling the broader issue. Friends, agents, and fellow authors will counsel them (as they do me) that writing a book over 400 pages is the kiss of death commercially (unless you can find a way to include wizards and dragons, which, with this sentence, I just did ;). Readers who are used to tweets and listicles don’t have the ability to focus on a 500-page book, or worse, a multi-volume treatise on one single issue. Not anymore. But the problem for me was worse than that: I had to read hundreds of books to wrestle with the problem. And I did not find any single book to be sufficient, specifically because each author had to limit their scope so that their project was viable.

That’s not to say the existing books are not useful, and to be sure, some had broader scope than others. And I did find one class of work to be more satisfying — authors who discussed the “how”, and who developed working models of the mechanisms behind these interactions. Writers in this vein include Klingberg, Pinker, Whybrow and others.

But even after reading them I still felt that I didn’t quite yet have the answer I was looking for, and I was still scratching my head when trying to understand the impact of technology on my daily life.

Let’s take an example:

My two daughters, ages 9 and 11, are sitting and lying on the floor. The TV is on — some Disney show where the kids are all wise-cracking, fun-loving kids just nerdy enough to be cool, and the adults are all bumbling, well-intended (for the most part) but hapless creatures. My two girls are not watching the TV, though — they are both staring intently at their respective iPads. There is a spiral-bound notebook next to one girl, and a laptop sits open next to the other one.

“Turn off the TV and go outside and play!” I can clearly hear my mom’s voice, barking to me from the kitchen, 35 years ago, as I sat in front of a (color!) TV, complete with rabbit ear antennas and the requisite tinfoil tuning.

“Umm, aren’t you guys supposed to be doing your homework?” I ask, without any resemblance at all to the confused and unsure adults on the TV.

“We’re taking a break, Dad,” says one, flashing a bright grin and holding direct eye contact for 4 milliseconds.

“Yeah, and I’m waiting for my report to print up downstairs,” says the other, without looking up.

“Uh… so what are you doing now?”

“We’re playing Minecraft and watching TV.”

“Against each other?”

“Yes, she’s in my world, and she accidentally knocked down the wall of the new castle I’m building — see? And Ainsley is playing with us — she’s visiting my world now.” Ainsley’s also 11, and lives down the street. Of course, she’s not physically present in the room now, just visiting, virtually, in the Minecraft game.

And here’s where I rise above the clueless parents on the Disney channel: “All right — iPads off, TV off, downstairs to the table to finish your homework. When that’s done, go outside and play until dinner,” I say, brooking no argument, but with love and concern for their wellbeing.

My mom would be proud.

The problem is — I’m not so sure that I’m doing the right thing.

Yes, we all know that TV rots our brain, and the rise of screentime has been scientifically proven to turn the attention span of the average elementary and middle-schooler into that of a gnat. But still, something about getting on this bandwagon gives me pause.

Reader: OK — Captain Obvious, we know, we live here, too. And, umm, didn’t you say you helped bring some of this technology about — aren’t you being a hypocrite?

Yes — it’s obvious that something is going on, and yes, I did help perpetuate some of it. But it still leaves me wondering a) what is going on, and b) what should I do about it?

Let’s go back to my kids — should I be telling them to turn it off? And what’s so wrong with people staring at flat screens in public spaces? Why are we, as a society and as individuals, turning to these activities, and what effects will they have on us going forward? Let’s be specific: why does someone watch TV 5 hours a day?

The answers seem obvious. “To relax”, “To unwind”.

Then why does the average “reality” television show feature plot lines like, “if Joey doesn’t get a deer in the next 30 minutes, his season is over, and his family won’t make it through the winter…”? And why does a typical prime-time show feature a “ripped from the headlines” plot about the millionaire cross-dresser wanted for multiple murders…? How is that relaxing?

I’m not judging — I’m asking. Really. What makes us want to watch TV? What’s the evolutionary advantage, biological imperative, cognitive mechanism, or psychological rationale that drives us to watch that flat screen, and how effective is TV at giving us what we seek? As always, broad questions don’t have simple discrete answers, and the reasons and effects vary. But there appears, at first and even second glance, to be surprisingly little research on this.

When we pick up our phone to glance at it, what are we hoping to find? Why do we do it, and what effect does it have on our thinking, feeling, effectiveness, body and other aspects of ourselves?

For example — maybe my kids’ future will depend precisely on their ability to multi-task and balance simultaneous information streams. Maybe they need to develop their ability to maintain their “continuous partial attention”, a term coined by Linda Stone. Just because my mother told me to go outside and play, is that what I should be telling my kids to do?

Also, let’s be practical: these technologies are compelling, and they are here to stay. The techno-pessimists often offer little advice for dealing with the rise of “screentime”, other than limiting it. Have you tried to tell your children to turn off the TV on a rainy day? Have you made subtle hints to get your significant other off the couch during Sunday football season, or in the middle of a Law and Order rerun marathon? How about suggesting to your dining partner that they focus on you, not the shopping channel playing behind you on the wall of the sandwich shop? What luck do you think you might have knocking on the window of the car next to you and suggesting to the soccer mom (or dad) that they shouldn’t be checking their emails at red lights?

This isn’t just about evaluating the technologies that currently dominate our visual attention. We can quickly develop a preliminary argument to explain these based on specifics of the human brain. This would probably start with our uniquely specialized visual circuitry, and show how the unpredictability of rapidly changing information on televisions and our mobile phones routes through our amygdala to stimulate our emotions through activation of mesolimbic dopamine pathways, and manipulates our autonomic nervous system, creating powerful physical and psychological experiences. A primatologist might pull the lens back from the neuroendocrinology and talk about how technology like the mobile phone has become an important tool in maintaining our connections to others, how we use it to service certain innate needs and curiosities about others in a way that is driven by a Machiavellian Intelligence, and how our brains co-evolved with social complexity. And these primatologists would probably share some common ground with the sociologists, who have their own distinctly human concept of social capital, and would point out that an iPhone compresses space and time, allowing individuals to share and observe signals and cues across large networks of “weak social bonds”. Such an argument might even include evolutionary biologists noting the adaptational changes reported by modern-day neurologists, who’ve shown a loss of vagal tone within our “connected lifestyle”, which brings us back to the beginning of the story.

We’ll talk about these and more in this book. But this trend in flat screens and mobile use is just an obvious manifestation of a much more pervasive trend going on — where humans create technology, and that technology deeply affects humans. And this isn’t planned. Martin Cooper didn’t decide to create the mobile phone in 1973, saying, “Hey, this will become a powerful connected device, with a visual and haptic experience that will light up V1, V2, S1 and S2, triggering powerful dopaminergic forces. This compelling user experience, using both top-down and bottom-up cognitive processes simultaneously, will cause beneficial behavioral and cultural adaptations that will have a positive, important effect on the arc of human history.”

At least we have no evidence that’s what he said. Maybe I’ll get a chance to ask him some day. The current evidence is that he was just trying to help people connect with each other, by voice, no matter where they were, while trying to make a few dollars for Motorola, and that he probably did not fully envision how the technology would evolve into today’s mobile phone experience.

I’m not a Luddite. Who doesn’t appreciate the opportunity to go online at 11 PM, order something from Amazon, and have it delivered to your house first thing the next morning? Or to access all of the knowledge of the world’s scientists through Google Scholar, or, increasingly, to search the text of any book ever written? And if we move from the individual to the national or global level, where would our economies be if we pulled out of every sector that has been dramatically improved with technology? By the broad definition of technology which I will adopt in this book, our economies would be vacant today without technology. In a world where everything is so finely tuned and integrated in real-time that even the smallest ripple has broad implications, reducing technology adoption even a little could have dramatic negative economic implications.

I don’t suggest these benefits in a lazy, defeatist manner. Philosopher Michael Sacasas refers to this as the “Borg Complex,” which, he says, is often “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.”[4] I point them out to suggest that a) much of technology is here to stay, b) we should pick our battles and understand why, specifically, an activity might harm the user, and c) we might want to determine the net impact on the user, taking into account any of the positives they might gain from technology, before we gird our loins to stamp out its use. If pushed, we might even consider how we might adapt in response to the purported harm.

However, I am reminded of a story Temple Grandin wrote about in her book, Animals in Translation, where a red laser pointer was used to drive two indoor cats crazy. The cats couldn’t stop themselves from stalking and pouncing on the light as it flickered around the room. Grandin didn’t think that a wild cat would engage in such “mindless” behavior: “A cat wants to catch the mouse, not chase it in circles forever.”

I’m not Temple Grandin saying we are the cats, and technology is the little red dot, and I’m also not Nicholas Carr, saying the Internet is ruining our brains. I’m saying we made the laser, and we made the Internet.

So, as I set out to find the answers to understand this area better, I began to feel that the broader question of “what is the cumulative effect of technology on the human brain?” quickly led to the related “what implications does that have for how I live, parent, and navigate this information-rich world we live in, now and in the future?”, and “what should I be spending my time on, as a professional technologist, if I really care about these things?”

Don’t we want to take a step back and consider the bigger picture into which we introduce technology, and consider what it is doing and might do to our brains? If we do, I humbly suggest not that we might want to make and use less technology, but that we might be able to do a better job of making and using technology.

Answering those simple questions, I came to believe, required a broad, complex and nuanced understanding of the cycle of interactions between people and the technologies we develop and use. If something is compelling and receives widespread adoption, should we throw it out because we’ve identified some specific new harm that it causes? It seems more reasonable that we should look at the net impact of a technology, understand the specific benefits, the specific harms, and weigh them out. I also think the techno-pessimists present a limited range of options for dealing with new technology: keep it, get rid of it, or reduce the interaction with it. A person with nutrition questions isn’t told by a doctor that they should stop eating, or just eat less; they are given an understanding of the different types of food, and a suggested balanced diet. They are given proposed changes to their lifestyles and activities. There are no silver bullets in life, but there is an almost infinite number of choices we make to adapt to our environment, and while some of them matter more than others, no one decision dictates your fate.

Yes, there are plenty of books, blogs, stories and scholarly articles out there about modern technologies, but fewer that pair that with the question of “Why?”, and none that I’ve found that weighed the pros and cons and developed recommendations on how to interact with technology for maximum benefit while minimizing the risk of specific harms. Some discussions root their analyses in the evolutionary past, and some look forward into visions of the future. But I found none that took an honest and full view of technology today, answering the question of “why?”, that took in the full arc of the past that led us here, and that shined a light forward into where we might be going. That was the book I was looking for. I hope, in some small way, this book here in your hands (or displayed on whatever screen you are looking at, or piped into your ears) begins to do that, if nothing else, by pulling together some of the relevant work by leading thinkers in various fields.

Who is the audience for this book? Simply put — it was me. I wrote the book I wanted to read,[5] one that reviewed all of the relevant literature and summed up the available answers to the questions — “what is the cumulative effect of technology on humans and specifically the human brain?”, “How can I use that information to be a better dad and a better person?”, and “What should I be considering as a technologist, as I try to responsibly promulgate new technology into today’s world?” In pulling together that information, it is my hope that this book provides a high-level map that helps to orient other curious people, technologists and possibly academics who are performing their own rigorous, detailed work on the various parts of the elephant.

Besides providing a reference map that attempts to integrate the work of so many other thinkers who inform this broad area, it is my intent to show the gaps between those works, and when possible, provide some modest new contributions to begin to fill those gaps. To what end? It is my hope the reader will gain some broader perspective on this ever-changing world around us, and see the world in some new light. This perspective may be internal as well as external, as this book should cause readers to re-examine some of their most closely held basic assumptions; not just about technology, but also about what it means to be human, and the unique opportunity of being a human at this point in history. I hope, too, to show that we may not be able to plan for tomorrow just using the lessons of the near-yesterday, as the insight from those lessons is rapidly obsolescing in the face of a changing human condition.

In this book I take the provocative stance that many humans born since 2002 belong to a new era of humanity. Given the case that I make for this, this particular point shouldn’t be that controversial, actually. In the next book in this series, I’ll try to make a case for the fact that our current generation could in fact be the first generation of a new species. Sure, people will argue procedure with me — ‘a new era is usually ushered in over time, and implies evolution’, and evolution is “a natural process over a very long time; a process of slow change and developments”.[6] And a new era of humanity at least implies some genetic change, doesn’t it?

That may have been the case for the history of life on the planet until recently, but may not be the case anymore. Strong statement? Yes. Hyperbole? I can only encourage you to read on. In the course of my years of research I’ve come to believe that the power of human technology has risen to such a level that we need a new way of considering the forces that shape human development and behavior. This book considers deeply both sides of the nature/nurture debate, and suggests that we now add a third: “notion”. Notion is the simple name I’ve given to human technology that has dramatically changed the behavior of the individual and the species.

“Well, Pat, ‘throwing’ was a technology that allowed us to hunt, and ‘fire’ allowed us to cook, and both of those dramatically changed the course of human development, so aren’t you basically including everything humans have ever done?”

That will be a fair critique of my approach — that I’m so broad as to be meaningless, or worse, useless. But take the journey with me and see; personally I believe the thinking behind this is more parsimonious and rigorous than that. We’ll discuss throwing and fire as being very impactful on human development, for example. However, the effects of throwing and fire accreted over time and generations, and diffused culturally and geographically through patterns shaped primarily by factors explained by nature and nurture.

No, I’m talking about technology we create within a generation’s time, and diffuse across the world in literally an instant, like the ability most of us have right now, to access all of the world’s past and present knowledge through a device we carry in our pocket. Humbly, I submit that it is an enabling technology that has come onto the human stage very quickly, relative to standing upright, throwing a spear, cooking, or written language, and within a generation will dramatically alter the course of human existence.

I submit, too, technologies like CRISPR/Cas9, which allow humans to quickly and cheaply make very, very targeted changes to DNA. We’ll see how scientists have already shown the potential to use CRISPR at the human embryo stage to permanently cure congenital disease, change human physical makeup like the shape of the nose, or alter the psychology of that future human by, say, making them a “morning person”, or more or less likely to have addictive tendencies. We’ll talk about how this technology is not only likely to be common during our children’s lifetime, but how technologies like this are likely to be available to affect you, the reader, and me, enabling us to consciously change our own DNA during the course of our lives (and not just within the embryonic window). Unlike previous species, which had to pass on random genetic mutations that were vetted over hundreds of generations into new adaptations and eventually new species, we may be entering a stage of humanity where that might occur under directed control, within our own current lifespan, in less than a single generation.

And in between the everyday mobile phone and the exotic gene-alteration potential of CRISPR/Cas9 lies a panoply of other technologies whose potential to shape human existence we may not fully appreciate. Examples range from the mundane cochlear implants which have brought hearing to deaf people for decades, to rapidly evolving prosthetics for missing limbs, the artificial intelligence behind Siri and self-driving cars, and the “cute” or “fun” technology within augmented reality platforms such as Google Glass or Pokémon Go.

In the course of this book, I’ll make the case that the cumulative impact of human technology has made it possible that new human ideas can have such a powerful effect on human existence that they become something we should consider alongside the traditional forces of nature and nurture. I can’t qualify or quantify the impact (mine is a theoretical argument, not an experimental one), so I don’t claim it’s more powerful than those other two forces. Nonetheless, my hope is that within this first book you’ll see evidence to support the idea that if we want to see where humans go next, a “nature/nurture/notion” framework could be useful. Personally, I don’t think this should even be controversial, after you read through the extant literature, which this book tries to pull together.

What should become more controversial, or if I am somewhat successful — more discussed — is the issue of what we should be doing about our present and future interactions with technology.

For example, there is widespread understanding and even some adherence to what constitutes good nutrition in America; everyone knows what kinds of food they should be putting in their bodies, even if they don’t do it. But there does not seem to be a general or even expert consensus on what we should be doing about technology. To me, that is the more provocative issue this book brings up. Technology is here to stay; saying that it is ‘bad’, and that we need to ‘use less of it’ is ducking the question, a lazy approach to a serious matter. It is kicking the can down the road.

In the chapters that follow, I try to organize what I’ve learned so that it might be some help to others who find themselves facing these same questions. As we shall see, a linear structured path is not always the best way for every person to understand a given topic, so the reader may find it useful to read this work in any order that suits them best.

However, as we shall also discuss, complex new topics are often best understood in “chunks”. Here we use the simplest organization, and organize the work into what has now become four separate books or sections:

1. Past — the Rise of the Emergent Brain

2. Present — The New Emergent Brain

3. Future — The Next Emergent Brain

4. So What?

With these sections, I attempt an integrated view. What do we get when we do that? According to E.H. Carr, in his series of lectures at Cambridge titled “What is History?” –

…only the future can provide the key to the interpretation of the past; and it is only in this sense that we can speak of an ultimate objectivity in history. It is at once the justification and the explanation of history that the past throws light on the future and the future throws light on the past.

In this first book, The Past, I assemble a view of the evolution of technology from the prehistory of tens of millions of years ago up until 2002. Heads-up: this is going to seem like a big boring tour through an old history museum for many of you. Feel free to just skim this book. For myself, I found this work to provide the most valuable kind of learning: learning that gives one a rare perspective that offers non-intuitive insights on important matters. To do this, I lean heavily on experts in fields in which I am now at best a dilettante: evolutionary biology, ethology, evolutionary psychology (and its precursor — sociobiology), anthropology, philosophy and history. During this journey we spend a lot of time looking deeply into factors which inform both the Nature and the Nurture camps.

If we are really asking ourselves about the effects of Facebook, video games and Twitter on today’s society, why is it necessary to spin the clock back so far, and to look at history through so many different fields or perspectives?

First, because it’s not clear to me that we have appropriately placed our new technology in the proper context of human development. While it is not always true that ‘there is nothing new under the sun’, many times we tend to wrestle with new questions that have been asked and answered by prior generations under similar if not identical circumstances. As we shall see, there are predictable patterns that pop up across this span that serve as a sort of momentum to inform and/or shape likely outcomes of our present interactions with technology.

One thing that really surprised me in this look into history is how predictable and precedented our current interactions with “new technology” are. As Diamond says: “There really are broad patterns to history, and the search for their explanation is as productive as it is fascinating.” [7]

The other thing that surprised me as I’ve looked at our current technology against the broadest backdrop of human evolution is how unprecedented and momentous the time we are living in really is, when looked at with a frank and full view of our past.

Yes, this second insight is as old as the hills — every generation has always felt they have lived in a time of momentous change that will alter the course of human generations, and in many ways each one has. However, after reviewing the past in this section, I think the reader will find it hard to ignore our special circumstances and opportunities today. It is within this context that we can begin to consider that powerful ideas (notions) developed and disseminated within the span of just a few years can have a material effect on the arc of the human generation in which they occur, and powerful compounding effects on future ones.

The second reason to look so far back, besides context, is that I’ve come to believe that much of a person’s interaction with technology is NOT guided by rational thought. I’ve come to think of the human brain as an iceberg, where the conscious thought or executive function is visible to us above the surface, but an underappreciated fact is that our motivation and many of our driving forces lie in the unconscious, spread across various centers or modules of our brain (which are often in competition with each other) beneath the conscious threshold. It’s plausible to me that much of that unconscious thinking and control developed in our distant past, before we became human, and to really understand why people do what they do today we have to look far into the past.

No, this is NOT a “caveman brain in a digital world” book. But this is a book about the human brain. Want to know what constitutes a human, versus say a chimpanzee? Look it up. What you’ll find is that primatologists study animals, mostly, with passing references to the human field. Cognitive psychologists study humans, mostly, because the animals supposedly lack the key enhancements which govern our thinking.[8] When Carl Linnaeus published his taxonomy describing each species and the specific characteristics that distinguished each one from another, under humans he put, simply, “Know thyself”.[9] And my position is that for all of our knowledge, technology, and the special place we have in the world for ourselves, we don’t understand the human brain very well. We don’t know ourselves.

So, if so many experts have written about the various pieces of the puzzle, and stayed within the areas of their specialized expertise, why bother writing something new, and how is this new material going to be constructive? Let’s use evolutionary psychology as an example. Here is a robust group of leading thinkers who are specialized neuroscientists, and who’ve looked into our evolutionary past for insights into how our current brain works. What can I add to that? With apologies to Tooby & Cosmides, Edward O. Wilson, Robert Wright, Steven Pinker and so many other experts in the field, I felt something was missing in how evolutionary psychology was being explained. Many of their models, for example, stayed within the Homo family, harkening back hundreds of thousands of years to Neanderthals, cavemen and other early humans.[10]

So I decided to step further back in time, with a simple approach — begin as far in the past as practical, and move forward to the present day. Therefore, the first book examines the evolutionary roots of the human brain from a period starting just after the massive dinosaur die-off around 65 million years ago, with a specific focus on the technologies used by our ancestors (including their biological and cultural adaptations), the environments and cultures in which those ‘technologies’ were used, and how those technologies in turn influenced their users. This Past section spans from those remote ancestors up through the emergence of our current dominant human species, known anthropologically as “behaviorally modern Homo sapiens sapiens”, sometime around 50,000 years ago, and continues to a relatively arbitrary end point of 2002 in the modern era.

So yes, the first part of this book is full of monkeys & lemurs, animal instincts[11] and behaviors. Yes, there will be critics of this approach. But before you join this crowd, consider that somewhere right now there is a rational, professional woman you know wearing lipstick, an act that a primatologist would describe as an epigamic display that evolved in coordination with bipedalism, a display intended to effect certain changes in the social structure around her. And she’s probably not even consciously aware of these motivations; if asked she might say that she just wanted to have a splash of color to brighten her appearance. This first book (especially chapter 6) should begin to give her and any other thinking person some insight into themselves.

And while modern-day evolutionary psych researchers get plenty of ink regarding their work and insights, that pales in comparison to the breakthroughs coming from emerging technologies such as brain scanning and genetics. It seems that every week another story comes out about how some research has located the precise spot in our brains where we generate envy, sugar cravings, or the ability to speak more than one language. Powerful new technologies such as fMRI and gene sequencing are both increasing in capability while decreasing in cost, enabling more and better insights than ever before.

Thus, we use the evolutionary timeline as a logical scaffolding, and as we progress through the timeline we fold in the relevant evolutionary psychology perspective, along with perspectives from other fields, the most recent results gleaned from these new technologies, and other viewpoints that inform the question.

The takeaway is that both of these areas (evolutionary psychology and the use of modern scanning and testing techniques) inform us about ourselves, and my argument here is not for one over the other, or to the exclusion of others, but that we need to first understand the human brain[12] better. To do that we need to understand each of these areas better than we do now, and consider alternative approaches to understanding the question, as well. This first book hopes to put the pieces together in a way that moves our understanding forward.

So, reviewing evolution and the modern understanding of the brain is useful, but as Steven Johnson says in his 2004 book Mind Wide Open,

“neither story tells you something about your own present-tense experience that you don’t already know. You’re already familiar with your sugar cravings, and while it’s nice to learn things about their origins, knowing the role of the dorsal striatum doesn’t change the experience. If brain science is going to tell you something useful about your brain, it has to go beyond simply explaining the roots of some familiar phenomena.”[13]

How are we going to do that? At the end of the Past book, I introduce my concept of the basic Emergent Brain Model, which tries to explain the traditional human brain’s functioning as an emergent property of a broad set of factors that evolved over tens of millions of years, which, taken together, create a milieu from which the emergent human brain emerged, some 50,000–100,000 years ago. Then I take the position that humanity has begun to operate under a second model, which we can call the modern Emergent Brain Model, and I give it a birthday: March 20, 2002. I make the argument that the modern Emergent Brain is a wholly new entity that springs forth from — but cannot be decomposed into — three entities: the exocortex, the basic Emergent Brain, and the human body. I adopt the term exocortex, first coined by Ben Houston in 2000, to refer to all of the external digital technology, information and anthropogenic systems that interact in some way with the human brain.

In making the claim of an “emergent brain”, I wholeheartedly accept the criticism articulated by Fredrik deBoer on his blog: “I promise: anyone telling you something is an emergent property is trying to distract you. Calling intelligence an emergent property is a way of saying ‘I don’t really know what’s happening here, and I don’t really know where it’s happening, so I’m going to call it emergent.’ It’s a profoundly unscientific argument.”

My response: He’s right. I don’t really know what’s happening here. What I’ll be doing in this first book is showing a large number of factors that I think affect our modern life, in deep context, so that hopefully you’ll have a richer understanding, from which you might understand things better than you do now. Mine is a profoundly anti-reductionist stance, much like that of an anthropologist who uses a “participant-observer” method to document a rich, ‘thick’ description of a culture in order to reach a holistic understanding of that society. In fact, these books could accurately be called works in the developing fields of digital anthropology or cyborg anthropology.

I think many of the modern theories for how the human brain evolved, and explanations of how and why technology came to be central to our society, fall short, specifically because they try to be parsimonious. They each find a small subset of factors that they feel play a large role in our modern society, and point to those few forces as being “the real reason”. It’s elegant to say, ‘it’s because of our cooperative societies’, ‘our extended childhood’, ‘because we learned to cook food’, or ‘because we developed multi-generational rearing, and the rise of grandmothers helped us to evolve cognitively’. Those examples are caricatures of real theories, but in each of today’s theories that isolates one or two forces, we leave out 30, 40 or 50 more. Each of these theories is very useful for calling out the effect of one or another factor. We need these theories. But we also need someone to step back and look at all the theories, and at what each of them misses when taken collectively. Do I know what’s happening? No. Do I ascribe more influence to many more forces than most modern-day scholars, who wield their Occam’s razor? Yes. And I think that is the next step to getting us to a point where we REALLY understand what’s going on. And this is what I attempt to lay out in the first book.

The second book, the Present, will provide an outline and summary of some of the representative key technologies that both drive today’s modern Emergent Brain and provide the basis for tomorrow’s Emergent Brain.

In that Present section, I also make the case for saying that a certain percentage of the generation of humans born since 2002 might be more properly understood anthropologically as “emergent Homo sapiens sapiens”, due to the development of the modern Emergent Brain. The case for this is fairly straightforward, and based in large part on the fact that the difference between the “anatomically modern Homo sapiens sapiens” of 100,000 years ago and the “behaviorally modern Homo sapiens sapiens” of, say, 40,000 years ago is a much smaller distinction in almost every way than the environmental, cultural and technological differences between what the average American child born 45 years ago faced and the typical environment facing a 10-year-old today. Normally I’d say this is a gray/grey semantic distinction, except the academics seem to think that the change in our ancestors’ culture and technologies that occurred about 50,000 years ago had a fundamental influence on the current state of our species today; important enough that they’ve renamed the culturally distinct populations of the species, without a significant inherited genetic change.[14]

If this is true — that humanity’s cognitive leap forward 50,000 years ago (without any underlying genetic distinction) is so important that we need to name the populations differently (and it appears to be an ‘asked and answered’ question of anthropology), then what happens when we recognize the distinction between this current generation and mine? Much of this last question is addressed in the Future book, but we set the foundation for it in the Present book by laying out the evidence as we see it, and by looking closely at three areas — the exocortex of technology that surrounds our brains, the brain itself, and the modern day body as an actor/partner/host with the brain.

While most of us don’t use terms like exocortex, many of us may feel we are already pretty familiar with modern technology, since, as I have pointed out, we use it for most of our waking lives today. However, in the Present section we review what I refer to as the exocortex — the technologies that we use to manage our lives throughout the day. There we’ll try to show intended and unintended consequences, as well as the accumulated effects where possible, summarize the work done by other authors capturing their fears and hopes for this wave of modern technology, and generally pull together the relevant information about technology that we as a society may not be discussing or thinking about every day.

Today’s brain is better understood than ever before, due in large part to new technologies such as fMRI, PET, MEG, SPECT and other neuroimaging techniques, which have resulted in an explosion of new material and insights about the workings of the brain itself. The Present book attempts to capture the key working concepts of the brain and the state of knowledge, understanding and current debates in the field.

The brain is still housed in a human body (at least as of this writing) and it is the living, breathing human body that still largely influences much of the organ called the brain. And technology that affects the brain also has dramatic effects on the body, which in turn affect the brain. It’s a mistake to separate the body and the brain as individual units of study, at least in today’s modern world. But it is a tempting one to make, as it allows us to reduce the scope of inquiry into meaningful but manageable chunks. So in the Present book, we also pull the body back into the discussion, and thus make explicit the intrinsic linkages and limitations imposed on the modern Emergent Brain because of the body in which it lives.[15]

It is not enough to observe what is going on today, or to look back into the past to see how we got here; it’s critical to look at where we are going, and consider how we are going to get there. In the Future section, we look forward. This wave of technology around us fascinates many scholars and technology futurists, and there is no lack of leading thinkers making sense of the next 1–3 years. Mary Meeker, for example, is a bright light in bringing sense and insight into these trends. Every year her annual report is eagerly awaited, as it helps people make sense of what has happened in the last year, and what is likely to happen in the next. It’s very much a boiling frog problem — many people don’t have the perspective across the whole range of today’s technologies to see the trends coalescing in a number of domains, and Mary and others provide that perspective.

Fast forward 30 years. Ray Kurzweil and others have done a great job describing the Singularity: the point in time at which computers become smarter than humans. Coincidentally, he also projects that by this time, about 25 or 30 years from now, medical technology will have improved to the point where every year we will be able to extend the average human lifespan by an additional year. Make it to this point, and who knows how long you will live?

To me, somewhere between today’s emerging technology and the possibilities that Kurzweil describes is a chasm or frontier that is not very well thought out or addressed. In particular, the area that calls out for deeper exploration in this timeframe is the frontier between the human brain and technology. We attempt to do that in the Future book, by extrapolating the various driving forces affecting the exocortex, brain and body, and bringing them together into various future scenarios. Singularity scenarios, for example, are driven by advances in certain technologies and assumptions. What happens when you change some of those assumptions? And what are the alternative future scenarios to the Singularity? Is the future populated by thinking machines, humans augmented by technology such that they are barely recognizable as humans, and other “humans” whose lives are not constrained by the biology of the human body as we understand it today? Or does the future city of 20 years from now resemble modern downtown Tokyo, much like today, but with more and brighter lights and cool technology for us to use? So in the Future book we map out the possible scenarios, as we and others see them: making the case for them, yes, but also examining the critics who claim that these various scenarios will never, or should never, happen.

So What? That’s the question asked in the last book. Yes — it’s part précis or executive summary, condensing what we have learned in the first three sections. But it goes beyond the scope of each of the first three books to see the emergent trends. Then we compare and contrast the suggestions, prescriptions and strategies developed by the various thought leaders. In doing so, we can see the overlaps and the gaps. One of my primary criticisms of much of the existing literature is the lack of effective prescriptions or advice. The goal of the final book is to propose some new insights, in part because we have filled in some of the gaps in the extant thinking, but also in part because of what we’ve learned by looking across the whole sweep of human development and previous examples in human history. The optimist in me feels the hope of humanity lies in our ability to consider the future and, knowing what’s coming, having the fortitude to make the decisions that will improve the future world for our children. This fourth book attempts to contribute to that process in an achievable, practical call-to-action, in a format that is shorter and easier to read than the more comprehensive format of the first three books.

I think the reader will find that a balanced, objective assessment of this body of material informs the questions we set out to address, but also brings up new ones that need to be asked. How does an individual or a society appropriately and responsibly interact with the new technology around us today? Is it a minor individual choice, like when your mom told you to sit up (otherwise you’d face a lifetime of bad posture), or do you, with the tech around you today, have the opportunity to amplify (or hold back) the human condition?

Unlike our first bicycle ride or first car drive, there is no one to explain the rules of the road to us. Even if we think we understand the new rules of the road, we almost always understand things based upon where we’ve been, not where we are going, and thus, as Marshall McLuhan says, “We look at the present through a rear-view mirror. We march backwards into the future.”[16] So, the intent of this project is to instigate, promote and inform a discussion about the question at hand, “what is the cumulative effect of technology on the human brain?”, but also about the new ones that pop up: “what should we do, individually and collectively, about the future that we are rapidly going into?”

So, with such ambitious scope, where to start?

Let’s just grab an ordinary moment in an American life, and track back the changes to technology and the brain that were required to make it:

Imagine your name is Sally, and you are walking through Costco and out of the corner of your eye, you glimpse a new ski jacket. Winter is coming — you are going to be cold, you need a jacket. The bright teal blue might help show off the color of your eyes. You grab your phone, snap a picture, and post it to Facebook with the question “Should I get it?” You open up your RedLaser app[17] and scan the barcode to see if Costco’s price is a good one. The jacket doesn’t look that warm, so on your phone you tap on the Amazon app to check product reviews. 4 stars, because while stylish, there have been some quality control complaints from some people who bought it, claiming the zippers disintegrate after the first year. But Jennifer Aniston has been seen in it. And it’s only $74.69. You notice two texts, which you check really quickly (could be the kids, and important), and noticing you have Facebook notifications, you quickly check those out. Now, back to the coat. Hmmm…

If we walked up to you right then and asked why you chose this particular jacket out of the many thousands of products on display at Costco, you might say, “I don’t know, it just struck me.” If pressed, you might look at the jacket and say something like, “Out of the corner of my eye I noticed that it had a high collar, and my neck always gets cold, so that’s why it looked like a good choice for me.” Why did you post to Facebook and check Amazon? “Just checking for second opinions, to be sure it was a good choice.” This is a classic case of an educated, informed human mind making the most of today’s complicated technology to make rational, calculated decisions while weighing abstract concepts. No animal could do this, and no one disputes that point.

Malcolm Gladwell wrote about spontaneous decisions like this in his book Blink, and, as usual, Gladwell provides a very approachable and understandable way of reconsidering something complex that we all take for granted. In Blink, he shows that the unconscious brain, which he says occupies 90% of your brain, makes a lot of these decisions, and that the conscious brain is not only not involved, but that when asked to explain the decision, the brain will often manufacture some rational-sounding but completely fabricated explanation, without even knowing that it is a fabrication. He points out that the unconscious mind can take in vast amounts of information in literally the blink of an eye and can make decisions that far surpass the accuracy of decisions researched and studied by the conscious mind. He calls this process “thin-slicing.” This unconscious processing occurs, however, according to Gladwell, within a “locked vault,” using processes that the conscious mind can’t penetrate. It is not an infallible process, as it is subject to idiosyncrasies like personal bias and manipulation. So what we hold out to the world as a considered, rational thought process (and we honestly believe it is) is actually a set of processes and biases that we don’t even know are going on.

This model resonates with me, at least from my own personal experience. However, I don’t think it goes deep enough: it doesn’t give us the information we need to effectively control and adapt to these processes, and I don’t think it scales. In short, Gladwell’s explanation doesn’t help me understand decisions across the hundreds of situations we encounter every day in a way that informs how we might make them differently, or better. Believe it or not, to really understand what’s going on here, I think you have to go way back in time. Way, waay back.

Your LG phone didn’t just show up like magic under your Christmas tree… getting to this generation of phone required literally thousands of preceding phones and hundreds if not thousands of years of underlying technologies: you can’t get the smartphone without “dumb” cell phones, which you didn’t get until after you had the landline phone. That required the telegraph. That required electricity. Electricity required metal. Metal required metalsmiths and organized societies. Organized societies required irrigation. Irrigation required tools. Tools required learning. Learning required group activity. And group activity required that you and your offspring survive long enough for cooperation to evolve.

‘Wow, slow down!’ you might say. Aren’t we taking this a bit far? Perhaps. But consider this backdrop provided by psychologist and cognitive scientist Steven Pinker in his book How the Mind Works:

“First, selection operates over thousands of generations. For ninety-nine percent of human existence, people lived as foragers in small nomadic bands. …Our minds are designed to generate behavior that would have been adaptive, on average, in our ancestral environment…”[18]

Pinker only goes back as far as humanity, as we noted earlier, but his point is a good signpost into the past. So let’s follow it and go even farther back through history, to see if we can gain any new perspective on our modern use of technology. That walk will require us to be specific about the things we want to know, frank about what we think we know, and honest about what we really know (and don’t).

Why go back farther than humanity? Academics have been debating the intelligence of early humans ever since Darwin. The problem with the question, whether viewed through the lens of archeology, paleoanthropology, evolutionary biology or evolutionary psychology, is testability. How do we test the hypotheses or conclusions we have about the development of intelligence, and thus of technology? It’s not as if we can trot a 100,000-year-old ancestor down to the local college for a series of psychology tests. But if we go far enough back, we find we can test some hypotheses, including those regarding some of our most closely held assumptions.

So, let’s climb into the time machine, jump back into history, and see what we can learn about the future…

[1] Anshel, Jeffrey. “The Eyes and Visual System.” Visual Ergonomics (2005): 5.

2 Aitchison, Jean. Words in the Mind: An Introduction to the Mental Lexicon. John Wiley & Sons, 2012.

[2] http://www.nielsen.com/us/en/insights/reports/2014/an-era-of-growth-the-cross-platformreport.html

[3] http://techliberation.com/2010/01/31/are-you-an-internet-optimist-or-pessimist-thegreat-debate-over-technology%E2%80%99s-impact-on-society/

[4] Michael Sacasas, “Borg Complex: A Primer,” Frailest Thing, March 1, 2013, http://thefrailestthing.com/2013/03/01/borg-complex-a-primer

[5] “Why is it that the words that we write for ourselves are always so much better than the words we write for others?” Sean Connery, playing the part of William Forrester in Finding Forrester.

[6] http://www.merriam-webster.com/dictionary/evolution, accessed June 6, 2016.

[7] Diamond, Jared. Guns, Germs, and Steel.

[8] Byrne, 2016, p 3

[9] Linnaeus, Carolus. Systema naturae per regna tria naturae secundum classes, ordines, genera, species,… Vol. 1. impensis Georg Emanuel Beer, 1788. (Actually, what he wrote here was “Nosce te ipsum”, but I have it on good authority that means the same thing as “know thyself”.)

[10] And with further apologies to evolutionary psychologists, my current assessment is that the Computational Theory of the Mind is at best over-applied (I think Fodor’s LOTH is layered over something deeper, which has extremely fuzzy logic, not suited to syntax, and closer to Damasio’s emotions, which is tapped into by Sacks’s music, proving that it is not just “cheesecake for the brain”, but that something deeper is going on). I also rate poorly as a Neo-Darwinist (I give more weight to genetic drift, random chance, and Gould’s spandrels than a strict “optimal fit” Darwinist would, and my understanding of the inheritable effects potential of epigenetics and social genomics taints me as a possible Lamarckian). So, like any average graduate student (and Pinker himself), I choose to use from schools of thought (such as EP) those things that help me to understand the questions better, and part ways with them where I find more compelling explanations.

[11] Note, in this book we will use the popular/vernacular term “instinct” to mean “innate characteristic”. The latter connotes only a relative degree of biological potential based on inherited characteristics, not a deterministic viewpoint, which can be implied in the former. See Lorenz, Gould, Lehrman, Hebb and the rich debate they spawned for more on this. “Free will”, as it were, is viable, but becomes more interesting as we look at executive cognition as a function of multiple modules or forces, including “base” ‘instincts’, such as inherited drives to copulate, more “sophisticated” instincts such as our “instinct to learn”, and our ability to reason using the evidence we’ve accumulated over a lifetime, which is stored and retrieved in memory using heavily biased neurological mechanisms that we are only now beginning to understand.

[12] I use the term “brain” here quite broadly, and perhaps interchangeably with what others might call the human mind. Normally I’m a stickler for semantics, but for now we’ll stick with “brain”, and later discuss the implications of it, versus the term mind.

[13] Johnson, Steven. Mind Wide Open: Your Brain and the Neuroscience of Everyday Life. Simon and Schuster, 2004, p. 15.

[14] Later, I suspect you’ll find we have a case for Homo sapiens emergent, but for now I’ll hold off on that, as it seems too provocative. Later, as we become more familiar with the material, we’ll see if that’s warranted.

[15] For a great exposition of this, see Clark, Andy. “Where Brain, Body and World Collide.” Material Agency. Springer US, 2008, pp. 1–18.

[16] McLuhan, Marshall, and Quentin Fiore. The Medium Is the Massage: An Inventory of Effects. New York: Bantam, 1967, pp. 126–128.

[17] When out shopping, 86 percent of shoppers consult their smartphones, according to Shopatron’s Retailer eCommerce Study. Over half of them are looking to compare prices, but they also want some extras, like product reviews. (Accessed on March 19, 2015.)

[18] Pinker, Steven. How the Mind Works. New York: Norton, 1997, p. 42.