How our online life is affecting the human mind

Nicholas Carr’s book The Shallows examines the effect Internet technology is having on the human mind. In the digital age, we are overwhelmed with stimuli. Our computers, phones and digital tools allow us constant access to seemingly infinite information and give us a sense of connectivity. We are more socially focused and efficient than ever before––but these benefits come at a price. Carr worries that we are trading in valuable skills for a type of intelligence that is adapting users to their computers, instead of the other way around.

One of Carr’s primary comparisons in the book is between two types of intelligence. The older definition is associated with the era of print literature. Humans used to define intelligence as the extent to which a person had a “literary mind,” or a mind capable of sitting quietly and solving complex problems. After the Industrial Revolution, however, a new definition of intelligence started to take hold, one that privileged efficiency and multi-tasking over deep thinking. The system as a whole was seen as more important than the individual. With the invention of the Internet, this obsession with efficiency spiraled out of control. Our apps and tools are so easy to use that we are developing a ravenous appetite for more and more information, all consumed at breakneck speed. The sheer volume of data we are exposed to when we surf the web may be impressive, but our brains, Carr argues, are not equipped both to navigate the distractions inherent in the design of the Internet and to consolidate deep and meaningful new knowledge. Increased reliance on and skyrocketing use of the Internet have ushered users into an age of distraction and, as a result, impaired our ability to find a balance between the meditative thinking of the “literary mind” and the efficiency-centric learning style of the computer age.

Not only, Carr argues, is our definition of intelligence changing due to Internet use, but our brains are being rewired in a disturbing way. Carr emphasizes throughout the book—drawing on scientific studies and concepts from neuroscience—that the changes made to us by our use of the Internet are not simply changes in our thoughts, but anatomical alterations in the brain itself. The Internet provides such a feast of distraction that no energy is left for the parts of our brains responsible for complex thought and for developing subtle human emotions. The great warning presented by The Shallows is not only that the Internet is changing our brains but that it may be diminishing the very skills and traits that make us human.

The Net’s great paradox, Carr argues, is that it captures our focus only to split that focus in a thousand directions. We are returned again to a previous state––this time, the bottom-up distractedness of primal man. Carr points out that not all distraction is bad. Breaks in attention are necessary for the subconscious to solve problems, but only, as Dutch psychologist Ap Dijksterhuis points out, if we have first defined the problem. The Net is a blaring stream of stimuli with no single defined problem, and it stops deep and creative thinking in its tracks. The more time we spend doing Net-style thinking––skimming, hopping between links, and so forth––the weaker the neural pathways that support deep intellectual thinking become. These effects follow us offline and into real life.

For Carr, the most significant consequence of prolonged Internet use is a reversion to the distracted state of primal humanity. The constant stimuli of the Internet literally rewire our brains to continually seek the next thing, creating a feedback loop that turns our brains into skimming machines. We see, once again, a historical reversion. Any muscle for focus honed by the literary style of learning is being systematically weakened by the Internet’s structure.

Going deeper into the science behind why the Net makes it so difficult to concentrate, Carr references a study examining brain function in novice web surfers being taught how to use the Internet. In the study, which compared the novices to veteran Net users, the novice surfers developed the same amount of prefrontal cortex activity as the veterans after only five days of practice. By contrast, the brains of book readers showed significantly less activity in the prefrontal cortex. Activity in the prefrontal cortex, then, is the key to understanding why the Internet makes it hard to focus. Here, Carr sets up a scientific argument that the Internet is designed to distract us, and is succeeding on an anatomical level.

While Carr notes that extensive activity in this section of the brain can help keep the brains of elderly Net users sharp, there are downsides to the way the Net forces us to use the problem-solving part of our brain. The intense prefrontal cortex activity required to decide whether or not to click on a link or play a video redirects mental resources from more interpretive functions. We return, again, to a previous puzzle-solving state. In this way, present Net usage closely resembles the early, laborious reading of scriptura continua. In both cases, deep arguments and deep thinking are sacrificed as the majority of effort goes into decoding the information. In short, it is a mistake to think that more neurons firing is always better. The calmer brain activity seen in book readers is that of a deep-thinking human rather than a decoding machine.

The fact that book readers have less activity in the prefrontal cortex when reading does not mean they aren’t thinking. On the contrary, it means that the brain is freed up to do the deep meditation that only a human can do. Carr wants us to understand that using the Internet––with an interface requiring constant choices––creates a neural situation that limits our ability to think deeply.

Another scientific concept that helps explain how the Internet affects our learning is working memory. A particular type of short-term memory, working memory is what we are conscious of at any given moment. “The depth of our intelligence,” Carr explains, “hinges on our ability to transfer information from working memory to long-term memory.” The problem is that, unlike the vast holding tank of our long-term memory, working memory can hold only a few elements at once. As a result, trying to transfer information from working memory to long-term memory in the chaotic environment of the Internet is hard. To use Carr’s metaphor, it is like trying to transfer water blasting from a room full of faucets into a bathtub using just a thimble as your tool. The burden of incoming information, known as cognitive load, impedes our ability to distinguish important information from what is irrelevant, a problem many studies link to ADHD. Going into further depth about the difficulties presented by the interface of the Net, Carr explains that what is hindered when we are distracted is our ability to create new memories. If being distracted overloads our working memory and makes it hard to learn new information, and if our ability to learn new information is the measure of our intelligence, then we are led to the concerning question of what kind of intelligence, exactly, the Internet creates.

To further explain how Internet use increases cognitive load, Carr cites a study in which two groups of students were given Elizabeth Bowen’s short story “The Demon Lover.” One group had hyperlinks embedded in its version of the text and one group did not. The hyperlink group, because their prefrontal cortices were busy navigating decisions about whether to click, had significantly more trouble comprehending the story and reported being confused. The research suggested a correlation between disorientation––or cognitive overload––and the number of links on a page. Carr concludes that supplying information in multiple forms takes a serious toll on the human ability to retain information, comprehend ideas, and solve problems.

Carr offers this study as evidence for his hypothesis. Because readers were more confused by Bowen’s story when they read it with hyperlinks, he concludes that eliminating distractions leads to deeper comprehension. Carr is suggesting, between the lines, that all Net reading is a risky way to learn. If just being on the Net increases confusion, then a book is probably the more productive choice––but productivity is not the Net’s aim; its aim is to be used.

Next, Carr turns his attention to the new style of reading that takes place on the web. In short, he casts doubt on whether what we do on the Net is truly reading at all. It might be better described as scanning, or power-browsing. Carr notes that studies have shown screen-based reading to be non-linear, characterized by an F-shaped pattern in which the eyes skip around the screen, spotting keywords and pausing on graphics. Carr again points to an interesting reversal: we are evolving backwards from being literary cultivators of knowledge and have entered the age of informational hunter-gatherers. Once more Carr uses scientific studies to show that we are distracted by the Net’s interface. We don’t read deeply but instead scan, which is another concerning reversion to the state of primal humanity. Carr implies that technology is not always a force that brings straightforward progress. The Net enhances some skills and sends others back to prehistoric times.

Carr makes a point of telling readers that the Internet does have mental benefits. Video games increase visual focus, and the mental calisthenics demanded by Internet use could produce a small expansion in working memory capacity, an adaptation that would help us better juggle data. As jobs and social lives increasingly depend on the use of electronic media, it appears that the better we are at multitasking, the more valuable we become as employees and friends. The question, however, is whether optimizing our brains for multi-tasking is the type of intelligence we want. While Net use has led to increased visual-spatial skills and the ability to multi-task, our abilities to think deeply and read for extended periods are eroding. We are adapting our brains to function the way the Internet functions––as machines for decoding and sorting through the forum where all the knowledge is kept, rather than as singular intelligences that contain knowledge within themselves.

The Internet, being a machine for multi-tasking, has made us excellent multi-taskers. Carr writes this to show that he does not take issue with the argument that the Internet is enhancing certain mental abilities. What he does take issue with is Net users failing to ask whether we should be adapting our mental abilities to the functionality of a machine. Here we return to the question of what kind of intelligence the Net is fostering. Considering how the Net reshapes things in its own image, it makes sense that the intelligence of the Net user is an intelligence that serves the Net.

Digression. In “On the buoyancy of IQ scores,” Carr references a study done by James Flynn showing that IQ scores have been rising steadily since WWII. This so-called “Flynn effect” has been used to defend everything from television to the Internet. However, Carr points out that IQ scores have been going up for a long time, suggesting the change depends on societal factors rather than recent technologies. Verbal SAT scores, for example, have been steadily dropping. Flynn himself eventually concluded that the rise in IQ scores had to do with a change in the definition of intelligence. With the dawn of the technical age, Carr argues, aptitudes for scientific classification rather than for drawing new conclusions became the defining factors of smartness. We aren’t actually any smarter than our parents, he points out; we’re just measuring intelligence by increasingly tech-influenced standards. IQ scores may technically be rising, but if they are based on tech-influenced categories, then we are only testing for a very narrow definition of intelligence. What’s more, we should be wary of the fact that technology is so deeply influencing every aspect of our lives––even our IQ tests. The takeaway from this digression is that the definition of intelligence changes with our intellectual technology.

Chapter 8

Carr identifies Google’s “intellectual ethic”––or the privileging of efficiency––as the prevailing ethic of the Internet. In this segment, Carr explains that Google’s focus on efficiency and categorization may not be a purely righteous mission. The more we surf the web, the more links we click; and the more links we click, the more ads we see. Google’s intellectual ethic may be efficiency, but that is because efficiency pays off. We use the web more and more because it is increasingly user-friendly, and Google rakes in the profits.

Carr argues that competition between Google and other web publishers has encouraged user appetite for rapid and easily consumed bits of information. Because each company aims to be the fastest and most efficient in order to earn revenue, a cycle is created in which users come to expect an ever-increasing amount of information at lightning speed. In 1999, bloggers realized they had to post multiple times a day to keep traffic on the uptick. Soon after, RSS readers came onto the web as a way of sorting and “pushing,” or highlighting, news headlines. Most notable was the rise of social media sites like MySpace, Facebook, and Twitter––all dedicated to providing endless updates about what is happening, in the news and in users’ social groups, and providing these updates in as close to real time as possible.

Competition, Carr points out, has only amped up the focus on efficiency as the be-all and end-all of Internet companies. Users have a role to play in this process as well, of course. The more efficient a social media service is, the more we use it, creating an environment of constant one-upmanship. Users come to expect updates in real time, placing an unprecedented emphasis on speed as the marker of quality in a web publisher.

Google has also been playing the game as ferociously as ever. The company no longer judges a page’s importance solely by the links coming into it; instead, it monitors at least two hundred different “signals,” or indicators of importance, at all times. The signal given greatest priority of late is page “freshness.” Google checks popular sites every few seconds to prioritize how recently updated, and thus how relevant, a page is, the goal being to eventually fulfill the dream of a real-time, total index of the Internet.

Carr again emphasizes the importance of speed for a web publisher. Google is at the forefront because it checks for updates in as close to real time as possible. Its criteria for deciding which sites to prioritize in search results are worrisome because, again, agency is given to the search engine rather than the user.

Carr pauses to reinforce that Google’s seemingly ever-changing business model is in fact very simple: the more time we spend on the Internet, the more money Google makes. This is because, Carr explains, Google’s revenue model is based on complements, or, in business terms, two things that are consumed together. Everything you do on the Internet is, for Google, a complement. This is also why Google provides services like email. The more time users spend using Google’s free information services and staring at computer screens, the more money Google rakes in from ad revenue. YouTube, for example, is not profitable in itself, but Google bought the company because doing so allowed it to gather more user information. As Carr writes: “Google wants information to be free because, as the cost of information falls, we all spend more time looking at computer screens and the company’s profits go up.” It makes sense, then, that Google’s overarching goal is to digitize every conceivable sort of information, transfer it to the web, catalogue it in its search engine, and dispense it to users in small, easily digested bits with ads in tow. Carr returns to the important insight that Google is monetizing our logged Net time. Services provided for free are only free because they further enmesh us in the web, which gives Google profits in the long run. Carr emphasizes the monetization of efficiency to show how the proclaimed desire of Silicon Valley types to “make all information accessible” may in fact contain ulterior motives and carry unforeseen consequences.

The hunger to categorize all information can be seen in Google’s “moon shot,” or its aim to digitize all the books ever printed. Google’s book project caused controversy, however, because the company failed to pay authors for rights. The real significance of the project, Carr argues, is how Google measured the value of a book not as a work of art but as “another pile of data to be mined.” The Google library represents, for Carr, the irony of the digital age’s definition of efficiency. The technology of the book was more efficient than scriptura continua, freeing readers’ minds for deeper thinking. Google’s efficiency, however, frees the reader’s mind to consume an increasing amount of shallower, bite-sized content.

Google’s book project serves as an excellent microcosm for Carr to lay out what he believes to be the general intellectual ethic of the Internet. For Google and others, what one era saw as a work of art is simply a pile of mineable (and thus monetizable) data. While there are benefits to the book digitization project, Carr wants us to pick up on the trend toward categorization and compiling rather than thoughtful and meditative consumption.

To better explain Google’s role, Carr turns his attention to the difference between two philosophies of knowledge and enlightenment. Transcendentalism, as represented by Nathaniel Hawthorne and Ralph Waldo Emerson, proclaims that enlightenment is the result of introspection, solitude, and meditation. Transcendentalism was in conflict with the ethic of the Industrial Revolution, which placed a prevailing emphasis on efficiency. To put it another way, Transcendentalism opposed the idea that access to information, rather than contemplation, was the key to human development. A modern incarnation of the Industrial ethic opposed by the Transcendentalists can be found in Google. Carr uses the Transcendentalist attitude to go deeper into what is at risk when we privilege efficiency over everything else. Transcendentalism, for Carr, is another way to access the meditative and subjective type of learning that is the hallmark of the literary mind. Saying that Google is an embodiment of the Industrial ethic is, similarly, another angle from which to view the company’s devotion to efficiency.

Carr makes clear that his issue is not with the access to information the Internet provides but with the lack of balance between meditative and efficiency-based modes of learning. Forced to adapt to Internet speeds and live in perpetual motion, we no longer know how to strike a balance that incorporates quiet, calm learning. Though more information is available to us than ever before, we don’t have the Transcendentalist’s skills––the knack for reflective depth––to make use of it. Carr is careful to emphasize that he is not arguing for the elimination of all models of efficiency. What he calls for is a balance. The problem, however, is that the Net has already rid us of the skills necessary to strike that balance.

What this boils down to, for Carr, is a new definition of intelligence. It is telling that the prevalent metaphor today for brain function is a machine. If the brain is like a machine, then it makes sense to measure intelligence in terms of productivity––but this leads to a warped conception of the mind. A perfect example of this conception lies in the foundation of Google’s desire to create AI: “What’s disturbing about the company’s founders is not their boyish desire to create an amazingly cool machine that will be able to outthink its creators, but the pinched conception of the human mind that gives rise to such a desire.” Google and its executives hold fast to the Taylorist belief that intelligence is the result of a process that can be pinpointed and optimized just like the workings of a factory.

The type of intelligence promoted by Google is warped, Carr argues, because it measures itself by the signposts of a good machine: efficiency, productivity, and speed. Carr wants the reader to think about what it is, exactly, about computers that causes us to conflate human intelligence with machine intelligence––and whether we really asked for, or want, this new definition of a keen mind.

Chapter 9

Reading has been shown to improve rather than deaden memory. To illustrate this point, Carr writes that the Dutch humanist Erasmus advised students to memorize notable passages from their books. For Erasmus, memorization was not a mechanical process but a way to synthesize and internalize knowledge that spoke to the reader. Memorization fell out of fashion as technologies for storing knowledge multiplied, and in the age of the Internet, we have a seemingly endless external database. As NYT columnist David Brooks puts it, we have “outsourced” our memory, putting us in the strange position of having access to everything and knowing less than ever before.

In 1890, philosopher William James concluded there were two kinds of memories: primary memories, which we forget almost instantly, and secondary memories, which we can remember forever.

To investigate whether our memories have been affected by the Internet, Carr ventures to explain how the process of making memories works on a scientific level. He introduces the concept of primary and secondary memories to set the stage for explaining why we forget some things but remember others forever.

Studies on boxers who develop amnesia after blows to the head imply that even strong memories remain unstable after they are formed. Further research suggests that the brain requires a period of several hours to “fix” a memory and transfer it from short-term to long-term storage. The process is delicate, and any disruption can erase the memories forever. In fact, the storage of long-term memories, as shown by U Penn neurologist Louis Flexner, is biological, requiring the synthesis of new proteins, whereas the creation of short-term memories is not. Kandel’s continued research on the sea slug, in which he traced the neuronal signals, not only demonstrated that repetition of an action encourages the consolidation of a short-term memory into a long-term memory, but also cast light on Flexner’s discovery. Kandel found that the creation of long-term memories stimulated the growth of new synaptic terminals. In other words, the anatomy of the brain had to change in order to store the long-term memories, proving––as Kandel wrote in his 2006 memoir––that the anatomy of the brain is changed with learning. Carr returns to Eric Kandel and his sea slugs to emphasize, with a real-world example, how we retain knowledge. The takeaway here is that the process of memory-making is both delicate and biological. The process is delicate because the brain requires time to transform a primary memory into a secondary memory; if the brain is interrupted, the memory is gone forever. Most importantly, the process requires the creation of new synaptic terminals––meaning that memories are anatomically located. Memories require protein creation, evidence which directly supports Carr’s claim that the brain physically changes in response to stimuli.

Carr illuminates two other types of memories: implicit and explicit. Implicit memories are recalled automatically from the unconscious when we perform a practiced action like riding a bike. Explicit memories are recollections of facts and happenings in our past, experienced in conscious working memory. Carr points out that the memories we usually mean when we talk about our memories—in this book and in general—are the explicit ones. This segment serves as an introduction to the concept of working memory. The takeaway here is that working memory contains all the explicit facts and recollections in our conscious mind.

Carr notes that when storing explicit memories, or consolidating them into long-term memory, an ancient part of the brain called the hippocampus plays a pivotal role. In 1953, a man named Henry Molaison had part of his hippocampus removed to cure his epileptic seizures. Unfortunately, though his seizures stopped, Molaison was unable to remember many of his recent explicit memories and was no longer able to store new ones at all. His experience suggests that the hippocampus is the holding place for new memories. Once a memory is fully consolidated, it is sent to the cortex for secure storage––but the process can take years, explaining why so many of Molaison’s memories vanished. Molaison’s case serves to emphasize how delicate the process of memory consolidation really is. Quite simply, memory consolidation requires a “stay” of some length in a part of the brain called the hippocampus before the memory is transferred to the cortex for long-term storage. By showing how disruption of this process hindered Molaison’s ability to make new memories, Carr implies that other types of disruption could hinder our own processes of memory consolidation as well.

Carr’s in-depth explanation of human memory consolidation serves to highlight the problems with an analogy that compares human memory to computer memory. Human memory, unlike computer memory, is alive––it is a biological process. Carr quotes Kobi Rosenblum, a researcher who has studied memory consolidation extensively: “While an artificial brain absorbs information and immediately saves it in its memory, the human brain continues to process information long after it is received, and the quality of memories depends on how the information is processed.” This segment summarizes the above argument to show how inaccurate the comparison of computer memory to human memory really is. Carr has shown with scientific evidence that memory is a biological process, meaning that our memories are alive and change over time. The contrast with static computer memory is clear, and casts doubt on whether “outsourcing” memory to computers is really the wisest choice.

The idea of outsourcing memory, then, Carr suggests, is invalid because memories have a unique history that changes each time we recall them. If a memory is brought back into working memory, it turns back into a short-term memory and gains a new context. Biological memory is in a perpetual state of renewal, and memories change each time they are moved from one place in the brain to another. In contrast, computer memory is composed of fixed items that stay exactly the same no matter how many times you move them back and forth. Also unlike a computer, the human brain has no storage cap. Our cognitive powers aren’t constrained when we store new long-term memories, and we can, conceivably, keep storing new ones forever. The idea, then, that online databases free our brains for intelligent thought by outsourcing memory is flawed because the two types of memory function so differently. Once again, Carr emphasizes the differences between machine memory and human memory––but in this segment, he takes his argument further. The biological nature of memory undermines the claim that outsourcing memory to computers “frees up” the brain for more complex thought. As a result, we are forced to consider what benefits outsourcing memory to online databases and other Net locations really has, if it has benefits at all.

The Internet, in fact, places so much pressure on working memory that the consolidation of long-term memories is obstructed. Consolidation of long-term memory depends on our level of attentiveness. As Kandel writes: “For a memory to persist, the incoming information must be thoroughly and deeply processed.” If working memory is overloaded and we are unable to attend to the information, it is released in a matter of seconds––meaning the Internet, by overloading working memory, hinders the consolidation of long-term memory. Carr returns to the concept of working memory to explain that not only does the Net lack the capacity to do the work of human memory, it actually hinders the very delicate consolidation process we do have. The influx of stimuli means we don’t have enough time to bring information from working memory into long-term storage. This is, quite plainly, because we are too distracted to give any single item the attention it needs to make the jump.

Worse yet, due to neuroplasticity, the more we use the Internet the more we train the brain to process information quickly but without attentive focus. We become, in other words, very good at forgetting and worse and worse at locking information into our biological memories. This creates a feedback loop in which our trained-to-forget brains rely increasingly on the Net’s databases. Carr’s point is that the connections of the Internet are not remotely as rich and complex as our own synaptic connections. As Carr puts it: “When we outsource our memory to a machine, we also outsource a very important part of our intellect and even our identity.” What is at stake is the very nature of our identities. We risk becoming spread as thin as the Internet––becoming “pancake people”––without the complex internal architecture of personality taken as a given in the decades before we had such overwhelming access to knowledge. In this segment Carr starts from the micro-level problem of disrupted memory consolidation during Internet use and then shows us the big picture. The brain is plastic and learns from experience. As such, the more our memory consolidation is disrupted by lack of focus, the more our brains come to rely on the Net’s databases in place of our increasingly forgetful memories. Carr uses science to show how the fallacy that machine brains are like human brains affects our very identities. Increased reliance on Net databases not only accustoms our brains to being distracted, but also limits, in a disturbing way, the knowledge-incorporation process that is the foundation of identity development.

Digression. Carr addresses the issue of how he was able to write this book at all in the age of distraction. Carr writes that at the beginning he struggled immensely, able to write only in spurts and constantly distracted by the Net. In order to really get the work done, he made a drastic change and moved to an isolated house in the mountains of Colorado. There, Carr had only a slow DSL connection and no cell service. He canceled his social media accounts and kept his email program turned off for the majority of the day. Dismantling his life on the Net caused definite withdrawal pains, but eventually Carr felt his brain readjust to literary thinking. He began to calm down and regained the ability to focus on his work. In this digression, Carr outlines the lengths he had to go to in the age of distraction––moving to Colorado, dismantling his accounts––to focus sufficiently to write this very book. Though it is possible to go off the grid, Carr is careful to admit that most people do not have this luxury. Work and social life often demand constant attention to digital devices. Carr’s success in breaking away shows that however deep we’ve gone, the brain does have the ability to bounce back.
