Story Theory: Confessions of a Literary Darwinist

 

By R. Salvador Reyes

First confession. I didn’t start out this way: believing that art is a Godless domain, a tactically-consumed, evolutionarily-wrought siren to the mind—just another victim hunted by our massive, pulverizing desire to devour and catalog every pattern in the universe that presents itself to our perpetually-ravished brains. I didn’t believe any of those things. Not in the beginning.

In the beginning, I just wanted to write. Why should I care how humans had come to love literature and art? I didn’t care. Until I asked the question. How had humans come to love literature and art? People have been asking this for centuries, and they’ve put forth a plethora of fascinating answers. But during the last couple of decades, theorists have started examining the question through the lens of evolution—and it’s beginning to look like a new future for literary studies is taking shape. Two of the most eloquent and compelling arguments for art’s evolutionary roots are the recently published On the Origin of Stories (Harvard University Press), by Brian Boyd, and The Art Instinct (Bloomsbury Press), by Denis Dutton. Both seem destined to become part of the foundation of the emerging field of Literary Darwinism—where literature is being examined from new viewpoints, like neuroscience and evolutionary psychology. It’s a field that’s beginning to gain a following in the halls of academia, but my own journey to Literary Darwinism took place outside those halls. I was simply a writer who wondered why readers sought out and consumed the literary objects I was trying to create. And while Boyd’s and Dutton’s books focus on the evolutionary answers to why humans began and continue to create art, as an artist I was more interested in how evolution has shaped audiences’ responses to art.

Because Literary Darwinism’s territory is still raw and untamed, taking the journey without a tour guide has given me the chance to carve my own path into its wilderness. Think of this as my travelogue: something to provide a view of this freshly-discovered flora and fauna from a writer’s perspective. The tale winds from the maple in my front yard to the banks of an ancient waterfall and the stage of a Vegas magic act, but it begins with the first time I stood in front of Pollock’s One: Number 31 at MoMA in New York. Something happened as I stared at it. Something I felt. Something I loved. I spent almost an hour in the Pollock room, losing myself in each of the paintings, soaking it all in. Then the same thing happened when I witnessed Clyfford Still’s giant canvases for the first time at the Hirshhorn in Washington, D.C. And after I left each museum, I had the same question, the one I couldn’t shake: what the hell just happened in there?

What The Hell Happened In There?

What happened was biology, chemistry, electricity—all the forces that combine to fire up those parts of the brain that consume, analyze and respond to patterns. Everything that the brain consumes is converted from raw data into patterns that it can store and use. Think of the act of seeing a tree. You’re not really seeing what’s there—the millions of differentiated cells that connect and combine to make the tree. You’re seeing the larger patterns they form, the way they reflect the light. And your eye takes in that pattern data, uses your brain to process and analyze it, then loads the image into your frontal lobe—dragging along all the cross-referenced data it already has stored. Voilà! You’ve seen a tree! Although it’s incredibly complex, our brain makes it feel simple: all of these patterns—some static, some in perpetual motion—crowding and whizzing about every inch of the universe. And there we are, caught in the middle of it, like a lost tourist stranded in the center of Times Square.

Our brain is built to navigate the chaotic traffic of patterns that make up the universe. And although it presents the world to us—to our conscious selves—in the “macro” view (the branches, leaves and trunk of the tree), there is a deep part of our brain that is aware of and able to understand or process the raw pattern data that comes in as reflected light, audio waves, and so on. There is a deep part of our brain that is a pattern junkie; it feeds on them, needs them, wants to find them everywhere. And it doesn’t do this without reward. Through experience we know that the macro versions of these patterns evoke emotional responses tied to cross-referenced, memory-stored data that is attached to the pattern. We are not surprised when the painting of a grand mountain-scape stirs something in us that is presumably born from associated memories and pre-conceived concepts. But the feelings aroused when standing in front of the abstract splashes of a Pollock do not seem to be drawn from such an obvious source. Yet, the feelings are there, and they’ve now been experienced by generations of viewers. Could it be that while our conscious brain is entranced but stupefied, that deep-in-the-brain pattern junkie is going gonzo with pure pattern pleasure, for once having nothing to send to your frontal lobe except the essential feeling of its pattern-processing experience, but no macro form to serve as its Trojan horse into the conscious mind?

~

There is mounting evidence and emerging theory supporting the idea that our brains are “attracted” to certain fundamental patterns in a way that is not primarily informed or mediated by our conscious mind. We are genetically attracted to near-symmetry in faces. We experience an innate pleasure in the particular mesh of sound created by rushing water or wind through leaves. We enjoy certain stories almost regardless of specific details as long as they follow particular plot algorithms. These types of patterns “light up” our brains, making our subconscious stand up and take notice. In essence, it’s our brain’s evolutionarily-developed way of saying “That looks like something I want” or “We need to find out where this story goes.”

When a pattern hits the jackpot, we feel it. Pollock and other successful abstract expressionists hit the pure-pattern jackpot. It’s likely that the harder you look at an abstract painting—the more you try to think it into something concrete—the less pleasure you tend to get from it. You’re pushing the seeming nonsense of the pure pattern back to the deep brain and asking for something you can use, but it’s got nothing else to send. And when we consider the difficulty in hitting this pure-pattern jackpot, we can better understand why Pollock’s splashes aren’t the same as the paint splotches my two-year-old daughter throws against the cabinet doors. Pollock had one of those brains where the pattern junkie lived a little too close to the main house. As with many artistic types, this had its obvious downsides, but the kind of hypomanic mind that often comes with an over-familiarity with one’s inner pattern junkie can also help those same artists see and construct elaborate patterns that hew powerfully close to the fundamental natural patterns that give us such deep pleasure. In other words, even if Pollock wasn’t great at consciously explaining what he was doing, there was a subconscious part of his brain that was clearly influencing how he assembled his great works.

~

The more I thought about it, the more sense it made. As dynamic organisms whose evolutionary advantage was predicated on our ability to adapt, the consumption and analysis of patterns would be crucial to our survival. Over time, these patterns made their way into great art and our pattern-hungry brains were happy to find them there. So much so that we said: bring us more! So we did, and we continue to do so. And in our ongoing search for new and more interesting ways to light up each other’s synapses, we’ve discovered surprising and unexpected techniques for appealing to these essential desires. One of the great periods of such discovery was the 20th-century wave of modernism that spawned Pollock and Still. It brought us not only a new set of pattern-pleasure experiences, but a new window into what we’re truly, deeply seeking in art.

My own experience with the paintings from that era provided my first major stepping stone toward Literary Darwinism. In many ways, I was already there. But two big questions still hung in my head. First, beyond a simple pleasure/attraction response and a method for processing “micro data” into usable macro patterns (essentially, things)—what other evolutionarily-advantageous ability might this pattern obsession provide? And, before I got too far off my original goal, what exactly did this all have to do with literature?

What’s This Pattern Junkie Really After?

Second confession. I believe this is all about making predictions. Think of it this way: that deep part of our brain—those regions that hide behind the veil of our subconscious and do the heavy lifting—is like the best personal assistant that the conscious brain could ever have. It takes notes on all the important stuff, keeps track of our moment-to-moment schedule, and maybe best of all, it tries to prepare everything the consciousness needs in those milliseconds before the consciousness even realizes it needs it. To accomplish this last task the deep brain has gotten into the business of predicting. In small ways, this is happening on a constant basis. You hear a familiar voice behind you, and before you’ve even turned around your deep brain has guessed at the identity, retrieved the face, and has it ready for display in the consciousness viewfinder. The vase at the edge of the table begins to tip. The moment your eye catches its first unsteady tilt, your deep brain is calculating the trajectory, the likely result, and predicting the risks associated with preventative action. After all that pattern data is quickly analyzed, an order is sent to the hands to place them exactly where the vase has been predicted to fall. And they’re there a whole half-second before the vase lands in their grasp, before you even know what you’re doing. Our pattern junkie clearly has another vice: prediction.

When you step back and take a wider look at this predictive capability, its larger evolutionary advantages become obvious. Some of the most significant milestones in the development of our species’ ability to adapt came from exploiting our capacity to predict. For starters, the ability to identify and predict the patterns that make up life-defining environmental cycles. Day to night. Spring to summer to fall to winter. We could plan our hunts and excursions according to the most suitable times of day, prepare our shelter and stores of food according to the seasons. And our cleverness didn’t stop there. In addition to this “environmental prediction” (all based on identifying patterns) we developed an uncanny knack for “creative prediction”—the ability to predict what results might come from our own present and future actions (or a future set of events that we can set in motion). Burying a seed in spring and cultivating it will bring something edible in the fall. Igniting a huge quantity of rocket fuel and sending the rocket up at exactly this trajectory and shedding much of the craft after launch…yadda, yadda, yadda…will land a vessel on the moon. You have to admit, we’ve gotten pretty good at predicting stuff.

~

This ever-increasing ability to predict likely grew hand-in-hand with our expertise in pattern identification, analysis and storage. As we discussed earlier in regard to Pollock and his handmade natural patterns, the brain seems to come with its own set of genetically-defined pattern attractions. We’re born with an attraction to fundamental symmetries and sounds that subconsciously guide us toward what is useful or desired for survival reasons. But those types of essential pattern attractions have limited predictive value. Consider that the pleasure response to the sound of rushing water can help lead you to a critical resource, but the ability to use creative prediction to maximize the resource by, say, building a dam requires pattern data that is not likely to have been genetically programmed. Our inborn pattern attractions couldn’t possibly provide enough data to support the massive amount of predictions and responses that take place during a human lifetime. That huge store of predictive pattern data must be accumulated through some form of experience or study. Without developing an expertise in consuming and managing these experienced or studied patterns, our predictive skills would probably be pretty unimpressive.

In this context, it makes sense that our brain’s evolution would lead to powerful, finely-tuned internal mechanisms devoted to patterns and prediction. And it makes sense that lots of brain rewards—like dopamine rushes—would be given out when those mechanisms are engaged in certain useful ways. When you think about the kinds of patterns that are useful for prediction—patterns that are defined by a certain string of actions and reactions that occur within a specific set of conditions—it is easy to see that these types of patterns are, in essence, stories. Most predictive patterns are ultimately a type of narrative. Think again about how we just defined a pattern that’s useful for prediction: a certain string of actions and reactions that occur within a specific set of conditions. Aren’t those also descriptions of plot and setting? When we step back and look at how we experience our world, aren’t we always trying to turn the data from what we study and experience into a narrative pattern that we can make some sense of—and which, consequently, we might be able to make use of in the future? In fact, we’re so addicted to ferreting out a useful pattern from consumed data that we’ll often see narratives when they aren’t really there: conspiracies, astrology, the man who ends up on “Dateline” after being wrongly accused of killing a wife who actually slipped in the tub. Randomness, events that seem to have happened without just, plausible cause or that didn’t lead to a logical, believable result—this is data that our brain has no use for. Much better for us to find a narrative in the nonsense, because a narrative is what we seek, what brings us that pattern pleasure, what satisfies our need to turn the chaos at the center of our Times Square-ish universe into something we can contemplate and navigate. Experiencing, identifying, analyzing, and cataloging narrative patterns in order to make use of them later as predictive tools is one of our deepest human desires. We don’t just love stories—we need them.
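For the programmers around the campfire, here is one way to picture that claim: a toy data structure, entirely my own invention, that treats a predictive pattern exactly as defined above, a string of actions and reactions (plot) occurring within a specific set of conditions (setting), pointing toward a predicted outcome. A minimal sketch in Python; the field names are assumptions made for illustration, nothing more.

```python
from dataclasses import dataclass

# A toy model of a predictive pattern as the essay defines it: actions
# and reactions (the plot) within a set of conditions (the setting),
# yielding a prediction. Illustrative structure only, not science.
@dataclass
class NarrativePattern:
    setting: str             # the specific set of conditions
    actions: list[str]       # the string of actions and reactions (the plot)
    predicted_outcome: str   # what the pattern lets us predict

# The "creative prediction" example from earlier in the essay:
seed_story = NarrativePattern(
    setting="fertile ground in spring",
    actions=["bury a seed", "cultivate it through the seasons"],
    predicted_outcome="something edible in the fall",
)
```

The point of the sketch is only that plot and setting are load-bearing fields: strip either away and the pattern loses its predictive use.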

A Little Thing I Call Story Theory

Third confession. I have my own pet name for this story-need, this desire to arrange the data from life into narratives: Story Theory. And although Story Theory is where all of the Darwinism we’ve been discussing finally attaches itself to the Literary, I’m going to take a quick detour into history. When I was in college, my sister (who was a couple years behind me at the University of Illinois) once asked me to help her study for a history exam. Although she was bright, attended all the classes, and did all the reading, she wasn’t a great history student. When I first quizzed her with questions from the practice exam, she did not do well. Although she’d highlighted all of the key items in her texts and notes, then tried to memorize them, she still had a hard time recalling and describing them. As I listened to her struggle through the answers, it struck me that she was simply thinking of the events, dates and people as abstract facts. Each item was its own idea in her head, catalogued by topic or name. So I told her to try a different approach—to think of these events, dates, and people not as individual items to be memorized, but as elements in the historical story that they were part of. It was not groundbreaking advice; it’s the approach commonly taken by liberal arts textbook writers and professors. But my sister was more comfortable with math and science, so she was prone to see data as factual, as pieces of information. Once she began tying the data to a narrative arc, the answers came easily. When the narrative as a whole made sense to her, its component parts were not hard to recall. In this new narrative context the “facts” were no longer isolated ideas; rather, her brain was able to identify a pattern in which each element necessarily led to the next. As long as she was able to unfold the story in her mind, she could slide back and forth along its arc, scouring the narrative for the information that she needed.

This historical detour highlights Story Theory’s key principle: not only does narrative provide us with a pattern that can aid in future prediction, it first connects and arranges the data we’re consuming in order to give a comprehensible form and meaning to our experiences in the present. It’s another version of the way our eye and brain translate the tree’s raw pattern data into a macro tree. Our brain wants to turn everything into a story—the same way it wants to turn line, color, texture and light into objects that can be identified and managed by our consciousness. Our brain even shows the same compensatory tendencies when building a story as it does when building an object. It will take a shape it doesn’t fully recognize and try to translate it into a pattern it already knows: when you swear you saw a shark but it was really a big exotic fish, or the random, inexplicable, yet vaguely-related events sewn together in the shape of a familiar conspiracy. In addition, the brain will often start to run its prediction subprogram when the first pieces of raw data begin to stream in: preloading the face behind you just based on the voice, guessing where the story will go based on a few early key plot points. The deeper we dig, the more it appears that the brain uses narrative in the same way it uses fundamental tools like vision and hearing: to construct our perception of the world.

From this perspective, narrative no longer looks like an ancillary human intellectualizing tool, used primarily to help organize and communicate stories and events between humans. Instead, narrative looks like one of our brain’s core consciousness- and universe-building tools. It’s something that we were using to assemble our understanding of existence long before we were using it to assemble the plots of our novels. I like to think of Story Theory as a bridge between these two primary uses of narrative—navigating existence and navigating literature—a bridge built from a set of essential story principles that underpin all our encounters with narratives. Investigating how we respond to and apply narrative in our daily lives can help us to better understand our responses to literature. Conversely, we can use our understanding of narrative concepts in literature to provide insight into the ways in which narrative elements shape and affect our worldview. And by uniting the knowledge about narrative gleaned from both of these primary uses, we can create a more robust view of how this pattern junkie cum prediction addict cum story fiend in our brain is aiding in the assembly of our consciousness, helping to cobble together our tenuous sense of an orderly universe from a mishmash of present perceptions, genetic desires, and previously-consumed patterns. All with a big assist from the little tool that could: narrative.

Oh Narrative, How Do You Do What You Do When You Did What You Did To Me?

At this point, I had finally seen the light; I had converted to Literary Darwinism. It now seemed clear that the most useful way to begin analyzing literature was to understand how our response to narratives (and their manipulation) is governed by a system that has evolved primarily to serve survival purposes. My next question: beyond simply recording what’s consumed, what is our brain doing when it encounters these patterns that we perceive as narratives? At the most basic level, the brain’s main job in such an encounter is to decide what to do with the data. Does it require an immediate response? Should it simply be consumed and stored? Can it be ignored altogether? Consider that narrative patterns small and large are perpetually pouring into our brains—if all data were treated as equal, there would be almost no way to effectively manage and make use of the information. In order to decide what it is going to do with the pattern data, the brain must first analyze and prioritize the pattern, then choose an appropriate response. Even if it does not require an immediate response, the most useful data would likely be tagged with greater importance somehow—a way to keep it near the top of the data pile so it can easily be retrieved and checked against incoming patterns.

Imagine that a primitive man and his companion encounter a raging waterfall for the first time. They have spent weeks traversing in and out of the river without harm, so the companion does not fully perceive the danger when he steps into the rushing water near the falls. The other man watches as his companion is quickly swept to a violent death at the base of the falls. This is important stuff: IF YOU ENTER RUSHING WATER LEADING TO A RAGING WATERFALL, YOU WILL PROBABLY DIE. The data from this short-but-eventful narrative is recorded in the primitive man’s brain as a memory—one that the trauma has powerfully imprinted into his mind. In the near future, if the man hears the sound of a raging waterfall, the memory of the story (attached to feelings of fear and danger) will likely be pulled from its spot at the top of the data pile. Imagine that he has many more encounters with raging waterfalls after that, all without significant incident. The memories of those tiny narratives quickly fade, but whenever he hears a waterfall raging somewhere downstream, the data and emotion from his first, tragic encounter are yanked from the archives into his working file. This kind of imprinting occurs when certain emotional responses are produced in the brain during high-impact events. It’s the type of brain mechanism that causes syndromes like post-traumatic stress disorder. If we consider that the brain’s emotional response, and thus the strength of the imprinting, occurs in proportion to the “significance” of the event or events (the pattern data), then we can see how an overpowering imprint is created by the painful, terrifying narratives encountered in something like war. In these cases, the data is given far too high a priority, and any incoming pattern cue that is even slightly related to the super-imprinted data gives rise to the brain’s screams of danger. Thus, a backfiring car in a mall parking lot can send a recent combat veteran into a panic even though every other pattern cue clearly indicates a less intense response is appropriate.
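To make that “top of the data pile” metaphor concrete, here is a playful sketch of how such a priority-tagged memory store might behave: strictly a thought experiment in code, with made-up memories and scores, and no pretense of modeling actual neural machinery.

```python
import heapq

# A toy "data pile": memories tagged with a priority score and kept in a
# heap so the strongest imprints surface first when a cue comes in.
# The memories and scores below are invented for illustration only.
class DataPile:
    def __init__(self):
        self._heap = []  # (-priority, memory): heapq is a min-heap

    def imprint(self, memory, priority):
        heapq.heappush(self._heap, (-priority, memory))

    def recall(self, cue):
        # Check the cue against stored patterns, strongest imprints first.
        for neg_priority, memory in sorted(self._heap):
            if cue in memory:
                return memory
        return None

pile = DataPile()
pile.imprint("companion swept over a raging waterfall and killed", 0.95)
pile.imprint("pleasant, uneventful crossing of the river", 0.10)
print(pile.recall("waterfall"))  # the tragic imprint is yanked from the pile
```

In this cartoon, the PTSD case is simply an imprint whose priority was set so catastrophically high that nearly any cue retrieves it.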

Of course, most narratives that we encounter do not provide data of life or death importance. So our brain presumably has to use a nuanced method for analyzing and prioritizing most of the stories it encounters. Science has yet to uncover a flowchart in the brain that tells us exactly how this prioritizing is done, but by considering the matter from the perspective of survival needs, we can build our own set of analytical criteria that might make Story Theory more useful and applicable. The big questions to answer: What would define an incoming narrative pattern as high priority? What might the brain be gauging when determining a narrative pattern’s usefulness? Once I knew the questions, I felt compelled to push on in my journey to Literary Darwinism; although I had already arrived at the destination, I could hear the call of its backcountry beckoning. The secrets behind the power of story were hidden out there.

~

By exploring the way in which our brain dissects narratives, we might discover how they entrance and transform us. So I built something to aid in that exploration—a set of criteria that I believe the brain might use to analyze and prioritize narrative patterns. We’ll call the criteria the Narrative Prioritizer Test, since the name makes it sound like a comic book weapon prototype, and since it’s like a quick exam to judge an incoming narrative’s merit as a useful pattern. By extrapolating the evidence presented so far along our journey, it seems likely that the brain would apply this test (or something similar) to every narrative it encounters—the higher the pattern “scores,” the more intense the emotional response, the stronger the imprint, and thus, the higher its priority. Our Narrative Prioritizer Test covers four general categories in which the brain likely judges a narrative’s usefulness, but it is not intended as a hierarchy. (My belief is that the categories have fluctuating weight when the brain determines a narrative’s overall value.) When a brain is analyzing and prioritizing a narrative, this is what it might be seeking to measure (once we’ve toured all four categories, I’ll sketch a toy version of the whole test):

Importance

Once the brain has discovered or decided what a narrative is about—essentially, what’s at stake—it must judge the importance of those stakes. Going back to our primitive river-walking friend, his waterfall narrative was about a human companion dying; that gets a high score in importance and helps it achieve priority prestige. If, instead of falling in, his companion had simply tossed an apple core into the water, the importance of his companion’s toss would score extremely low—generating a negligible emotional response and leaving a weak imprint. Later recollections of the event may recall the beauty of the scene, but the apple-toss micro-narrative would not have been made a high priority memory and was possibly even ignored as it happened. When our brain measures the importance of a narrative’s content, it is essentially determining how “life-altering” the content might be. At one end of the scale there’s the life or death stuff. At the other end are the apple tosses, the purely inconsequential. Of course, individual values make importance judgments in the middle of that scale vary greatly among different audiences. People who highly value independence, for example, might respond more strongly to a narrative about power and control than people who aren’t particularly concerned with independence. In essence, for the independence-minded, the power and control narrative provides data about something that they perceive as more significantly life-altering. Even though the factors that can play into such a judgment are seemingly countless, the brain is ultimately forced to decide on the intrinsic value of each narrative’s content in order to file it in a hierarchical fashion, if it chooses to file it at all.

This is why political campaigns today are run the way that they are. The politicos are onto the value of narrative in shaping our worldview, and they know that in order to get their narratives consumed and stored, they need to make sure they score high on the importance meter. So they identify what is most important to the broadest swath of their constituency (aka, the lowest common denominator) and keep hammering home narratives that focus on those matters. It’s also why teaching literature and how to analyze it can be so crucial to ensuring that classic works aren’t under-appreciated by new audiences. Consider that the esoteric content of a period novel would likely receive a very low importance score from an average reader today. But when readers are able to recognize what the story is really about—i.e., choosing between a family’s love and a partner’s love—their brains are more likely to perceive the narrative’s value. In literary analysis terms, judging the importance of narrative content is like gauging one’s interest in a story’s themes. Wherever a story ends up on someone’s scale of importance helps determine how strongly they will respond to it and how strongly the data will be imprinted in their memory.

Relevance

While the question of importance focuses on the significance of the content itself, relevance is about how likely it is that you personally might be able to make use of that content. Imagine our primitive man again at the waterfall, but in this story, instead of a man, he witnesses a squirrel slip into the water and die in the falls. The life or death content still causes a small blip on the importance scale, insofar as the little story has more impact than it would if the squirrel had simply jumped in and out of the water without harm. But its life or death content is deemed useless by the man’s brain because it is not relevant. The squirrel’s experience does not seem to reflect what might happen to the man in a similar situation, and thus garners only a mild response, leading to a weak imprint. When the man comes upon another raging falls, he might not recall the squirrel tragedy at all. The impact of relevance comes into clearer focus if we imagine that instead of a squirrel, the man witnesses a large deer fall in and die. In this case, the deer’s comparable size makes the narrative more relevant and would likely give it a much greater chance of being strongly imprinted. Consider that it’s not likely that the man would care any more about the deer’s death than the squirrel’s, so its increased impact could not be attributed to greater intrinsic importance (in both cases, the narrative’s content is the rather insignificant “a random animal died in the falls”). Yet, it is easy to believe that the man would be more likely to remember the deer tragedy than the squirrel one. I’d argue that the most significant distinction between the two narratives is their relevance to the viewer, the likelihood that the narrative content could be of personal use to the judging brain.

This is why books and films are marketed by demographic. The more we are able to see ourselves in the shoes of the main character, the more relevant the narrative seems and the more impact it has. We’re drawn to stories about characters that we perceive to be like ourselves, especially those who achieve things we want to achieve but haven’t. They’re the kinds of narratives that our brain wants to devour and lock up in our save-for-later file.

Novelty

In the brain’s narrative beauty pageant, originality counts. The more unique a pattern is, the greater its potential usefulness. Keep in mind that our pattern junkie is a collector; he’s out to gather and hoard every different kind of pattern he can get his hands on. But like any maniacal hobbyist, he’s always looking for the pieces that he doesn’t already have in his collection. You can almost hear the pattern junkie’s weary response when he encounters a pattern that is well-stocked and readily available: Boooring. I already have a bunch of those. Let’s return again to our handy primitive man by the river. Sadly, his companion has once again fallen in and died. But it is not the last riverside tragedy he witnesses. He soon joins up with a particularly-witless band of humans, several of whom he watches die in a series of clumsy waterfall accidents. By the time he’s seen his 8th or 9th human die this same way, the narrative’s lack of novelty significantly drags down its priority score. Unless one of the later incidents scored particularly high in one of our other categories (higher importance because the victim was someone he loved, or higher relevance because the victim was behaving exactly as our primitive man usually does in the same situation), the first incident will still likely be the first memory recalled when he hears future waterfalls raging downstream.

We have the same response to novelty in the fictional narratives of books and film. Generally speaking, the most avid filmgoers and readers are the same ones who eventually seek out more and more complex stories, and become increasingly uninterested in the simple narrative structures of mainstream pop fare. Conversely, occasional or infrequent viewers and readers usually still find plenty of attractively-novel experiences contained in the simpler narratives—often declaring the other category of people snobs for not being able to appreciate the pleasures that they are clearly experiencing. So the next time you (and you know who you are) feel like mocking someone else’s pedestrian, bourgeois tastes in literature, remember that they’re not the one making the call—it’s really their pattern junkie back there maniacally trying to put together his own unique collection. If the junkie still wants to complete his special set of McDonald’s Hamburglar and Grimace drinking glasses (or read the whole Twilight series), there’s not much a consciousness can do about it.

Validity

If we think of this pattern processing as a narrative assembly line, we might view the validity portion of this test as quality control. The narrative-assembly deep-brain minions are pretty busy; they’ve got pattern data coming in almost faster than they can build the narratives. If they come across incomplete, unfamiliar or especially challenging pattern data, they don’t always have time to test out different narrative configurations, so they assemble the pattern in the quickest, most familiar way possible—sometimes creating a less-than-reliable narrative. In other cases, the data that the brain is working with is simply bad—misperceived and false or submitted by an unreliable source. In order to filter truly useful patterns from misleading ones, they must be analyzed for validity (in essence, believability) and categorized accordingly. As opposed to the other three portions of our test, which in part help determine the actual strength of the memory imprint, validity seems to determine how the memory will be categorized in terms of reliability.

We’ll leave our river-walker out of things this time (judging validity wasn’t his strong suit anyway; his primitive mind would believe almost anything). Instead, let’s visit a Vegas-style magic act. At the climax, the magician pulls the cloth from a glass case and what had just been a woman is now a tiger. The brief-but-dramatic narrative scores high on the other parts of our test, but validity…not so much. Our brain has plenty of reliable stored pattern data about women, tigers, glass cases, physics—so when this narrative was cross-checked against them for validity, it failed quality control. Despite its lack of validity, its scores on the rest of the test still allowed the events to generate a strong response and be imprinted, but in the end it did not receive the special reliability tag. Which means that if an audience member later encounters a woman entering a glass phone booth (when they still had those) they will not worry about her suddenly being replaced by a tiger. Without the reliability tag a story will usually find itself ghettoized—filed away in the pure novelty section, rarely called upon, and much more likely to fade away than a reliable narrative.

In addition to this reliability tag, the validity test seems to help determine the strength of our interest in a narrative. You hear a strange noise in your basement. When you go down to investigate, you notice what appears to be a dark shape moving in a far corner. For a moment your brain guesses that the shape was human and your first instinct says you’ve seen a ghost. Even if you don’t believe in such things, the experience might startle you or generate a quick chill. But if you truly don’t believe in such things and the phantom scenario fails the validity test, your interest in the narrative is minimal; a moment after your chill, you’ll likely discard the ghost narrative and toss the whole memory into the forget this bin. If, however, you’re prone to believe in such things, and the ghost narrative passes the validity test, you might remember and develop a deeper interest in it—possibly even sustaining that interest by further investigating the narrative, trying to find out why this house might be haunted, analyzing how your own actions might have invoked the ghostly presence. The same initial pattern data, two completely different responses to the narrative—all based on perceived validity.

In literature, validity is also strongly tied to interest in the narrative. When characters behave in ways we find implausible or situations do not play out in ways we judge believable, we tend to become less captivated by the story. Even in fantastical or obviously imaginary situations, a narrative must set up plausible cause and effect relationships that follow the rules established within that universe in order to sustain our interest. Consider how often we’ve heard science fiction fans explain that they lost interest in a narrative “once things started getting really ridiculous.” Much science fiction is by definition ridiculous, yet the best science fiction work somehow doesn’t feel that way—it’s playing by plausible rules, creating a pattern that still measures up as believable when checked against previously-stored pattern data, most of which has been gathered through non-fiction experiences. Even in the most fantastical fictional worlds, when someone begins to feel a narrative lacks validity, they’re likely to take off their Spock ears and get on with their life.
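And now that we’ve toured all four categories, here is the promised toy version of the test: a back-of-the-envelope scoring function in Python. The essay only claims that such weights exist and fluctuate; the particular weights, the 0-to-1 scale, and the reliability threshold below are entirely my own invented assumptions.

```python
# A purely illustrative sketch of the Narrative Prioritizer Test. The four
# categories come from the essay; the weights, scale, and threshold are
# invented assumptions, not findings.

def prioritize(importance, relevance, novelty, validity,
               weights=(0.35, 0.30, 0.20, 0.15)):
    """Score a narrative on four 0-to-1 categories; return its priority
    (roughly, imprint strength) and a reliability tag. The weights would
    fluctuate from brain to brain and moment to moment; these defaults
    are arbitrary."""
    w_imp, w_rel, w_nov, w_val = weights
    priority = (w_imp * importance + w_rel * relevance +
                w_nov * novelty + w_val * validity)
    # Validity mostly decides categorization, not raw imprint strength:
    # a vivid but implausible story gets filed under "pure novelty."
    reliable = validity >= 0.5
    return priority, reliable

# The waterfall tragedy: life-or-death, relevant, brand new, believable.
print(prioritize(importance=0.95, relevance=0.90, novelty=1.0, validity=1.0))
# The Vegas tiger trick: dramatic and novel, but it fails quality control.
print(prioritize(importance=0.60, relevance=0.40, novelty=0.90, validity=0.05))
```

Run it and the waterfall story scores high and earns the reliability tag, while the tiger trick scores respectably but gets filed away untagged, which is about how the two memories seem to behave in us.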

A Campfire Tale & A Final Confession

We’ll start with my final confession: so far, we’ve only covered a few acres of Literary Darwinism’s backcountry. But I can see we’re losing the daylight here, so we’ll have to explore the rest on a future expedition. For now, in lieu of such an expedition, we’ll conclude this journey with a little campfire tale—an element of Story Theory that is too important to ignore. It’s one of the biggest of the big game out there roaming the backcountry: empathy. Empathy is the key that unlocks the full power of narratives. From the most basic viewpoint, empathy is simply feeling what others feel in response to witnessing them feel it, and, in some cases, empathically experiencing a witnessed experience. (As we’ll explain later, the “witnessing” need not be done in person; it can also be done via written and spoken narratives.) We feel excitement when we see someone’s face light up. We can build doppelganger muscle memory of shooting free throws by watching someone shoot them repeatedly—studies have shown that even watching yourself shoot them in your mind can help improve actual performance. Other recent findings have suggested that our empathy subsystem is far more complex than originally thought. Researchers have even discovered special, highly-developed neurons (mirror neurons) that are devoted specifically to engaging in empathic responses. These empathic responses work like a high-speed wireless data exchange between humans. No need to be told how someone is feeling; just analyze their expression and your brain can actually make you feel how they feel—it’s like mind-reading. In addition, this mechanism can add a unique component to the analysis of incoming narrative patterns.

Let’s bring back our primitive friend one last time, the version who witnesses his companion’s death. Imagine that he is later traveling with another companion, one who has never encountered a waterfall. When they suddenly detect the sound of a raging waterfall downstream, our man’s face momentarily expresses fear and concern. When his friend sees this, he empathically feels the same things. Not only has there been an efficient, wordless communication of feelings, but that pattern-cueing sound has now—due to the emotions generated—been assigned a value in the companion’s mind that is actually based on our man’s experience. In some ways, empathy is like a powerful human-to-human networking device that allows us to efficiently share information, therefore helping each individual to benefit from the experiences of others and accumulate valuable data more rapidly.

Empathy is a complex tool, and one of its other tricks is that it seems to work even without the actual presence of another human. One of the reasons that written narratives are so powerful is that language seems to have a unique relationship with our subconscious. Recently, on the radio program Radiolab, a reporter described a group of deaf people who didn’t know sign language. Although they used a form of pantomime to communicate with each other, they essentially had no formal language. When one of them finally learned how to sign and was asked to describe his language-less experience, he couldn’t. He had no describable memory of the time, and could only explain it as “the dark time.” Using Story Theory, we might conclude that without language it was difficult for him to describe to himself and catalog his internal narratives, and without a word-based stored narrative he was unable to shape the data into a memory that could be translated through language. It seems that even our subconscious needs language to do much of its work; without language, it has no way to “talk to itself” and efficiently manipulate narratives. This suggests that language has a special status in the deep brain—words themselves play a crucial role in transforming data into perceived experience, consciousness. So when you’re reading a narrative, it can feel like you are experiencing it, likely because language plugs right into the part of our brain that generates the perception of experience. It’s as if a written narrative just steps into the deep brain and takes over, feeding us its own pattern data and routing the resulting experience and feelings directly to our consciousness. It’s the vicarious experience of reading a novel. And once you’re deep in the mind of the narrator, it’s essentially an empathic experience. This means that when our Narrative Prioritizer Test is run while reading a novel—if the vicarious, empathic experience is powerful enough—your brain is actually running the test as if you were the narrator. Suddenly, you’re crying over the death of some imaginary person’s child, remaining haunted by the feeling of loss even after you set the novel down. So although our own values and judgments often determine what literature we seek, if we can develop an empathic connection to a story’s narrator, even narratives that wouldn’t naturally appeal to us can still engage us and move us in powerful ways.

~

Our last question, then, is your junkie’s last question, the one he’s always asking as he nears the end of any narrative: is he going to get his big fix? In most cases, it seems the juiciest pattern-pleasure fix is usually reserved for that moment when the narrative is completed, when the pattern achieves some sort of resolution. It’s the siren song that can draw us through to a narrative’s end—the source of the power of mystery in narratives. It’s—well, it’s part of that backcountry that we’ll have to explore together another time, lest your junkie miss out on the fix he’s aching for here. So then, junkie, how’s this…

Literary Darwinism—and the little cabin we’ve built in it, Story Theory—shouldn’t be seen as ideas whose secret agenda is to displace all current literary theories or schools of analysis. They are, instead, frameworks for developing a better understanding of our myriad approaches to literature. It’s true that many elements of current theory may not hold up well when considered within a framework of evolutionarily-driven narrative impulses, but other elements may be understood more deeply and expanded upon. In the end, I didn’t take my journey in pursuit of debunking (or bunking) any particular schools of literary thought. I did it for the reason we hunt down any narrative: it was thrilling. As a writer, that thrill came from finding, in every nook and cranny along the way, evidence that our craft—the manipulation of words, narratives—was not merely an esoteric, narcissistic exercise. Rather, these tools that we wield have the power to reach into the heart of our consciousness and bring to life the forces that create and sustain our experience of existence, our very sense of being. And once we have devoured all those delicious patterns, they truly become a part of us; they take their place in helping shape how we will perceive and respond to all those experiences to come.

When I was about 7 or 8, my dad asked me what I wanted to be. He’s a physician, and at the time I assumed he was hoping for an answer like doctor or scientist. So I was ready to defend my choice, preparing a list of reasons in my head as I answered, “A writer.” My dad paused a moment, thought about it, then surprised me by providing a reason that trumped all of my own. “I think that’s a smart choice,” he said. “A job like mine, doctors, who knows? Maybe someday they’ll have machines and robots to do our work. But writers—we’ll always need people to tell us stories.” It took me 30 years, but I’m finally beginning to understand why.
