
Mike Sugarbaker

Roleplaying and the brain: when you can't "just roleplay it out"

There’s a thing psychologists call the fundamental attribution error (FAE). You could summarize it (when married with its cousin, self-serving bias) as “I Messed Up For A Good Reason, You Messed Up Because You Just Suck.” Specifically, the reasons we give when we mess up tend to be external factors, rather than some internal quality we identify with, whereas the reasons we assume for other people’s mistakes or offenses are internal rather than external – inherent to who they are. We make this mistake in part because we have access to our own subjective experience but not other people’s; if we did have that access, it would tell us a lot about what’s really going on with them.

Now, when you play a roleplaying game, there’s another person: your character. To the degree that you aren’t just treating your character like a pawn, you have to do some thinking about the reasons for what they do, because you’re deciding what they do. But they don’t exist; they’re only in your head.

It’s okay; you can be just as wrong.

My theory, for which I have some backup, is that we don’t have nearly as much access to our roleplaying characters’ subjective experiences as we think we do. See, the external conditions that surface into our conscious awareness when we make decisions aren’t the only conditions that apply. There are dozens (this may be low by an order of magnitude or two) of involuntary chemical responses our brains have to things, especially stressful things. They are far enough outside of conscious control that you might as well call them external – they’re certainly external to the conscious volition of everyone but the very highly trained – and they are often the most salient conditions to the kinds of judgments we make when we make the FAE. I am talking especially about fear and embarrassment.

These chemical responses can turn into conscious feelings in unexpected, and unexpectedly changeable, ways. I’ve been seeing the story make the rounds lately of a 1974 study (the same year as the first publication of D&D!) in which men who had just walked across a famously frightening suspension bridge were asked a set of questions immediately afterwards by an attractive female interviewer, who offered them her phone number in case they had follow-up questions. A control group was given time to recover normal heart rate and such before being approached. Men in the control group were significantly less likely to call the number and ask for a date. The slight shift in context – hey, attractive woman! – took the neurological arousal of fear and put it to an entirely different conscious purpose. That’s just one example.

None of this would have any implications for roleplayers, if it weren’t for the way we often check in to our characters and imagine what they would do: by stepping into our characters’ heads, and trying to see through their eyes, at least metaphorically. Now, you do find the occasional “immersive” roleplayer who claims to have a trance-like ability to feel what their characters feel. But these claims are unverifiable, and the idea that a master immersor’s brain chemistry would match a natural response with the necessary kind and depth of neurochemical accuracy strikes me as an extraordinary claim requiring extraordinary proof. At any rate, most of us keep a little more mental distance from our characters while we play.

Suppose you’re “roleplaying out” an in-character debate, in one of those free-form roleplaying sessions where vague, grandly scoped political debates take forever and never resolve. Apply our jokey summary this way: “I caved in the argument because my feelings ganged up on me. You caved in the argument because you just suck.” Except the “you” is your own character, right? They don’t suck! They’re pretty awesome, in fact! So why would they cave in? And so the argument goes on.

Well, actually, they’d cave for the same reason you would: involuntary emotional responses. Maybe not under the exact same terms or at the exact same time, but they won’t be free from those forces, unless their neurology is substantially non-human.

If you want to make realistic decisions on your character’s behalf when your character is in one of many kinds of stressful situations, you must either apply some kind of external constraint (e.g. system), or step away mentally from imagining your character’s conscious volition (that is, think more authorially).

Already, though, I should remind you that the first word in that there commandment is “if.” Realism isn’t the one true yardstick by which story, play, or our weird story/play amalgam called roleplaying must be judged. But (and here we go back to doing aesthetic theory), I think roleplaying needs more realism in this particular neurological arena, for two reasons.

The first is that content wherein the heroes never feel self-doubt, fear, or a single moment’s weakness is trite. It’s fine for kids and, I hasten to add, for games in which you aren’t there for the content – that is, in which the main point is clearly the gamey business of bashing monsters and thinking tactically. (However, the recent history of indie video games shows us that tactical, traditionally heroic gameplay need not conflict with other modes of gameplay that question or even undercut it.) But my primary interest is in those games that value story more highly.

The second is that a lack of realistic judgment about our characters’ mental states contributes to many of the classic social problems in gaming that split up groups, drive gamers out, and keep new participants from coming in. I don’t just mean the two-dimensional stories and interminable arguments; I mean things like the rampant sociopathy on characters’ part that tends to creep into many games, due to players’ inability to feel involuntary shame on behalf of their made-up characters. (The other side of that coin is the ineffectiveness of peer pressure on a fictional character; RPG lore is also full of tales of the one player who insisted on making their character follow a rigid moral code, screwing up the other players’ fun. In reality, a holy knight who went around adventuring with a bunch of miscreants would find themselves acquiescing to all but the most horrible crimes in pretty short order.) Basically, besides all the other problems with saying “but that’s what my guy would do,” when someone says those words, you should take a hard look at whether it really is what their guy would do.

There’s no limit to the number of ways system could possibly deal with the problem. One is to break the one-to-one relationship of player decisions to character decisions, by allowing more than one real brain a shot at driving the fictional brain. If the incentives are lined up properly, this could do the trick, but I don’t know of a good specific example. Another possibility is to enforce a fiction change that lines up with the desired player experience – a recent conversation between Vincent Baker and E6 author Ryan Stoughton speculates on a game in which players’ characters are recast as robots who have certain rigid programs that take over for their free will. I’m unconvinced that particular game would put the players’ felt experience in the exact right place, but it’s an elegant attempt.

The most historically popular option, though, is so-called “social mechanics,” meant to handle things like mental stress and manipulation. Social mechanics have a reputation amongst RPGers for not doing these things all that well (often because they’re modeled closely on the main historical lineage of RPG mechanics, those derived from wargaming). The last ten or so years of design have produced systems that do the job better, as well as systems that dodge the question entirely by operating on a much less character-viewpoint-identified level – asking players to think from time to time like authors or directors, as well as like their characters. Entrenched roleplayers are famously resistant to either approach, often advocating instead that we “just roleplay it out.”

One thing’s for sure, though: these players are not wrong when they call social mechanics “mind control.” They’re just wrong that their own minds aren’t being involuntarily controlled all the time.

But if you don’t like the two choices in my boldfaced rule above, there’s actually a totally viable third one: simply accepting that some of the character decisions in your game are going to cause problems, and that it isn’t such a big deal to go on with your game knowing the problems are there. Being wrong isn’t the worst thing that can happen to you, and if your game is fun overall, then you should enjoy it. Accounting for taste is the job of aesthetic theory, but that doesn’t mean it’s, you know, possible.