This week, I’m talking about how we try to restrain bad behavior. On Thursday, I’ll share a roundup of your responses about Seeing Red and restraint.
I’ve never tried a VR game and I’m skeptical of plans for the metaverse. (Incidentally, VR headsets are much more likely to cause motion sickness and nausea for women than for men.) But for those who do make their way into these virtual worlds, there are new problems, including new forms of sexual harassment.
MIT Technology Review recently ran a story on efforts to curtail virtual groping in Facebook/Meta’s VR environments. The pitch for VR is the successful illusion of presence—when it works, that can make intrusive virtual contact feel more “real” and more threatening than slurs in a chatbox. Facebook/Meta had a plan, which didn’t work out so well:
Meta’s internal review of the incident found that the beta tester [the woman who was groped] should have used a tool called “Safe Zone” that’s part of a suite of safety features built into Horizon Worlds. Safe Zone is a protective bubble users can activate when feeling threatened. Within it, no one can touch them, talk to them, or interact in any way until they signal that they would like the Safe Zone lifted.
The Facebook employee quoted in the article emphasized the way Facebook is fulfilling its responsibility—users are introduced to Safe Zone in onboarding, and reminders about the feature turn up on posters within the virtual world.
It sounded a little like the instructions women receive to navigate the world: don’t leave your drink alone; walk with a friend; have your keys out before you reach your car. “You’re in control,” the Facebook posters say, implying that, if something bad happens, you may be the one responsible for messing up your precautions.
At the time of the MIT article’s publication, Safe Zone just gave users the chance to exit an encounter and block the person bothering them. In effect, the instigator could push the person they were harassing out of a space. A different virtual game, Quivr (in which you fight zombies with a bow and arrow), had a virtual gesture users could make to push another avatar away from them. In mid-March, Facebook added a new feature, Personal Boundary, which users could toggle on and off to keep other avatars at arm’s length.
It’s not that I want companies not to build these safety features. Each one can help limit harm. But I do think they reveal a real abdication of responsibility on the part of the companies.
Each tool is about creating a rule, not inculcating a virtue.
The companies are coming up with ways to make certain behaviors impossible or to let users escape bad behavior. But they aren’t asking what in their platform encourages people to behave badly in the first place. The sites act as though they are neutral—if people behave badly online, they are simply revealing who they already were. They might have behaved badly anywhere; it just happened to be on Facebook.
However, it’s clearer and clearer that the choices sites make can encourage or discourage virtuous behavior, and the sites know it. Facebook knows that Instagram is bad for young women. Twitter knows that its site makes sending a death threat feel casual and normal, when many of those users would probably never have mailed a letter expressing the same thoughts.
Their job isn’t just to stop doing harm, but to think a little about how to actively do good. Putting large numbers of strangers together without guardrails isn’t neutral and isn’t responsible.
The search for the perfect rule or set of safety settings does remind me of Christine Emba’s Rethinking Sex. As she told me during our conversation, the modern culture around sex is marked by a broken promise. Many of her interviewees had a sense that, if you find the right rules, sex can only be good, and you and a stranger will never have to know each other or reveal yourselves to each other in order to feel good about what you do with each other. The rules (“two enthusiastically consenting adults”) will keep you safe.
But there’s no end run around character formation, and no checklist of consent items that lets us get around the fact that we are interacting with another human being, not a preference menu.
Rules are the minimum, and a good rule can be a teacher, when we inquire into it. But safety can’t come from rules alone; it requires active work to build a culture that forms character rightly. Every site and culture is already shaping character; the question is just how deliberately and in what direction.
I have found that I can’t really look for strong community online. I can use Twitter for news and chatting with friends and friends-of-friends. I can use email and Facebook for coordinating local stuff like giveaways and park meetups. I can use Reddit to crowdsource car repair ideas. But I can’t join non-local Facebook groups geared at moms or Catholic women, for example, because I find that it quickly becomes unhealthy for me. Social media groups, in particular, often seem to encourage me and others to stake out really aggressive positions on decisions that we ourselves are insecure about.
Whew, this one hits kind of close to home for me today. For many years, I've been a regular participant on a particular religious subreddit. I've virtually always experienced it as a safe(r) place, even as a woman on the internet: there's a good core of regular users and a fairly strong moderation team, who together cultivate an environment where, by and large, crudeness and combativeness don't stick around for long. I think it's a product of the trust and friendship that can build over time among like-minded people with reasonably similar moral convictions. However, recently, another longtime user said something kind of gross to me, and it's had me feeling like there might be no safe places on the internet, after all. So, I don't really know my answer to the first question anymore.
On the second question, I have an on-again-off-again relationship with Twitter, because the algorithm tends to tempt me toward reactionary-type tweets that (though I might agree with their substance) are not moving me in a direction of holiness. Periodically I realize that the posts I'm seeing are bringing me more stress and anger than joy and life. Then I have to do a sort of mental reset, and re-curate my follows, so that I'm mainly seeing the good and true and life-affirming conversations that, for me, are a reason to stay on Twitter for now.