Ensuring “Be Excellent to Each Other”

You may have heard a bit about automatic voice moderation in the past, and we’re finally ready to start in-game trials today. So what’s going on here? Why are we doing this? And how will it affect you, as a player? 

Rec Room has really grown over the past year - from an influx of new VR players last Christmas, to all our new Xbox friends, to big bumps from being featured by Apple - and we need to ensure our moderation system keeps up with this growth. I’m sure many of you have had an experience in Rec Room that was not the best - people yelling racial slurs, making crude sexual advances, or telling others to kill themselves. Obviously, this is not the experience we want our players to have! While we would love it if everyone were excellent to each other from the start, sometimes that just doesn’t happen. A certain number of trolls will always find their way in, and we need to make sure they don’t ruin things for everyone else.

Currently, almost all of our moderation actions are reviewed by staff. This worked well when our community was small and we could get through reports relatively quickly, but it won’t be sustainable in the long term: as the population grows, wait times for report reviews are getting longer, and bad actors can keep harming the community while their reports sit in the queue. This has been true even as we’ve added staff to the moderation team over the past year. Enter automation!

We’re partnering with Modulate.ai to trial ToxMod in the Rec Center this month, with plans to roll it out further as we refine our processes. So what does this look like for players? At first, you may not notice any big effects while we evaluate initial results from the partnership. We want to ensure that we’re targeting the behaviours we want to restrict: racism, homophobia and transphobia, sexually explicit language, and harassing behaviour. I know there have been some concerns about swearing, but we’ve generally been tolerant of casual, non-abusive swearing, and that isn’t changing - casual use will still be fine! There’s a big difference between “oh shit” and “you’re shit” in how it affects the players around you.

After our initial trial is complete and we’re happy with our processes, you might start to notice that the person yelling racial slurs quickly gets their mic muted, or the person making explicit sexual statements to everyone around them gets sent back to their dorm. We’ll be experimenting over time to see which consequences are most effective at reducing harmful behaviour in the long run - after all, we want Rec Room to be a place for everyone! We believe some players simply may not understand our Code of Conduct at first and can become positive members of our community. But we recognize that others arrive with poor intentions - to disrupt or ruin other people’s experiences - and we may need to remove those people to keep the experience good for everyone else.

I know some players have privacy concerns around the new process. This is something we care a lot about! We want to assure you it has been top of mind while selecting a partner for this work and designing our processes. ToxMod acts like an extra in-game moderator for us: it reports things that violate our Code of Conduct, and only the data specific to that incident is used in reports. Chat data is not kept long-term - it’s deleted once it has served its purpose - and we minimize the amount of data sent to Modulate for analysis as much as possible. We have also ensured that our partner has privacy policies similar to our own, so players can be comfortable knowing that their data is used only to improve our moderation system and isn’t sold or traded to other parties.

Hopefully this answers some of the questions and common concerns we’ve heard from the community. If not, feel free to comment below and let us know your thoughts! We’re excited to take these next steps towards a long-term, scalable moderation system.