Tee-rific Moderation Update
Once upon a time (summer of 2022), in a faraway land (the roomiverse), and long before Avatar Studio, we launched the Shirt Customizer. It was the first time players could customize clothing, and the creativity the community showed confirmed that when we give you the right tools, you make all kinds of cool stuff - and a lot of it. We’ve seen millions and millions of shirts created and sold.
So why bring this up now, two years after the feature launched? Well, we’ve got a new feature we’re rolling out for custom shirt publishing, and we’re long overdue for an explainer on how custom shirt moderation has been working behind the scenes. We also want to walk through how we’re applying what we’ve learned from Custom Shirts to our vision for Avatar Studio - so you can create and wear avatar items to be your favorite version of you, while keeping things fun and welcoming for everyone!
We’ve seen so much creativity come out in custom shirts - some really fun examples! - and it really got us thinking about the potential for UGC avatar items.
We also saw some folks who wanted to push the boundaries and be a little provocative (you know who you are!). A few people go too far, though, and we saw some customization that just wasn’t appropriate. We’re pretty sure you all know this already, but to be clear: if it breaks the Creator Code of Conduct (CCoC), we won’t allow it.
We never want a small subset of rule breakers to spoil the play or creation experience for the majority of players, so we put some great tools in place that detect inappropriate shirts within seconds of publishing. The system works by scanning shirts for things that would be a no-go, like nudity, hate, or drug content. It took us a while to get it fine-tuned, raising and lowering thresholds for different categories and types of no-go content so that rule breakers had less chance of slipping through the cracks. And that system is now doing a pretty good job of making sure all your excellent creations get published ASAP and the handful of bad ’uns don’t.
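For the curious, the heart of a system like this is per-category scores checked against individually tuned thresholds. Here’s a minimal sketch - the categories, scores, and threshold values are invented for illustration, not our real configuration:

```python
# A simplified sketch of threshold-based scanning. Category names and
# threshold values here are invented, not Rec Room's actual config.
CATEGORY_THRESHOLDS = {
    "nudity": 0.70,  # each category gets its own tuned threshold
    "hate":   0.55,  # a lower value means stricter on that category
    "drugs":  0.80,
}

def review_shirt(scores: dict[str, float]) -> str:
    """Decide publish vs. takedown from per-category model scores (0 to 1)."""
    for category, threshold in CATEGORY_THRESHOLDS.items():
        if scores.get(category, 0.0) >= threshold:
            return f"takedown ({category})"
    return "publish"

print(review_shirt({"nudity": 0.03, "hate": 0.08}))  # publish
print(review_shirt({"hate": 0.91}))                  # takedown (hate)
```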
If your shirt is detected as violating our CCoC, we take down the shirt so it doesn’t make it to the store. We operate on the premise of no harm, no foul - so creators get a warning with no ban because we want you to be able to continue creating great stuff. If our creation tools are repeatedly abused though, we may remove your shirt creation privileges - so consider this fair warning.
When we were doing that fine-tuning and lowered the thresholds for some of the Creator Code of Conduct categories, it got us thinking: what if the AI gets it wrong? How do we match the automated rules with reality? That’s harder than it sounds, because so many of these rules are about context and taste. Technically, if the AI does get it wrong, the risk to the creator is small - we just take down the shirt - but that wasn’t good enough for us. So we’ve built a new appeal system for shirts flagged by our AI. If you think the AI got it wrong and your shirt doesn’t violate the CCoC, you can appeal; we’ll review it and reinstate it if it was a false flag. These appeals will even be used to fine-tune our systems and keep making them more accurate, so please use (but don’t abuse) this new function!
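In the abstract, the appeal loop looks something like this - a sketch with hypothetical names, not our actual pipeline:

```python
from dataclasses import dataclass, field

# Rough shape of the appeal loop - names here are hypothetical,
# not Rec Room's actual pipeline.
@dataclass
class Shirt:
    title: str
    live: bool = False                      # taken down when the AI flags it
    appeal_outcomes: list[str] = field(default_factory=list)

def handle_appeal(shirt: Shirt, human_verdict: str) -> None:
    """A human reviewer re-checks an AI takedown."""
    if human_verdict == "false_flag":
        shirt.live = True                   # reinstated: back to the store
    shirt.appeal_outcomes.append(human_verdict)  # feeds future fine-tuning

tee = Shirt("Totally Innocent Pumpkin Tee")
handle_appeal(tee, "false_flag")
print(tee.live)  # True - reinstated after a successful appeal
```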
What happens when the AI errs the other way and something slips through that shouldn’t? As always, you can report shirts that breach the CCoC, and we’ll review and take action on those too.
We’ve learned a lot of lessons enabling everyone to create the shirts they’ve dreamed of, and now those lessons are key to the development of our moderation systems for Avatar Studio. As you’ve seen, we’re taking it slow: building out and fine-tuning our automated scanning and appeal systems to match our CCoC, and planning extra protections for avatar items that appear in high-profile areas like our featured clothing. Keep an eye out for a future update that shares more detail on how we’re integrating trust and safety into the fundamentals of Avatar Studio AND creating tools with the potential for new clothing and accessory styles - Pumpkin Head and Sigma Headband, anyone?
New Direct Message Settings
Control who messages you
Our mission at Rec Room is to create a fun and welcoming place for people from all walks of life. That’s an easy thing to say and a much harder thing to do, but it’s one of the most important things we work on because the magic of Rec Room is hanging out with friends - old and new.
And y’know, we think our community is pretty darn special. We’re very proud to have you and all of the fun and kind and kooky Rec Roomers in our community. But just because we think you’re all great doesn’t mean you have to agree with us. YOU should have control over who you engage with and how you engage with them, just like you choose which rooms you want to hang out in and who to hang out with. What makes a great social experience is that it works for you.
So, today we’re launching new player chat settings that give you more control over who can directly message you. If you like a full DM inbox, you can keep your preference set to ‘all friends’. If you’re an empty-inbox type of person, you can turn DMs off completely. Or pick the middle ground and receive DMs from favorites only!
If you try to DM a player who has DMs disabled, you’ll see a message letting you know their DMs are turned off.
If you have DMs disabled, you’ll see a note on your DM threads reminding you that messages are switched off.
And if you want to change your settings at any time, you can find them on the Experience settings page of your Watch.
These settings apply to player-to-player DMs and group DMs - we’d heard from our popular creators and video partners that group chats had gotten a bit wild, so these settings also control who can add you to new group chats (friends, favorites, or no one)! Room chat and party chat stay the same, as social features of the room or party you’re hanging out in.
And of course, your existing player settings will continue to be respected - so blocked and banned players still won’t be able to message you.
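Put together, the check the game runs for each incoming DM (or group chat add) is pretty simple. Here’s a minimal sketch - the names and structure are simplified stand-ins, not the actual settings code:

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch only - simplified stand-ins, not Rec Room's real code.
class DMSetting(Enum):
    ALL_FRIENDS = "all friends"
    FAVORITES_ONLY = "favorites only"
    OFF = "off"

@dataclass
class Player:
    name: str
    dm_setting: DMSetting = DMSetting.ALL_FRIENDS
    friends: set[str] = field(default_factory=set)
    favorites: set[str] = field(default_factory=set)
    blocked: set[str] = field(default_factory=set)

def can_direct_message(sender: str, recipient: Player) -> bool:
    """Can `sender` DM `recipient` (or add them to a new group chat)?"""
    if sender in recipient.blocked:                # blocks and bans always win
        return False
    if recipient.dm_setting is DMSetting.OFF:      # empty-inbox club
        return False
    if recipient.dm_setting is DMSetting.FAVORITES_ONLY:
        return sender in recipient.favorites
    return sender in recipient.friends             # the 'all friends' default

coach = Player("Coach", dm_setting=DMSetting.FAVORITES_ONLY,
               friends={"Bouncer", "Gribbly"}, favorites={"Gribbly"})
print(can_direct_message("Gribbly", coach))  # True  - favorited friend
print(can_direct_message("Bouncer", coach))  # False - friend, but not a favorite
```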
We are always looking for ways we can improve your experience in Rec Room. We hope this is a useful change that puts you in control. Hit us up with ideas if you’ve got a burning request to make Rec Room even more fun and welcoming, and keep giving it your Rec Room Best!
The State of Voice Moderation
We've received a lot of questions about voice moderation, so we got the 411 from our trust and safety team on how voice moderation actually works in Rec Room.
We use voice moderation systems in all public rooms in real time. These are cutting-edge machine learning systems that detect hate, harassment, and other types of nasty speech based on Rec Room’s community standards.
What’s The Deal?
We’ve spent the last year running analyses and reworking our systems. Here’s the most important takeaway: the data confirms what we knew all along - the vast majority of players are fantastic community members. Most have never broken the Code of Conduct. Never. Not once. Angels.
Of those who have broken the rules, most only slip up once. Say it gets a bit tense - maybe you miss *that* paintball shot at the last second and a few “$%@!s” slip out - our voice moderation system is designed for exactly this. It will catch the one-off and give you a friendly warning, because we get it: we all get caught up in the moment from time to time.
What If It’s Not Just A Moment Then?
Players only lose mic privileges if that moment escalates. Temporarily revoking mic privileges is a very direct form of feedback - hey, that wasn’t cool! - but it’s also a chance to reflect: “How can I be excellent with other Rec Roomers when I get my mic back?”
If a player keeps using speech to harass or spread hate, that mic restriction will increase, and may eventually turn into a ban to give them a chance to cool off. We don’t like banning players. We do it only when we have no other way to protect the experience for the rest of our community.
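In code terms, the ladder works roughly like this - the tiers and their wording are invented for illustration, not our actual consequence schedule:

```python
# A hypothetical escalation ladder - the tiers are illustrative,
# not Rec Room's actual consequence schedule.
LADDER = [
    "friendly warning",
    "short mic mute",
    "longer mic mute",
    "short cool-off ban",
]

def consequence(confirmed_strikes: int) -> str:
    """Each confirmed repeat offense climbs one rung; the ladder tops out."""
    rung = min(confirmed_strikes, len(LADDER) - 1)
    return LADDER[rung]

print(consequence(0))  # friendly warning
print(consequence(9))  # short cool-off ban (the ladder tops out here)
```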
As we said, our players are top notch. So bans for voice chat toxicity are actually really rare - like 1 in 135,000 players rare. You’re literally more likely to roll all sixes on six dice! And those very rare bans are now also short - we don’t need to press (and we aren’t pressing) the 40-year ban button, or even the one-week ban button, for voice toxicity. Long or permanent bans are reserved for serious Code of Conduct violations.
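Don’t take our word for the dice comparison - the arithmetic is quick to check:

```python
# Checking the dice claim: six sixes on six dice vs. a 1-in-135,000 ban rate.
p_all_sixes = (1 / 6) ** 6        # = 1/46,656, about 0.0000214
p_voice_ban = 1 / 135_000         # about 0.0000074
print(p_all_sixes > p_voice_ban)  # True - the dice roll really is more likely
```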
But What If I Didn’t Actually Mess Up?!
Hang on though… you’ve all seen the social posts that say “I didn’t say anything and then I got banned”, or one of our favorites, “It was just a mic echo”. But it’s not true! If players are getting banned or muted, we’re pretty dang sure they said a few things they shouldn’t have. Is it 100% perfect? No. But it almost is…
This is where our internal data wizards come in: every detection system has a False Discovery Rate (FDR), i.e. a chance of being wrong. The FDR limits how confident we can be that we made the right call on any single flag - with a 5% FDR, one flag only ever gets you to 95% confidence, never 99%. Unless… you wait. If you see not 1, but 2 or 3, or even 25 potential rule-breaks in a given period, then you can be quite a bit more confident that someone really did cross the line.
So even with a 5% FDR, if a player is caught 10 times, we’re not just 95% confident - we’re more like 99.999999999% confident. So yeah… almost 100% perfect! We don’t want to interrupt games unnecessarily, so to reduce false positives we never ban or mute based on one fragment of speech - we wait until we hit that confidence level across multiple hits, whether that happens over a few sessions or in the moment.
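Here’s the back-of-the-envelope version of that math, with one simplifying assumption that’s ours for illustration (not necessarily how the real system models it): treat each flag as independent, with a flat 5% chance of being a false discovery.

```python
# Confidence after n flags, assuming each flag is an independent event
# with a 5% chance of being a false discovery (our simplification).
fdr = 0.05
for hits in (1, 3, 10):
    confidence = 1 - fdr ** hits  # chance that NOT every flag was a mistake
    print(f"{hits:>2} flags -> {confidence:.10%} confident")
#  1 flag  -> 95% confident
#  3 flags -> 99.9875% confident
# 10 flags -> ~99.99999999999% confident (0.05**10 is about 1e-13)
```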
This really is the best of both worlds - we catch the worst toxicity, we forgive the one-offs, and we stop those gnarly false positives. And you know what? It works! Over the last year, instances of toxic voice chat have fallen by around 70%!
What’s Next??
So, job done then? Not quite… we’re getting to a good place on the systems’ accuracy, but accuracy is only half the story. The other half is coverage - what we catch, and more importantly, what we don’t. Right now, if the system catches you, you slipped up more than a few times. But we know - and you know - that there are times we probably should have caught something and didn’t. Unfair bans are lose-lose, so we worked on accuracy first - now we’re turning our attention to broader coverage. More on that in future updates, but for now, thank you to all of you who continue to make Rec Room the great community we want it to be. And keep doing your Rec Room Best!
Ensuring “Be Excellent to Each Other”
You may have heard a bit about automatic voice moderation in the past, and we’re finally ready to start in-game trials today. So what’s going on here? Why are we doing this? And how will it affect you, as a player?
Rec Room has really grown over the past year - from an influx of new VR players last Christmas, to all our new Xbox friends, and big bumps from being featured by Apple - and we need to ensure the moderation system will keep up with this growth. I’m sure many of you have had an experience in Rec Room that was not the best - people yelling racial slurs, making crude sexual advances, or telling others to kill themselves. Obviously, this is not the experience we want our players to have! While we would love it if everyone was excellent to each other immediately, sometimes this just doesn’t happen. We do tend to have a certain number of trolls join over time, so we need to make sure these people don’t ruin things for everyone else.
Currently, almost all of our moderation actions are reviewed by staff - and this won’t be sustainable in the long term! It worked well when we were a small population and could get through reports relatively quickly. As the population grows, though, wait times for report review are getting longer, and bad actors can keep harming the community while their reports sit in the queue. This has been true even as we’ve added staff to the moderation team over the past year. Enter automation!
We’re partnering with Modulate.ai to trial ToxMod in the Rec Center this month, with plans to roll it out further as we refine our processes. So what does this look like for players? At first, you may not notice any big effects while we evaluate initial results from the partnership. We want to ensure that we’re targeting the behaviours we want to restrict: racism, homophobia and transphobia, sexually explicit language, and harassing behaviour. I know there have been some concerns about swearing, but we’ve generally been tolerant of casual, non-abusive swearing, and that won’t change - casual use will still be fine! There’s a big difference between “oh shit” and “you’re shit” in how it affects players around you.
After our initial trial is complete and we’re happy with our processes, you might start to notice that the person yelling racial slurs quickly gets their mic muted, or the person making explicit sexual statements to everyone around them gets sent back to their dorm. We’ll be experimenting over time to see which consequences are most effective in reducing long-term behaviours that affect other players - after all, we do want Rec Room to be a place for everyone! We do believe some players just may not understand our Code of Conduct at first and can become positive members of our community. But we recognize that some players may be coming into our community with poor intentions, to disrupt or ruin other people’s experiences, and we may need to remove those people to ensure the experience remains good for everyone else.
I know some players have privacy concerns around the new process. This is something we care a lot about! We want to assure you it has been top of mind while selecting a partner for this work and designing our processes. ToxMod acts like an extra in-game moderator for us: it reports things that violate our Code of Conduct, and only the data specific to that incident is used in reports. Chat data is not kept long-term - it’s deleted once its purpose is served - and we minimize the amount of data sent to Modulate for analysis as much as possible. We’ve also ensured that our partner has privacy policies similar to our own, so players can be comfortable knowing that their data is used only to improve our moderation system, and isn’t sold or traded to other parties.
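As a mental model, incident-scoped data handling has roughly this shape - purely illustrative, not Modulate’s or our actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

# The general shape of incident-scoped data handling - illustrative only,
# not ToxMod's or Rec Room's actual implementation.
@dataclass
class Incident:
    clip_id: str            # opaque reference, no player identity attached
    violation: str          # e.g. "harassment"
    audio: Optional[bytes]  # only the moments around the flagged speech

def file_report(incident: Incident) -> dict:
    """Build the moderation report, then drop the raw chat data."""
    report = {"clip_id": incident.clip_id, "violation": incident.violation}
    incident.audio = None   # deleted once its purpose is served
    return report
```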
Hopefully this answers some of the questions and common concerns we’ve heard from the community. If not, feel free to comment below and let us know your thoughts! We’re excited to take these next steps towards a long-term, scalable moderation system.
Automated Voice Moderation in Rec Room
In September, we announced that we’re working on an automated voice moderation system to help our moderation team keep Rec Room fun and welcoming. As we get closer to making that system live, we wanted to give a brief overview of what we’re looking to achieve and what protections are in place for Rec Room players.
Currently, we offer a number of in-game moderation options that help us identify bad actors and let players manage their own experience. However, we’re working towards a future where speech that harasses or demeans others is dealt with very rapidly in Rec Room, giving trolls less opportunity to offend large groups of people before they’re removed from the game.
Next week, we’re starting to test a system that will automatically flag speech that’s sexist, racist, discriminatory, or contains violent, harassing language. Rest assured, we’re targeting the worst of the worst behavior in Rec Room - you can still call your friend a butthead or yell “OH SHIT” when the red bats close in on you in Golden Trophy. Still, there’s speech that everyone can agree violates Rec Room’s Code of Conduct, and that’s what we’re working to remove.
As we introduce these new systems, we want you to know that privacy is being given the utmost consideration. We’ve worked to implement and integrate technologies that let us store the minimum amount of data, keep that data anonymized, and delete it as soon as its purpose is served. These systems will only be active in public rooms, so you can use private rooms to keep your conversations completely confidential.
Feel free to leave questions in the comments and we’ll try to answer them in the longer devblog coming next week!