Content, conduct, and safety
Providers and regulators are now contending with the consequences of digitizing human behaviors into hyperscale social environments: the leading social media services that span continents and gather hundreds of millions, even billions, of users. Social media platforms, user-generated content services, and multiplayer video games are working to monitor conduct and moderate content, aiming to protect their brands, support positive experiences for their users, and establish clear responses when content or conduct causes harm.10
These services are used by billions of people across many cultures, normative spaces, and legal jurisdictions, yet they are still navigating which kinds of content and conduct require moderation. Depending on how it is created and distributed, content can undermine ownership rights, as with copyright violations. Depending on the views it expresses and the images it contains, content can be illegal under the law, such as hate speech, or offensive by the varying standards of different cultures. Conduct such as harassment or bullying can cause harm and undermine the safety of these experiences.11 In the United States, Section 230 of the Communications Decency Act generally shields online platforms from liability for third-party content, though many lawmakers are pushing for this immunity to be scaled back.12
In the European Union, the Digital Services Act (DSA) established a regulatory regime that requires platforms to manage the risks that illegal and harmful content and activity may pose to users and society.13 Aimed primarily at services that allow users to share content (including social media platforms), the DSA seeks to increase online safety, drive accountability, and create transparency on how content and behavior are managed on platforms.
While the EU has put a regulatory framework in place across its member states, there is limited federal law in the United States, where individual states are legislating piecemeal. A challenge in crafting social media regulation is the need not only to protect against harmful content but also to protect free speech. California's AB 587, for instance, requires social networks to post their content moderation policies and describe their processes for flagging and reporting problematic content such as hate speech, racism, extremism, dis- and misinformation, harassment, and political interference.14
There seems to be little global consensus on how to regulate human behaviors in social experiences. How such rules might play out in metaverse spaces could be even more fraught: codes of conduct are suggested, but enforceable rules remain scarce.15 Where are the lines of self-expression through avatars? What about virtual violence? How should copyright apply to avatars and digital clothing, or to deepfakes and generative AI content? In the absence of regulation, many of these issues are being tested and legal positions defined through lawsuits. For instance, fair use and the copyright status of generative AI works are being debated in courts across multiple jurisdictions.16 The EU aims to address the risks of AI-generated content in its Digital Services Act and draft AI Act, requiring services in scope to classify levels of risk and to label deepfake and inauthentic content, for example.17
These issues may become more acute in the presence of younger users. The UK's Age-appropriate Design Code (for privacy) and pending Online Safety Bill (for content), along with the EU's Better Internet for Kids (BIK+) strategy, seek to clarify how services should protect younger users, including recommendations for enabling "strong" privacy by default, turning off location tracking, and not prompting children for personal information.18 Providers are asked to assure the age of users, protect their data, and shield them from harmful and illegal content.19 The US state of Utah recently enacted requirements for social networks to secure parental consent before a child's account is created and to set curfews on child accounts, preventing access between 10:30 PM and 6:30 AM.20 In 2019, and with further amendments in 2021, China limited the amount of time minors can play video games, aiming to deter "internet addiction."21 In more immersive experiences, will younger users find it difficult to limit their time online? And whose responsibility is it to set those limits?
Critical considerations
When negative consequences are allowed to continue unchecked, regulators can be compelled toward strong responses that limit growth and innovation. As regulations come into effect, tech companies should proactively embed protections into their current policies and implementations and adopt leading practices that help support positive outcomes for users and society.
Protection by default, trust by design: Providers should enable protections by default, with straightforward, easy-to-manage user controls and policies for content and conduct.22 Restricting unsafe search results for younger users, blocking younger users from appearing in searches, disabling direct messaging from unknown accounts, and establishing new accounts as private until configured otherwise are steps that can help companies demonstrate compliance and a commitment to user safety. Some providers may also consider ways to selectively "mute" other avatars, keep avatars at a distance, spin off safer metaverse spaces designed for children or teens, and follow a minimal data collection policy for younger users.
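To make this concrete, the following is a minimal sketch, in Python, of what "protection by default" could look like at account creation. The AccountSettings fields, the under-18 threshold, and the provision_account helper are illustrative assumptions, not any specific platform's implementation.

```python
from dataclasses import dataclass

# Hypothetical default-safe settings applied when an account is created.
# Field names and the age threshold are illustrative assumptions.
@dataclass
class AccountSettings:
    private_profile: bool = True            # new accounts start private
    discoverable_in_search: bool = True     # may be disabled for minors
    safe_search: bool = True                # restrict unsafe search results
    dms_from_unknown_accounts: bool = False  # direct messages off by default
    location_tracking: bool = False
    data_collection: str = "minimal"

def provision_account(age: int) -> AccountSettings:
    """Apply stricter defaults for younger users; adults can relax them later."""
    settings = AccountSettings()
    if age < 18:  # illustrative threshold; jurisdictions differ
        settings.discoverable_in_search = False  # keep minors out of search results
    return settings

if __name__ == "__main__":
    print(provision_account(age=15))
    print(provision_account(age=34))
```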
Real-time content moderation: Moderating such enormous volumes of content and conduct is difficult and costly, and moderating metaverse experiences can be harder still.23 As potential harms become evident, regulators can impose greater punitive measures on service providers.24 Providers should pay attention to AI and large language models (LLMs), which may be better able to moderate at scale.25 Such tools may help mitigate harms, avoid legal challenges, and foster enjoyable experiences for the majority of users.
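As an illustration of how such tooling might be wired together, the following Python sketch shows a layered moderation pipeline: a cheap rule-based pre-filter, an LLM-backed classifier, and escalation to human review for low-confidence cases. The llm_classify callable, its labels, and the review threshold are hypothetical stand-ins for whatever model and policy a provider actually uses.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModerationResult:
    label: str    # e.g. "allow", "flag", "block" (illustrative labels)
    score: float  # model confidence in [0, 1]

def moderate(message: str,
             llm_classify: Callable[[str], ModerationResult],
             blocklist: set[str],
             review_threshold: float = 0.7) -> str:
    # Cheap rule-based pre-filter before spending a model call.
    if any(term in message.lower() for term in blocklist):
        return "block"

    result = llm_classify(message)
    if result.label == "block" and result.score >= review_threshold:
        return "block"
    if result.label == "flag" or result.score < review_threshold:
        return "human_review"  # low-confidence cases go to moderators
    return "allow"

if __name__ == "__main__":
    # Stub classifier standing in for a real model endpoint.
    stub = lambda text: ModerationResult(label="allow", score=0.95)
    print(moderate("hello world", stub, blocklist={"bannedterm"}))
```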
Risk analysis: Metaverse innovations may inadvertently create new ways for bad actors to engage in harmful conduct and exert influence. It is unclear how issues like bullying, harassment, stalking, grooming, and hate speech may manifest in metaverse environments. Companies should consider scenario modeling to identify new risks and mitigation strategies, then communicate them to users and regulators to build awareness.
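One lightweight way to structure such scenario modeling is a simple likelihood-and-impact scoring exercise, sketched below in Python. The scenarios, scales, and mitigations shown are illustrative assumptions rather than a prescribed methodology.

```python
from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    likelihood: int  # 1 (rare) to 5 (frequent) -- illustrative scale
    impact: int      # 1 (minor) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

# Hypothetical scenarios for a shared immersive space.
scenarios = [
    RiskScenario("Harassment via avatar proximity", 4, 4,
                 "personal-space bubbles, quick mute/block"),
    RiskScenario("Grooming through private voice channels", 2, 5,
                 "default-off DMs for minors, escalation to human review"),
    RiskScenario("Hate speech on user-generated assets", 3, 3,
                 "asset scanning before publication"),
]

# Rank scenarios so the highest-risk ones are mitigated (and disclosed) first.
for s in sorted(scenarios, key=lambda s: s.score, reverse=True):
    print(f"{s.score:2d}  {s.name}  ->  {s.mitigation}")
```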