Chat rooms can be highly toxic. They are a frequent haunt of trolls and spammers, and this is especially true for chat rooms connected to live video streams. But chat rooms are also one of the few places where viewers can discuss what they are seeing in real time with a group of people who are all seeing the same thing. Communities form around chat rooms, and they sometimes develop their own unique culture and norms of behavior.

I have been researching and designing a platform called DeepStream that allows people to curate live streaming videos by adding contextual information such as blog posts, news stories, related videos, and Twitter feeds. The act of curation can place previously decontextualized livestreams into a rich narrative that can inform and engage viewers and potentially challenge dominant narratives about the events being streamed. One of the goals of the platform is to offer people new ways to engage with world events through the ability to create open, participatory communities based on livestreams. But is chat an important piece of functionality for this platform? Will it help or hinder the formation of communities around some of the curated streams? Is there a way to implement chat so that it will tend to be more civil and less toxic?

This paper explores the problems and potential of chat. I will argue that chat is necessary but problematic, and will articulate a model of citizenship and community that could be built with the help of chat. I will survey existing chat moderation features on several platforms to analyze some of the technical solutions in use today, and consider the importance of these features in positively influencing the tone and tenor of chat rooms. Finally, I will propose new features that might encourage vibrant, participatory, and civil communities around chat rooms.
The Problems with Chat Rooms
So what specifically is the problem to address? There are several ways people undermine participatory and civil discussion in chat rooms. One is spamming: some people will create a long block of text and copy/paste it over and over into the chat room, flooding the chat with so much text that it effectively hides the legitimate conversations that may be going on. An analogous offline behavior might be standing in the middle of a cocktail party and yelling as loudly as possible, in the hope of being so loud that it is impossible for anyone else to have a conversation. Another problem is threats. Hostile behavior can have a chilling effect on the chat room, cause people to feel unsafe participating in the space, or cause people who were previously in conversation to make counter-threats. Continuing the cocktail party analogy, if someone loudly starts threatening to beat up the host, other conversation is very likely to stop while people consider how to respond, and some people may leave and never return. Finally, there are trolls. People may appear to be engaging in the conversation, but their real motive is to bait others into responding to outrageous claims, turning the conversation toward arguing over a point the troll may not even care about. Imagine someone at a cocktail party intentionally making loud, outrageous claims to rile up the entire room until everyone is shouting at him or her. When this happens the troll has accomplished their goal: to control the conversation, turn it toward something they have no real stake in, and make people angry.
There are also problems related to over-zealous attempts to moderate trolls, spammers, and hostile participants. An example comes from Reddit, which is not a chat room but does require an extremely high level of moderation, some of it done by automated programs that detect rule violations. A user recently wrote that he had been posting for three years before realizing he had been automatically shadow banned since his first post (a form of banning in which the user can continue posting comments, but no one else sees them) because his first two posts included links to the same domain, which violated a Reddit rule and triggered a moderation bot (ribbonlace 2015). In this case legitimate speech was silenced by an over-enthusiastic moderation policy. This illustrates the tension inherent in moderation between efficiently removing unwanted content and ensuring the space allows democratic participation, including dissenting or unpopular opinions (Leavitt and Peacock 2014).
The final problem with chat rooms is that moderators are hard to find. It takes time to build a relationship with someone that is trusting enough to grant him or her moderation privileges in your chat room. A livestreamer I interviewed described this as one of her biggest challenges. Without moderators, spam, trolling, and threats can render a chat room toxic.
If chat rooms can really be made more civil, why hasn’t this been done yet? Let me be clear: I do not think there is a technical fix to this problem. No combination of human and automatic moderation can guarantee that a chat room will never devolve into hostile insult-trading. Chat rooms are dynamic communities, often with frequently changing membership, and in active rooms messages can appear faster than they can be read. The ideas discussed below must always be tempered with the knowledge that there is no perfect system. But while I am not claiming to have the answer to the very difficult problem of “fixing” chat rooms, I am leveraging the idea that artifacts have politics (Winner 1980), and am therefore investigating a set of very specific questions about how to build an artifact that encourages certain behavior. While it is important to explore the implications of the various technical features common to chat rooms, like muting and banning, I am also proposing that we look at chat as a collection of social processes within a community. If the goal is to design a civic media platform that ideally enables the formation of communities that can make change using media, then what kind of communities do we want? Is chat important to them? And if so, what are the design choices and features of a chat room that might foster the type of social interaction that supports and encourages those kinds of communities?
Models of Community and Citizenship
The first question to address, then, is what kind of communities we are trying to encourage. There are two general qualities that I am designing for: participation and the ability to self-organize. Participation is one of the fundamental design principles of DeepStream. The platform attempts to fuse participatory remix culture with the amazing content that livestreamers are creating every day. Henry Jenkins notes that “participatory culture is emerging as the culture absorbs and responds to the explosion of new media technologies that make it possible for average consumers to archive, annotate, appropriate, and recirculate media content in powerful new ways” (Jenkins et al. 2006, 8). I hope DeepStream will be one of those media technologies, enabling people to “annotate” livestreams by adding relevant contextual material and then recirculate their remixed version.
Jenkins also states that “a growing body of scholarship suggests potential benefits of these forms of participatory culture, including opportunities for peer-to-peer learning, a changed attitude toward intellectual property, the diversification of cultural expression, the development of skills valued in the modern workplace, and a more empowered conception of citizenship” (ibid, 3). This empowered conception of the citizen connects to a model of participatory citizenship where a good citizen is someone who actively participates in civic affairs and social life, engages in collective, community-based efforts, plans and participates in organized community efforts, and develops relationships, common understandings, trust, and collective commitments (Westheimer and Kahne 2004). In short, an important part of the politics of DeepStream is the idea that users can practice participation by curating livestreams as a way to engage with issues that are important to them.
The second quality of community that I am suggesting is important is the ability to self-organize. If DeepStream were used to curate content about political or social justice issues, it would ideally allow the community of curators and viewers to organize themselves. Marshall Ganz and Kate Hilton describe organizers as people who “identify, recruit and develop leaders” (Ganz and Hilton 2010, 1). How could DeepStream enable the five organizing practices that Ganz and Hilton identify? The first practice is building a public narrative (ibid). It is exactly the lack of narrative in current livestreaming platforms that DeepStream tries to address. A skilled curator could pick very specific content using the DeepStream interface to convey a clear narrative about the event that is being streamed, which may challenge dominant narratives. The second practice is establishing relationships (ibid). Where and how would these relationships form on DeepStream? The chat room is a very likely candidate, and below I discuss the relative advantages of using chat or Twitter for this purpose. The third practice is building and empowering teams. I address this step in the recommendations section, where I will propose an idea that leverages chat room dynamics to help build teams. The fourth and fifth practices in Ganz and Hilton’s model are to form a strategy and take action. This is where DeepStream currently breaks down. I have not yet incorporated design ideas that encourage communities to move from participation to strategic planning and acting. It is possible that some communities may invent ways around this. If a particularly robust community wanted to assemble as many people as possible for a strategic action, they could create a context card through the DeepStream interface that included a call to action, advertising where and when the action was going to be and encouraging people to join.
Is Chat Friend or Foe?
If these are the kinds of communities that I hope to encourage with this platform, do chat rooms help or hinder their development? My initial thought was that without chat there was no practical way for any form of community to emerge around curated livestreams, because there was no other channel for group communication. If the members of a nascent group cannot communicate with one another, they are likely to remain passive consumers of media rather than active participants.
This realization led me to consider alternatives to chat, for example using Twitter as a proxy chat client. This was the strategy Meerkat used when it first launched, but I think it is revealing that Meerkat now allows users to stop pushing chat messages to Twitter and to communicate only within the app. I suspect the Meerkat team realized that people simply want to say different things in a chat room connected to a live video, often things they don’t want in their Twitter feed. There is also the problem of context: a question asked of the streamer usually makes sense to those watching the video, but the same question appearing in someone’s Twitter feed, without the context of the video, usually won’t. It may also be the case that people think of their online identities as silos, or at least want them to be; some people may not want the videos they watch connected to their Twitter identity, but still want to participate in chatting about those videos. Another consideration is that while DeepStream will include Twitter integration, and hashtags could spring up around some of the curated streams, people communicate more in chat than on Twitter, by some estimates sending 200-1600% more messages per hour (Harry 2012, 144). Additionally, if one is going to forge a new relationship with a stranger, and potentially build that relationship online through discussion of a shared experience, conducting the entire relationship-building process via something as public and searchable as Twitter may be less comfortable than doing so in a chat room that is neither indexed by search engines nor as public. All of these issues led me to the conclusion that the type of communication that happens on Twitter is different from the type of communication in a chat room, and that building relationships through co-watching may be more difficult on Twitter.
If Twitter hashtags can’t replace chat, then we need to try to determine whether chat rooms will help or hurt community formation. The answer depends on many factors of actual use, but I see several ways they could potentially be a benefit. In terms of organizing, it is interesting to note that online groups have successfully used chat rooms to communicate, grow their membership, and coordinate in the past. Anonymous is one of the more well-known groups that conducted a lot of its activity via IRC chat rooms (Coleman 2013). Second, chat is of course a clear form of participation; chat rooms create another participatory outlet for people who wish to engage with the curated streams. Third, as described below, I think it is possible to use the need for moderators as a motivator for community building, which could strengthen the bonds of a nascent community.
These potential benefits have to be weighed against possible drawbacks. Especially for streams with few viewers, and therefore perhaps only one moderator, the chat room is more susceptible to trolls, spammers, and other people with bad intentions. When new viewers do come, they may find this sufficiently off-putting that they spend less time with that stream, undermining its potential to attract the repeat viewers that might eventually turn into a community. Relatedly, too much toxic chat may mean that the entire platform is viewed as an unsafe place. Online harassment is very real, and very damaging. While men are more likely to experience online name-calling, young women disproportionately experience sexual harassment and stalking (Duggan 2015). If this kind of behavior flourishes, DeepStream could be seen as yet another online space that is home to unfriendly or outright hostile users, which is anathema to encouraging the kinds of open, participatory communities described above.
On balance I would argue that chat is a necessary if risky component. There is very little hope of communities forming around curated streams without it. But chat rooms need to be closely moderated or else they may not encourage the type of communities I hope will form. This leads to the final question: what are the best methods for chat moderation that try to strike a balance between efficiently enforcing politeness and allowing unpopular views to be expressed?
Analyzing Existing Platforms
To begin to answer this question we can look at what sorts of features are available on livestreaming platforms today. I analyzed the chat moderation features of four platforms: Ustream, Livestream, Twitch, and Bambuser. I have summarized their features by type in the table below:
| Feature | Bambuser | Livestream | Twitch | Ustream |
| --- | :---: | :---: | :---: | :---: |
| Require Login | ✔ | ✔ | ✔ | ✔ |
| Mute | | | ✔ | ✔ |
| Ban | | ✔ | ✔ | ✔ |
| Remove Messages | | ✔ | ✔ | ✔ |
| Restrict User Type | | | ✔ | ✔ |
| Rate Limit Chat | | | ✔ | ✔ |
| Limit Size of Rooms | | | | ✔ |
| Restrict Certain Content (e.g. bad words, links) | | | ✔ | ✔ |
Bambuser has the smallest number of viewers of the four sites, and it has the smallest feature set for chat moderation. It is likely that chat becomes more problematic with more viewers, and Bambuser may not yet have enough viewers for significant chat problems. It is also important to note that within the categories in the table there are significant differences. Not all bans are the same, for example: Ustream allows IP banning, but Twitch only allows banning by username. This table is therefore intended to act as a starting point. A new platform implementing chat should consider each of these categories, and the range of options within them, before making thoughtful decisions about what to include. This process is about being prepared, so that if a chat room does see significant activity a strategy is in place to try to manage problems. That strategy could fail, but at least developers will not be scrambling to implement chat features in an attempt to mitigate ongoing damage. As a final note, developers might want to ensure that there is an easy way to shut down chat for individual streams, and across the entire platform, if community moderation practices are failing to prevent harassment. Let’s look at each category and consider the range of implementation possibilities and how they fit with the overall goals of DeepStream outlined above.
Require Login
All platforms allow chat room owners to require that participants log in, and by extension to prevent anonymous participation. Online anonymity is one of the factors that can lead to incivility (Leavitt and Peacock 2014, 3) due to the online disinhibition effect (Suler 2004). But research has also shown that anonymity increases participation and the willingness to voice dissenting opinions, even though anonymous posts are less influential (Haines et al. 2014). This research highlights the tension between the goals of encouraging high levels of participation and discouraging incivility. Given that 40% of internet users have experienced online harassment (Duggan 2015), I would argue that anonymous chat cannot be the default chat room setting. I can imagine cases where anonymity might be justified, such as with highly political livestreams in repressive societies, but even so there would need to be a clear scenario where the platform could be compelled to turn over user information to authorities.
A typical login process requires a user to create an account and connect it to an email address. One could imagine stronger forms of identity verification in an attempt to reduce the ability of people to create burner accounts that are used briefly to cause trouble and then thrown away. During a recent talk at MIT, Brianna Wu suggested that Twitter could reduce harassment by allowing users to see messages only from accounts that are at least 30 days old (Matias 2015). Wu also suggested that anonymity is something people should lose if they abuse the privilege. In this second scenario standard login might be the default setting, but stronger forms of identity verification would be required for users who have engaged in harassment. The question is whether someone who has harassed people in the past will stop this behavior because they are forced to be less anonymous. I am not convinced that would be the case. The request for more identifying information could simply push the harasser into creating a new account to continue the same behavior with the same level of anonymity. In contrast, variations on the 30 day rule (e.g. requiring sign-in to DeepStream’s chat rooms with a Twitter account that is at least 30 days old) seem like they would indeed force people to use accounts they are more firmly connected to, increasing the consequence of getting that account banned. Given all of these considerations, I would argue for a policy of requiring login, and would consider some form of the “30 day rule,” or letting moderators ban an IP address, if actual usage indicated that moderators were frequently banning the same people using different accounts.
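As a rough sketch, a variation of the “30 day rule” could be enforced at login time. The check below assumes the platform can read a creation date from whatever account is used to sign in (a linked Twitter account, for example); the function name, the threshold, and the idea that both timestamps are in UTC are illustrative assumptions, not part of any existing platform’s API.

```python
from datetime import datetime, timedelta

MIN_ACCOUNT_AGE = timedelta(days=30)  # the "30 day rule"; the right threshold would need testing


def may_join_chat(account_created_at: datetime, now: datetime = None) -> bool:
    """Allow chat participation only for accounts older than MIN_ACCOUNT_AGE.

    `account_created_at` is assumed to come from the identity provider used at
    login (e.g. the creation date of a linked Twitter account), in UTC.
    """
    now = now or datetime.utcnow()
    return (now - account_created_at) >= MIN_ACCOUNT_AGE
```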
Mute
Mute is sometimes referred to as shadow banning because the person who has been muted usually doesn’t know it. They can continue to post chat messages, but no one else sees them. A variation of the mute feature is to change it to a user-level function, where individual users can mute other individual users, removing that user’s future messages from only their chat window. Drew Harry has suggested that this second implementation could include reporting user-level mutes to moderators, essentially flagging potential harassment that should be examined for further action (Harry 2012, 153). The drawback to muting is that it can be used to silence legitimate voices. Making moderators more efficient by reporting on crowd-sourced mutes has potential, but it needs to be tested. I would like to see the system implemented, then do a comprehensive review of messages that led to a user-level mute. If users are muting legitimate voices too frequently, we have to conclude that the human desire for homophily is too strong to keep this feature in place, and return muting to a moderator-only option, or eliminate it altogether and just use bans.
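To make the discussion concrete, here is a minimal sketch of the user-level mute variant, including Harry’s suggestion of surfacing crowd-sourced mutes to moderators. The in-memory dictionaries, the reporting threshold, and the `notify_moderators` helper are hypothetical stand-ins for whatever storage and alerting a real implementation would use.

```python
from collections import defaultdict

# muter id -> set of user ids that person has muted (hypothetical in-memory store)
user_mutes = defaultdict(set)

REPORT_THRESHOLD = 3  # illustrative: flag a user once this many people have muted them


def notify_moderators(target_id: str, mute_count: int) -> None:
    # Stand-in for a real alerting mechanism (email, moderator dashboard, etc.).
    print(f"{target_id} has been muted by {mute_count} participants; please review their messages.")


def mute(muter_id: str, target_id: str) -> None:
    """Record a user-level mute and flag the target for moderator review once
    enough independent participants have muted them."""
    user_mutes[muter_id].add(target_id)
    mute_count = sum(1 for muted in user_mutes.values() if target_id in muted)
    if mute_count == REPORT_THRESHOLD:
        notify_moderators(target_id, mute_count)


def is_visible(viewer_id: str, author_id: str) -> bool:
    """A muted user's messages disappear only from the muter's own chat window."""
    return author_id not in user_mutes[viewer_id]
```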
Ban
More serious than muting, banning kicks the user out of the chat room. Most of the above platforms allow banning for a specific time period, after which the user can rejoin the chat room. IP banning also has to be considered, to create a level of protection against abusive users who try to create multiple burner accounts. For especially egregious cases of harassment, I would also propose implementing a platform-wide IP ban, meaning that the user cannot participate in any chat across the entire platform. This would require a level of reporting on bans that filters up to site administrators, showing the comments of banned users so an administrator could make an informed decision about a site-wide ban. The choice between timed and permanent bans raises an interesting question. Timed bans assume that abusive users can be “reformed,” and anticipate changed behavior when they rejoin the chat room. How frequently are temporarily banned users re-banned? Data on this would help answer the question of whether temporary bans work.
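A sketch of how timed, permanent, and IP-level bans might share one mechanism, assuming bans are keyed either on a user id or on an IP address. The in-memory store and the function names are hypothetical; a real implementation would persist this in a database and feed the reporting described above.

```python
import time

# key is a user id or an IP address; value is the expiry timestamp,
# or None for a permanent ban (hypothetical in-memory stand-in for a database)
bans = {}


def ban(key: str, duration_seconds: float = None) -> None:
    """Ban a user id or IP address for a fixed period, or permanently if no
    duration is given."""
    bans[key] = None if duration_seconds is None else time.time() + duration_seconds


def is_banned(key: str) -> bool:
    """Timed bans lapse automatically, which is what lets a 'reformed' user
    rejoin; permanent bans never expire."""
    if key not in bans:
        return False
    expiry = bans[key]
    if expiry is None:
        return True
    if time.time() >= expiry:
        del bans[key]  # the timed ban has run its course
        return False
    return True
```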
Remove Messages
This is a more basic feature: it allows moderators to remove offending messages individually or to remove all messages from a particular user, and some sites also allow moderators to clear every message in the room. It is the baseline tool for taking down offensive content and should be enabled on any platform that offers chat.
Restrict User Type
On some sites moderators have the ability to restrict chat participation to a sub-group of logged-in users. For example, Twitch has a setting so that only moderators can participate in chat. Except in the most heavily trafficked chat rooms, where messages go by too quickly to read, I see little civic value in this feature, and there may be better ways to handle the room-size problem.
Rate Limits
Twitch allows moderators to limit the rate of contribution for specific users. Ustream causes everyone in the room to be rate-limited if messages are going by too fast to read. Rate limiting a specific user seems like a softer version of mute, so I view it as somewhat redundant. It seems more appropriate for other chat participants to ask someone to contribute less if a person is dominating the chat room in a problematic way, and failing that, to mute or ban.
Limit Size of Rooms
Ustream limits room size to 1480 participants, and automatically creates a new chat room for additional participants. On especially popular livestreams, room size does seem to be a problem. It is extremely difficult for 10,000 people to use a chat room and have a coherent discussion because messages are moving too quickly to read. Large chat rooms also place a greater burden on moderators. The speed of the messages can make it extremely taxing to monitor for abusive behavior. Restricting user types, rate limiting, and room size restrictions are all attempts to deal with the issue of scale. I will propose a solution for this problem below.
Restrict Certain Content
This type of moderation is done automatically. The category includes preventing people from using bad words or links (Ustream), and posting non-unique messages (Twitch). I don’t view these as essential features for chat rooms on a new platform, nor as furthering the ideal of civil discourse. Moderators can deal with the type of abuse automatic content restriction might prevent, and there are appropriate uses of bad words and non-unique messages.
Imagining New Solutions
In addition to implementing variations of the above features, I suggest experimenting with the following additional features as ways to make chat rooms more civil and more likely to lead to a sense of community:
Silent Listener Period
In discussing his prototype for a chat system called ROAR, which connects small chat rooms to larger crowds co-watching the same event, Drew Harry mentions the possibility of requiring new viewers to spend a set period of time silently watching before they can participate in a chat room (2012, 154). Harry states that this increases the overhead of anti-social behavior: if someone gets banned they would have to create a new account and then silently watch for the fixed time period before they could participate in the chat room again. I would add that this silent listener period also creates an opportunity for viewers to see how the chat community functions and implicitly observe the norms of communication that are prevalent. It may increase the likelihood that new participants are slightly more “socialized” into the norms of the community before participating. Research has shown that comments tend to mirror the thoughtfulness of previous comments (Sukumaran et al. 2011). Even visual and textual design elements that suggest thoughtful engagement can influence the quality of contribution (ibid). This requires further experimentation: for example, would relabeling the “chat” button “participate” or “deliberate” alter the tone of the chat room? But the principle of the listener period could be an effective way to encourage people to contribute in community-appropriate ways.
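A minimal sketch of the silent listener period, assuming the platform records when an account first opens a given chat room. The ten-minute figure is purely illustrative (Harry leaves the length open), and the per-room store and function names are hypothetical.

```python
from datetime import datetime, timedelta

LISTEN_PERIOD = timedelta(minutes=10)  # illustrative; the right length needs experimentation

# (user id, room id) -> when that user first opened the room (hypothetical store)
first_seen = {}


def on_room_open(user_id: str, room_id: str) -> None:
    """Record only the first visit, so leaving and rejoining does not reset the clock."""
    first_seen.setdefault((user_id, room_id), datetime.utcnow())


def can_post(user_id: str, room_id: str) -> bool:
    """A participant may post only after LISTEN_PERIOD has elapsed since their
    first visit to this room; a freshly created burner account starts at zero."""
    started = first_seen.get((user_id, room_id))
    return started is not None and datetime.utcnow() - started >= LISTEN_PERIOD
```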
Building Teams
I interviewed a livestreamer and asked about chat, and one of her comments was that it was very hard to find moderators. This should be verified by more interviews, but I believe it is a near-universal problem, and I propose that the chat room be used as a place to actively connect viewers and curators, with the intention of elevating good participants to moderators. I envision a system that flags a viewer who visits the same chat room more than 3 times and sends more than 6 total messages during those visits. An email would automatically be sent to the curator with the viewer’s profile information and each of their chat messages. The email would prompt the curator to think about what makes a good moderator, and if the chat messages seem to indicate the viewer might have those qualities, it would prompt the curator to contact the participant and ask if they would like to try moderating once on their next visit. If the viewer accepts the request and does moderate the chat room, a second email would be sent to the curator detailing all of the actions the temporary moderator took. This email would prompt the curator to evaluate whether the moderator did a good job, and perhaps even to give the moderator feedback. In this way the system would continue to prompt the curator to build a team of moderators. Accepting the responsibility of moderating is a logical next step for frequent participants, moving them up the ladder of engagement, creating a sense of co-ownership of the chat room, and deepening their participation. The 3 visit/6 message threshold may need to change, but the idea is to have some level of participation that triggers prompts to build relationships. This kind of system should increase the chances of communities forming, and directly relates to the organizing practices that Ganz and Hilton describe (2010).
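The trigger described above is easy to express in code. Here is a sketch using the 3 visit/6 message thresholds from the proposal; the activity record and the `email_curator` callable are hypothetical stand-ins for the platform’s persistence and notification layers.

```python
from dataclasses import dataclass, field

VISIT_THRESHOLD = 3    # flag viewers who visit the same room more than 3 times...
MESSAGE_THRESHOLD = 6  # ...and send more than 6 messages across those visits


@dataclass
class ViewerActivity:
    visits: int = 0
    messages: list = field(default_factory=list)
    flagged: bool = False  # ensure the curator is prompted only once per viewer


def record_visit(activity: ViewerActivity) -> None:
    activity.visits += 1


def record_message(activity: ViewerActivity, text: str, email_curator) -> None:
    """Store the message and, once both thresholds are crossed, prompt the
    curator to review this viewer's messages as a potential moderator."""
    activity.messages.append(text)
    if (not activity.flagged
            and activity.visits > VISIT_THRESHOLD
            and len(activity.messages) > MESSAGE_THRESHOLD):
        activity.flagged = True
        email_curator(activity.messages)  # curator reviews the messages and decides
```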
Room Rules
When a curator turns on the chat room (and it could be off by default, to ensure that having a chat room is a conscious choice by the curator), the curator could be prompted to write a statement of chat rules or norms. This would be similar to Reddit sub-forums, which often have specific rules for posting. Joining a chat room would require reading and agreeing to that set of norms, helping to establish the identity of the community as its members choose to define it. The rules could either be displayed once in a pop-up window that new participants have to agree to, or they could be permanently displayed at the top or bottom of the chat window. The idea here is very simple: when people understand what the community views as acceptable behavior, they are more likely to abide by those norms.
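A short sketch of how agreeing to room rules might gate entry, assuming the pop-up variant; the storage dictionaries and function names are hypothetical.

```python
room_rules = {}  # room id -> rules text written by the curator (hypothetical store)
agreed = {}      # room id -> set of user ids who have accepted the rules


def join_room(room_id: str, user_id: str, accepts_rules: bool) -> bool:
    """Admit a participant only if they have been shown the curator's rules and
    agreed to them; rooms whose curator wrote no rules admit everyone."""
    rules = room_rules.get(room_id)
    if rules is None:
        return True
    if not accepts_rules:
        return False
    agreed.setdefault(room_id, set()).add(user_id)
    return True
```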
Require Moderator
When a curator turns on the chat room for their curated stream, there could be an additional option to only make the chat room live when a person with moderator privileges is present. This would create an additional incentive for curators to connect with other people and elevate them to moderator status, possibly increasing the likelihood that curators and viewers would form bonds through use of the platform. A drawback to this approach is that chat rooms would abruptly stop working as soon as the last moderator leaves. This could be frustrating to participants, who may have been mid-conversation.
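A sketch of the moderator-present requirement; the presence set is a hypothetical stand-in, and a real implementation would also need to decide what happens to in-flight conversations when the last moderator disconnects.

```python
# room id -> set of moderator ids currently connected (hypothetical presence tracker)
active_moderators = {}


def chat_is_live(room_id: str, require_moderator: bool) -> bool:
    """With the option enabled, the room accepts messages only while at least
    one moderator is present; when the last moderator leaves, chat stops,
    which is the abrupt-cutoff drawback noted above."""
    if not require_moderator:
        return True
    return bool(active_moderators.get(room_id))
```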
Scaling Chat Rooms
The issue of scale is one of the bigger challenges for implementing chat. I would like to experiment with rate-limiting the entire room, but scaling the rate limit based on the number of participants and how frequently they are sending messages. This would require determining how many messages per minute a moderator can reasonably deal with. If the answer were 50, then what kind of rate limit would need to be used on a room of 1,000 participants based on their rate of chatting to achieve roughly 50 messages per minute? While this runs counter to the stated goal of increasing participation, large chat rooms have the problem of too much participation. It is possible that if people know they are being rate-limited they will try to send more substantive messages, essentially trying to contribute more with each message to compensate. It would be quite interesting to compare messages from a room that is being rate limited to one that is just below the threshold to see if there is a difference in contribution quality.
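To make the arithmetic concrete, here is a sketch of deriving a per-user rate limit from a moderation budget. The 50 messages/minute figure is the hypothetical from the text; the formula, which simply divides the budget among active senders, is an assumption to be tested rather than a settled design.

```python
def per_user_limit(target_per_minute: float, active_senders: int) -> float:
    """Messages per minute each participant may send so that the whole room
    stays near the rate a single moderator can reasonably review."""
    if active_senders <= 0:
        return target_per_minute
    return target_per_minute / active_senders


# Example: a 50 message/minute moderation budget in a room with 1,000 active chatters.
limit = per_user_limit(50, 1000)           # 0.05 messages per minute per person
seconds_between_messages = 60 / limit      # 1,200 seconds, i.e. about one message every 20 minutes
```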
Conclusion
In summary, I have attempted to argue that chat can be problematic, but that it is necessary to create the possibility that communities might form around the curated livestreams on DeepStream. I have discussed the kinds of participatory and self-organizing communities that might ideally form, and considered whether existing chat moderation features help or hinder their formation. Finally, I have proposed some specific ideas to reinforce moderator-determined norms of behavior and increase the community-building potential of chat rooms, and a way to try to deal with the issue of scale for very large rooms that could result in higher-quality messages from participants.
References
Coleman, Gabriella. 2013. “Anonymous in Context: The Politics and Power behind the Mask.” Internet Governance Papers, September. https://www.cigionline.org/sites/default/files/no5_3.pdf.
Duggan, Maeve. 2015. “Online Harassment.” Pew Research Center’s Internet & American Life Project. Accessed May 8. http://www.pewinternet.org/2014/10/22/online-harassment/.
Ganz, Marshall, and Kate Hilton. 2010. “The New Generation of Organizers.” February 12. http://www.shelterforce.org/article/1870/the_new_generation_of_organizers/.
Haines, Russell, Jill Hough, Lan Cao, and Douglas Haines. 2014. “Anonymity in Computer-Mediated Communication: More Contrarian Ideas with Less Influence.” Group Decision and Negotiation 23 (4): 765–86. doi:10.1007/s10726-012-9318-2.
Harry, Drew. 2012. “Designing Complementary Communication Systems.” Cambridge, Mass.: Massachusetts Institute of Technology.
Jenkins, Henry, Katie Clinton, Ravi Purushotma, Alice J. Robison, and Margaret Weigel. 2006. “Confronting the Challenges of Participatory Culture: Media Education for the 21st Century.” MacArthur Foundation Publication 1 (1): 1–59.
Leavitt, Peter, and Cynthia Peacock. 2014. “Civility, Engagement, and Online Discourse: A Review of Literature.” Engaging News Project. August 4. https://cmsw.mit.edu/wp/wp-content/uploads/2017/06/Civility_Online-Discourse_ENP_NICDreport.pdf.
Matias, Nathan. 2015. “What Can We Do About Online Harassment? Danielle Citron and Brianna Wu on Legal and Technical Responses.” May 7. https://web.archive.org/web/20170813134419/https://civic.mit.edu/blog/natematias/what-can-we-do-about-online-harassment-danielle-citron-and-brianna-wu-on-legal-and.
ribbonlace. 2015. “TIFU by Posting for Three Years and Just Now Realizing I’ve Been Shadow Banned This Entire Time. • /r/tifu.” Reddit. Accessed May 9. http://www.reddit.com/r/tifu/comments/351buo/tifu_by_posting_for_three_years_and_just_now/.
Sukumaran, Abhay, Stephanie Vezich, Melanie McHugh, and Clifford Nass. 2011. “Normative Influences on Thoughtful Online Participation.” In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3401–10. ACM. http://dl.acm.org/citation.cfm?id=1979450.
Suler, John. 2004. “The Online Disinhibition Effect.” Cyberpsychology & Behavior 7 (3): 321–26.
Westheimer, Joel, and Joseph Kahne. 2004. “What Kind of Citizen? The Politics of Educating for Democracy.” American Educational Research Journal 41 (2): 237–69.
Winner, Langdon. 1980. “Do Artifacts Have Politics?” Daedalus, 121–36.