The other day, I was talking with a friend of mine who is launching a neighborhood blog; as we discussed his vision for the site, he admitted that the scariest aspect of the whole launch for him is deciding how to handle reader comments. Like any blogger, news publisher, or other online community director, my friend wants to encourage animated discussion among readers without also inviting trolls who hang around mainly to abuse other users. Anyone who has spent more than five minutes on SFGate’s news portal will understand why my friend might have cause for concern–such is SFGate’s reputation for having a comments section overrun by people prone to name-calling, prejudice, and just basic idiocy.
Many news organizations feel that what encourages such behavior is the anonymity of the web, the faceless frontier where people are willing to say things on a computer screen that they would never say in person. Those who feel that anonymity is the problem champion an approach of “authenticity”–often called a system of “verified users”–that requires readers to create a profile that links to all of their activity on the site. My own experience, and the discussions on the matter I’ve read and participated in (see in particular Steve Buttry’s blog post from early March), have convinced me that having “verified users” contribute to the conversation is not nearly as important as having a human moderator whose presence is palpable to those who read and participate in a site’s comments section. Of all approaches, such participation from a moderator is the most likely to yield what web publications strive for: a lively community with lots of user engagement and participation, full of thoughtful comments and a minimum of what one commenter on Steve’s blog termed “keyboard rage.”
Software, of course, can be a key element in weeding out obviously abusive posts and therefore keeping a site’s comments section “cleaner” than what one might see at SFGate. SFGate does use software to flag posts for potential abuse; however, the process suffers from a lag time of fifteen minutes or more, during which offensive content has often not only been viewed by visitors to the site but also responded to by other readers. So software should not be the whole of comment moderation, though it can be a helpful first line of defense, especially on a high-traffic site with a high comment frequency.
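SFGate’s actual filtering software isn’t public, but at its simplest, this first line of defense often amounts to matching comments against a list of banned words. A minimal sketch (the word list and function name here are illustrative assumptions, not anyone’s real system; production tools use much larger lists plus trained classifiers):

```python
import re

# Hypothetical blocklist for illustration only; a real moderation tool
# would use a far larger list and usually a machine-learned classifier.
BLOCKLIST = {"idiot", "moron"}

def flag_comment(text):
    """Return True if the comment contains a blocklisted word."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return bool(words & BLOCKLIST)

flag_comment("What an idiot.")    # flagged
flag_comment("Great reporting!")  # passes
```

Even this toy version shows why a lag exists: flagged posts still need a human to review them before removal, and anything the word list misses sits in public view until a moderator or reader catches it.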
What this means is that one cannot overlook the importance of the “human element”–a moderator who not only oversees the “behavior” of the users participating in the community, but participates herself, engaging with users in a thoughtful way. Mandy Jenkins, who runs the site Zombie Journalism, made the point that “You have to establish this relationship [between readers and site moderator] very early on and maintain a consistent message of what will get a scolding, what will get removed and what will get a user kicked off the site.” She also notes that, in her experience, “A human voice (with a photo, even!) of authority inside of a comment thread can keep the conversation on-topic, answer questions and, yes, take the abuse users usually reserve for one another.”
One site that stands as a good example of how this is done is the Tumblr blog STFU, Parents. The site’s creator, known only as “B.” to readers of the blog, interacts frequently with her readers in the comments, engaging most often with those who write things that are funny and smart. Those who comment, then, know that B. is listening to what they have to say; they are reminded that–though B. is not sitting right in front of them–they are talking to a person, one they respect and admire. B.’s participation also models the kind of “behavior” she expects from those who engage in her online community: when it is clear that smart and funny comments are the ones most appreciated, commenters are more likely to say something smart or funny (or–even better–something smart and funny). B. goes so far as to award a “Comment of the Week” prize to the person with the best one-liner or wittiest response to a given post; this shows that B. is reading all her comments and that she values what her readers have to say. In turn, her readers respond by writing comments that are far more literate and respectful than much of what one sees on SFGate.
Of course, comment moderation can be an all-day job if you are running a high-traffic, high-participation site. The human factor can still be at work, though, if one gets one’s readers involved. When Gawker began using a “promoted comments” system, in which readers could “promote” comments they liked and help “hide” the ones they didn’t, the site found that comments as a whole improved in quality.
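Gawker’s exact mechanics were never simple vote counting, but the basic idea of a reader-driven “promote/hide” system can be sketched in a few lines. Everything here–the field names, the hide threshold–is an assumption for illustration, not Gawker’s real implementation:

```python
def visible_comments(comments, hide_threshold=-3):
    """Rank comments by net reader votes and hide heavily down-voted ones.

    Each comment is a dict with 'text', 'promotes', and 'hides' counts.
    The threshold of -3 is an arbitrary choice for this sketch.
    """
    scored = [(c["promotes"] - c["hides"], c) for c in comments]
    scored.sort(key=lambda pair: -pair[0])  # best comments first
    return [c for score, c in scored if score > hide_threshold]

thread = [
    {"text": "Thoughtful take on the zoning fight.", "promotes": 5, "hides": 0},
    {"text": "u r all dumb", "promotes": 0, "hides": 7},
]
visible_comments(thread)  # only the first comment survives
```

The appeal of this design is that it scales the human factor: the judgment still comes from people, but it is spread across the whole readership rather than resting on one overworked moderator.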
Many news organizations have tried to improve comments by making them work more like letters to the editor: readers who wish to comment must register with the site and provide contact information–sometimes an email address, sometimes even a phone number. The problem with this approach is that a letter to the editor is, essentially, a “mini-essay”–a work that stands on its own and is not intended to receive a response from anyone else. Online comments, however, work more like a real-time conversation in which people respond not only to what the author of a piece has written, but also to what fellow readers have said about that piece. In that way, online comments are more akin to “water cooler” discussion than letters to the editor. Because an engaged readership is a loyal readership, news organizations, bloggers, and others in the online community need to put thought and attention into how they moderate this water cooler conversation. Above all, they need to provide a human presence–a voice of respectful but firm authority–that responds quickly to reward “good behavior” and stamp out the bad.