With most social internet services, getting rid of trolls is usually a matter of reporting a post or blocking the offender. But how do you do that in a fast-moving livestream service like Periscope? By asking viewers for help, that’s how. Periscope has introduced a moderation system that creates “flash juries” whenever a comment is up for dispute. If someone flags a message as abuse or spam, a few random viewers are asked to vote on whether or not it’s a problem. If the majority believes it is, the offender faces a minute-long ban on comments; a repeat offense mutes the person for the rest of the broadcast.
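Periscope hasn't published implementation details, but the flow described above — a majority vote by a small random jury, with a harsher penalty on a repeat conviction — can be sketched roughly like this (the function names and penalty strings are illustrative, not Periscope's actual code):

```python
import random

def pick_jury(viewers: list, offender: str, jury_size: int = 5) -> list:
    """Choose a few random viewers, excluding the accused and anyone who opted out."""
    pool = [v for v in viewers if v != offender]
    return random.sample(pool, min(jury_size, len(pool)))

def flash_jury_verdict(votes: list) -> bool:
    """True if a majority of jurors voted that the comment is abuse or spam."""
    return sum(votes) > len(votes) / 2

def apply_penalty(strikes: dict, offender: str) -> str:
    """First conviction: one-minute comment ban; repeat: muted for the broadcast."""
    strikes[offender] = strikes.get(offender, 0) + 1
    return "muted 1 minute" if strikes[offender] == 1 else "muted for broadcast"
```

For example, a jury voting `[True, True, False]` convicts, the offender is muted for a minute, and a second conviction in the same stream mutes them for the rest of it.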
You should see moderation in effect starting today (May 31st) through app updates.
The system isn’t mandatory. Viewers can opt out of voting if they’d rather not participate in a mini trial, and broadcasters can turn moderation off if they’re comfortable with the occasional outburst. And Periscope is quick to note that this isn’t the sum total of its anti-abuse efforts: you can still kick people out of broadcasts, limit viewers to people you know, and report ongoing problems. The new approach primarily tackles Periscope’s trickiest abuse problem: hit-and-run comments meant only to cause some temporary grief and ruin an otherwise happy stream.