What if we could ban trolls before they became abusive?
If technology enabled us to identify online trolls before they became abusive, would you agree with banning their use of social media — before they flexed their abusive writing muscles?
This may sound more than a little like the pre-crime, sci-fi world of the film Minority Report, where murder was predictable and would-be murderers were caught and imprisoned before killing anyone. And it is.
Yet banning potential trolls is an issue we may face thanks to researchers from Cornell and Stanford Universities. Why? Because they’ve developed an algorithm that can detect which online users will become trolls and bullies.
The researchers studied more than 35m posts from nearly 2m users on three major websites and found nearly 50,000 users who were banned during the 18-month period of the study.
Tell-tale signs emerged in the posts of these Future-Banned-Users: they wrote differently from most other users — before they showed any obvious signs of the behaviour that would lead to their ban.
They tended to spend more time in individual threads and used different language than other users. Although they didn’t write longer posts than other users, what the Future-Banned-Users wrote was much harder to read.
None of the researchers’ analysis focused on whether posts were malicious, just on whether the writers went on to be banned from the websites that formed part of the study.
Astonishingly, the researchers found a way to predict, by analysing only five posts, whether someone would go on to be banned. The accuracy rate was 80 per cent. Extending the analysis to ten posts boosted this by a further two percentage points.
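To make the idea concrete, here is a rough sketch of what scoring a user's first few posts might look like. To be clear: the feature names, weights and threshold below are invented for illustration and are not the researchers' actual model; they simply combine the two signals described above (hard-to-read writing and posts concentrated in a few threads).

```python
# Hypothetical sketch only: these features and weights are NOT from the
# published study. They illustrate the idea of scoring a user's first
# few posts to flag possible Future-Banned-Users.

def readability_penalty(post: str) -> float:
    """Crude proxy for 'hard to read': average word length."""
    words = post.split()
    return sum(len(w) for w in words) / len(words) if words else 0.0

def troll_score(posts: list[str], thread_ids: list[str]) -> float:
    """Combine two signals the article mentions: hard-to-read text and
    heavy concentration in individual threads. Weights are invented."""
    readability = sum(readability_penalty(p) for p in posts) / len(posts)
    # Fraction of posts sitting in the user's single most-used thread.
    concentration = max(thread_ids.count(t) for t in set(thread_ids)) / len(thread_ids)
    return 0.1 * readability + 0.9 * concentration

def likely_future_ban(posts: list[str], thread_ids: list[str],
                      threshold: float = 0.75) -> bool:
    """Flag a user whose invented score crosses an invented threshold."""
    return troll_score(posts, thread_ids) >= threshold
```

A user who piles every post into one thread would score high on concentration and be flagged; a user spread evenly across threads would not. A real system would learn such weights from labelled data rather than hard-code them.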
FREAKED OUT, YET?
All very fascinating, and slightly freaky. But what do we do with this algorithm, now that the genie is out of the bottle? After all, we have a growing list of famous people who say they have been hounded off social media by bullying behaviour so extreme it extends to death threats. Shouldn’t we be doing something?
Broadcaster and comedian Sue Perkins is one of the latest celebrities to sign off from Twitter. Her sin? She became the bookies’ frontrunner to take over the helm of Top Gear after Jeremy Clarkson’s exit, a drama played out heavily on social media. Perkins decided enough was enough when someone suggested they’d like to see her burn to death. And she was far from the only person to be trolled during the Top Gear saga.
We even have former Apprentice contestant Katie Hopkins fulfilling the troll profile of deliberately wanting to attract attention — no matter what kind. The backlash has come in the form of a petition asking The Sun to sack her.
Those of us sitting on the sidelines of trolling activity may soon have some issues to wrap our heads around, and some key decisions to make.
ALGORITHM? WHAT ALGORITHM?
Of course, we could try to pretend that the algorithm doesn’t exist. But the impact online bullying and trolling has had on people’s lives, even to the point of suicide, makes that a non-starter.
But do we simply adopt the algorithm developed by the researchers, using it to detect and ban future-trolls before they’ve bullied or abused anyone? Doesn’t that sound like guilty until proven innocent? The reversal of our ideals of justice?
And, while we’re weighing up the pros and cons of this plan, how would users react if blocked from a group without solid evidence of anti-social behaviour? We’d like to think it wouldn’t happen to us, but then Tom Cruise was convinced of the infallibility of pre-crime — until his name cropped up in Minority Report as a future murderer.
Would someone running an online business have grounds for legal action if they were blocked and their company fell into the red? And can we deny anyone the use of the web at all, or should access be claimed as a human right?
WHAT’S WRONG WITH FREE SPEECH?
Or do we stand on the principle of free speech and fall back on existing laws that cover hate crimes, acting only when the vitriol is visible?
In this instance, we’d be ignoring the suicides or psychological damage inflicted on troll victims, knowing full well that an algorithm might have prevented harm. It feels like falling back on the notion that ‘words will never hurt me’, an idea we know to be false. And if websites and social networks ignored their duty of care to users, would those on the receiving end of trolling have grounds for compensation?
Perhaps the most positive use of the algorithm would be to train future-trolls towards more socially acceptable behaviour — before they stray from the straight and narrow.
I suspect we’d all like to imagine that we could take back the web from the trolls and transform it into something altogether nicer. But the idea of training future-trolls either sounds like good parenting or a little Brave New World-ish, depending on your viewpoint.
Ignoring Brave New World for a moment, perhaps we can take this ideal of niceness a step further? If we are what we write, and evidence suggests that’s true, using the algorithm to train future-trolls to see the error of their future ways might have a positive impact on their non-writing behaviour too. Wouldn’t that be lov-erly? But also back to Brave New World mixed with Big Brother.
But future-trolls who are bright enough — which basically means psychopaths — might like the challenge of beating the algorithm, dribbling out bile subtle enough to stay on the right side of the odds of getting banned. A slow drip of damage instead of full-on venom. I’d see that as a stalking story sequel to the ‘pre-trolling’ film that’s already writing itself in my head.
We’re going to need time to get our heads around the idea of being able to identify Future-Banned-Users in this unfamiliar world of ‘pre-trolling’.
All we know for now is that we are what we write and, thanks to two researchers, we are what we might write…
Latest posts by Susan Feehan