Scientists, including one of Indian origin, have developed a new technique that can spot nasty personal attacks by cyberbullies on social media and alert parents or network administrators when abuse has occurred. The approach, developed by researchers at the University of Colorado Boulder in the US, uses a fifth of the computing resources required by existing tools. That is efficient enough to monitor a network the size of Instagram for a modest investment in server power, said Richard Han, an associate professor at UC Boulder.
“The response of the social media networks to fake news has recently started to uptick, even though it took grave consequences to reach that point. The response needs to be just as strong for cyberbullying,” said Han. The group also released a free Android app called BullyAlert that allows parents to receive alerts when their kids are the targets of bullying on Instagram. The app can learn from and adapt to what parents consider bullying, researchers said.
“As [a] parent, I know that a lot of times we are not in full knowledge of what our children are doing on their social networks,” said Shivakant Mishra, a professor at UC Boulder. “An app like this that informs us when something problematic is happening is invaluable,” Mishra said. To build their toolbox, the researchers first employed humans to teach a computer programme how to separate benign online comments from abuse.
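The article does not describe the researchers' actual model, but the idea of learning to separate benign comments from abuse using human-labelled examples can be sketched with a minimal naive Bayes classifier. Everything below is an illustrative assumption: the toy labelled comments, the word-count features, and the add-one smoothing are not drawn from the UC Boulder system.

```python
from collections import Counter
import math

# Hypothetical toy training set (labels assumed for illustration):
# 0 = benign, 1 = abusive. Not the researchers' actual data.
LABELED = [
    ("great video love it", 0),
    ("nice one well done", 0),
    ("you are pathetic and ugly", 1),
    ("nobody likes you loser", 1),
]

def train(examples):
    """Count word frequencies per class (a minimal naive Bayes sketch)."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Return the more likely class under add-one (Laplace) smoothing."""
    vocab = len(set(counts[0]) | set(counts[1]))
    best, best_lp = 0, float("-inf")
    for label in (0, 1):
        lp = 0.0
        for word in text.lower().split():
            lp += math.log((counts[label][word] + 1) / (totals[label] + vocab))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, totals = train(LABELED)
print(classify("you are a loser", counts, totals))  # prints 1 (abusive)
```

A production system would use far richer features and much more labelled data, but the core step is the same: humans supply the labels, and the program learns word statistics that separate the two classes.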
Next, they designed a system that works a bit like hospital triage. When a user uploads a new post, the group’s tools make a quick scan of the comments. If those comments look questionable, then that post gets high priority to receive further checks. However, if the comments all seem charitable, then the system bumps the post to the bottom of its queue.
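The triage idea above maps naturally onto a priority queue: posts whose comments look questionable jump to the front for further checks, while benign-looking posts sink to the back. The sketch below is an assumption about how such a queue could work; the keyword-based risk score and the post names are invented for illustration and are not the researchers' actual heuristics.

```python
import heapq
import itertools

# Hypothetical suspect-word list, assumed purely for this sketch.
SUSPECT_WORDS = {"loser", "ugly", "pathetic", "hate"}

def risk(comments):
    """Crude risk score: fraction of comments containing a suspect word."""
    flagged = sum(any(w in c.lower() for w in SUSPECT_WORDS) for c in comments)
    return flagged / max(len(comments), 1)

counter = itertools.count()  # tie-breaker so the heap never compares post ids
queue = []                   # min-heap keyed on negative risk (riskiest first)

def enqueue(post_id, comments):
    # Questionable posts get high priority for further checks;
    # posts whose comments all look charitable drop to the bottom.
    heapq.heappush(queue, (-risk(comments), next(counter), post_id))

enqueue("post-a", ["great clip", "love this"])
enqueue("post-b", ["you are a loser", "so ugly", "nice one"])

print(heapq.heappop(queue)[2])  # prints post-b (checked first)
```

Because the queue is ordered by risk rather than arrival time, scarce checking capacity is spent where abuse is most likely, which is what makes the approach cheap enough to run at network scale.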
“Our goal is to focus on the most vulnerable sessions. We still continue to monitor all of the sessions, but we monitor more frequently those sessions that we think are more problematic,” Han said. The researchers tested their approach on real-world data from Vine, a now-defunct video-sharing platform, and Instagram.
Han explained that the team picked those networks because they make their data publicly available. The researchers calculated that their toolset could monitor traffic on Vine and Instagram in real time, detecting cyberbullying behaviour with 70 per cent accuracy. The approach could also send up warning flags within two hours of the onset of abuse – a performance unmatched by currently available software.