While it’s undeniable that the digital realm has brought forth increased interconnectivity, it hasn’t come without a negative side. One of the most talked-about digital drawbacks remains cyberbullying, which appears to grow in prevalence every year. Data provided by the United Nations International Children’s Emergency Fund (UNICEF), a humanitarian aid organisation, has shown that teenagers are more susceptible to cyberbullying than adults, as they spend much more time leaving a digital footprint. However, the advent of artificial intelligence (AI) has opened a way to protect users against such online harms, through natural language processing (NLP) and machine learning (ML) algorithms. “I think that cyberbullying has evolved with advancements in technology, intensifying the potential harm to the physical and mental well-being of the victim. AI is believed to have enhanced our ability to keep pace with cyberbullying tactics. It can act as a shield against online threats: it can detect patterns and recognise offensive language by scanning social media for harmful content, alerting victims and stopping bullies in real time,” Darshil Shah, founder and director, TreadBinary, a research and solutioning service-based platform, told FE TransformX.
Numbers provided by Gitnux, a market research company, state that roughly 20% of young people are subjected to cyberbullying before the age of 25, with 83% of cyberbullied victims exhibiting suicidal symptoms. The company also noted that more than 80% of teens routinely use a mobile phone, making it the most common medium for cyberbullying. The three countries reporting the most cyberbullying cases are India at 38%, Brazil at 29%, and the United States of America (USA) at 26%, as per SingleCare, a healthcare, technology, and consumer startup-focused platform. Overall, 87% of young people have been at the receiving end of cyberbullying, as quoted by a cybersecurity insights report from Norton, an anti-malware software maker. In that context, market research has shown that AI can play a critical role in preventing cyberbullying, as the technology can recognise emerging cyberbullying patterns, stop those patterns from escalating, and create a personalised approach to helping those affected. Insights from an article on Medium, an online publishing platform, highlighted that AI algorithms have the potential to identify key terms associated with cyberbullying, along with providing real-time monitoring to block accounts engaged in it.
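The key-term detection and real-time flagging described above can be sketched in a few lines. This is a deliberately minimal, hypothetical illustration: real moderation systems rely on trained ML classifiers rather than a fixed pattern list, and the phrases below are stand-ins chosen for the example.

```python
import re

# Illustrative pattern list; a production system would use a trained
# classifier, not hand-written rules like these.
ABUSIVE_PATTERNS = [
    re.compile(r"\bnobody likes you\b", re.IGNORECASE),
    re.compile(r"\byou(?:'re| are) (?:worthless|pathetic)\b", re.IGNORECASE),
]

def flag_message(text: str) -> bool:
    """Return True if the message matches any known abusive pattern."""
    return any(p.search(text) for p in ABUSIVE_PATTERNS)

def moderate(messages):
    """Split an incoming stream of messages into allowed and flagged lists,
    mimicking the real-time filtering step the article describes."""
    allowed, flagged = [], []
    for msg in messages:
        (flagged if flag_message(msg) else allowed).append(msg)
    return allowed, flagged

allowed, flagged = moderate([
    "See you at practice tomorrow!",
    "Nobody likes you, just quit.",
])
```

In a live pipeline, the flagged list would feed downstream actions such as hiding the message, alerting the target, or escalating the sender's account for review.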
“I believe deepfake videos have been used for cyberbullying, creating realistic yet fake videos of individuals saying or doing things they never did. AI-based detection tools are being developed to combat this, such as the deepfake detection tool by Microsoft, which analyses videos to provide a confidence score on whether the content is artificially generated or not. Future AI could provide even more nuanced detection of cyberbullying by understanding context better, similar to how Google’s Jigsaw is evolving to interpret the subtleties of language and intent. AI could offer personalised cyberbullying protection settings, akin to TikTok’s customised content preferences, allowing users to define what they consider harmful,” Deepika Loganathan, co-founder and CEO, HaiVE.Tech, an AI-as-a-Service platform, explained.
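The two ideas in the quote above, a confidence score from a detector and user-defined notions of what counts as harmful, combine naturally into per-user filtering. The sketch below is hypothetical; the category names, threshold, and settings object are illustrative and do not reflect any platform's real API.

```python
from dataclasses import dataclass, field

@dataclass
class ProtectionSettings:
    """Hypothetical per-user protection settings: which content
    categories to block, and how confident the model must be."""
    blocked_categories: set = field(default_factory=lambda: {"insult", "threat"})
    min_confidence: float = 0.8  # only act on high-confidence predictions

def should_hide(settings: ProtectionSettings, category: str, confidence: float) -> bool:
    """Hide content only if its predicted category is one this user has
    blocked AND the detector's confidence clears the user's threshold."""
    return (category in settings.blocked_categories
            and confidence >= settings.min_confidence)

settings = ProtectionSettings()
should_hide(settings, "insult", 0.95)  # confident insult: hidden
should_hide(settings, "insult", 0.40)  # low confidence: shown
```

Thresholding on confidence is what keeps such a filter from over-censoring borderline content, the same trade-off the deepfake confidence score is meant to manage.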
Furthermore, deploying AI to defend against cyberbullying tests platforms’ ethical standards around responsibility, privacy, and censorship. Platforms such as Instagram have reportedly deployed AI-based technologies to curb the spread of derogatory content. For example, Instagram introduced an AI-driven feature that notifies users before they post a comment that could be considered offensive, giving them a chance to reconsider their words, which demonstrates how AI can play a crucial role in moderating online interactions and preventing harassment. Other examples include Twitter, now X, using NLP mechanisms to hide offensive tweets; Facebook using AI to scrutinise images and videos for cyberbullying patterns; and Jigsaw, a Google-owned technology incubator, building the Perspective API to help users filter malicious content. As for the global AI in cybersecurity market, Market Data Forecast, a market research firm, has projected it to reach $10.85 billion in 2024 and $30.92 billion by 2029, a compound annual growth rate (CAGR) of 23.3% over that period.
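To make the Perspective API mention concrete, the sketch below shows the shape of a request body for scoring a comment's TOXICITY attribute, following the API's public documentation, and a helper that reads the summary score out of a response. The network call itself (and the API key it requires) is omitted, and the 0.8 threshold is an illustrative choice, not a recommended value.

```python
import json

def build_request(text: str) -> str:
    """Build a Perspective API analyze-request body asking for a
    TOXICITY score on the given comment text."""
    body = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    return json.dumps(body)

def is_toxic(response_json: str, threshold: float = 0.8) -> bool:
    """Read the summary TOXICITY score from a Perspective API response
    and compare it against an illustrative threshold."""
    data = json.loads(response_json)
    score = data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
    return score >= threshold

payload = build_request("example comment text")
```

A client would POST this payload to the API's comments:analyze endpoint and then pass the JSON response to a check like `is_toxic` to decide whether to hide or down-rank the comment.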
“I believe the AI market is poised for evolution in terms of restricting cyberbullying practices, driven by increasing societal demand for safer online environments and the technological advancements that enable sophisticated moderation tools. Soon, we can expect AI systems to become better at contextual analysis, leveraging advancements in NLP and ML to understand the subtleties of language, culture, and context. This should allow for accurate identification of cyberbullying incidents, reducing false positives, and ensuring that legitimate expressions of free speech are not unduly censored,” Sonakshi Pratap, CEO, Leadzen.ai, a lead generation tool, concluded.