The increasing call for social media regulation: why and how?
5th March 2019
Social media can have a powerful effect on your mental wellbeing. While you might get a pang of jealousy watching ‘so-and-so you went to school with’ bounding across a beach in the Maldives, or a burst of nostalgia as a family member shares a picture of your childhood dog, these emotions are nothing compared to the effects of being consistently exposed to disturbing images of self-harm and suicide.
Let’s not forget, the internet can be a dark place
Social media regulation is about implementing more control over what is posted online. Right now, a lack of regulation means users are free to post whatever they wish, and while freedom of speech can be a wonderful thing, there is also nothing to shield people from the less-than-pleasant content.
It’s been an ongoing discussion for a number of years, but the evidence supporting regulation only continues to mount. Too many parents have come forward to point the finger at social media, and Papyrus, a charity working to prevent suicide, says it is speaking with 30 families who believe that social media played a part in the suicides of their children.
Instagram has said before that “for many young people, discussing their mental health journey or connecting with others who have battled similar issues is an important part of their recovery. This is why we don’t remove certain content and instead offer people looking at, or posting it, support when they might need it most”.
While allowing those who have battled (or are still battling) mental health problems to share and reflect on their experiences is vital to forming supportive communities, it can be hard to know when a particularly distressing image or story is doing more harm than good.
Basically, how can we define ‘harmful’? Can we define ‘harmful’ at all? Ofcom is only just beginning to explore this question, and nobody really knows the answer yet. Social media regulation could come in the form of stricter monitoring of what is posted and a greater effort to swiftly remove anything potentially harmful. It could even mean requiring companies to use third-party tools (like ContentCal) to publish to social accounts. Only time will tell.
In the meantime, here’s where the government currently stands on new laws aimed at internet safety.