Instagram has just announced that it will remove images of self-harm from its platform. The social media giant is under increasing pressure to find better ways to moderate users’ content, but this new announcement seems unlikely to address the major challenges facing both Instagram and Facebook.
The promise to remove graphic self-harm imagery came in response to pressure from the U.K. government following the suicide of a 14-year-old schoolgirl who had posted about depression before her death. The U.K. government’s Health Secretary, Matt Hancock, met with Instagram’s Head of Product, Adam Mosseri, who pledged to make the changes as soon as possible.
What’s worrying is that it took pressure from a national government to change something Instagram should have addressed long ago. This is not an admirable move on Instagram’s part; the company should have established policies for moderating extreme content years earlier. Why didn’t Instagram’s Terms and Conditions already prohibit this sort of graphic content? And it seems unlikely that deleting some posts will meaningfully improve users’ mental health; far larger changes would be required to deal with the myriad issues emerging from social media.
Instagram’s growth has been prolific, but the company has clearly been dragging its feet when it comes to managing the huge volume of content its users produce. Nipples have been a source of problems for several years, and users have reported that violent posts remain online even after being flagged. It’s relatively easy to write code that lets machines identify porn; gore is a different story. It’s also much easier to recruit staff to pick out sexual content than to find people willing to sit for hours sifting through images and videos of graphic violence and death. Evidently, Instagram has not invested heavily enough in moderating the content that generates its ad revenue. There will always be dark corners of the internet where graphic and inappropriate content materializes; given Instagram’s huge role in shaping our society, it should be working incredibly hard to ensure that it is not one of those corners.
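To make that asymmetry concrete, here is a minimal sketch in Python (emphatically not Instagram’s actual pipeline, whose details are not public) of how automated image moderation is commonly structured: a classifier assigns each upload a policy-violation score, confident cases are handled automatically, and the ambiguous middle band is queued for human review. The `violation_score` function, the thresholds, and the class names are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class ModerationDecision:
    action: str   # "allow", "remove", or "human_review"
    score: float  # model's confidence that the image violates policy


def violation_score(image_bytes: bytes) -> float:
    """Placeholder for a trained image classifier.

    In a real system this would run a model and return a probability
    in [0, 1]. It is stubbed out here because no public details of
    Instagram's models exist.
    """
    raise NotImplementedError


def moderate(image_bytes: bytes,
             remove_threshold: float = 0.95,
             review_threshold: float = 0.60) -> ModerationDecision:
    """Route an upload using an illustrative two-threshold policy."""
    score = violation_score(image_bytes)
    if score >= remove_threshold:
        # High confidence: take the post down automatically.
        return ModerationDecision("remove", score)
    if score >= review_threshold:
        # The expensive part: everything in this band needs a person.
        return ModerationDecision("human_review", score)
    # Low confidence: leave the post up.
    return ModerationDecision("allow", score)
```

The part worth noticing is the middle band: narrowing it for nudity is a comparatively mature modelling problem, while narrowing it for graphic violence and death is not, which is exactly why moderation cost scales with human reviewers rather than with code.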
Having spent a bit of time exploring the #selfharm hashtag on Instagram, I found that most users are sharing their experiences in search of help, or to help others. A small minority are posting deeply disturbing sentiments and imagery. Given that the hashtag currently has well over half a million posts, banning self-harm images will probably achieve little. Users with private accounts and small followings will carry on untouched, and obscure hashtags will emerge, shifting to evade detection and censorship.
So is Instagram promising to achieve something that simply isn’t possible? Unfortunately, until it makes wholesale changes to how it moderates content, yes. And when you consider that Instagram already actively promotes content that breaches its own Terms and Conditions, there’s little reason to be optimistic that it regards this ban as much more than a reactive attempt at public-relations damage limitation.
As much as I am loath to admit it, Instagram needs government intervention. I’m an advocate of press freedom and the independence of information sharing online, but Facebook and Instagram have consistently demonstrated that they have neither the inclination nor the ability to deal sufficiently with the problems their platforms create. Unless there are serious consequences, such as fines and limits on advertising, they will drag their feet indefinitely.
Instagram is worth more than $100 billion. It's time for the company to invest properly in protecting its users.
Lead image is a composite using an image by Ian Espinosa.