Last week, Instagram launched an important new feature.
Now, if you are worried about a fellow Instagrammer’s mental wellbeing, you can flag their posts so that they receive an automated message and an offer of help.
The message reads: “Someone saw one of your posts and thinks you might be going through a difficult time. If you need support we’d like to help.”
The user is then offered a series of support options: talking to a friend, helpline numbers, and further information and advice on mental health.
The same message will be automatically sent to anyone searching for hashtags like #selfharm.
At the moment the feature is only available in the US, but there are plans to release the update worldwide imminently.
In a social media landscape where people take to platforms like Instagram to express themselves, is this a good idea, or a restriction of free speech and public expression?
Personally, I commend the decision of Instagram to launch this new feature. It opens the door to discussing mental health issues as part of the responsibility of social media platforms and could also help a lot of people who use their Instagram accounts as a means of expression.
In a digital age where Instagram has been criticised on more than one occasion (most recently in this article by The Guardian) for contributing to the fear of missing out and the feelings of inadequacy felt by its users, this move is the next logical step in reassuring users that it takes their mental health seriously.
Instagram’s COO Marne Levine said: “These tools are designed to let you know that you are surrounded by a community that cares about you, at a moment when you might most need the help.”
However, we also need to consider how effective this will be.
Instagram has already banned hashtags such as #thinspiration to prevent users from searching for posts relating to drastic weight loss and eating disorders. How far can it go without restricting content entirely?
And is this enough?
Giving users the option to skip the generated messages is necessary for those who don’t actually need help, but what about the people who do? And is it possible to put anything further in place without completely destroying user privacy?
I think this is certainly a move in the right direction, and that social media platforms need to be taking steps to ensure the real-world safety of their users when so many of us live our lives in a digital landscape.
Finding the balance between caring enough to do something to help and restricting content too much is going to be difficult.
After all, I’m sure Big Brother started out ‘for the greater good’.
What do you think? Is this update a good thing? Or is it going too far?