Mark Zuckerberg wants to stop suicides being broadcast on Facebook Live – here's his plan

February 17, 2017


The use of live streaming platforms to broadcast deeply troubling videos as they unfold is a very modern problem thrown up by the advance of technology. Platforms such as YouTube, Periscope and Facebook Live are essential tools for relaying unfolding news events in real time. However, they are also used by some people at times of extreme vulnerability to broadcast them taking their own lives, or by bullies or criminals to broadcast their abuse to an audience.

Facebook founder Mark Zuckerberg says he recognises the problem and is taking steps to try to prevent suicides and similar disturbing content from being viewed. To do that, he says, Facebook needs to develop artificial intelligence tools that can analyse and identify content in real time, helping the platform’s moderators be proactive instead of reactive.

“Looking ahead, one of our greatest opportunities to keep people safe is building artificial intelligence to understand more quickly and accurately what is happening across our community,” he writes.

Mark Zuckerberg has outlined his vision of the future of Facebook in a 6,000-word essay posted to his profile page
(Photo: Facebook)
“There are billions of posts, comments and messages across our services each day, and since it’s impossible to review all of them, we review content once it is reported to us.

“There have been terribly tragic events — like suicides, some live streamed — that perhaps could have been prevented if someone had realized what was happening and reported them sooner.

“There are cases of bullying and harassment every day that our team must be alerted to before we can help out. These stories show we must find a way to do more.

“Artificial intelligence can help provide a better approach. We are researching systems that can look at photos and videos to flag content our team should review.

“This is still very early in development, but we have started to have it look at some content, and it already generates about one-third of all reports to the team that reviews content for our community.”

Zuckerberg’s comments are made in a 6,000-word essay he posted to his Facebook profile page outlining the social network’s future goals. He posted the mission statement today, addressing his desire to use the platform to turn online communities into real physical groups.

Among the other subjects tackled in the essay are the desire to build tools that allow people to form meaningful communities, and the aim of getting the balance right between what to remove from the site and what to leave up – including ‘fake news’.

After the hugely divisive election campaign that resulted in Donald Trump being elected to the White House – and his obsession with perceived partisan media reports – the best way to tackle ‘misinformation’ online has become a hot topic. However, Zuckerberg has declared that he does not want to root out ‘fake news’ and delete it from the platform, but instead wants content to be rated and marked by Facebook’s team.

Donald Trump rails against the phenomenon of ‘fake news’
(Photo: Getty)
That way, he argues, people can enjoy a multiplicity of media and form a balanced view of their own, rather than allowing Facebook to become an ‘echo chamber’ of friends holding the same views as the user.

“Our goal must be to help people see a more complete picture, not just alternate perspectives,” he writes. “Accuracy of information is very important. We know there is misinformation and even outright hoax content on Facebook, and we take this very seriously.

“We’ve made progress fighting hoaxes the way we fight spam, but we have more work to do. We are proceeding carefully because there is not always a clear line between hoaxes, satire and opinion.

“In a free society, it’s important that people have the power to share their opinion, even if others think they’re wrong.

“Our approach will focus less on banning misinformation, and more on surfacing additional perspectives and information, including that fact checkers dispute an item’s accuracy.”

Samaritans (116 123) operates a 24-hour service available every day of the year. If you prefer to write down how you’re feeling, or if you’re worried about being overheard on the phone, you can email Samaritans at jo@samaritans.org.
