Facebook will identify terrorists soon
The founder of Facebook, Mark Zuckerberg, has told the public that artificial intelligence software will eventually be able to review content posted on the social network. Algorithms will learn to spot violence, bullying and terrorism, and to help prevent suicide attempts. He admitted that Facebook has previously made mistakes over the content it removed, and cautioned that it would take years to develop such algorithms.
He described the plan in an announcement that was welcomed by an internet safety charity which had previously been critical of the way the social network handles posts depicting extreme violence.
The plan was set out in a letter of more than 5,500 words discussing the future of Facebook. In it, the founder said it is impossible for humans to review the billions of messages and posts that appear on the platform every day, and that the complexity of the issues involved has outgrown the existing processes used to govern the community.
He cited the removal of videos related to the Black Lives Matter movement and of the historical "napalm girl" photograph from Vietnam as errors in the existing process. Facebook was also criticised in 2014 when it emerged that one of the killers of Fusilier Lee Rigby had spoken on the site about murdering a soldier. The company is researching systems that can read text and look at photos and videos to flag anything dangerous happening on the platform. This work is at an early stage, but the systems already examine some content and generate about one-third of all reports to the team that reviews content.
He said that AI promised to identify problematic material more quickly than humans, and to flag risks that nobody has reported, including terrorists planning attacks through private channels.
The Facebook team is exploring ways to use AI to tell the difference between news stories about terrorism and actual terrorist propaganda.
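The article does not describe Facebook's actual models, but the task it mentions is a standard text-classification problem. The sketch below is a minimal bag-of-words Naive Bayes classifier in pure Python, shown purely for illustration; the class, the training phrases and the labels are all invented here and have no connection to Facebook's systems.

```python
from collections import Counter, defaultdict
import math


class NaiveBayesText:
    """Toy bag-of-words Naive Bayes text classifier (illustrative only)."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of documents
        self.vocab = set()

    def train(self, text, label):
        """Add one labelled example to the model."""
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def predict(self, text):
        """Return the label with the highest log-probability for the text."""
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, doc_count in self.label_counts.items():
            # Log prior plus log likelihood with add-one smoothing.
            score = math.log(doc_count / total_docs)
            label_total = sum(self.word_counts[label].values())
            for w in words:
                count = self.word_counts[label][w] + 1
                score += math.log(count / (label_total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Invented two-example training set, just to show the mechanics.
clf = NaiveBayesText()
clf.train("breaking report attack investigation officials", "news")
clf.train("join us fight glory brothers recruitment", "propaganda")
print(clf.predict("officials report an investigation"))  # → news
```

A production system would of course use far larger training sets and modern learned representations rather than word counts, but the underlying idea of scoring a post against competing content categories is the same.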
Mr. Zuckerberg's aim is to allow people to post as freely as possible on the platform; the main constraint is that content must remain within the law, and the algorithms must be able to detect when it does not. Users will be able to filter their news feed to remove the types of posts they do not want to see, adjusting personal settings covering violence, graphic content, profanity and nudity. For people who do not make a choice, default settings will apply. It is worth noting that such algorithms would require major advances in understanding photos, videos and text that contain graphic violence, hate speech, sexually explicit material and much more.
The Facebook plan was also welcomed by a member of the Family Online Safety Institute, which had previously criticised Facebook for allowing beheading videos to be viewed on the site without warning.