Facebook put the safety of its content moderators at risk after inadvertently exposing their personal details to suspected terrorist users of the social network, according to The Guardian.
The security lapse affected more than 1,000 workers across 22 departments at Facebook who used the company's moderation software to review and remove inappropriate content from the platform, including sexual material, hate speech and terrorist propaganda, the British newspaper reports.
The issue came to light when Facebook moderators began receiving friend requests from people affiliated with the terrorist organisations they were scrutinising.
"The security glitch, which lasted for a month before Facebook was able to correct it in November, made the moderators' profiles appear in the notifications of Facebook groups thought to be administered by terrorists with ties to IS, Hezbollah and the Kurdistan Workers Party," the report quoted a moderator as saying.
The company later discovered that the personal Facebook profiles of its moderators had been automatically appearing in the activity logs of the terror groups they were shutting down. Around 40 workers in the counter-terrorism unit based at Facebook's European headquarters in Dublin, Ireland, were affected, and six of those were assessed to be "high priority" victims of the glitch.
"Within the high-risk group, the six had their personal profiles viewed by accounts with ties to IS, Hezbollah and the Kurdistan Workers Party. Facebook complies with the US state department's designation of terrorist groups," the moderator added.
A Facebook spokesman, confirming the security breach, said the company had made technical changes to "better detect and prevent these types of issues from occurring".
"We care deeply about keeping everyone who works for Facebook safe. As soon as we learned about the issue, we fixed it and began a thorough investigation to learn as much as possible about what happened," the spokesman said.
After the leak was detected, Facebook convened a "task force of data scientists, community operations and security investigators". Internal Facebook emails show that the company warned all the employees and contracted staff it believed were affected, and set up an email address, email@example.com, to field queries from those affected.
For those in the high-risk group, Facebook also offered counselling through its employee assistance programme, in addition to counselling provided by the outsourcing firm Cpl Recruitment. It also offered to install home alarm monitoring systems and to provide transport to and from work for the six.
But one of the moderators, who has since fled Ireland, said Facebook needed to do more to address pressing concerns for moderators' safety and that of their families. He has filed a legal claim against Facebook and Cpl, seeking compensation for the psychological damage caused by the leak.
The moderator who went into hiding was among hundreds of "community operations analysts" contracted by the global outsourcing company Cpl. Community operations analysts are typically low-paid contractors tasked with policing Facebook for content that breaches its community standards.
Fearing retaliation, the moderator, who first came to Ireland as an asylum seeker when he was a child, quit his job and moved to eastern Europe for five months.