The feature uses machine learning to detect suspicious activity, such as adults sending large numbers of friend or message requests to children. When it spots suspect behavior, it places an in-app warning at the top of the conversation, prompting users to block or ignore the shady account and offering tips on avoiding potential scams. Facebook says the feature doesn't need to read the messages themselves. Instead, it looks for behavioral signals, such as an account sending numerous requests in a short period of time. That means it will continue to work when Messenger becomes end-to-end encrypted, Facebook messaging privacy chief Jay Sullivan said in a statement.
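To illustrate the idea of metadata-only behavioral signals, here is a minimal sketch of a sliding-window rate check that flags an account purely from request timestamps. This is not Facebook's actual system; the window size and threshold are hypothetical values chosen for illustration.

```python
from collections import deque

# Hypothetical parameters (not Facebook's real thresholds).
WINDOW_SECONDS = 3600  # look at the last hour of activity
MAX_REQUESTS = 20      # requests allowed in the window before flagging

def make_rate_flagger(window=WINDOW_SECONDS, limit=MAX_REQUESTS):
    """Return a per-account function that records request timestamps and
    reports whether the behavior looks suspicious. Only metadata
    (timestamps) is inspected, never message content, which is how such a
    signal could keep working under end-to-end encryption."""
    timestamps = deque()

    def record_request(now):
        timestamps.append(now)
        # Drop events that have fallen outside the sliding window.
        while timestamps and now - timestamps[0] > window:
            timestamps.popleft()
        # True would mean: surface an in-app warning to the recipient.
        return len(timestamps) > limit

    return record_request

flag = make_rate_flagger()
# 25 requests within 25 seconds: the flag trips once the count exceeds 20.
results = [flag(t) for t in range(25)]
print(results[0], results[-1])  # False True
```

The key design point mirrored here is that the detector consumes only timing metadata, so encrypting the message bodies changes nothing about how it works.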
Facebook’s safety strategy
Facebook has chosen not to automatically block suspicious accounts that the new feature flags. Instead, it will prompt users to make their own informed decisions. This approach is similar to the alerts Facebook now sends to users who interact with coronavirus misinformation, which direct them to a myth-debunking webpage. Stephen Balkam, CEO of the Family Online Safety Institute, said he approved of the strategy.
Facebook started rolling out the new feature on Android in March and will add it to iOS next week.