Meta Seeks to Weed Out Harmful Content Using AI Systems Amid Growing Pressure

(CNET) Meta, the parent company of Facebook, has created an artificial intelligence system to flag harmful and misleading content quickly.

Meta says the new AI system, called Few-Shot Learner, requires only a small amount of training data, allowing it to tackle new types of harmful content quickly.
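
Meta has not published implementation details for Few-Shot Learner, but the general few-shot idea can be illustrated with a publicly available zero-shot text classifier. The sketch below is a stand-in, not Meta's system: it assumes the Hugging Face transformers pipeline with the facebook/bart-large-mnli entailment model, and the example post and policy labels are invented for illustration.

```python
# Illustrative sketch only: a zero-shot classifier as a stand-in for the
# few-shot idea. Meta's actual Few-Shot Learner is not publicly available.
from transformers import pipeline

# Entailment-based zero-shot classification: new policy labels can be added
# without collecting thousands of labelled examples or retraining the model.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical post and candidate policy labels, invented for this example.
post = "This so-called vaccine changes your DNA and the government is hiding it."
labels = ["COVID-19 vaccine misinformation", "harmless discussion"]

result = classifier(post, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))
```

In practice, a handful of labelled examples per new policy would typically be used to adapt or calibrate such a model, which is what "few-shot" refers to.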

The harmful content Meta targets includes misinformation about Covid-19 vaccines that is likely to evade detection by its current systems.

The company says it has tested the new system and found it better at flagging offensive content that conventional AI systems would have missed.

Meta Product Manager Cornelia Carapcea says the aim of the new AI is to keep users safe by identifying harmful information and acting on it more quickly.

Meta’s new move is expected to counter criticism, with US President Joe Biden saying the company was doing little to prevent misinformation on the platform.

FB (NASDAQ) is down 0.26% in premarket trading.

