(CNET) Meta, the parent company of Facebook, has created an artificial intelligence technology to quickly flag harmful and misleading content.
Meta says the new AI system, Few-Shot Learner, requires only a small amount of training data, allowing it to tackle new types of harmful content quickly.
The harmful content Meta targets includes misinformation about Covid-19 vaccines, which often evades detection by its current systems.
The company says it has tested the new system and found it better at flagging offensive content that conventional AI systems would have missed.
Meta Product Manager Cornelia Carapcea says the goal of the new AI is to keep users safe by detecting harmful content and acting on it more quickly.
The move is expected to counter criticism, including from US President Joe Biden, who said the company was doing too little to prevent misinformation on the platform.
FB (NASDAQ) is down 0.26% in premarket trading.