US regulator accuses Meta of putting child users at risk

The US Federal Trade Commission (FTC) has accused Meta, the parent company of Facebook and Instagram, of failing to implement adequate parental controls, and has proposed that the company be banned from making money off children’s data.

The regulator’s investigation found that users under the age of 13 were still able to chat with contacts who were not vetted by their parents.

The company was also accused of allowing third-party apps continued access to users’ private information, despite having promised to cut off access after 90 days of non-use.

The FTC has proposed a number of actions in response to these findings, including limits on future uses of facial recognition technology: Meta would be required to obtain users’ affirmative consent before any such use.

In response, Meta has accused the FTC of overstepping its authority and called the move a “political stunt”. A spokesperson for Meta claimed that the company was being singled out, while Chinese companies such as TikTok were allowed to operate without constraint on American soil.

The FTC began its case against Meta in 2018, following revelations that Cambridge Analytica had taken personal data belonging to millions of Facebook users.

Meta says it has spent significant resources building an industry-leading privacy programme and has pledged to “vigorously fight” the FTC’s action. The company maintains that it has been unfairly treated, accusing the regulator of advancing a “totally unprecedented theory”.

Despite these claims, the FTC insists that Meta has repeatedly violated its privacy promises, and is demanding tougher action to protect younger users.