Can Facebook’s AI learn to recognise faces without bias?

Social media giants Facebook, Twitter and YouTube have all made a push to integrate AI into their platforms in India.

The companies say the technology will help them improve the accuracy of their content systems and the quality of the user experience.

But the impact of this AI is not yet clear, says Srikanth Prasad, a professor at the University of California, Los Angeles.

He says the technology may not help users, but it may help the social media companies and advertisers.

Today, for example, the company may not know what a person is saying or what they look like, he says.

And if the AI can learn to recognise people’s faces, it has the potential to improve the experience both for users and for the brands that advertise on the platform.

“We may have a more accurate understanding of the face and a more engaged user, and so on,” he says, adding that Facebook has been testing its AI in real-world situations.

But others worry that the AI will also become biased.

“This is not a way for AI to be unbiased and neutral,” says Daniel Bovey, an AI expert at Google Brain.

He says the AI may still be unable to distinguish a person’s true identity from a fake one.

The social media company says it has been using the AI to improve its sharing algorithms and to better understand how to surface content to its users.

“The biggest impact will come when the technology can recognise every face reliably, from the first person it identifies to the last,” the company said in a statement.

But experts say this could also lead to AI taking a biased approach to information.

“It’s not the AI that’s biased, it’s the world the AI learns from,” says Michael Bekker, a researcher at Stanford University who has studied artificial intelligence.

“If you’re building a machine that has been trained on data containing things you know are false, it can learn to treat those false things as true,” he adds.

He points to a 2014 paper published in Science showing that an artificial intelligence trained to detect images of people struggles with faces that look different from those in its training data.

“It can do this in a way that makes it more likely to be wrong about some people, even while the machine appears to be getting things right overall,” he says.
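
As a toy illustration of Bekker’s point (this construction is my own, not taken from the paper he cites): if training labels are systematically wrong for one group of people, a standard classifier will learn and faithfully reproduce those errors for that group.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Two groups drawn from the same distribution, governed by the same true rule.
X = rng.normal(size=(2000, 5))
group = rng.integers(0, 2, size=2000)          # group 0 or group 1
true_y = (X[:, 0] > 0).astype(int)             # the real ground truth

# Systematically corrupt the training labels for group 1 only:
# every group-1 example with a positive second feature is labelled 1.
y_train = true_y.copy()
y_train[(group == 1) & (X[:, 1] > 0)] = 1

features = np.column_stack([X, group])
model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(features, y_train)

# The model reproduces the corruption: measured against the ground
# truth, it is markedly less accurate for group 1 than for group 0.
pred = model.predict(features)
for g in (0, 1):
    acc = (pred[group == g] == true_y[group == g]).mean()
    print(f"group {g}: accuracy vs ground truth = {acc:.2f}")
```

Nothing about the learning algorithm is “biased” here; it has simply learned the world exactly as the flawed labels described it, which is Bekker’s point.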

The company has also said that it has started to work with academic researchers to test the impact that the technology could have on social media platforms.

“We are not yet at a point where we’re confident that our AI is neutral,” Facebook said in March.

But some social media users are not convinced.

“You can’t really trust the data that they are producing,” says Vishal Pandit, a human-rights lawyer and co-founder of the Centre for Internet and Society in New Delhi.

“They have no idea how accurate they are.”

In fact, the social network has said that the data it holds on its user base contains “lots of false positives”, which were excluded from the data used to train the AI.

“These are people who are not using the tool at all, but are looking at it because they want to get a certain result,” said the company.

“A lot of people are not even aware that there are such things as false positives.”
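
To see how false positives can creep into a face-matching system, consider the simplified sketch below. It assumes a generic embedding-based matcher in which two photos are declared the same person when their embedding vectors fall within a distance threshold; the function names, dimensions and threshold are illustrative assumptions, not details of Facebook’s actual system.

```python
import numpy as np

def is_same_person(emb_a: np.ndarray, emb_b: np.ndarray,
                   threshold: float = 0.4) -> bool:
    """Declare a match when the cosine distance between two face
    embeddings falls below a threshold (all values illustrative)."""
    cos_sim = emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b))
    return (1.0 - cos_sim) < threshold

rng = np.random.default_rng(0)

# Hypothetical 128-dimensional embedding of one user's face.
person_a = rng.normal(size=128)

# A different person who happens to look similar: their embedding
# lands close to person_a's, and the matcher wrongly says "same person".
look_alike = person_a + rng.normal(scale=0.2, size=128)

print(is_same_person(person_a, look_alike))  # True: a false positive
```

Lowering the threshold would reduce such false positives, but at the cost of more false negatives, where genuine matches are missed. That trade-off is why “lots of false positives” is a data-quality problem rather than a simple bug.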

There are also concerns that the AI is biased against people who hold particular political or religious views.

The AI, in fact, uses data from Facebook’s internal testing and analysis to help identify people who share more conservative views.

“As we get closer to having a fully automated system, we will be able … to identify the people that have the most conservative views,” Facebook says.
