The results were shared onstage at F8, Facebook's annual developer conference taking place today at McEnery Convention Center in San Jose, California. Other news announced at F8 this year includes the release of Oculus Go, new Facebook Stories sharing capabilities, and the reopening of app and bot reviews following the Cambridge Analytica scandal. See the full rundown here.

The results of Facebook's research mean that its computer vision systems can recognize far more specific categories in real-world images: instead of just "food," Indian or Italian cuisine; not just "bird," but a cedar waxwing; not just "man in a white suit," but a clown.

Improvements to Facebook's computer vision could make for a better experience in everything from resurfacing old memories to News Feed rankings, Manohar Paluri, who leads computer vision research in Facebook's applied machine learning division, told VentureBeat. "You can put it into descriptions or captioning for blind people, or visual search, or enforcing platform policies," he said.

"All of these actually can now do a much better job in individual tasks because we have representations that are richer, that are better, and that understand the world in a lot more detail than before." Most of Facebook's computer vision advancements have been achieved through supervised learning, Paluri said, in which people are fully involved in labeling the data fed to neural nets. However, today's advances were achieved with weakly supervised learning, which uses a mix of labeled and unlabeled datasets.
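The article doesn't detail Facebook's actual training setup, but a minimal sketch of the general idea, assuming tag-style weak labels (noisy, incomplete annotations rather than hand-curated ones) supervising a toy multi-label classifier in PyTorch, might look like this. All names and data below are hypothetical stand-ins, not Facebook's code:

```python
import torch
import torch.nn as nn

# Hypothetical toy setup: each image is represented by a precomputed
# feature vector, and carries "weak" labels (e.g., user-supplied tags)
# that are noisy and incomplete rather than hand-annotated.
NUM_FEATURES, NUM_TAGS, BATCH = 512, 1000, 32

# Stand-in data: random features and sparse multi-hot weak labels.
features = torch.randn(BATCH, NUM_FEATURES)
weak_labels = (torch.rand(BATCH, NUM_TAGS) < 0.01).float()

# A linear classifier over the features; a real system would fine-tune
# a full convolutional network instead of a single layer.
model = nn.Linear(NUM_FEATURES, NUM_TAGS)

# Multi-label loss: each tag is an independent yes/no prediction,
# which tolerates missing tags better than one softmax over classes.
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    optimizer.zero_grad()
    logits = model(features)
    loss = loss_fn(logits, weak_labels)  # noisy tags act as supervision
    loss.backward()
    optimizer.step()
```

The appeal of this approach is the loss function never sees a human-verified label: whatever weak signal already exists alongside the image stands in for annotation, which is what lets it scale past what manual labeling can cover.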

Facebook is using weakly supervised learning because the method requires far fewer human annotators than fully supervised learning. No specific verticals were targeted during the course of this research. Read more from venturebeat.com…
