Facebook Shifts for the Future – Part 2

Aitarget examines key announcements from Day 2 of F8 2019

Aitarget experts joined creators and entrepreneurs from around the world in San Jose on 30 April and 1 May for Facebook’s annual two-day conference. More than 5,000 people explored the future of technology and the latest tools to create and grow business.

We’ve addressed Facebook founder Mark Zuckerberg’s overall comments about the privacy-focused future of the social network and the first-day highlights in Part 1.

The focus of the second day shifted to technology, and particularly the long-term investments Facebook has been making in artificial intelligence (AI) and augmented and virtual reality (AR and VR). Onstage speakers talked about the ongoing effects of the rapid progress made in recent years and the future being built with these innovative technologies.

Responsible innovation

Facebook Chief Technology Officer Mike Schroepfer began the keynote address on Day 2 by addressing some elephants in the room in terms of social media misuse:

“I am as optimistic about the future as I’ve ever been… It’s true that we, and I, have learned a lot of really hard lessons over the last few years, and those lessons have fundamentally changed the way we develop and build new technologies. Made us realise... not just the amazing good that can come from new technology, but the bad, the unintended consequences, the ways in which people may abuse those new technologies.”

Even though the challenge is clear, issues relating to election interference, misinformation, and hate speech are complex and the solutions aren’t simple, admitted Schroepfer. Like all of us, the CTO would love an easy 1-2-3 innovation playbook, but it just doesn’t work that way.

Instead, Facebook is dedicating itself to continuously tackling these problems on a daily basis. Each product team includes a special group dealing with the specific challenges faced by that product. Those building new features work alongside those investigating drawbacks.

Facebook is committed to engaging with experts at each stage of creating new products and features, said Schroepfer. They will work with “a spectrum of organisations on each and every core issue we have”, such as save.org for suicide prevention.

The CTO took the audience through the challenges faced with problematic content, and the steady progress made with increasing use of AI to proactively detect problems (before users report them).

As an example, Schroepfer demonstrated to the F8 2019 audience how difficult it was even for humans to distinguish between tempura broccoli and marijuana in an image.

Five years ago Facebook could only detect drug-related content using keywords. The development of computer vision (CV) made it easier to detect violating content in images – the task with which the audience struggled – and now AI can even detect packaged drugs like those pictured.

Search and destroy

In recent years Facebook has worked hard to develop AI technology that proactively detects content violating Facebook policies, with as little human supervision as possible.

While developing such tools can be challenging – especially as they’re working against bad actors actively trying to get around defenses – the overall goal is simple, said Schroepfer.

Manohar Paluri of the Facebook AI team shared how a digital common language is being created that helps catch harmful content across many languages by utilising advances in natural language processing (NLP).

Panoptic FPN, a new approach to object recognition, helps AI-powered systems understand context from photo backgrounds. As a result, AI can now help proactively address problematic content in any medium on social media (text, images, video).

Paluri began his speech by stating that nowadays Facebook “cannot exist without AI” – and closed by spelling out just what AI itself needs to do.

Fairness and inclusivity

While AI is an important tool to help keep Facebook safe, it also brings risks in terms of reflecting and amplifying bias, said Joaquin Quinonero Candela from Facebook’s AI team.

Facebook is building best practices for fairness into every step of product development to ensure AI protects users without discriminating against them.

A new process for inclusive AI has been baked into the development of new features to ensure data sets are representative of people across the spectrum of age, gender presentation, and appearance.

Candela spoke about how fairness is a process, then introduced his colleague Margaret Stewart, VP of Product Design, to address the hard ethical questions in design.

Stewart spoke about the human side of Responsible Innovation, and how the design of social media interfaces can impact society in both good and bad ways.

“Design is not neutral,” she stressed. “It has to be managed with utmost care.”

Facebook uses design to reduce the harm that could be caused by misinformation while erring on the side of freedom of expression. However, it doesn’t always work as intended. For instance, when they tried working with third-party fact-checking organisations to flag disputed content, it had the opposite effect, drawing more attention to such material.

With research showing people around the world wanted to decide for themselves what information is credible, Facebook has built new tools to help determine reliability. Every article that appears in users’ News Feeds now has a display button providing context (who posted it, how long they’ve been on Facebook, related articles by source and topic, statistics on who’s sharing it). Because of how important this issue is, Facebook has taken the rare step of animating the new display button, even though animation tends to create visual noise. “That’s our values showing up in design,” said Stewart.

Stewart also shared how Facebook’s process for dealing with profiles when someone dies has changed over the years. Taking into account cultural differences across nations, Facebook is still trying to balance security concerns with the wishes of friends and family to remember their loved one. That is why a new Tributes tab, separate from the original Timeline, has been added, so friends and family can gather and share memories.

Respectful and safe AR/VR

Lade Obamehinti of Facebook’s AR/VR software team talked about how the inclusive AI process is being used by Spark AR engineers to ensure the software delivers quality AR effects for everyone. For instance, it is capable of recognising hands of various skin tones under varied lighting.

Facebook is also working to ensure their technology not only doesn’t exclude people, but brings people together. VR will allow users to interact regardless of physical distance, but to be successful it needs people to feel completely present.

“Before we ship this we also need to make sure we answer some key questions around privacy and security,” said Ronald Mallett, Research Manager for the AR/VR team. “One of the questions is making sure an avatar is authentic.”

Facebook is working on truly lifelike avatars, with gestures, facial expressions, and voice tone that uniquely identify a personality.

AR and VR need to be inclusive and safe, so Facebook has built preventive systems. One example is a code of conduct, for those who build headsets as well as those who use them, that fosters respectful culture and interactions. Another is a set of reactive tools for reporting violations.

Lindsay Young, Oculus VR product manager, summed up the VR team’s approach:

“We believe VR is the next frontier of human interaction. So it's an area we need to be incredibly mindful when building. VR is powerful, but it can also be intimidating… As we bring together people in VR, they should have access to tools that make them feel safe.”

Part of this is to shape what safety even means in VR. “VR is so new that some social norms may not exist in it yet,” said Young.

They’ve built several tools that give people control over their safety experience, like the safety bubble, which serves as a boundary of personal space. If one avatar crosses another’s bubble, the two become invisible to each other.

Other tools include Pause (take a break in any space when you don’t feel comfortable) and Mute (disable the audio of another person who is disturbing you).
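The bubble and mute rules described above boil down to a couple of simple predicates. The sketch below is purely a hypothetical illustration of that logic (the avatar names, the 2D positions, and the 1-metre radius are all assumptions), not Oculus code:

```python
import math
from dataclasses import dataclass, field

BUBBLE_RADIUS = 1.0  # hypothetical personal-space radius, in metres

@dataclass
class Avatar:
    name: str
    x: float
    y: float
    muted: set = field(default_factory=set)  # names this avatar has muted

def in_bubble(a: Avatar, b: Avatar) -> bool:
    """True if the two avatars are close enough to breach the personal-space bubble."""
    return math.hypot(a.x - b.x, a.y - b.y) < BUBBLE_RADIUS

def visible_to_each_other(a: Avatar, b: Avatar) -> bool:
    # Crossing the bubble makes both avatars invisible to each other.
    return not in_bubble(a, b)

def audible(listener: Avatar, speaker: Avatar) -> bool:
    # Mute disables the other person's audio for the listener only.
    return speaker.name not in listener.muted

# Example: Bob steps inside Alice's bubble; Carol stays outside it.
alice = Avatar("alice", 0.0, 0.0)
bob = Avatar("bob", 0.5, 0.0)
carol = Avatar("carol", 3.0, 0.0)
alice.muted.add("bob")
```

Note that visibility is mutual (both avatars disappear for each other), while muting is one-sided: Alice no longer hears Bob, but Bob still hears Alice.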

Live moderators ensure good behaviour as well.

Final thoughts

Overall, F8 2019 had a really positive and personal vibe, showcasing plenty of new tools and features that will keep people using the platform safely for connection, entertainment, and business. With its commitment to “listening, learning, and adapting” and “building responsibly” as new VR technologies are introduced, the social network looks well set to explore new technological horizons.
