Facebook is creating AI that can view and interact with the world from a human’s point of view: Over 2,200 hours of first-person footage captured in nine countries could teach it to think like a person 



  • Facebook is building an artificial intelligence capable of seeing and interacting with the outside world the same way a person can
  • The Ego4D project will let AI learn from ‘video from the center of action’
  • It has collected over 2,200 hours of first-person video from 700 people
  • It can be used in upcoming devices like AR glasses and VR headsets


Facebook announced on Thursday that it is building an artificial intelligence capable of seeing and interacting with the outside world the same way a person can.

Known as Ego4D, the project will take the technology to the next level by letting AI learn from ‘video from the center of the action’, the social networking giant said in a blog post.


The project involved 13 universities and collected over 2,200 hours of first-person video from 700 people.

It is set to use video and audio from augmented reality and virtual reality devices, such as the company’s Ray-Ban smart glasses, which were announced last month, or its Oculus VR headsets.




The company said in the post: “AI that understands the world from this perspective could unlock a new era of immersive experiences, as devices such as augmented reality (AR) glasses and virtual reality (VR) headsets become as useful in everyday life as smartphones.”

Facebook created five goals for the project, including:

  • Episodic memory, or the ability to know ‘what happened,’ such as, ‘Where did I leave my keys?’
  • Forecasting, or the ability to anticipate human actions, such as, ‘Wait, you’ve already added salt to this recipe.’
  • Manipulating hands and objects, such as ‘Teach me how to play the drums.’
  • Keeping an audio and visual diary of your daily life, with the ability to know when a person said a specific thing.
  • Understanding social and human interaction, such as who is talking to whom, or ‘Help me better hear the person talking to me in this noisy restaurant.’

“Traditionally a robot learns by doing things in the world, or is literally taken by the hand and shown how to do things,” said Kristen Grauman, Facebook’s lead research scientist, in an interview with CNBC.

‘There is an opportunity to let them learn from video of our own experience.’

It could be used in upcoming devices such as AR glasses – like the company’s Ray-Ban smart glasses – and VR headsets.

Facebook said the project has collected more than 2,200 hours of first-person video from 700 people.

Facebook’s own AI systems have had a mixed track record of success, notably after the company apologized when its AI labeled a video of Black men, posted by a news outlet, as featuring ‘primates’.

While no AI system can currently perform these tasks, they could form a large part of Facebook’s ‘metaverse’ plans – a blend of VR, AR and reality.

In July, CEO Mark Zuckerberg revealed Facebook’s plans for the metaverse, which he said he believed would be the successor to the mobile internet.

‘You can think of the metaverse as an embodied internet, where instead of just viewing content – you are in it,’ he said in an interview with The Verge at the time.

‘And you feel present with other people as if you were in other places, having different experiences that you couldn’t necessarily have on a 2D app or webpage, like dancing, for example, or different types of fitness.’

Facebook intends to make the Ego4D data set publicly available to researchers in November, the company said.

There is some concern that the project could have negative privacy implications, such as if someone doesn’t want to be recorded, something Facebook has a mixed record on.

A spokesperson told The Verge that additional privacy safeguards would be introduced down the line.

What is the difference between AR and VR?

Virtual reality is a computer generated simulation of an environment or situation.

It makes the user feel that they are in a simulated reality through images and sounds.

For example, in VR, you can feel like you’re climbing a mountain while at home.

In contrast, augmented reality layers computer-generated images on top of existing reality.

AR is delivered in the form of apps that bring digital components into the real world.

For example, in the Pokémon Go app, characters appear in real-world scenarios.


