Ray-Ban Meta Smart Glasses will get smarter soon



Image: Meta


Meta is beta testing more advanced AI features on its Ray-Ban Smart Glasses, rolling out to a small number of opt-in users.

Ray-Ban Meta Smart Glasses are best known as a lightweight wearable camera, but they can also respond to voice commands to take photos and record video. The glasses even support ChatGPT-like conversations, in beta within the US.

When Meta announced its latest smart glasses, the company teased that much more advanced AI capabilities would come in the future. With their Qualcomm Snapdragon AR1 Gen 1 chip, the glasses can handle more than just voice commands. Now, a few lucky beta testers are getting early access.

Opt into Meta’s multi-modal AI

Meta CTO Andrew ‘Boz’ Bosworth posts regular updates on Meta VR and AR products to Instagram, and today’s Reel shared new information about Meta’s multi-modal AI for Ray-Ban Smart Glasses.

Starting this week, owners of Ray-Ban Meta Smart Glasses in the US can sign up to beta test Meta AI’s image recognition features. Note that Boz said it would be a small group.

Meta says you can tap the settings cog in the Meta View app, then “Swipe down and tap Early Access.” I couldn’t find that control, so it might still be rolling out, or perhaps Meta already has enough participants.

Boz said this new AI feature is expected to be available for everyone next year.

Look and ask Meta AI

Meta’s smart glasses can already chat with you, answer questions, help with math and unit conversions, write poems and make songs. The responses are brief but helpful.

Listening to your voice and replying is a single mode of operation. To be multi-modal, an AI must accept more than one kind of input, such as images in addition to speech. Since Ray-Ban Meta Smart Glasses have a good-quality camera, visual input isn’t a problem. The new AI feature is called Look and ask.

Boz demonstrated this in his Instagram Reel by looking at a piece of art in the shape of California. After he asked the smart glasses, “Look and tell me what you see,” Meta AI described the art accurately.

He gave a few other examples of how to use Meta AI:

  • Translate and summarize text you’re looking at.
  • Create captions for your photos.
  • Ask follow-up questions about the image or topic within a certain period of time.

Meta keeps investing in AI

Meta is an AI leader, offering powerful open-source technology for researchers and developers. What has been lacking until recently is consumer access to Meta AI. With the launch of AI avatar chatbots in Messenger, the public got its first taste of what’s possible.

Meta is also bringing generative AI to its VR headsets: the company demonstrated its Segment Anything Model (SAM) earlier this year, running on a Quest 3 or Quest Pro.

Meta has invested heavily in Nvidia AI processors and is also purchasing AMD’s newest Instinct MI300X AI chip. Simultaneously, the social media giant is working on an AI processor of its own based on the open-source RISC-V architecture.

2024 should be a very interesting year for Meta as it continues to expand its hardware and software for AR, VR, and AI. Meta’s AI features are launching first in the US and will come to other countries later.
