I love how every player in this space is just building exactly the same product, and none of them seems to have a compelling pitch for why someone would need it
Teleprompter is pretty original and compelling
I guess. I mean... how often do you use an actual teleprompter? I'm guessing apart from politicians and newscasters, the answer is pretty much never.
I bought this but ultimately returned it as it didn't really solve any problems due to being a complete walled garden with very sparse functionality right now
It's a cool form factor, but the built-in transcription, AI, etc. are not very well implemented, and I cannot imagine a user viewing this as essential rather than a novelty gadget
> complete walled garden
Not really. You can build your own apps [1].
Seems they even included a hard-coded API key for DeepSeek's API that still works, which doesn't exactly inspire confidence in their engineering.
If you share that API key here...
- we all get to use a free LLM
- they'll learn to do it properly next time
So it's a win-win situation
If I instead don't, and let you know that the key is there in the source code, hopefully at least one deserving person might learn how to look through source code better, and none of the lazy people get it :)
So no.
Not enough. I am not putting any hardware on my nose if I can't fully control the software running on it. Is the OS open source?
This is pretty low-level access. The phone you are going to pair it with, which provides the network access, is definitely more closed than this.
It is bad that phones are closed. But, at least they aren’t stuck to our faces and able to observe our every move.
These glasses don’t have a camera.
Ah, my mistake, I just assumed. Well, in that case the lack of openness is less of an issue.
No, it isn't
I don't have time to fiddle around with some locked-in ecosystem in exchange for a little more productivity or the ability to pretend not to be using my computer. And I don't even have a day job.
If it were just a heads-up display for Android like the Xreal, but low power and wireless, that might be cool for when I'm driving. But everyone wants to make AI glasses locked into their own ecosystem. Everyone wants to displace the smartphone, from the Rabbit R1 to the new Ray-Bans. It's impossible.
In the end this tech will all get democratized and open sourced anyways, so I have to hand it to Meta and others for throwing money around and doing all this free R&D for the greater good.
I really like the aesthetic of these, both the glasses themselves and the UI. However, I have the same problem with these as with smartwatches: the apps don't solve any of my problems.
Real-time translation is a really good use case. The problem is that most implementations, such as AirPods live translate, are not great.
I've been in many situations where I wanted translations, and I can't think of one where I'd actually want to rely on either glasses or the airpods working like they do in the demos.
The crux of it for me:
- if it's not a person, it will be out of sync, and you'll be stopping it every 10 sec to get the translation. One might as well use their phone; it would be the same, and there's a strong chance the media is already playing from there, so having the translation embedded would be an option.
- with a person, the other person needs to understand when your translation is going on, and when it's over, so they know when to give an answer or know they can go on. Having a phone in plain sight is actually great for that.
- the other person has no way to check if your translation is completely out of whack. Most of the time they have some vague understanding, even if they can't really speak. Having the translation in the glasses removes any possible control.
There are a ton of smaller points, but all in all the barrier for a translation device to become magic and just work plugged in your ear or glasses is so high I don't expect anything beating a smartphone within my lifetime.
Some of your points are already considered with current implementations. AirPods live translate uses your phone to display what you say to the target person, and the target person's speech is played to your AirPods. I think the main issue is that there is a massive delay and Apple's translation models are inferior to ChatGPT's. The other thing is the AirPods don't really add much. It works the same as if you had the translation app open and both people are talking to it.
Aircaps demos show it to be pretty fast and almost real time. Meta's live captioning works really fast and is supposed to be able to pick out who is talking in a noisy environment by having you look at the person.
I think most of your issues are just a matter of the models improving themselves and running faster. I've found translations tend to not be out of whack, but this is something that can't really be solved except by having better translation models. In the case of Airpods live translate the app will show both people's text.
I have the G1 glasses and unfortunately the microphones are terrible, so the live translation feature barely works. Even if you sit in a quiet room and try to make conditions perfect, the accuracy of transcription is very low. If you try to use it out on the street it rarely gets even a single word correct.
This is the sad reality of most of these AI products: they are just tacking poor feature implementations onto the hardware. It seems like if they just picked one of these features and did it well, the glasses would be useful.
Meta has a model just for isolating speech in noisy environments (the “live captioning feature”) and it seems that’s also the main feature of the Aircaps glasses. Translation is a relatively solved problem. The issue is isolating the conversation.
I’ve found Meta is pretty good about not overpromising features, and as a result, even though they probably have the best hardware and software stack of any glasses, the stuff you can do with the Ray-Ban displays is extremely limited.
Is it even possible to translate in real time? In many languages and sentences the meaning and translation needs to completely change all thanks to one additional word at the very end. Any accurate translation would need to either wait for the end of a sentence or correct itself after the fact.
I guess the translation can always update itself in real time if the model is fast enough.
And I hate the aesthetics of them (for me), which is going to be a huge problem for the smart glasses world. Glasses dramatically change how you look, so few frames look good on more than a handful of face types, and that’s not even considering differences in personal style. Unless you come up with a core that can be used in a bunch of different frame shapes, I can’t see any of these being long-term products.
The hardware can look amazing, but if the software doesn't offer something meaningfully better than pulling out your phone, it ends up as an expensive novelty
> the apps don't solve any of my problems
Reminds me of when dumbphones were introduced and people said things like "why do I need to have a phone with me all the time?"
I don't remember anyone saying that.
In the 1980s, it was pretty common.
This is what cellphones looked like, back then: https://share.google/z3bBbfhT43EHcDYoc
Cellphones actually were quite small, during the 1990s. I used to go to Japan, and they got downright tiny during that time.
Smartphones actually increased the size (but not to 1980s scale).
The only thing that matters is how easily I can customize what is shown on the screen. Everything else is probably just annoying, like the translation or map feature, which I assume will be finicky and useless. If the ring had four-way arrows and ok/back buttons, and the glasses had a proper SDK for creating basic navigation and retrieval, such as the ability to communicate with HTTP APIs, there would be no limit to the useful things I could create. However, if I'm forced to use built-in applications only, I have very little faith that it would actually be useful, considering how bad the last generation of applications for these devices was.
> The only thing that matters is how easily I can customize what is shown on the screen.
What matters more is how they support different eye-distances (interpupillary distance, IPD).
For me it's like the Pebble of smart glasses land: simple and elegant. Less is more, just calendar, tasks, notes and AI. The rest I can do on my laptop or phone (with or without other display glasses). I do wish there were a way to use the LLM on my Android phone with it and, if possible, write my own app for it, so I am not dependent on the internet and have my HUD/G2 as a lightweight, custom-made AI assistant.
Nice idea, but no world-lock rendering. (That's hard, so we'll let them off.)
However, you are limited in what you can do.
There are no speakers, which they pitch as a "simpler, quieter interface", which is great, but it means that _all_ your interactions are visual, even if they don't need to be.
I'm also not sure about the microphone setup; if you're doing a voice assistant, you need beamforming/steering.
However, the online context in "conversate" mode is quite nice. I wonder how useful it is. They hint at proper context control ("we can remember your previous conversations"), but that's a largely unsolved problem on large machines, let alone on-device.
Why would you need world lock rendering on a 640x350 monochrome display? It’s a personal dashboard, not a “spatial computer”.
maps, but also being able to move your head so that you can see past the thing on the screen is useful.
For people who are prone to motion sickness, it's also really useful to have it tied to the global frame. (I don't have that, fortunately.)
So we're already used to people looking crazy talking to themselves (talking via BT headphones).
Now we're going to see people's eyes moving around like crazy.
Would that attract your attention? I see people with nystagmus often enough to not care.
It would. I don't know if you mean people reading phone screens or on drugs or what, but I rarely ever see it outside clinical settings.
If you run this site, look into making your images more compressed! Takes forever to load them
Under support a number of policies are listed. The privacy policy is not one of them. No thanks.
I like the idea a lot more than other implementations (although I still think the original Google Glass was great), but I do feel the market for these, outside of the bankers and financial industry that loves to show off with tech, is primarily dorks like myself, and dorks like myself often like to be able to fiddle. The fact that I can't be trusted to install applications that I can make as I go, with an easy-to-use API, seems like a mistake. I see the Even Hub, but that seems far off considering there are no details about it.
> The fact I can't be trusted to install applications that I can make as I go
Thanks this was key in deciding whether to consider this brand at all.
Google Glass at least gave devs something to play with out of the gate
No open app store is a non-starter.
Oof, yeah. Why do I want this if I can't run code on it? Useless.
They have an open BLE-based protocol; you can display whatever you want on the screen.
I am not seeing anything mentioning that via this link.
There is a big README panel saying:
> We have now released a demo source code to show how you can build your own application (iOS/Android) to interact with G1 glasses.
> More interfaces and features will also be open soon to give you better control of the hardware.
With a link to the demo app just below that with a detailed explanation of the protocol and currently available features.
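For anyone wondering what that protocol amounts to in practice, here is a rough sketch of pushing a line of text to the glasses from a laptop over BLE with Python's bleak library. The device-name filter, the characteristic UUID (I've used the common Nordic UART one) and the one-byte "text" opcode are assumptions for illustration, not the documented protocol; the demo repo has the real values.

    # Rough sketch: send a line of text to the glasses over BLE.
    # The UUID, the device-name filter and the packet framing are assumed
    # placeholders; see the demo repo for the actual protocol.
    import asyncio
    from bleak import BleakClient, BleakScanner

    WRITE_CHAR = "6e400002-b5a3-f393-e0a9-e50e24dcca9e"  # assumed Nordic-UART-style write characteristic

    async def main() -> None:
        # Scan for a device whose advertised name looks like the glasses.
        devices = await BleakScanner.discover(timeout=5.0)
        glasses = next((d for d in devices if d.name and "G1" in d.name), None)
        if glasses is None:
            print("glasses not found")
            return
        async with BleakClient(glasses.address) as client:
            # Hypothetical framing: one "display text" opcode byte plus a UTF-8 payload.
            packet = bytes([0x4E]) + "hello from my laptop".encode("utf-8")
            await client.write_gatt_char(WRITE_CHAR, packet)

    asyncio.run(main())

If it really is just a plain GATT write like this, then anything that can speak BLE (a phone app, a laptop, a Raspberry Pi) can drive the display, which makes the "complete walled garden" complaint only half true.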
I wonder how they handle the whole notification stack with iOS. Did anyone try the first one? A lot of non-apple wearables have issues with that.
Am I the only one who feels hesitant to even interact with someone wearing smart glasses? I have no idea if they could be recording me.
It's similar to how people felt when Google Glass first showed up. Until there's some universally understood signal like a visible recording light (that can't be turned off), I think that unease is going to stick around
Even with a light: I just made a quick Google search and the top Reddit thread is about how easy it is to cover it up with black nail polish.
These particular glasses don’t have a camera.
I've never used any smart glasses, but I do wear prescription glasses ("dumb" glasses?); don't these smart glasses products all clash with the need for prescription lenses? I mean, either they each have to provide the entire possible range of correction profiles, for use instead of what people wear now, or they need to be attachable overlays for regular prescription glasses, which is complicated and doesn't look like what the providers are doing ATM. Or am I getting it wrong?
Things like Vision and Quest have the ability to adapt to vision issues, but these things likely don’t.
I guess they could use a common “generic” form factor, that would allow prescription lenses to be ordered.
That said, this is really the ideal form factor (think Dale, in Questionable Content), and all headsets probably want to be it, when they grow up.
These offer a wide-ish profile of prescription lens options within the three lens types they support. It is definitely not the full gamut though.
there's an ai lol
Isn’t AI everywhere nowadays? I won’t be surprised if there are AI toasters and fridges.