From “Behind the Apple Design Decisions That Bogged Down Its Mixed-Reality Headset” ($) posted Friday by The Information:
Another design decision that has greatly added to the technical challenges for the Apple headset has been the inclusion of its 14 cameras, which allow it to capture everything from images of the outside world to facial expressions and body gestures.
Apple had to build the Bora image signal processor to process the bounty of imagery. But Apple’s engineers have faced technical challenges getting Bora to work with the headset’s main processor, code-named Staten. The back-and-forth communication between the two chips increases latency, which can create nausea for people wearing the headset…
Another technical hurdle for Apple has involved making a key function of the headset—video pass-through, which depends on the cameras—work properly. That function will allow people wearing the headset to see video images of their surroundings on the displays inside the device, a capability intended to reduce the isolation users experience with other VR headsets, as we previously reported.
But because Apple is also putting a display on the outside of the headset—which will show video images of a user’s eyes and expressions to people around them—it couldn’t position the outward-facing cameras roughly where users’ eyes will be.
My take: Wait. Can this be right? Video images of a user’s eyes and expressions displayed on the outside of the headset? I’m already feeling nauseated.
Maybe that is why the market size was only 11M units in 2021, up from 0.5M in 2014. The market doubled in 2021, mainly due to Meta's aggressive price cuts to $299 and $399.
This is deeply immersive tech. When you're that immersed, you need something grounded to hang on to. It's the same principle as keeping your eyes on land to avoid seasickness, or on the horizon if there's no land in sight.
This is especially important if you’re going to be physically moving around. So now we know that there’s a movement element to Apple’s plans.
All you need now is the surround-sound, sensor-suit, and the “omni-directional treadmill” and harness in “Ready Player One”….
Weird. Some people (i.e., celebs or people who think they are celebs) go around wearing sunglasses indoors and out, with no need for eyes showing on the outside of the lens. There's that Vogue editor, for one.
Look up the video "Knowledge Navigator" (get the HD version). Apple made it in 1987. It showed an iPad-like tablet with a conversational virtual assistant and an internet connection, before the web came to be. That was more than 20 years before the iPad itself was released. The point is that many impossible things are prototyped years before they become real. This also gives the lie to the notion, tossed about, that Steve Jobs just dreamed up the iPhone a year or two before it came out. These sorts of ideas have been around for a long time. Figuring out how to actually make them was the genius.
It's pretty cool. The "iPad" is very clunky, with a huge border, a hinge, and a camera. It is foldable, which looks cool. The virtual assistant appears as an on-screen person, which is kind of nice. It has some familiar things like a touch screen, no physical keyboard, FaceTime-style video calls, drag-and-drop, and an internet connection. The OS looks somewhat like a traditional Mac OS, with a menu bar and desktop icons, including a trash can.
Thanks for the video pointer. Very interesting and forward-looking.
Here’s wishing Siri could someday understand like that.
I agree. This was like Sci-Fi Gold at the time.
There are other videos I remember, but cannot find, that predicted multi-touch back in the mid-'90s. Apple is surfing the future wave.
As June 30 approaches, which is the end of the academic fiscal (and charitable) year, my broker suggests using AAPL for giving.
That's just not going to happen, especially after the last six weeks we've had.