With curiosity and a lot of interest I recently watched Apple’s keynote about the new Apple Vision Pro headset. It’s not advertised so much as a virtual reality or augmented reality headset; instead it’s referred to as a “spatial computing” device. The term needs a bit of further explanation, though, considering what the device can do and the fact that it’s more or less a huge goggle hooked to a computer. Once it is released, it will present all the information that so far only lives on your flat computer screen in a more or less animated, augmented, three-dimensional reality. Your living room becomes your new desktop. All the information stored on your computer becomes part of the spatiality of your home.
According to Gary Wolf, a writer for Wired magazine, Richard Saul Wurman (graphic designer, architect, and later founder of modern information architecture) believed that “the presentation of information can be more important than the information itself.” I’m sure today’s Apple knows about these ideas, as the Vision Pro spatial computing device seems to be built around exactly that premise.
So far computers have mostly stored their information in the form of hierarchies (files, folders, subfolders…), alphabetically, by time and date (date created, date added), in categories (folders, tags), and sometimes even by location (maps). These five ways of organizing information, Location, Alphabet, Time, Category, and Hierarchy, are commonly referred to as the LATCH principle. With these new headsets entering the market, new organization systems seem to come to life that put location front and center.
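To make the five LATCH schemes concrete, here is a minimal sketch in Python. The file records and their field names (including the "room" field standing in for a spatial anchor) are entirely hypothetical; the point is only that the same small set of files can be re-ordered or re-grouped under each of the five schemes:

```python
from datetime import date

# Hypothetical file records: a name (Alphabet), a folder path (Hierarchy),
# a tag (Category), a creation date (Time), and a room (Location).
files = [
    {"name": "budget.xlsx", "path": "/work/finance", "tag": "spreadsheet",
     "created": date(2023, 5, 1), "room": "office"},
    {"name": "avatar.png", "path": "/personal/art", "tag": "image",
     "created": date(2022, 11, 3), "room": "living room"},
    {"name": "notes.txt", "path": "/work", "tag": "text",
     "created": date(2023, 6, 7), "room": "office"},
]

# A and T are orderings:
by_alphabet = sorted(files, key=lambda f: f["name"])
by_time = sorted(files, key=lambda f: f["created"])

# C and L are groupings:
by_category = {}
for f in files:
    by_category.setdefault(f["tag"], []).append(f["name"])

by_location = {}
for f in files:
    by_location.setdefault(f["room"], []).append(f["name"])

# H, the hierarchy, is already encoded in each folder path.
print([f["name"] for f in by_alphabet])  # alphabetical listing
print(by_location)                       # files anchored to rooms
```

What spatial computing changes, in this toy picture, is simply which of the five keys comes first: the "room" grouping becomes the primary index instead of the folder path.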
So imagine the file or the three-dimensional item you’re looking for can no longer be found within a certain folder; instead it lives more or less within a map of your own house or your office building. It could live on your real-world desktop, but it could also be in the room next door, in your closet, or tucked away behind your shelf. The question running through my head while comparing the analog and the digital world is whether this spatial approach is really the right move for digital information and goods. Is it really more convenient, is it even feasible, to find thousands of digital files within the drawer of my desk?
In the real world we usually turn to architects, city planners, or interior designers to organize the spatial realities of our daily lives. Even at home we mostly use predesigned floor plans and prefabricated furniture, all made to function as intended, as expected. But do these environments, designed for physical goods, work equally well for storing, and often hoarding, digital information?
What Apple is trying to introduce is nothing less than a new scheme for organizing information. It’s based on location, like all the information we retrieve from a map of our city or our house. Granted, it’s just an additional way of accessing the information on our computers; the old schemes don’t disappear immediately. And as a last resort you can always search, that’s for sure. So you will still be able to access your files, images, and 3D objects by hierarchy, alphabetically, by time, or by category. None of these older organization types is lost. It’s just a new way of storing digital information physically, so to speak.
To me the question arises whether this new concept of spatial computing, and essentially of spatial information, is a compelling add-on for digital information and goods. Think about all the new AI models, think about the networks of links and backlinks in Wikipedia, think about the super brain in your own head. Is this type of information stored and organized spatially in the first place? It’s not. Well, maybe the billions of reference images used to train your new AI model can now be stored in your desk drawer. On the most basic level, within maybe the first two or three subfolders, spatial computing might work. But what if you have to dig deeper? What if you’re a researcher, if you want or have to reinvent the wheel, the nuts and bolts of our environment, but also of all our information?
Spatial computing introduces a concept that connects location and information. And somehow, to me, that doesn’t feel like enough to move forward. Physical or 3D space is so limited. It comes with rules that can be counterproductive for large amounts of data. It provides plenty of assistance, guides, and helpers for our physical bodies and senses, but it doesn’t properly accommodate the atoms, neurons, bits and bytes, the wires and networks of our computers or our brains. Spatiality, or spatial computing, is not properly prepared for all the additional complexity that is looming in front of us and that we are already struggling with today.
I’d rather work and create in a space designed to accommodate my brain than my physical body. And as a spatial designer, as an architect, I have only glimpses of ideas of what such a “brain architecture” or “neural architecture” could look like. Maybe it has no look and feel at all. And maybe these new input and output devices still need a lot of research to grasp even the beginning of an idea. But my guess is that “brain connectors” are the future; spatial computing as we know it now will probably remain just a side gig. And honestly, any cyborg implant looks better than the current version of this very intrusive Apple Vision Pro headset. It’s a start, the beginning of a long journey, that’s true. And almost every mistake has to be made before things get better.
My biggest complaint about this new device, and about this new direction of computing, is that it puts an enormous amount of effort and a lot of scarce ingenuity into products with a very short lifespan. All this effort tries to mimic the real world, and fails dramatically, instead of trying to enhance and reinvent it. Maybe such advanced headsets will work well as copilots in jets or driverless cars, but their introduction to the consumer market seems to raise more obstacles and complications than benefits. They turn humans even more into consumers instead of supporting their creative energy.
If I were Apple I’d put all of my efforts and resources into circumventing the all-too-limited senses of our bodies and instead establish a direct connection to our brains, to the matrix of our thoughts and ideas. That really would be a new 1.0 device. I’m very curious when the term “neural computing” will become a thing. Until next time.