
Building for Apple Vision Pro. How I’m adjusting and what I think.

Published: at 09:48 AM

I’ve been working on an Apple Vision Pro application for a while and I’m completely blown away by the experience! Here is how and why I’m so impressed.

What I’ve done till now

I’m building it all natively in SwiftUI with RealityKit. At this stage I’ve built:

- spatial windows that navigate to other windows, with nested page lists and windows that change state
- windows and spatial volumes with portals in them
- login flows
- spatial immersive spaces that hide all other apps
- pulling in external live data and visualising it in those spaces
- more elaborate hand tracking and scene understanding with spatial anchors, so each surface is passed to my app and I can use walls, tables, floors and ceilings
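To make that concrete, here’s roughly what the scene setup looks like in SwiftUI on visionOS. This is a simplified sketch rather than my actual app: the view names and window ids are placeholders I made up for illustration.

```swift
import SwiftUI
import RealityKit

@main
struct SpatialDemoApp: App {
    // Which level of immersion the immersive space uses (placeholder choice).
    @State private var immersionStyle: ImmersionStyle = .full

    var body: some Scene {
        // A regular spatial window: 2D UI floating in the room.
        WindowGroup(id: "main") {
            ContentView()
        }

        // A volumetric window for bounded 3D content.
        WindowGroup(id: "volume") {
            VolumeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)

        // A fully immersive space that hides every other app.
        ImmersiveSpace(id: "immersive") {
            ImmersiveView()
        }
        .immersionStyle(selection: $immersionStyle, in: .full)
    }
}

// Stub views so the sketch compiles on its own.
struct ContentView: View {
    var body: some View {
        Text("Main window")
    }
}

struct VolumeView: View {
    var body: some View {
        RealityView { content in
            // Add RealityKit entities (e.g. a portal) here.
        }
    }
}

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Build the fully immersive scene here.
        }
    }
}
```

Opening and dismissing these scenes from a view is then just a matter of the `openWindow` and `openImmersiveSpace` environment actions with the matching ids.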

I got all of this working, and refactored my code nicely into (what I believe is) a best-practice structure, in a month or so.
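To give a flavour of the scene-understanding part of that list, here’s a rough sketch, not my production code, of how visionOS’s ARKit APIs can stream detected surfaces into an app. The `SurfaceTracker` name is just for illustration.

```swift
import ARKit

// Illustrative sketch: receive wall/table/floor/ceiling surfaces
// via plane detection. Requires an open immersive space and the
// user's permission for world sensing.
@MainActor
final class SurfaceTracker {
    private let session = ARKitSession()
    private let planeData = PlaneDetectionProvider(alignments: [.horizontal, .vertical])

    func run() async throws {
        try await session.run([planeData])

        for await update in planeData.anchorUpdates {
            let anchor = update.anchor
            switch anchor.classification {
            case .wall, .table, .floor, .ceiling:
                // Hand the surface to the rest of the app here.
                print("Surface \(anchor.id): \(anchor.classification)")
            default:
                break
            }
        }
    }
}
```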

And here’s the catch: I have never built a native iOS app before! Ever.

[Image: a 3D render of the Apple Vision Pro without the face mount]

How I was able to get there so fast

The key to progressing so fast has been using all of my experience in other languages to establish what I imagine should be possible, and what “the right way” looks like, in this one.

I describe to ChatGPT how I would have coded it in language X and then go back and forth to refine the result.

I don’t think I could have done this without all of my prior knowledge. I’ve built tons of Unity applications, hundreds of websites in dozens of frameworks, and even some React Native stuff. Most of my Unity and WebXR projects utilise ARKit features, so that has been a helpful compass for knowing what to expect from Apple. I basically know all the questions to ask, and that there is an answer in there.

I would for example ask:

“In language X I would create a responsive menu in this way using relative values and a min max value. Is there an equivalent way in SwiftUI? Or an entirely different approach I should consider?”
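To show the kind of answer I’m fishing for with a prompt like that, here’s a minimal sketch (with made-up numbers) of a menu whose width is relative to its container but clamped between a minimum and a maximum, using GeometryReader:

```swift
import SwiftUI

struct ResponsiveMenu: View {
    // Illustrative values, not from my real app.
    let minWidth: CGFloat = 200
    let maxWidth: CGFloat = 360

    var body: some View {
        GeometryReader { proxy in
            HStack(spacing: 0) {
                List {
                    Text("Home")
                    Text("Scenes")
                    Text("Settings")
                }
                // 30% of the available width, clamped to [minWidth, maxWidth].
                .frame(width: min(max(proxy.size.width * 0.3, minWidth), maxWidth))

                Color.clear // stand-in for the detail area
            }
        }
    }
}
```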

It would return something, and from the gist of it I could tell whether it looked roughly right, so I would simply try it. If there were errors, I’d ask about each of them. At some point the code would technically work, and then I’d start querying a new ChatGPT instance about whether it followed best-practice structure and how I could improve it.

In this way I established the scaffolding of my app, and my own understanding of the language improved immensely.

Learning by chatting

At this point I’ve learned so much that half the time I don’t need to ask anymore, or I already know how to structure my code better and just do it manually.

And all of this while Xcode has been a dream to work in! Every time I press build, my app shows up wirelessly on our Apple Vision Pro device in seconds.

Takeaway

Everything. Just. Works. And that is a new experience after years and years of half-baked SDKs and changing game engines.

The quality of life that Apple developers get really isn’t talked about enough amid all the other Apple news! This is a total game changer for me, and I should have made the switch earlier.

I’ve now begun building other projects that I would normally have made in Unity as native iOS apps.