Back in summer, I was helping my fiance with a small WebXR project. I’ve built a few of these over the years, and the technology I use tends to change. There was a time when the WebXR standard seemed to be just around the corner. I don’t know if I fully believe that anymore… This time around, we settled on using MindAR to create an image-marker-based experience.
MindAR has its perks: it runs in the browser using TensorFlow models, which is pretty cool. But it also has problems: it’s not as performant as native AR, it acts like a framework instead of an engine (getting in your way), and the codebase is messy with overlapping files and A-Frame dependencies. Plus, it hasn’t been updated in 2 years.
I’ve used MindAR before, so I knew what we were in for. But this time, after digging into the code, I started seeing how I could improve it. What if I turned it into a simple engine that just drives the camera, rather than trying to manage everything? And with 8th Wall shutting down, breaking MindAR out of its cage seemed like a good use of my time.
So now finally, after a few late weekends of work, here it is:
What did I change?
It’s still using the same TensorFlow models and tracking principles (including that fast “One Euro Filter”), but I’ve rebuilt everything around them. The result: a modular engine instead of a monolithic framework.
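For context, the One Euro Filter is a well-known adaptive low-pass filter for noisy real-time signals: it smooths heavily when the signal is slow (killing jitter) and loosens up when it moves fast (reducing lag). Here is a minimal illustrative re-implementation of the algorithm, not MindAR’s actual code; the parameter values below are made up:

```javascript
// Minimal sketch of the One Euro Filter used for pose smoothing.
class LowPass {
  constructor() { this.initialized = false; this.y = 0; }
  filter(x, alpha) {
    this.y = this.initialized ? alpha * x + (1 - alpha) * this.y : x;
    this.initialized = true;
    return this.y;
  }
}

class OneEuroFilter {
  constructor({ minCutoff = 1.0, beta = 0.0, dCutoff = 1.0 } = {}) {
    Object.assign(this, { minCutoff, beta, dCutoff });
    this.x = new LowPass();
    this.dx = new LowPass();
    this.lastTime = null;
    this.lastRaw = null;
  }
  // Smoothing factor for a given cutoff frequency and time step.
  alpha(cutoff, dt) {
    const tau = 1 / (2 * Math.PI * cutoff);
    return 1 / (1 + tau / dt);
  }
  filter(raw, t) {
    if (this.lastTime === null) {
      this.lastTime = t;
      this.lastRaw = raw;
      this.dx.filter(0, 1);          // derivative estimate starts at zero
      return this.x.filter(raw, 1);  // first sample passes through
    }
    const dt = t - this.lastTime;
    const dRaw = (raw - this.lastRaw) / dt;
    this.lastTime = t;
    this.lastRaw = raw;
    // Fast motion raises the cutoff, so the filter lags less when it matters.
    const edx = this.dx.filter(dRaw, this.alpha(this.dCutoff, dt));
    const cutoff = this.minCutoff + this.beta * Math.abs(edx);
    return this.x.filter(raw, this.alpha(cutoff, dt));
  }
}
```

The `beta` parameter is the interesting knob: at zero you get a plain low-pass filter, and raising it makes the output snap to fast marker movement instead of trailing behind it.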
Decoupling the engine
The original MindAR tried to manage everything: your scene, renderer, and camera. That made it frustrating to integrate custom rendering or post-processing, or to optimise performance the way you typically would in Three.js.
I ripped out A-Frame entirely and split the Three.js integration into separate modules. Now you provide your own scene, camera, and renderer. MindAR just drives the camera. That’s it.
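The inversion of control looks roughly like this. To be clear, the names below are a hypothetical sketch of the pattern, not the real MindAR 2.0 API; check the repo for the actual exports:

```javascript
// Hypothetical sketch of the decoupled design (NOT the real API names):
// the engine only emits camera poses; the app owns everything else.
class TrackingEngine {
  constructor() { this.listeners = []; }
  onPose(listener) { this.listeners.push(listener); }
  // In the real engine, the TensorFlow tracker would derive this pose
  // from the video feed on every processed frame.
  emitPose(matrix) { this.listeners.forEach((fn) => fn(matrix)); }
}

// Stand-in for your own THREE.PerspectiveCamera; you also keep your own
// THREE.Scene and WebGLRenderer and run your own render loop.
const camera = { matrixWorld: null };
const engine = new TrackingEngine();

// The only thing the engine ever touches is the camera pose:
engine.onPose((matrix) => { camera.matrixWorld = matrix; });

// Simulated tracking update (4x4 identity, column-major):
engine.emitPose([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]);
```

Because the engine never reaches into your scene or renderer, post-processing chains, custom shaders, and your usual Three.js optimisation tricks all keep working untouched.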
Performance that works
The original had no performance management. It just hoped for the best. I added four modules: PerformanceManager (monitors frame times), MemoryManager (cleans up Tensor memory), SmartScheduler (adaptive frame skipping), and WorkDistributionManager (maintains target FPS).
When you’re running ML models in real-time, every millisecond counts. These work together to keep things smooth, even on lower-end devices.
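The core idea behind adaptive frame skipping is simple: measure what a tracking pass actually costs, and if it blows the frame budget, run it on every Nth frame instead of every frame. This is an illustrative sketch of that idea, not the actual SmartScheduler code:

```javascript
// Illustrative sketch of adaptive frame skipping (not the real module):
// average recent frame costs and skip enough frames to stay under budget.
class SmartScheduler {
  constructor({ targetFps = 30, windowSize = 30 } = {}) {
    this.budgetMs = 1000 / targetFps;
    this.windowSize = windowSize;
    this.samples = [];
    this.skipInterval = 1; // 1 = process every frame
    this.frameCount = 0;
  }
  recordFrameTime(ms) {
    this.samples.push(ms);
    if (this.samples.length > this.windowSize) this.samples.shift();
    const avg = this.samples.reduce((a, b) => a + b, 0) / this.samples.length;
    // Process roughly every Nth frame so the average cost fits the budget.
    this.skipInterval = Math.max(1, Math.ceil(avg / this.budgetMs));
  }
  shouldProcess() {
    return this.frameCount++ % this.skipInterval === 0;
  }
}
```

On a fast device the interval stays at 1 and nothing changes; on a phone where a tracking pass takes 70 ms against a 33 ms budget, the scheduler backs off to every third frame and the filter smooths over the gaps.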
Cleaner code
I centralized processing into a FrameProcessor pipeline and a TrackingStateManager for state logic. Added proper configuration validation and structured logging (no more console.log everywhere).
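“Proper configuration validation” here means failing fast with a descriptive error instead of misbehaving silently a few frames in. A sketch of that style, using option names borrowed from the original MindAR (`imageTargetSrc`, `maxTrack`) but not the actual validator:

```javascript
// Illustrative fail-fast config validation (not the real implementation).
function validateConfig(config) {
  const errors = [];
  if (typeof config.imageTargetSrc !== 'string' || config.imageTargetSrc.length === 0) {
    errors.push('imageTargetSrc must be a non-empty string (path to a .mind file)');
  }
  if (config.maxTrack !== undefined &&
      (!Number.isInteger(config.maxTrack) || config.maxTrack < 1)) {
    errors.push('maxTrack must be a positive integer');
  }
  if (errors.length > 0) {
    // One descriptive error up front beats a silent failure later.
    throw new Error('Invalid configuration: ' + errors.join('; '));
  }
  return config;
}
```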
The original had experimental files, debug code, and overlapping functionality. I removed it all: 89,035 lines deleted, 5,524 added. The codebase is now focused and more readable.
What you get
Going from version 1.2.5 to 2.0.0, I’ve:
- Removed A-Frame dependency entirely
- Modularized the Three.js integration into 7 separate modules
- Added 4 performance management modules
- Centralized core processing into 2 main modules
- Added proper configuration and validation systems
- Reduced the codebase by ~83,500 lines
- Improved error handling and logging throughout
It’s still the same tracking tech underneath, but now it’s production-ready.
If you’re building AR experiences, you now have an engine that gets out of your way. You can integrate it with your existing Three.js setup, add your own post-processing, use custom shaders, or whatever else you need. The performance system means it’ll actually run smoothly on mobile devices. And the modular architecture means you can understand what’s happening and customize it if needed.
You can check it out on GitHub.
Thanks for reading, and I hope you find a use for my “revamp”. With 8th Wall shutting down, this might just be the best free alternative for browser-based AR right now.