
Marius Thorvaldsen, Martin Sivertsen, and Sindre Askim Gronvoll founded Breach in 2016 to create immersive games, XR experiences, and 3D simulations. Over the past nine years, the work-for-hire studio has delivered XR solutions while also developing its own games. In 2024, the team brainstormed ideas and began prototyping in search of a new internal project. One concept quickly moved into production and became Laser Matrix.
The title is a mixed reality puzzle action game where fitness, fun, and futuristic challenges collide. Players can solve light-based puzzles, dodge shifting lasers, and unlock levels using their agility and wits.
We sat down with Andreas Weibye, engineering director at Breach, and Jonathan Jørgensen, one of the game developers, to discuss building the game for Meta Quest and Android XR, and how they overcame technical and performance challenges.
What is Laser Matrix?
Andreas: Laser Matrix is a movement action puzzler in VR. It’s very arcade-like and can turn your living room into a dangerous labyrinth that you have to quickly and deftly move around.
How did the implementation process work for Meta Quest and Android XR?
Andreas: We didn’t implement Meta Quest and Android XR simultaneously from the beginning of the project. The prototype started out as Meta Quest-only. We’ve worked extensively with the platform, so we got it up and running quickly. Android XR also didn’t exist when we started the prototype, so we ended up porting during development.
Jonathan: We started with Meta’s Building Blocks tool to do rapid prototyping. It made validating the concept more convenient. We eventually realized that we wanted to aim for cross-platform XR and made sure to structure the game in a more platform-agnostic way. The core gameplay doesn’t really rely on platform-specific features, so it was an easy transition to go cross-platform.

What steps did you take when implementing hand tracking in OpenXR?
Jonathan: We read real-time tracking data from the OpenXR runtime. The data is then processed through abstraction layers, where the raw data is converted into positions and rotations that are applied to hand meshes. These hand meshes then interface with Unity’s built-in interaction systems, which provide utilities for common actions like touching something with your finger. Finally, our own gameplay logic layers on top of that.
It’s all about the multilayered abstraction from that raw data. Many XR games have similar needs, so we tried to use stable existing solutions as much as possible and tie them together neatly. For Laser Matrix, there aren’t a lot of non-standard aspects to the hand tracking.
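As a rough sketch of that layered flow, here is what reading joint poses and driving a hand mesh could look like with Unity's XR Hands package (`com.unity.xr.hands`). The class and field names are illustrative, not Breach's actual code:

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.Hands;

public class HandJointDriver : MonoBehaviour
{
    [SerializeField] Transform indexTipMesh; // a transform on the hand mesh rig

    XRHandSubsystem m_Subsystem;

    void Update()
    {
        // Layer 1: acquire the OpenXR-backed hand tracking subsystem.
        if (m_Subsystem == null || !m_Subsystem.running)
        {
            var subsystems = new List<XRHandSubsystem>();
            SubsystemManager.GetSubsystems(subsystems);
            if (subsystems.Count == 0) return;
            m_Subsystem = subsystems[0];
        }

        // Layer 2: convert the raw joint data into a pose.
        var joint = m_Subsystem.rightHand.GetJoint(XRHandJointID.IndexTip);
        if (joint.TryGetPose(out Pose pose))
        {
            // Layer 3: apply the pose to the hand mesh. Interaction and
            // gameplay systems then read from these driven transforms.
            indexTipMesh.SetPositionAndRotation(pose.position, pose.rotation);
        }
    }
}
```

In a real setup the interaction layer (e.g. the XR Interaction Toolkit) sits between the driven transforms and gameplay, as described above.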

What technical challenges did the team come across?
Andreas: It’s a fairly out-of-the-box OpenXR setup when it comes to the scene and the components used. We had to work around issues at the moments when hands lose or regain tracking, because a hand could reappear in a different location within a single frame. Players would lose hand tracking mid-game, and that sudden jump caused them to lose lives.
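One hypothetical mitigation for the tracking-loss issue described above is a short grace window: when a hand regains tracking, laser hits are ignored for a fraction of a second so a teleporting hand doesn't cost the player a life. This is an illustrative sketch, not Breach's actual fix:

```csharp
using UnityEngine;
using UnityEngine.XR.Hands;

public class TrackingLossGrace : MonoBehaviour
{
    const float k_GraceSeconds = 0.25f; // assumption: tuned by playtesting
    float m_GraceUntil;
    bool m_WasTracked;

    // Gameplay checks this before deducting a life on a laser hit.
    public bool HitsCountAgainstPlayer => Time.time >= m_GraceUntil;

    // Called from the hand update loop with the latest XRHand data.
    public void OnHandUpdated(XRHand hand)
    {
        bool tracked = hand.isTracked;
        // Start the grace window on the frame tracking comes back, since
        // the hand may reappear far from its last known pose.
        if (tracked && !m_WasTracked)
            m_GraceUntil = Time.time + k_GraceSeconds;
        m_WasTracked = tracked;
    }
}
```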
Jonathan: Designing around hand tracking was one of the bigger challenges. The different contact points on the hand caused some issues. It’s more straightforward for normal, slow-paced UI interactions like pressing a button. However, in Laser Matrix, you’re constantly moving your body and head around, which can reduce the accuracy of the camera-based hand tracking.
Andreas: For UI outside of gameplay, we implemented buttons that you need to touch with your fingers, as opposed to using raycasting. In our experience, raycasting with hand tracking is not precise enough for a great playing experience, so this helped.
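In its simplest form, a touch-driven button like this can be a trigger volume that reacts only when a tracked fingertip physically enters it, rather than to a ray from the hand. A minimal sketch, assuming the fingertip object carries a trigger collider and an illustrative "FingerTip" tag:

```csharp
using UnityEngine;

// Attach to a button with a trigger collider; the fingertip object
// needs a collider and a kinematic Rigidbody for trigger events to fire.
public class TouchButton : MonoBehaviour
{
    public event System.Action Pressed;

    void OnTriggerEnter(Collider other)
    {
        // "FingerTip" is an assumed tag on the hand-tracking-driven tip object.
        if (other.CompareTag("FingerTip"))
            Pressed?.Invoke();
    }
}
```

Production-grade poke interaction (press depth, debouncing, poke-through handling as discussed below) is what toolkit-level poke interactors add on top of this idea.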
Jonathan: For “flat screen” games, developers can limit how inputs like mouse, keyboard, and controllers are used. XR is different, because a game can never truly stop your physical hands. For example, the buttons in the Laser Matrix menu are pretty flat, and you can poke your hand all the way through the button. This is an interaction that you don’t necessarily design for, and it’s unclear what should actually happen. Is it a valid button press? In general, there are some interesting implications that follow when you translate more traditional user interface patterns to XR.

What performance-related issues did the team encounter?
Andreas: Our primary performance issue is on the graphical side. From the beginning, it’s been about finding the right balance with transparency overdraw. It’s one of the biggest challenges with the current game design. If you were to use only outlines for these boxes, you could still tell where they are when looking up or down, because you can compare them against something. But when you look straight ahead and try to judge your distance from a wall, or how far you can extend your hand without touching it, that’s much more difficult.
Since we want the user to be able to see the whole labyrinth at the same time and make strategic locational decisions, we need a transparency solution or an alternative, both of which are GPU-intensive.
Jonathan: Some of our solutions affected the final visuals of the game. For example, we changed the outer yellow border of the game area. It used to have a transparent yellow hue covering the full wall surface, but we reduced it to only the edges, with a gradient fade to fully transparent. Since the game border is visible at all times when playing, this consistently reduced overdraw, which was one of the main contributors to our frame time.
We also looked at the red laser cubes that the player is trying to avoid when moving around in the game. They have a square grid pattern on their surface, so we could apply the same approach as with the yellow border to reduce overdraw again. Since only the edges of the grid cells are rendered fully, we made the cells bigger, resulting in more of the surface being fully transparent on average.
There were also some performance issues related to loading. A big spike occurred every time a level started. At first, we assumed it was related to level loading in general, since loading a Laser Matrix level means spawning a lot of different elements. However, the culprit was specifically the music. Through profiling, we discovered that the system reloaded the audio with each level. We configured it to preload the music with the application instead, which made the spike unnoticeable.
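A minimal sketch of that kind of fix: force the music clips' audio data into memory once at startup instead of paying the load cost on every level. The component and field names are illustrative:

```csharp
using UnityEngine;

public class MusicPreloader : MonoBehaviour
{
    [SerializeField] AudioClip[] levelMusic; // assigned in the inspector

    void Awake()
    {
        foreach (var clip in levelMusic)
        {
            // Only needed for clips whose "Preload Audio Data" import
            // setting is off; loadState lets us skip already-loaded clips.
            if (clip.loadState != AudioDataLoadState.Loaded)
                clip.LoadAudioData();
        }

        // Keep the clips referenced across level loads so Unity
        // doesn't unload and reload them per level.
        DontDestroyOnLoad(gameObject);
    }
}
```

The same result can often be achieved purely through the clip's import settings ("Preload Audio Data" with "Load In Background"), with no code at all.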
Andreas: Some of our performance issues came because we turned a prototype into a production game. We should have rewritten the system.

Which Unity tools and features were instrumental during the build?
Andreas: Unity’s benefit is that it allows us to make one asset and one piece of code and have it work on multiple platforms from the start.
VFX Graph was great because it allowed our product owner, who isn’t artistically inclined, to sketch out what he envisioned the visual effects to look like. From there, our technical artist and developer polished it.
Jonathan: Shader Graph was a very valuable tool for making the game. Most of the game’s 3D elements are pure shaders. Aside from some cubes, hands, and buttons, there are very few actual 3D models in this game.
The XR Device Simulator was also used a lot. When you work in XR, testing can be rather physical and taxing in the long run. Between all of the steps – standing up, putting on the headset, moving around, and more – the hours really add up. By setting up the simulator and using mouse inputs to simulate interactions, we saved a lot of time and frustration.
Andreas: Since I had the XR Device Simulator available to me from the start, I wanted to develop new features in ways I could more easily test them in isolation. It pushed me to write more modular, testable systems from the start, which also improved our code quality.
Jonathan: The Unity Profiler was also key in helping us detect where performance spikes came from. For example, with the audio loading issue I mentioned, the Profiler helped us identify both the spike and the calls triggered at that exact time. This kind of information usually gives some clear hints as to where you should start when implementing a solution.
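To make a suspect code path show up by name in the Profiler timeline, the way the audio spike was narrowed down, you can wrap it in a `ProfilerMarker` from `Unity.Profiling`. A small illustrative example:

```csharp
using Unity.Profiling;
using UnityEngine;

public class LevelLoader : MonoBehaviour
{
    // Shows up as a named scope in the Profiler's CPU timeline.
    static readonly ProfilerMarker s_SpawnMarker =
        new ProfilerMarker("LevelLoader.SpawnElements");

    void SpawnElements()
    {
        using (s_SpawnMarker.Auto()) // begin/end around the scope
        {
            // ... spawn the level's cubes, lasers, and audio here ...
        }
    }
}
```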

Which new features or updates in Unity 6 were most beneficial?
Jonathan: We build for quite a few targets, and in different contexts. When going cross-platform, we need the builds to have different configurations for platform-specific features and services. So the ability to have build profiles in Unity 6 allowed us to tailor our builds for those different platforms. Doing this manually would be tedious and prone to error.
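Build profiles can also be switched from editor scripts, which is convenient for automated multi-target builds. A hedged sketch using the Unity 6 `BuildProfile` API (`UnityEditor.Build.Profile`); the asset path and method name are illustrative:

```csharp
using UnityEditor;
using UnityEditor.Build.Profile;

public static class CiBuild
{
    // Could be invoked from CI via -executeMethod CiBuild.ActivateQuestProfile
    // (assumption: a profile asset exists at this path).
    public static void ActivateQuestProfile()
    {
        var profile = AssetDatabase.LoadAssetAtPath<BuildProfile>(
            "Assets/Settings/Build Profiles/Quest.asset");
        BuildProfile.SetActiveBuildProfile(profile);
        // Subsequent builds pick up this profile's platform settings,
        // scene list, and scripting defines.
    }
}
```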
What key performance benchmarks did you have?
Jonathan: Performance issues are a big problem in XR specifically, because low frame rates can cause motion sickness. When we flagged them, we measured the average fps and looked for frame drops. There are well-defined targets for what is comfortable or not.
By tweaking the thresholds on the different cube shaders, we gained around 10% in fps. We profiled the game in a test environment where we drew the maximum number of cubes for the game. This provided us with a very fixed and reliable context for testing.
Andreas: For performance, our hard limits are 72 fps on the Meta Quest 2 and 90 fps on the Meta Quest 3. We’re on target for the average frame rate, but we can produce situations where the user gets a lower result. These are very specific cases where you can end up in the worst possible place.
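An illustrative benchmark harness in that spirit: log the average fps and the number of frames that miss the headset's frame budget (about 13.9 ms at 72 Hz) over a test run. The drop threshold is an assumption, not Breach's actual methodology:

```csharp
using UnityEngine;

public class FrameBenchmark : MonoBehaviour
{
    [SerializeField] float targetFps = 72f; // Quest 2 budget; 90 on Quest 3

    int m_Frames;
    int m_Drops;
    float m_Elapsed;

    void Update()
    {
        float dt = Time.unscaledDeltaTime;
        m_Frames++;
        m_Elapsed += dt;
        // Count a "drop" when a frame overruns its budget by >10% (assumption).
        if (dt > 1.1f / targetFps) m_Drops++;
    }

    void OnDisable()
    {
        if (m_Elapsed > 0f)
            Debug.Log($"avg fps: {m_Frames / m_Elapsed:F1}, dropped frames: {m_Drops}");
    }
}
```

Running this in a worst-case scene, like the maximum-cube test environment mentioned above, gives a stable baseline to compare optimizations against.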
Jonathan: On more of a project and team level, our iteration time has become much more efficient in the last few months. As a company, we matured our quality assurance process and tied it to our automation stack, and now our feedback and fix cycle is quite fast. We’re able to address issues fast enough that a backlog of tasks doesn’t pile up or get out of control.

Is there anything the team wished they’d done differently during development?
Andreas: We should have started using OpenXR a bit earlier. Early in the project, we planned to use Meta's room markup feature to define the play area’s shape and size. While it worked, it didn’t give us the gameplay functionality we were looking for without also opening design challenges that we didn’t have great solutions for yet. We realized that during the room scanning discussion, and we should have scrapped that system earlier, because we spent too long trying to solve it. Looking back, once the prototype code evolved into production code, recognizing sooner that the system no longer served the design would have saved time.
To read more about projects made with Unity, visit the Resources page.