
With its retro-futuristic deathscape aesthetic and relentless combat loop, KILL KNIGHT has carved out a unique place in the action shooter scene. To celebrate the game’s first anniversary, we spoke with Hugh Trieu, the lead artist at PlaySide Studios, to discuss the visual approach behind the title and how the team worked with Unity to overcome development challenges.
What was the main goal when creating the art for this game?
Hugh Trieu: Our goal was to create a brutal, unforgiving visual identity that immersed players in a lo-fi deathscape – something that felt like a lost relic from a darker timeline. We pushed a macabre aesthetic with heavy contrasts, decaying environments, and oppressive atmospheres, aiming for a world where chaos and brutality were part of the experience.
Visually, we aimed to reimagine the PlayStation® 1 retro look, like a game that time travelers from the future made in the past. This low-fidelity style allowed us to do more with less while maintaining a unique and striking identity.
Beyond style, visual clarity was essential. The key is to allow players to enter a flow state, where their focus is on action and decision-making rather than fighting the visuals. Any confusion – whether from cluttered effects, unclear silhouettes, or poor contrast – would instantly break that flow. We put heavy emphasis on defined value ranges across all elements of the art, and focused on clear character and environment readability, a distinct VFX language, and clear UI feedback to make sure the experience stayed intuitive even at high speed.

What was the art strategy, and how did the team execute it?
HT: Our strategy focused on efficiency and specialization – leveraging industry-standard tools and the team’s existing strengths to build a cohesive, stylized world without overextending our small art team.
We used tools like Blender, Photoshop, ZBrush, Maya, Substance Painter, Houdini, and Unity across the pipeline. Early concepting involved rapid environment mockups in Blender and Photoshop to test style variations offline. Once in production, we blocked out modular environment pieces with Unity’s ProBuilder to quickly determine dimensions, then finalized the meshes in Maya. We used Houdini to procedurally add damage and cracks to the environment pieces. Character and enemy workflows began with 2D concepts in Photoshop, moved to high-poly sculpts in ZBrush, and finished with retopology in Maya. Weapons were created directly as low-poly models.
All assets were textured in Substance Painter and imported into Unity, where we used Shader Graph and post-processing effects to experiment and lock in the final aesthetic.
A key challenge was the number of enemies and variants we had to produce with a small team. We partnered with 1518 Studios to offload some of the repetitive work, such as enemy variant sculpts, retopology, and UVs.

How important was prototyping?
HT: Prototyping played a crucial role in shaping the foundation of the game. We dedicated significant time to building a separate vertical slice focused purely on visuals, isolated from gameplay development. This early step allowed the team to validate our visual direction quickly, featuring one biome, the main character, several enemies, key VFX (like pistol shots and gems), and a rough UI wireframe.
This snapshot let us quickly test, refine, and validate the visual direction before full production, ensuring everything from tone to technical execution was on target.

How did Shader Graph keep development on track and flexible?
HT: We relied heavily on Shader Graph to streamline our workflow, enabling both tech and non-tech artists to build shaders without writing code. We created a library of modular subgraphs, allowing the team to reuse and share shader features across characters, environments, VFX, and post-processing effects. This modularity kept development fast, consistent, and flexible.

How were Timeline and Cinemachine used to create the in-game cinematics?
HT: We used Timeline to support in-game cutscenes – featured in the intro, boss transitions, and ending sequences. It allowed us to sequence animations, VFX, events, and post-effects precisely within a single timeline, giving us full control over key narrative moments directly in-engine.
Used alongside Timeline, Cinemachine provided film-style camera control and smooth shot transitions, enhancing the composition of key scenes and blending seamlessly between multiple cameras.
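As a rough illustration of this pattern (a hypothetical sketch, not the team’s actual code), a script can hand the camera to a dedicated Cinemachine virtual camera while a Timeline cutscene plays, then blend back to gameplay when it stops:

```csharp
using UnityEngine;
using UnityEngine.Playables;
using Cinemachine;

// Hypothetical helper: plays a boss-intro Timeline and raises the priority
// of a cinematic virtual camera so Cinemachine blends to it for the cut.
public class BossIntroCutscene : MonoBehaviour
{
    [SerializeField] PlayableDirector director;       // holds the Timeline asset
    [SerializeField] CinemachineVirtualCamera cutCam; // cinematic camera
    [SerializeField] int cutscenePriority = 20;       // higher than the gameplay cam

    public void Play()
    {
        cutCam.Priority = cutscenePriority; // Cinemachine blends to this camera
        director.stopped += OnStopped;
        director.Play();                    // runs animations, VFX, and events
    }

    void OnStopped(PlayableDirector d)
    {
        d.stopped -= OnStopped;
        cutCam.Priority = 0;                // blend back to the gameplay camera
    }
}
```

Because Cinemachine picks the highest-priority virtual camera and handles the blend itself, the cutscene code never needs to move or animate the actual Camera component.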

How did the team approach the technicalities of art exploration during development?
HT: Our approach to art exploration during development was quite technical and hands-on. Working with Unity’s Universal Render Pipeline (URP) presented both challenges and creative opportunities, as it was still evolving with limited documentation and shifting class structures. We leaned into that uncertainty, diving into the internals by iterating constantly, testing assumptions, and learning how render passes interacted under the hood.
Once we found stable ground, working with URP became genuinely rewarding. Implementing the pre-boss walkway transition post-effect was a highlight, turning earlier trial-and-error into a stylized, tangible result that elevated the player experience.
Ultimately, working with URP during its early stages gave us the flexibility to shape it to our needs. That trial-and-error journey helped the team not just solve problems, but push the art direction further than a more rigid pipeline might have allowed.

What performance challenges did the team face, and how were they addressed?
HT: Performance became a key focus once we began testing on consoles. The biggest challenge came from optimizing for lower-end hardware. On those platforms, we transitioned from Deferred to Forward rendering, which required reworking shaders, post-processing effects, and VFX to maintain visual quality.
Following the first playable vertical slice, where rapid iteration left behind a lot of experimental or unoptimized tech, we began a major cleanup phase. We stripped unnecessary elements from shaders and the post-process stack, with a particular focus on improving the environment shader, which had become overly complex.
To further optimize performance, we implemented LODs across the board, covering character and equipment models, environments, and even shaders. Mesh LODs significantly reduced triangle counts, while shader LODs trimmed visual overhead without sacrificing functionality. This allowed the game to run efficiently on lower-end consoles without compromising its look.
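In Unity, mesh LODs of this kind are driven by the LODGroup component, which swaps renderers based on an object’s height on screen. A minimal setup sketch (assuming three pre-authored meshes; the names and thresholds here are illustrative, not from the game):

```csharp
using UnityEngine;

// Hypothetical setup: builds a LODGroup from three pre-authored renderers
// so distant enemies draw with far fewer triangles, and very distant ones
// are culled entirely.
public static class LodSetup
{
    public static void Build(GameObject root, Renderer lod0, Renderer lod1, Renderer lod2)
    {
        var group = root.AddComponent<LODGroup>();
        var lods = new LOD[]
        {
            new LOD(0.60f, new[] { lod0 }), // full detail above 60% screen height
            new LOD(0.25f, new[] { lod1 }), // mid detail
            new LOD(0.05f, new[] { lod2 }), // far detail; culled below 5%
        };
        group.SetLODs(lods);
        group.RecalculateBounds();
    }
}
```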
The UI was another performance hotspot. We optimized the canvas hierarchy to reduce draw calls, smartly grouped dynamic and static canvases, and audited our UI shaders to remove unused features and lighten the rendering load.
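The canvas-splitting idea can be sketched like this (a hypothetical pattern, assuming two canvases wired up in the Inspector): keeping per-frame elements on their own nested Canvas means a changing ammo counter never forces Unity to rebuild the geometry of the large, mostly static canvas behind it.

```csharp
using UnityEngine;

// Hypothetical pattern: frequently-updated HUD elements live on a separate
// Canvas so their dirtying doesn't trigger a rebuild of the static canvas.
public class HudCanvasSplit : MonoBehaviour
{
    [SerializeField] Canvas staticCanvas;  // frames, borders - rarely changes
    [SerializeField] Canvas dynamicCanvas; // health, ammo - changes every frame

    public void SetHudVisible(bool visible)
    {
        // Toggling Canvas.enabled hides the UI without the layout and
        // rebuild cost that GameObject.SetActive() would incur.
        staticCanvas.enabled = visible;
        dynamicCanvas.enabled = visible;
    }
}
```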
These combined efforts ensured the game remained visually striking while hitting performance targets across all platforms.

How was the Unity Profiler used to address performance issues?
HT: We used the Unity Profiler to pinpoint performance bottlenecks – particularly expensive shader passes and rendering techniques. This insight led to key optimizations, such as moving the rendering of character swarms to more efficient techniques like batched rendering, which significantly improved performance without sacrificing visual fidelity.
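One common way to batch a swarm in Unity is GPU instancing via Graphics.DrawMeshInstanced, which replaces hundreds of per-GameObject draw calls with a few instanced batches. The sketch below is an illustrative example of that technique, not the team’s implementation; the Profiler sample markers show how such a hotspot would be surfaced in the Unity Profiler.

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Hypothetical sketch: draws a swarm as instanced batches
// (up to 1023 instances per DrawMeshInstanced call).
public class SwarmRenderer : MonoBehaviour
{
    [SerializeField] Mesh mesh;
    [SerializeField] Material material; // must have GPU instancing enabled
    readonly Matrix4x4[] matrices = new Matrix4x4[1023];

    void Update()
    {
        Profiler.BeginSample("SwarmRenderer.Draw"); // visible in the Unity Profiler
        for (int i = 0; i < matrices.Length; i++)
        {
            // Lay instances out on a grid; a real swarm would use gameplay positions.
            matrices[i] = Matrix4x4.TRS(new Vector3(i % 32, 0f, i / 32),
                                        Quaternion.identity, Vector3.one);
        }
        Graphics.DrawMeshInstanced(mesh, 0, material, matrices);
        Profiler.EndSample();
    }
}
```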

What were some of the team’s key milestones during development?
HT: A standout moment for the team was achieving 60 fps on lower-end platforms – a huge leap from the 5 fps we were getting in early builds. Reaching that target made all the optimization work worth it. It required extensive collaboration and optimization across every department – 3D art, VFX, shaders, UI, engineering, and audio.
Another big technical win for us was supporting baked lighting in arenas that shift and evolve spatially in real time. This was a high priority for us, as detailed lighting and dynamic levels were an essential part of our aesthetic and sense of relentless gameplay progression. Getting that feature up and running meant we could strip out real-time lights and make substantial performance gains.

Is there anything the art team would have done differently?
HT: In hindsight, we would have placed more emphasis on getting into the Unity Engine earlier, rather than spending too much time refining 2D concepts.
Some ideas, like dithering, looked great in static concept art but didn’t translate well into 3D gameplay. Once implemented, it felt noisy and hard to read, detracting from the clarity we needed rather than enhancing the retro aesthetic.
We also learned the importance of optimizing shaders early. Deferring that work led to a major shader refactor later in development, which we could’ve avoided with a more iterative approach from the start.
In the end, every artistic decision, whether a breakthrough or a hard-earned lesson, shaped KILL KNIGHT into a game that’s as visually relentless as the world it throws you into, and we couldn’t be prouder of what we built.
To read more about projects made with Unity, visit the Resources page.