Autonomy. Decision Making. High-Fidelity Simulation.
Our reinforcement learning pipeline has been refined across a wide range of demanding projects — from unmanned marine vehicles to adversarial self-play in complex environments.
We focus on decision quality first, but when needed, we can also push visual fidelity to its limits.
Areas of Expertise
Simulation & Autonomy
Robotics & Simulation
- Adaptable Control: Robust policies for autonomous devices (USVs, robots, UAVs) operating in dynamic conditions.
- Perception Pipelines: Neural sensor fusion and denoising for clear decision-making inputs.
- Sim-to-Real: Leveraging domain randomization to train agents that generalize to the real world.
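As a sketch of the sim-to-real idea above: domain randomization resamples the simulator's physics every episode, so a trained policy never overfits to one particular world. The parameter names and ranges below are illustrative assumptions, not values from any specific project.

```python
import random

# Hypothetical per-episode domain randomization for a sim-to-real loop.
# All parameters and ranges are illustrative, not a real configuration.
def randomize_physics(rng):
    """Sample a fresh set of physics parameters for one training episode."""
    return {
        "mass_scale":     rng.uniform(0.8, 1.2),   # +/-20% body mass
        "friction":       rng.uniform(0.5, 1.5),   # surface friction coeff.
        "sensor_noise":   rng.uniform(0.0, 0.05),  # std-dev of added noise
        "actuator_delay": rng.randint(0, 3),       # control latency in steps
    }

rng = random.Random(42)
episodes = [randomize_physics(rng) for _ in range(1000)]
# Every episode sees a different world; a policy that succeeds across the
# whole distribution is more likely to transfer to the one real world.
```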
Adversarial Testing
Balancing & Wargaming
- Scenario Wargaming: Multi-agent simulations for tactical analysis in hostile environments.
- System Stress-Testing: Exposing vulnerabilities and edge cases through adaptive adversarial agents.
- Game Balance: Detecting dominant strategies and exploits in complex rule systems.
Live Agents
In-Game AI
- Competitive Opponents: Human-like AI for fighting, strategy, and racing games.
- Adaptive Difficulty: Agents that scale dynamically to match player skill.
- Runtime Inference: Optimized models ready for deployment on consumer hardware.
Our Projects
Fully simulated fighting robots trained with reinforcement learning
Final Automata is not just an entertainment project. It tackles one of the most demanding problems of embodied intelligence: martial arts as a testbed for autonomy and control. The project explores and solves challenges such as:
- ✔ Artificial bipedal athletic skills
- ✔ Hierarchical decision making
- ✔ Long-horizon planning in adversarial scenarios
- ✔ Physically based, high-throughput simulation
- ✔ Imitation, behavior shaping, and sophisticated reward design
This work directly transfers to real-world autonomy and simulation tasks in robotics and robotics-in-the-loop simulation.
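One of the techniques listed above, reward design, is commonly approached with potential-based shaping, which adds dense guidance without changing the optimal policy. A minimal sketch, assuming a hypothetical distance-to-opponent progress measure (not the project's actual reward):

```python
# Illustrative potential-based reward shaping: r' = r + gamma*phi(s') - phi(s).
# The potential function and state fields are hypothetical.
GAMMA = 0.99

def potential(state):
    """Hypothetical progress measure: closer to the opponent is better."""
    return -state["distance_to_opponent"]

def shaped_reward(sparse_reward, state, next_state):
    return sparse_reward + GAMMA * potential(next_state) - potential(state)

s  = {"distance_to_opponent": 2.0}
s2 = {"distance_to_opponent": 1.0}
r = shaped_reward(0.0, s, s2)  # closing distance earns a small dense bonus
```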
Final Automata
RL Agents Learn CCG Decks in Hours
We ran an independent case study on a popular collectible-card game using consumer hardware, delivering these key insights:
- ✔ Lower Costs, Faster Wins – A small-scale RL agent outperformed scripted bots with minimal resources.
- ✔ Predict Player Behavior – Simulate human-like strategies to identify balance issues before launch.
- ✔ Data-Driven Balancing – Measure deck complexity through AI learning speed for smarter design decisions.
ChaosRL: Zero-Dependency PPO in C#
A custom-built reinforcement learning framework running entirely within Unity/C# with zero external dependencies. Built for education today, with high-performance simulation integration to follow.
- ✔ Custom Autodiff Engine: Scalar- and tensor-based automatic differentiation written from scratch.
- ✔ Embedded Training: Train agents directly in the game loop without Python bridges.
- ✔ Transparent Implementation: A learning playground for understanding PPO internals.
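To illustrate what a from-scratch scalar autodiff engine involves (ChaosRL itself is written in C#; this Python sketch shows the general technique, not its API):

```python
# Minimal scalar reverse-mode automatic differentiation: each operation
# records its parents and a closure that propagates gradients backward.
class Value:
    def __init__(self, data, parents=()):
        self.data, self.grad = data, 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def backward(self):
        # Topologically order the graph, then propagate gradients in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x, y = Value(3.0), Value(4.0)
z = x * y + x      # z = x*y + x, so dz/dx = y + 1 = 5 and dz/dy = x = 3
z.backward()
```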
Maritime Autonomy: RL for Surface Vessels
Autonomous navigation for unmanned surface vessels (USVs) using reinforcement learning and high-fidelity simulation. We tackle the challenges of unstable dynamics and sensor uncertainty through a two-layer architecture combining low-level boat handling with high-level mission planning.
- ✔ Navigation Primitives: Learned skills for waypoint following and heading stabilization in dynamic water.
- ✔ Hierarchical Control: Composing primitives for complex tasks like obstacle avoidance.
- ✔ Sim-to-Real: Domain randomization and sensor modeling for deployment on physical hardware.
- ✔ Scalable Simulation: Rapid iteration of control policies for continuous marine dynamics.
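The two-layer architecture can be sketched as a mission planner that picks targets and delegates to a low-level primitive. The proportional heading-hold logic and all names below are simplified assumptions for illustration, not the deployed controller:

```python
import math

# Hypothetical two-layer USV control sketch: a high-level planner selects
# the active waypoint, a low-level primitive steers toward it.
def low_level_heading_hold(state, target_heading):
    """Primitive: proportional rudder command toward a target heading (degrees)."""
    error = (target_heading - state["heading"] + 180) % 360 - 180  # wrap to [-180, 180)
    return {"rudder": max(-1.0, min(1.0, error / 45.0)), "throttle": 0.6}

def high_level_planner(state, waypoints):
    """Mission layer: aim at the current waypoint and delegate to a primitive."""
    wp = waypoints[state["current_wp"]]
    target = math.degrees(math.atan2(wp[1] - state["y"], wp[0] - state["x"]))
    return low_level_heading_hold(state, target)

state = {"x": 0.0, "y": 0.0, "heading": 90.0, "current_wp": 0}
cmd = high_level_planner(state, [(100.0, 0.0)])  # waypoint due east, boat facing north
```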
AI Video Denoising & Restoration
High-fidelity simulation and real-world autonomy share a common enemy: noise. Whether it is Monte Carlo ray-tracing artifacts or low-light sensor grain, raw data is often too noisy for reliable perception or presentation.
This project demonstrates our expertise in Computer Vision and Deep Learning, solving critical pipeline challenges such as:
- ✔ Temporal Stability: Reconstructing video sequences without flickering or ghosting artifacts.
- ✔ Real-time Inference: Optimized neural networks capable of running alongside heavy simulations.
- ✔ Perception Enhancement: Cleaning sensor data to improve the accuracy of downstream autonomy models.
- ✔ Synthetic Data Training: Leveraging simulation to generate infinite noisy/clean training pairs.
This technology ensures that both human stakeholders and autonomous agents perceive the simulated world with absolute clarity.
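The synthetic-data idea can be sketched simply: start from clean rendered frames and corrupt them with a noise model, yielding unlimited (noisy, clean) supervision pairs. The additive Gaussian sensor model here is an illustrative simplification, not the production noise model:

```python
import random

# Sketch of synthetic training-pair generation for a denoiser. A frame is
# modeled as a flat list of pixel intensities in [0, 1] for simplicity.
def make_pair(clean_frame, sigma, rng):
    """Return (noisy, clean): noisy = clean + N(0, sigma), clipped to [0, 1]."""
    noisy = [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in clean_frame]
    return noisy, clean_frame

rng = random.Random(7)
clean = [0.2, 0.5, 0.9]                       # a toy 3-pixel "frame"
noisy, target = make_pair(clean, sigma=0.1, rng=rng)
# The simulator supplies `clean`; the pair (noisy, target) trains the network.
```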
About us
Based in Vancouver and working globally, we help companies bring autonomy, simulation, and intelligent decision-making into real-world and virtual environments. Our work spans marine robotics, robotics-in-the-loop simulation, and advanced reinforcement learning systems.
Viktor Zatorsky is a systems architect with over 15 years of experience in high-performance software, simulation engineering, and game technology. His work ranges from robotics and autonomous control to adversarial RL, physics-based simulation, and AI-driven gameplay systems.
Mariia Zatorska is a veteran CG artist with experience across AAA pipelines, digital humans, and production tooling. Her background in visual fidelity, simulation art direction, and creative technology enables us to deliver simulations that are not only accurate but also visually cohesive and production-ready.
Services for Simulation, Robotics & Gaming
Accelerate Development with Intelligent Simulation & Autonomy
From high-fidelity simulations to adversarial game agents, we build robust decision-making systems for virtual and physical worlds.
- ✔ Scalable Simulation – Train agents in parallel for rapid iteration.
- ✔ Adversarial Testing – Stress-test systems with adaptive opponents.
- ✔ Sim-to-Real – Deploy robust policies to physical hardware.
Let's Discuss Your Project
Tell us about your project. We’ll reply within 1 business day.