A film visualising the data generated by an AI agent learning to walk. INNOQ's developers trained a neural network using neuroevolutionary algorithms to navigate space independently. The output was raw skeleton data: point positions per frame. My job was to turn that data into something visible and beautiful.
The skeleton data was imported into Houdini and used as the foundation for four visual chapters, each revealing the movement differently.
Surfaces: cloth simulations driven by velocity fields extracted from the data.
Splines: lines flowing through space like waves, forming silhouettes and contours.
Elements: three-dimensional topographical representations of the AI's tracks.
The final chapter strips back to the raw data lines themselves, gradually resolving into recognisable human motion as the AI's movement becomes more coordinated.
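As an aside, the velocity fields driving the cloth simulations can be derived from the raw point positions by finite differences. The sketch below is a minimal illustration, not the actual pipeline; the `(frames, points, 3)` array layout and the 24 fps frame rate are assumptions, and inside Houdini this would typically run over the imported skeleton points.

```python
import numpy as np

def point_velocities(positions, fps=24.0):
    """Per-point velocity vectors via finite differences.

    positions: array of shape (frames, points, 3) holding joint
    positions per frame (assumed layout of the raw skeleton data).
    Returns an array of the same shape, in units per second.
    """
    dt = 1.0 / fps
    # Central differences inside the clip, one-sided at the ends.
    return np.gradient(positions, dt, axis=0)

# Example: one point moving 0.1 units per frame along x at 24 fps.
frames = np.zeros((10, 1, 3))
frames[:, 0, 0] = np.arange(10) * 0.1
v = point_velocities(frames)
print(v[5, 0, 0])  # ≈ 2.4 units per second
```

These per-point vectors can then be transferred onto simulation geometry as an advection field.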
All visual development, animation, and the final edit are my work.
Made with Houdini and Redshift.