
Animation Tweening of 3D vertex data using a Feed-Forward Neural Network.

mrbid/NEURAL_ANIMATION_TWEENING

This project takes a vertex-colored and rigged animation in Blender, exports each 3D animation frame as a PLY file, converts the PLY files to CSV training data, and then trains an MLP/FNN. The network takes a frame index between 0 and the total frame count as input and outputs a full vertex buffer, so decimal in-between frames can be requested and it will generate the interpolated vertex data within a deviance of ~0.003 of the original training data, using 97,034 parameters (379.04 KB).

A Feed-Forward Neural Network that generates and interpolates your 3D animation frames for you.
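As a rough illustration of the idea (not the actual fit.py), the network can be as small as a single hidden layer mapping one normalised frame-index input to a flattened vertex buffer; for instance, a 16-unit hidden layer feeding 5,706 outputs comes to exactly 97,034 parameters, though the real architecture may differ. A minimal Keras sketch, with all sizes hypothetical:

```python
# Minimal sketch (hypothetical; the real fit.py may differ): an MLP that maps
# a normalised frame index to a flattened per-frame vertex buffer.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

FRAMES = 100     # exported animation frames
OUTPUTS = 5706   # hypothetical flattened vertex-buffer length per frame

# x: frame index normalised to 0..1, y: one flattened vertex buffer per frame
x = np.linspace(0.0, 1.0, FRAMES, dtype=np.float32).reshape(-1, 1)
y = np.zeros((FRAMES, OUTPUTS), dtype=np.float32)  # load the girl_data CSVs here

model = keras.Sequential([
    layers.Input(shape=(1,)),
    layers.Dense(16, activation="tanh"),
    layers.Dense(OUTPUTS, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=3000, batch_size=FRAMES, verbose=0)

# Ask for a decimal in-between frame, e.g. halfway between frames 41 and 42.
tween = model.predict(np.array([[41.5 / (FRAMES - 1)]], dtype=np.float32))
```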

naming conventions

girl_ply - These are the exported frames for each step of the animation in PLY format.
girl_data - This is the training data for the neural network.
models - Data generated from the training process is saved here.

tips

  • You can drag multiple ASC files into Meshlab at once to see a point cloud of motion between frames.
  • Run reset.sh to delete all pre-generated training data and start afresh.

steps

  1. Open girl_rig_exporter.blend and run the export_frames script; the girl_ply folder will be created containing each 3D animation frame.
  2. Open scripts.blend and run the ply_to_csv script; the girl_data folder will be created (a rough sketch of this conversion is shown after this list).
  3. Run python3 fit.py; the girl_data will be used to train a network, which is output to the models directory.
  4. The models directory will now contain a *_pd directory; cd into it and execute the CSVtoASC.sh script inside.
  5. An ASC directory will now exist in the parent directory; it contains a point cloud file in .asc format for each vertex buffer the neural network generated, one per frame of the animation. You can load these into Meshlab for viewing.
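
The ply_to_csv script itself is not reproduced here, but conceptually step 2 just flattens each PLY frame into one row of training data. A minimal stand-in sketch, assuming ASCII PLY exports and a single hypothetical girl_data/train.csv output (the real script's file layout may differ):

```python
# Rough stand-in for ply_to_csv (assumptions: ASCII PLY files in girl_ply/,
# one CSV row per frame written to a hypothetical girl_data/train.csv).
import os, csv

def ply_vertices(path):
    """Return the flattened vertex rows of an ASCII PLY file as floats."""
    with open(path) as f:
        lines = f.read().splitlines()
    count = next(int(l.split()[-1]) for l in lines if l.startswith("element vertex"))
    start = lines.index("end_header") + 1
    flat = []
    for line in lines[start:start + count]:
        flat.extend(float(v) for v in line.split())
    return flat

os.makedirs("girl_data", exist_ok=True)
with open("girl_data/train.csv", "w", newline="") as out:
    writer = csv.writer(out)
    # note: assumes lexicographic file order matches frame order
    for i, name in enumerate(sorted(os.listdir("girl_ply"))):
        writer.writerow([i] + ply_vertices(os.path.join("girl_ply", name)))
```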

The *_pd directory contains test prediction data from the trained network for every frame, plus three interpolated frames between each pair of frames, at offsets of 0.25, 0.50, and 0.75.
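
Generating those predictions amounts to evaluating the network at fractional frame indices; continuing the hypothetical sketch above:

```python
import numpy as np

FRAMES = 100
# every whole frame plus the 0.25 / 0.50 / 0.75 in-betweens
queries = np.arange(0.0, FRAMES - 1 + 1e-6, 0.25, dtype=np.float32)
vertex_buffers = model.predict((queries / (FRAMES - 1)).reshape(-1, 1))
```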

Ultimately you will want to export the trained network weights and use them in your program to generate the output vertices in real-time based on a variable floating-point input that represents the current normalised time point between two animation frames.
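
How you export the weights is up to you; as a sketch, if the two layers of the hypothetical network above were dumped to .npy files, the real-time evaluation reduces to two small matrix operations per query (file names and the tanh activation are assumptions):

```python
import numpy as np

# Hypothetical weight dump: w1 (H,), b1 (H,), w2 (H, OUT), b2 (OUT,)
w1, b1 = np.load("w1.npy"), np.load("b1.npy")
w2, b2 = np.load("w2.npy"), np.load("b2.npy")

def tween(t):
    """t is the normalised time point in [0, 1]; returns a flattened vertex buffer."""
    h = np.tanh(t * w1 + b1)   # hidden layer (single scalar input, assumed tanh)
    return h @ w2 + b2         # linear output layer

vertices = tween(0.375)  # e.g. 3/8 of the way through the animation
```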

Combine the generated vertex buffer with your pre-existing index buffer, UV/Color buffer, and normal buffer and you have a fully animated, textured/colored, shaded, triangulated mesh!

reality check

Why would anyone want to do this?

There are 100 frames of training data, but in practice only around 10 of those frames would be kept and linearly interpolated between in a vertex shader. Each frame is ~22.63 KB of vertex data, so 10 frames is only 226.32 KB. This trained network, provided as-is, is 379.04 KB: a 67.47% increase in size.

Furthermore, the number of multiplications and additions used by this network is higher by an order of magnitude than a simple linear interpolation between frames (a forward pass costs roughly one multiply-add per weight, whereas a lerp costs only a few operations per vertex component), and it produces a much less accurate result.

Finally, the network weights probably compress less well than the traditional 10 frames would, even if they were the same starting size.

There is no benefit to using a neural network to generate vertex data for your mesh animations over lerping between animation frames, or, even better, using a quaternion-based skeletal animation system. PGA?

But it's pretty cool that it works; it's not a huge increase in disk/RAM space (67.47% is a large relative increase, but at this small scale it's not too bad), and the quality loss is not visually that bad.

It's a loss I'd be willing to take just to be different.

Although I would run the network on a CPU with FMA auto-vectorisation (-mfma) rather than in a shader... which could be seen as yet another loss, as you'd have to send each generated frame from the CPU over to the GPU every time, whereas the traditional method happens entirely in a vertex shader on data already loaded into GPU memory.

Maybe you could efficiently run the FNN in a vertex shader..