Working cheap motion capture into our gamedev pipeline
Like most indie developers, we’re working on a shoestring budget, and our game is going to need a lot of humanoid animations. Given my awful animation skills and the sheer bulk of animation required, we’ll definitely be sunk if I have to hand-animate every human in our game.
So that’s where mocap comes in. Hollywood studios have been capturing amazing monkey- and dragon-themed performances for years now, but those systems are astronomically expensive. However, there are a few new, low-cost options for motion capture available these days. Rebecca, the developer of the painfully cute Ooblets game, got ahold of a Perception Neuron suit with what looks like great results, though the suit is only cheap relative to most professional mocap systems. Another solution out there right now uses machine learning algorithms to convert ordinary 2D video of a human into mocap data. How neat is that?! (It’s pretty neat.) I’ll definitely be experimenting with this, but it does seem to be in the early stages, with captures that end up a bit jittery and wooden.
Finally, I came across Glycon3D, a solution that captures pretty darn good data using ordinary VR equipment. It’s under heavy development but already has a lot of features and works on multiple VR platforms, notably the cheap and easy-to-use Oculus Quest. The Quest only has tracking points for your head and hands, but Glycon3D also supports the Quest’s hand-tracking feature, letting you get full mocap of all your little digits!
The two biggest drawbacks I came across are that your animations will very likely need some manual clean-up, and that it’s a bit tedious getting all of your animation takes off the headset and into Blender for review. To partially solve the second problem, you can sideload a file server onto your Quest. SideQuest offers a free web/file server in its market and lets you install it directly. Easy!
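Once the file server is running, the Glycon takes live under a deeply nested Android path, and the server expects that path percent-encoded in the URL. As a quick sketch of building that listing URL (the IP and port here are just my headset's; yours will differ), Python's standard library can do the encoding for you:

```python
from urllib.parse import quote

# Your headset's IP address and the file server's port
quest_url = "http://192.168.1.187:7123"

# Where Glycon stores its .bvh takes on the Quest's internal storage
glycon_dir = "storage/emulated/0/Android/data/com.chiltonwebb.glycon/files/GlyconFiles"

# safe="" forces the slashes to be encoded as %2F, which the server expects
listing_url = quest_url + "/list/" + quote(glycon_dir, safe="")
print(listing_url)
```

Opening that URL in a browser is a handy sanity check that the headset is reachable before running the full import script.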
And now that you can easily access all of your files through a URL, I thought it’d be great to automate loading all of your animations, with labels, so you can quickly pick the best take. Blender makes this easy: you can write a Python script that does almost anything Blender is capable of. Here’s my Python script, below. All you have to do is modify the IP address, and it’ll load all of the .bvh animations into the scene, space them out, then label them based on their filenames (which are numbered).
```python
import bpy
import urllib.request
import re
import tempfile
import os
import math

scene = bpy.context.scene
scene.frame_start = 0
scene.frame_end = 2000

quest_url = "http://192.168.1.187:7123"

# Fetch the directory listing from the file server running on the Quest
fp = urllib.request.urlopen(quest_url + "/list/storage%2Femulated%2F0%2FAndroid%2Fdata%2Fcom.chiltonwebb.glycon%2Ffiles%2FGlyconFiles")
html = fp.read().decode("utf8")
fp.close()

# Find every download link that ends in .bvh
file_url_regex = r"\/download.*?\.bvh"
matches = re.finditer(file_url_regex, html, re.IGNORECASE | re.MULTILINE)

for matchNum, match in enumerate(matches, start=1):
    # Pull the numbered take name out of the download link
    filename_regex = r"(BVH_\d*?)\.bvh"
    filename = re.search(filename_regex, match.group(), re.IGNORECASE | re.MULTILINE).group()

    # Download the take into the system temp directory
    fullpath = os.path.join(tempfile.gettempdir(), filename)
    urllib.request.urlretrieve(quest_url + match.group(), fullpath)
    print("Importing " + fullpath)

    # Get an X coord for each take
    x_pos = (matchNum - 1) * 2

    # Import the mocap data and place it on its respective X coord
    bpy.ops.import_anim.bvh(filepath=fullpath, update_scene_fps=True)
    ob = bpy.context.object
    ob.location.x -= x_pos

    # Add text to delineate each mocap skeleton
    bpy.ops.object.text_add(
        enter_editmode=False,
        align='WORLD',
        location=(-x_pos + 0.5, 0, -0.6),
        rotation=(math.radians(90), math.radians(0), math.radians(180)),
    )
    ob = bpy.context.object
    ob.scale = (0.3, 0.3, 0.3)
    ob.data.body = filename.split('.')[0]  # label with the take name, minus the .bvh extension

    # Remove the temp .bvh file
    os.remove(fullpath)
```
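If the two regexes in the script look opaque, here is the listing-parse step in isolation, run against a made-up scrap of the server's HTML (the real page layout may differ slightly, but the idea is the same: grab each .bvh download link, then pull the bare take name out of it for the label):

```python
import re

# Hypothetical fragment of the file server's directory listing page
html = (
    '<a href="/download/storage%2F...%2FBVH_0001.bvh">BVH_0001.bvh</a>'
    '<a href="/download/storage%2F...%2FBVH_0002.bvh">BVH_0002.bvh</a>'
)

# Grab each download link ending in .bvh (non-greedy, so links don't merge)
links = re.findall(r"/download.*?\.bvh", html, re.IGNORECASE)

# Pull the numbered take name (BVH_<number>) out of each link for labelling
names = [re.search(r"BVH_\d+", link, re.IGNORECASE).group() for link in links]
print(names)  # ['BVH_0001', 'BVH_0002']
```

The non-greedy `.*?` matters: a greedy match would swallow everything from the first link to the last `.bvh` on the page as one giant match.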
If you’re an indie dev in the same boat as us, let us know what types of systems and workflows you’ve tried out on your team!