Making computers smarter with TikTok dance videos

TikTok dance videos have captured legions of fun-hungry fans during the Covid-19 lockdown. But U of M researcher Yasamin Jafarian found a deeper purpose for the viral phenomenon.

For the past year, Jafarian, a PhD student in computer science and engineering, has harnessed the videos for the frame-by-frame building blocks she uses to construct realistic 3D avatars of real people. Because many of today’s 3D avatars look like cartoons, she wants to replace them, using machine learning and artificial intelligence (AI) to generate more realistic avatars for future virtual reality settings.

To this end, she trains AI systems to interpret visual data from images and videos.

Better off in Hollywood?

The film industry produces realistic avatars for movies or video games using CGI (computer-generated imagery). But the industry can afford to take thousands of shots of performers.

“The problem with cinematic technology is that it’s not available to everyone,” says Jafarian. “I wanted to generate the same opportunity for the average person so they could just use their phone’s camera and be able to create a 3D avatar of themselves.”

Jafarian aimed to design an algorithm that requires only a single photo or video of a person to generate a realistic avatar. That meant assembling a large dataset of videos to “train” the algorithm, and TikTok dance videos – which often feature a single person showing the full length of their body in multiple poses – fit the bill.

Real progress in virtual reality

After watching some 1,000 TikTok videos, Jafarian chose 340 for her dataset, each 10 to 15 seconds long. At 30 frames per second, that amounted to more than 100,000 images of people dancing.
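As a rough, back-of-the-envelope illustration (not Jafarian’s actual pipeline), the short Python sketch below checks that frame count and shows how frames could be pulled from a clip with OpenCV; the clip name and output folder are invented for the example.

import os
import cv2  # OpenCV; assumed installed via "pip install opencv-python"

# Figures quoted in the article: 340 clips, 10-15 seconds each, 30 frames per second.
clips, seconds_per_clip, fps = 340, 10, 30   # 10 s is the low end of the range
print(clips * seconds_per_clip * fps)        # 102000 -- i.e. "over 100,000 images"

# Hypothetical frame extraction for a single clip.
os.makedirs("frames", exist_ok=True)
video = cv2.VideoCapture("dance_clip.mp4")   # illustrative file name
index = 0
while True:
    ok, frame = video.read()                 # read() returns (success_flag, image)
    if not ok:
        break
    cv2.imwrite(f"frames/{index:05d}.png", frame)
    index += 1
video.release()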

So far, she has successfully used her algorithm to generate a 3D avatar of a person seen from the front. She has published the work, which received a Best Paper Honorable Mention at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR).

Jafarian plans to continue refining the algorithm until it can generate a person’s full body from just a few views. She hopes real people will one day use the technology to interact in virtual social spaces online, not just through Zoom.

“We can have virtual environments, using VR glasses like Oculus, for example, where we can see and interact with each other,” she says. “If we can make these digital avatars look realistic, it would make these interactions deeper and more interesting.”

Her research could also one day let all of us – or rather, our avatars – try on clothes virtually, reducing trips to the store.

Read the original story and watch a video on the College of Science and Engineering website.
