Pose Estimation for Interactive Metaverse Fitness (Capstone_Design_2022)

Abstract

Virtual reality fitness and health-care applications require accurate, real-time pose estimation to support interactive features. Yet they suffer from either a limited field of view when handset devices such as smartphones and VR headsets capture the human pose, or from limited input interfaces when distant imaging/computing devices such as Kinect are used. Our goal is to marry the two into an interactive metaverse system built around human pose estimation. This paper describes the design and implementation of Yoroke, a distributed system designed specifically for human pose estimation in an interactive metaverse. It consists of a remote imaging device that estimates human pose and a handset device that runs a multi-user interactive metaverse. We have implemented and deployed Yoroke on embedded platforms and evaluated its effectiveness in delivering accurate, real-time pose estimation for a multi-user interactive metaverse platform.
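
The split between a remote imaging device and a handset device implies a small streaming path between the two. Below is a minimal sketch of what the imaging-device side could look like; the UDP transport, address, JSON payload, and the estimate_pose() stub are illustrative assumptions, since the README does not specify Yoroke's actual wire format.

```python
# Hypothetical sketch of the imaging-device side: grab a camera frame,
# estimate 2D keypoints, and stream them to the Unity client over UDP.
# The address, port, payload format, and estimate_pose() stub are
# assumptions, not Yoroke's actual protocol.
import json
import socket

import cv2

UNITY_ADDR = ("192.168.0.10", 9000)  # assumed address of the handset device


def estimate_pose(frame):
    # Stand-in for the TensorFlow pose model: returns joint -> (x, y)
    # in image coordinates. A real model would run inference here.
    h, w = frame.shape[:2]
    return {"nose": (w // 2, h // 4)}


sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    keypoints = estimate_pose(frame)
    # One small datagram per frame keeps end-to-end latency low for the client.
    sock.sendto(json.dumps(keypoints).encode("utf-8"), UNITY_ADDR)
```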

Development Environment

  • Ubuntu 18.04
  • opencv-python 3.4.10
  • TensorFlow 1.14.0
  • TensorFlow-GPU
  • CUDA 11.2
  • cuDNN 8.1.0
  • Unity
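
A quick way to confirm the Python side of this stack is installed as listed is a version check like the one below (a minimal sketch; tf.test.is_gpu_available() is the TF 1.x call for confirming the CUDA/cuDNN GPU build is usable):

```python
# Sanity check for the Python environment listed above.
import cv2
import tensorflow as tf

print("OpenCV:", cv2.__version__)      # expect 3.4.10
print("TensorFlow:", tf.__version__)   # expect 1.14.0
# TF 1.x API: True only if the GPU build can see a CUDA device.
print("GPU available:", tf.test.is_gpu_available())
```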

Version

  • v1: Basic network and spatial configuration using Photon2 and unitychan_dynamic_locomotion
  • v2: Pose estimation for two people
  • v3: Pose estimation for three or more people; lobby and character selection added
  • v4: UI upgrade
  • v5: Changed how pose data is read so that pose estimation runs without delay in Unity
  • yoroke: Final deployment
