[NeurIPS2022] Egocentric Video-Language Pretraining
Official code and data for the EgoBody dataset (ECCV 2022)
PyTorch code for EgoHMR (ICCV 2023): Probabilistic Human Mesh Recovery in 3D Scenes from Egocentric Views
[CVPR 2024] Official code for EgoGen: An Egocentric Synthetic Data Generator
Official implementation of Balanced Spherical Grid for Egocentric View Synthesis (CVPR 2023)
[CVPR 2022] Egocentric Action Target Prediction in 3D
[ICCV 2023] Multiple humans in 3D captured by dynamic and static cameras in 4K.
Codebase for "Multimodal Distillation for Egocentric Action Recognition" (ICCV 2023)
The champion solution for the Ego4D Natural Language Queries Challenge at CVPR 2023
EventEgo3D: 3D Human Motion Capture from Egocentric Event Streams [CVPR'24]
Code implementation for paper titled "HOI-Ref: Hand-Object Interaction Referral in Egocentric Vision"
✌️ Hand detection and tracking from FPV: benchmarks and challenges on a rehabilitation exercises dataset
Official code repository to download the TREK-150 benchmark dataset and run experiments on it.
A dataset of egocentric vision, eye-tracking, and full-body kinematics from human locomotion in out-of-the-lab environments, plus example code demonstrating several use cases of the dataset.
The official PyTorch implementation of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024 paper "PREGO: Online Mistake Detection in PRocedural EGOcentric Videos".
Official implementation of "A Backpack Full of Skills: Egocentric Video Understanding with Diverse Task Perspectives", accepted at CVPR 2024.
A curated collection of work at the forefront of egocentric Human Activity Recognition (HAR) and action anticipation with deep learning
Official repository of the "Ego3DPose: Capturing 3D Cues from Binocular Egocentric Views" (SIGGRAPH Asia 2023)
Official repository of the "Attention-Propagation Network for Egocentric Heatmap to 3D Pose Lifting" (CVPR 2024 Highlight)
Deep learning models that fuse IMU-based motion capture and first-person video data to improve prediction of future knee and ankle joint kinematics in complex real-world environments.