[ICCV 2021] Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision

Project Page | Paper (Arxiv) | Video | Poster

Depth estimation is a long-standing and important task in computer vision. Most previous works estimate depth from input images and assume the images are all-in-focus (AiF), which is less common in real-world applications. On the other hand, a few works take defocus blur into account and treat it as another cue for depth estimation. In this paper, we propose a method to estimate not only a depth map but also an AiF image from a set of images with different focus positions (known as a focal stack). We design a shared architecture to exploit the relationship between depth and AiF estimation. As a result, the proposed method can be trained either supervisedly with ground-truth depth, or unsupervisedly with AiF images as supervisory signals. We show in various experiments that our method outperforms the state-of-the-art methods both quantitatively and qualitatively, and is also more efficient at inference time.
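In short, both outputs can be derived from one per-pixel distribution over the slices of the focal stack: the depth is read off as the expected focus position under that distribution, and the AiF image as the corresponding weighted combination of the slices, which is what lets AiF images act as a supervisory signal when ground-truth depth is unavailable. The snippet below is only a minimal sketch of this idea, not the network in this repository; the tensor shapes, the softmax-attention formulation, and the function name are assumptions made for illustration.

    import torch
    import torch.nn.functional as F

    def depth_and_aif_from_attention(logits, focal_stack, focus_positions):
        """Illustrative sketch: derive depth and an AiF image from shared attention.

        logits:          (B, S, H, W)    per-pixel scores over the S focal slices
        focal_stack:     (B, S, 3, H, W) input focal stack
        focus_positions: (S,)            focus position (or disparity) of each slice
        """
        # Per-pixel probability distribution over the focal slices
        attn = F.softmax(logits, dim=1)                                # (B, S, H, W)
        # Depth as the expected focus position under that distribution
        depth = (attn * focus_positions.view(1, -1, 1, 1)).sum(dim=1)  # (B, H, W)
        # AiF image as the attention-weighted combination of the stack slices
        aif = (attn.unsqueeze(2) * focal_stack).sum(dim=1)             # (B, 3, H, W)
        return depth, aif

With ground-truth depth, the first output can be supervised directly; without it, a photometric loss between the second output and a ground-truth AiF image provides the unsupervised training signal.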

Overview

This is the official PyTorch implementation of
"Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision",
by Ning-Hsu Wang, Ren Wang, Yu-Lun Liu, Yu-Hao Huang, Yu-Lin Chang, Chia-Ping Chen, and Kevin Jou (National Tsing Hua University, MediaTek Inc.), published at the International Conference on Computer Vision (ICCV) 2021. If you find this code useful for your research, please consider citing the following paper and starring this repo.

Requirements

  • Python == 3.6.8
  • PyTorch == 1.5.1
  • torchvision == 0.6.1
  • h5py == 2.8.0
  • tensorboardX == 2.1
  • tqdm == 4.47.0
  • see requirements.txt for more details

Usage

1. Download Dataset

DDFF-12-Scene Dataset

  1. Download the trainval and test h5py files (ddff-dataset-trainval.h5py and ddff-dataset-test.h5py) to ./data; a quick sanity check of the downloads is sketched below.
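Assuming the two files above sit in ./data, a minimal h5py check like the following (not part of the repo, just a convenience) lists the datasets and shapes each file contains, so you can confirm the downloads are intact without guessing at key names:

    import h5py

    # List every dataset stored in the DDFF-12-Scene files together with its shape.
    for path in ["./data/ddff-dataset-trainval.h5py", "./data/ddff-dataset-test.h5py"]:
        with h5py.File(path, "r") as f:
            print(path)
            f.visititems(lambda name, obj: print("  ", name, getattr(obj, "shape", "")))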

DefocusNet Dataset

  1. Download the zip file to ./data/DefocusNet_Gen

  2. Run the following commands under ./data/DefocusNet_Gen

    unzip fs_6.zip
    python DefocusNet_gen_txt.py
    cd ../../

4D-Light-Field Dataset

  1. Go to this website to request the 4D-Light-Field dataset
  2. Download full_data.zip under ./data/4D-Light-Field_Gen
  3. Run the following commands under ./data/4D-Light-Field_Gen (a quick check of the generated output is sketched after the commands)
    unzip full_data.zip
    python LF2hdf5.py --base_dir ./full_data --output_dir ./LF
    python FS_gen.py --LF_path ./LF/HCI_LF_trainval.h5 --output_dir ./FS 
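The exact layout that FS_gen.py writes is not documented here, so as a minimal sanity check the sketch below (run from ./data/4D-Light-Field_Gen, assuming the --output_dir value ./FS used above) simply walks the output folder and reports how many files were generated in each subdirectory:

    import os

    # Walk the FS_gen.py output directory (./FS from --output_dir above) and
    # report how many files ended up in each subfolder.
    for root, dirs, files in os.walk("./FS"):
        if files:
            print(root, "-", len(files), "files")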

FlyingThings3D Dataset

  1. Download FlyingThings3D_FS under ./data/Barron2015_Gen/
  2. Unzip the dataset

Middlebury Dataset

  1. Download Middlebury_FS under ./data/Barron2015_Gen/
  2. Unzip the dataset

Mobile Depth Dataset

  1. Download both zip files from https://www.supasorn.com/dffdownload.html to ./data/Mobile_Depth_Gen
  2. Run the following commands under ./data/Mobile_Depth_Gen (a quick peek at the generated listing is sketched after the commands)
    mkdir Photos_Calibration_Results
    mv depth_from_focus_data2.zip Photos_Calibration_Results
    cd Photos_Calibration_Results
    unzip ./depth_from_focus_data2.zip
    mv calibration/metal calibration/metals
    mv calibration/GT calibration/zeromotion
    mv calibration/GTSmall calibration/smallmotion
    mv calibration/GTLarge calibration/largemotion
    cd ..
    unzip depth_from_focus_data3.zip
    python gen_txt_mobile.py
    cd ../../
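gen_txt_mobile.py is expected to write one or more txt listing files for the Mobile Depth data; their exact names and format are not documented here, so the sketch below (run from the repository root, after the cd ../../ above, and assuming the listings land directly in ./data/Mobile_Depth_Gen) just prints the first few lines of every txt file it finds for a quick eyeball check:

    import glob

    # Peek at the listing files produced by gen_txt_mobile.py.
    for txt in sorted(glob.glob("./data/Mobile_Depth_Gen/*.txt")):
        print("==", txt)
        with open(txt) as f:
            for line in list(f)[:5]:
                print("  ", line.rstrip())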

2. Download Pretrained Model

  1. Download the ckpt.zip file and unzip it

3. Prepare Runtime Environment

Install packages from requirements.txt in your conda environment.

conda create --name AiFDepthNet --file requirements.txt -c pytorch
conda activate AiFDepthNet
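
After activating the environment, a quick optional way to confirm that the installed versions match the Requirements list above is a small check like this (purely a convenience, not part of the repo):

    import torch, torchvision, h5py, tensorboardX, tqdm

    # Print the installed versions so they can be compared against the Requirements list.
    for name, mod in [("torch", torch), ("torchvision", torchvision),
                      ("h5py", h5py), ("tensorboardX", tensorboardX), ("tqdm", tqdm)]:
        print(f"{name:12s} {mod.__version__}")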

4. Run the Following Command

CUDA_VISIBLE_DEVICES=[GPU_ID] python test.py [arguments]

  • --pth: path to your pretrained model
  • --dataset: dataset name
  • --disp_depth: whether the pretrained model was trained with disparity or depth
  • --txt (optional): path to the txt file of the dataset
  • --h5py (optional): path to the h5py file of the dataset
  • --outdir (optional): path to store your output results
  • --test (optional): run the DDFF-12-Scene Dataset on testing data

Results

DDFF-12-Scene Dataset

DefocusNet Dataset

4D Light Field Dataset

Mobile Depth Dataset

Dataset

Citation

Please cite our paper if you find the code or dataset useful for your research.

@inproceedings{Wang-ICCV-2021,
        author    = {Wang, Ning-Hsu and Wang, Ren and Liu, Yu-Lun and Huang, Yu-Hao and Chang, Yu-Lin and Chen, Chia-Ping and Jou, Kevin}, 
        title     = {Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision}, 
        booktitle = {International Conference on Computer Vision},
        year      = {2021}
}

Resources

Acknowledgement
