πŸ–ΌοΈ Sneaky Sketchers

🙋 What is Sneaky Sketchers?

Have you ever taken a photo with an unneeded object in the background, or had someone walk behind you just as you pressed the shutter, and felt the photo is not as good as it could have been? 📷

If so, worry not! This desktop application lets you select the areas of a photo you want to remove by drawing on them. Once you're done, the app erases the objects you drew over and generates a new photo that looks as if those objects were never there! 🔥

You can also use the application just for fun. Ever wondered how you would look without a moustache, or with spectacles? Or how your face would look with different features? Draw over your face and let the app do the magic! 👦 👧

πŸ‘¨β€πŸ­ Who are we?

This project was built by Shambhavi Aggarwal, Farhan Kiyani, Yash Khare and Ridham Bhat.

💻 What did we use?

Sneaky Sketchers is built entirely in Python. We created Jupyter notebooks on Google Colab to train our models, and the desktop app is built with PyQt5. 🐍
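
As a rough illustration of how a PyQt5 widget can support this kind of workflow (this is not the project's actual code; the class name, brush size, and image path are placeholders), the sketch below loads a photo and lets the user paint a mask over the regions they want erased:

```python
# Minimal sketch only: load a photo and let the user paint a white mask over it.
import sys
from PyQt5.QtWidgets import QApplication, QWidget
from PyQt5.QtGui import QPainter, QPixmap, QPen
from PyQt5.QtCore import Qt


class MaskCanvas(QWidget):
    def __init__(self, image_path):
        super().__init__()
        self.photo = QPixmap(image_path)            # background photo
        self.mask = QPixmap(self.photo.size())      # strokes drawn by the user
        self.mask.fill(Qt.transparent)
        self.setFixedSize(self.photo.size())
        self.last_pos = None

    def mousePressEvent(self, event):
        self.last_pos = event.pos()

    def mouseMoveEvent(self, event):
        if self.last_pos is None:
            return
        painter = QPainter(self.mask)
        painter.setPen(QPen(Qt.white, 15, Qt.SolidLine, Qt.RoundCap))
        painter.drawLine(self.last_pos, event.pos())  # paint the mask stroke
        painter.end()
        self.last_pos = event.pos()
        self.update()

    def mouseReleaseEvent(self, event):
        self.last_pos = None

    def paintEvent(self, event):
        painter = QPainter(self)
        painter.drawPixmap(0, 0, self.photo)        # photo underneath
        painter.drawPixmap(0, 0, self.mask)         # mask strokes on top
        painter.end()


if __name__ == "__main__":
    app = QApplication(sys.argv)
    canvas = MaskCanvas("photo.jpg")                # placeholder image path
    canvas.show()
    sys.exit(app.exec_())
```

The actual drawing and inpainting logic lives in the application folder described below.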

🔨 Installation

To set up the desktop app, head over here.

Video instructions for using the application 🎥

Click on the image below to view a video of how to use the application.

Sneaky Sketchers

πŸ‘¨β€πŸ’» How do I contribute?

  • To get more detailed documentation, please check out our project's wiki. 📖
  • Before contributing, please go through the Code of Conduct. 🔧
  • Please go through the Contributing Guidelines as well. 📚
  • If you find a bug in the application, or think of a feature that would be nice to have, please open an issue. ❗

🪜 Folder Structure

Each folder has its own dedicated README describing what its contents do, how to set them up, and how to use them.

  • InPainting Notebook: This folder contains the Jupyter notebook we used to train the model on Google Colab.
  • application: This folder contains the main PyQt5 desktop application.
  • inpainting: This folder contains Python scripts that can be used for training a model or making predictions, and can be imported directly into your own project.

💭 What we learned

  • This was the first time Yash worked with PyQt, and he learned a lot about building desktop apps with it. He also worked with PyTorch to this extent for the first time.
  • Shambhavi had never worked with PyTorch before, and helped implement an entire research paper by NVIDIA entirely in PyTorch.
  • Farhan did not have much experience with machine learning, but he contributed to the project and learned a lot as well.

This project ended up being something of a research project for us, since we spent quite a lot of our time reading NVIDIA's paper on Partial Convolutions, understanding how it works, and implementing it. We found a Keras implementation that gave pretty good results, used it alongside the paper to understand the approach, and then created a PyTorch implementation. Since the model is fairly large, we decided to build a desktop application that can be used offline.
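
As a condensed sketch of the core idea (not our exact layer; the class and argument names here are only illustrative), a partial convolution convolves only the unmasked pixels, re-normalises the result by the fraction of valid pixels in each window, and passes an updated mask on to the next layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialConv2d(nn.Module):
    """Convolution over valid pixels only, with re-normalisation and mask update."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel, used only to count valid pixels in each window.
        self.register_buffer("mask_kernel", torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = float(kernel_size * kernel_size)
        self.stride = stride
        self.padding = padding

    def forward(self, x, mask):
        # x: (N, C, H, W) features; mask: (N, 1, H, W), 1 = valid pixel, 0 = hole.
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.mask_kernel,
                                   stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)                    # convolve the masked input
        bias = self.conv.bias.view(1, -1, 1, 1)
        # Re-normalise by the fraction of valid pixels each window actually saw.
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = (out - bias) * scale + bias
        new_mask = (valid_count > 0).float()         # window saw at least one valid pixel
        return out * new_mask, new_mask
```

The paper stacks layers like this in a U-Net-style encoder-decoder, which is what our PyTorch implementation builds on.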

🕒 Training Time

The paper we referred to trained the model on 3 different datasets over a period of 14 days. With the resources we had (thanks to Google Colab), we only trained our model on a subset of the Places2 dataset for one night. With this limited amount of training the model does not match the performance of the original implementation, but it still does a pretty good job. In the future, given the time and resources to train the model fully, we should be able to improve it considerably.

Our model can be downloaded from here.
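
For reference, here is a minimal sketch of how the downloaded weights might be loaded and used for a prediction. The import path, the PConvUNet class name, the checkpoint file name, and the checkpoint layout are all assumptions for illustration; check the inpainting folder's README for the actual entry points.

```python
# Illustrative only: names below are assumptions, not the repository's guaranteed API.
import torch
from inpainting.model import PConvUNet   # hypothetical module/class names

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = PConvUNet().to(device)
state = torch.load("pretrained_pconv.pth", map_location=device)  # downloaded weights
model.load_state_dict(state)             # or state["state_dict"], depending on the file
model.eval()

# Placeholder inputs: image in [0, 1]; mask has 1 = keep, 0 = hole the user drew over.
image = torch.rand(1, 3, 256, 256, device=device)
mask = torch.ones(1, 3, 256, 256, device=device)
mask[:, :, 100:150, 100:150] = 0

with torch.no_grad():
    output, _ = model(image * mask, mask)
    # Keep the known pixels from the original photo; fill the hole from the model.
    result = image * mask + output * (1 - mask)
```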

📷 How the desktop app looks

[Screenshots: the PyQt app window, loading an image, and three original vs. inpainted photo pairs.]

🔜 What's next?

  • A web version of the application with a lighter model.
  • A better UI for the desktop app.
  • Training the model further to improve its performance.

📜 License

This project is released under a free and open-source software license, Apache License 2.0 or later (LICENSE or https://www.apache.org/licenses/LICENSE-2.0). The documentation is also released under a free documentation license, namely the GFDL v1.3 license or later.

πŸ–ŠοΈ Contributions

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.

📚 Resources

Citation

@inproceedings{liu2018partialpadding,
   author    = {Guilin Liu and Kevin J. Shih and Ting-Chun Wang and Fitsum A. Reda and Karan Sapra and Zhiding Yu and Andrew Tao and Bryan Catanzaro},
   title     = {Partial Convolution based Padding},
   booktitle = {arXiv preprint arXiv:1811.11718},   
   year      = {2018},
}
@inproceedings{liu2018partialinpainting,
   author    = {Guilin Liu and Fitsum A. Reda and Kevin J. Shih and Ting-Chun Wang and Andrew Tao and Bryan Catanzaro},
   title     = {Image Inpainting for Irregular Holes Using Partial Convolutions},
   booktitle = {The European Conference on Computer Vision (ECCV)},   
   year      = {2018},
}