
MultiModal AI: Out-of-the-box pretrained models for multimodal tasks.

License: MIT

A repository hosting deep neural network architectures trained on visual data, including ImageGen, VideoGen, ImageClassification, and more. The main purpose of this repository is to serve as a practitioner's guide to building an end-to-end, industry-grade deep learning model training pipeline.

Table of Contents

  1. Introduction
  2. Contributing
  3. License

Introduction

Humans process many different types of data, such as images, videos, audio, and text, and AI research aims to replicate this capability in machines. AGI (Artificial General Intelligence) remains the ultimate goal of that research; we are still far from reaching it, but progress continues, and multi-modality is a key aspect of it. This repository focuses on developing deep neural network models capable of handling a variety of data types. Its main purpose is to serve as a practitioner's guide to building an end-to-end, industry-grade deep learning training pipeline, and it also serves as a learning resource for beginners interested in multi-modal deep learning. The sketch below illustrates the kind of inference flow these models target.
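Since this repository does not yet document its own loading API, the following is a minimal sketch of what a pretrained image-classification inference flow typically looks like. It uses torchvision's ResNet-18 purely as a stand-in; the file name and the idea of a drop-in mmai equivalent are assumptions, not part of this repository.

```python
# Minimal sketch of pretrained image-classification inference.
# torchvision's ResNet-18 is a stand-in for an mmai model; the
# input path "example.jpg" is hypothetical.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()          # resize, crop, normalize
image = Image.open("example.jpg")          # hypothetical input image
batch = preprocess(image).unsqueeze(0)     # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(label)
```

An end-to-end pipeline, as described above, would wrap the same steps (preprocessing, model forward pass, postprocessing) with data loading, training, evaluation, and export stages.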

Contributing

Contributions to this repository are very welcome! Since I'm a beginner myself, I'm sure there are many things that can be improved. Please refer to the CONTRIBUTING file for more details.

License

This repository is licensed under the MIT License. Please see the LICENSE file for more details.
