Markov Models and Word Prediction

This is a manually implemented Markov model that probabilistically predicts the next word from historical data, using nested dictionaries to store how often each word follows another.
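A minimal sketch of the nested-dictionary idea is shown below. This is not the repository's actual code; the function names (`build_model`, `predict_next`, `generate`) and the tiny example corpus are illustrative.

```python
import random
from collections import defaultdict

def build_model(words):
    """Map each word to a nested dict of {next_word: count}."""
    model = defaultdict(lambda: defaultdict(int))
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def predict_next(model, word):
    """Sample the next word in proportion to how often it followed `word`."""
    followers = model.get(word)
    if not followers:
        return None
    choices, counts = zip(*followers.items())
    return random.choices(choices, weights=counts, k=1)[0]

def generate(model, seed, length=15):
    """Generate a short sentence starting from `seed`."""
    sentence = [seed]
    for _ in range(length - 1):
        nxt = predict_next(model, sentence[-1])
        if nxt is None:
            break
        sentence.append(nxt)
    return " ".join(sentence)

# Tiny example corpus; the real project trains on a Sherlock Holmes book.
corpus = "it is a capital mistake to theorize before one has data".split()
model = build_model(corpus)
print(generate(model, "it"))
```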


NOTE: Works well for generating short sentences (albeit sometimes meaningless ones), and not so well for paragraphs or pages. It performs best when trained over a huge corpus.

  • The good news is that it is never grammatically wrong, as expected of any proper Markov model.

  • The bad news is that it usually loses overall context after each sentence (full stop), and sometimes mid-sentence, due to the sheer size of the training set.

This implementation parses a Sherlock Holmes book and predicts words from it. The Twitter API was also used (via Tweepy), but the project is not live, since it tweets from my main account and creating a separate account would require another phone number.
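For reference, posting a generated sentence with Tweepy's v1.1-style interface looks roughly like the sketch below. The credential placeholders are illustrative, the `generate`/`model` names come from the sketch above, and this is not necessarily the exact code used in the project.

```python
import tweepy

# Illustrative placeholders -- real credentials come from a Twitter developer app.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

# Tweet a sentence generated by the Markov model (seed word is arbitrary).
api.update_status(generate(model, "holmes"))
```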
