In this research, the performance of three state-of-the-art models was compared on a machine translation task. The code used to implement the models is available in the GitHub repository at https://github.com/JoyeBright/MT-HF.git.
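The repository's training scripts are not reproduced here, but the comparison step for such a study can be sketched as a small sentence-level BLEU scorer (the function names and the smoothing choice are illustrative, not taken from the repository):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with uniform n-gram weights and a brevity penalty.
    Precisions are floored at a tiny value to avoid log(0) on short sentences."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = min(1.0, math.exp(1 - len(ref) / max(len(cand), 1)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

With a scorer like this, each model's translations can be compared against the same references and the averages reported side by side.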
This code implements a text search algorithm: given an input query, it returns a summary of the most relevant paragraph from a supplied list of paragraphs.
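A minimal sketch of such a paragraph ranker, assuming a TF-IDF bag-of-words representation scored by cosine similarity (the function names are hypothetical, and the summarization step is omitted):

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def rank_paragraphs(query, paragraphs):
    """Return the paragraphs sorted by TF-IDF cosine similarity to the query."""
    docs = [Counter(tokenize(p)) for p in paragraphs]
    n = len(docs)
    # Smoothed inverse document frequency over the paragraph collection.
    vocab = {w for d in docs for w in d}
    idf = {w: math.log((1 + n) / (1 + sum(1 for d in docs if w in d))) + 1
           for w in vocab}

    def vec(counts):
        return {w: c * idf.get(w, 0.0) for w, c in counts.items()}

    def cosine(a, b):
        dot = sum(a[w] * b.get(w, 0.0) for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(Counter(tokenize(query)))
    scored = [(cosine(q, vec(d)), p) for d, p in zip(docs, paragraphs)]
    return [p for _, p in sorted(scored, key=lambda t: -t[0])]
```

The best match is the first element of the returned list; a summarizer would then be applied to that paragraph alone.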
This is a Streamlit app that paraphrases input text: it rewrites a sentence to keep the same meaning while changing its structure and word choice. It uses Hugging Face's Text-to-Text Transfer Transformer (T5) as the base model.
Text-To-Text Textbots to Demonstrate Output Differences Between Models Trained on Filtered/Unfiltered Datasets for HSS4 - The Modern Context: Select Figures and Topics
An implementation of the GPT (Generative Pre-trained Transformer) model from scratch, including the GPT encoder, which produces Shakespearean text after training on dialogue written by Shakespeare.
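The core building block of a from-scratch GPT is masked (causal) self-attention, where each position may attend only to itself and earlier positions. A minimal single-head sketch in NumPy, with illustrative weight matrices rather than the repository's actual architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(x, Wq, Wk, Wv):
    """Single-head masked self-attention over a (T, d) sequence of embeddings.
    The upper-triangular mask blocks attention to future positions, which is
    what makes autoregressive text generation possible."""
    T, _ = x.shape
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = (q @ k.T) / np.sqrt(k.shape[-1])         # (T, T) attention logits
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # True = future position
    scores = np.where(mask, -1e9, scores)             # mask out the future
    return softmax(scores) @ v                        # weighted sum of values
```

A full GPT stacks this block with feed-forward layers, residual connections, and layer normalization, and trains it to predict the next token of the Shakespeare corpus.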
The web app uses logistic regression on a dataset of 20,000 news articles, achieving 96% accuracy. It employs NLTK for text preprocessing and TF-IDF for feature extraction.
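A dependency-light sketch of the classification step, assuming raw word counts in place of the app's NLTK preprocessing and TF-IDF features (the vocabulary, labels, and function names are toy examples, not the app's code):

```python
import numpy as np

def bag_of_words(texts, vocab):
    """Count features per document; the real app uses TF-IDF instead."""
    return np.array(
        [[t.lower().split().count(w) for w in vocab] for t in texts],
        dtype=float,
    )

def train_logreg(X, y, lr=0.5, steps=500):
    """Binary logistic regression fit by full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient of log-loss w.r.t. w
        b -= lr * (p - y).mean()                # gradient w.r.t. bias
    return w, b

def predict(texts, vocab, w, b):
    p = 1.0 / (1.0 + np.exp(-(bag_of_words(texts, vocab) @ w + b)))
    return (p > 0.5).astype(int)
```

The reported 96% accuracy comes from training a pipeline like this on the 20,000-article dataset; the sketch above shows only the model mechanics on toy data.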