Extremely simple and fast word2vec implementation with Negative Sampling + Sub-sampling (Python, updated Jan 21, 2021)
Using pre-trained word embeddings (FastText, Word2Vec)
Dict2vec is a framework to learn word embeddings using lexical dictionaries.
Web service: finds similar keywords using Tencent's 8-million-word embedding model and the Spotify Annoy engine
Implementing Facebook's FastText in Java
Aspect-Based Sentiment Analysis
PyTorch implementation of Word2Vec (Skip-Gram model), visualizing the trained embeddings with t-SNE
Repository for the experiments described in the paper named "DeepSentiPers: Novel Deep Learning Models Trained Over Proposed Augmented Persian Sentiment Corpus"
📖 📚 📰 Workshop that demonstrates using and analyzing text in R.
TwitPersonality: Computing Personality Traits from Tweets using Word Embeddings and Supervised Learning
Cross-Lingual Alignment of Contextual Word Embeddings
This repository contains source code to binarize any real-value word embeddings into binary vectors.
Improving word embeddings by combining them with their POS (Part-of-Speech) tags.
Storage and retrieval of Word Embeddings in various databases
Network analysis experiment on echo chambers in COVID-19 tweets.
Implementation of various Machine Learning and Deep Learning models for Sentiment Analysis on the 'Sentiment Labelled Sentences Data Set' by University of California, Irvine.
A Persian Word2Vec Model trained by Wikipedia articles
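Several entries above mention negative sampling and sub-sampling. As background, here is a minimal self-contained sketch of skip-gram training with those two tricks; the toy corpus, hyperparameters, and variable names are illustrative assumptions, not code from any of the listed repositories:

```python
# Sketch of skip-gram word2vec with sub-sampling + negative sampling.
# Toy corpus and hyperparameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
corpus = "the quick brown fox jumps over the lazy dog the fox".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
counts = np.array([corpus.count(w) for w in vocab], dtype=float)
freqs = counts / counts.sum()

# Sub-sampling: frequent words are randomly dropped with keep probability
# p = (sqrt(f/t) + 1) * t/f (Mikolov et al.). Real corpora use t ~ 1e-5;
# a larger t is used here so the toy corpus is not emptied out.
t = 0.05
p_keep = np.minimum(1.0, (np.sqrt(freqs / t) + 1) * t / freqs)
kept = [w for w in corpus if rng.random() < p_keep[idx[w]]]

# Negative-sampling noise distribution: unigram counts raised to 0.75.
noise = counts ** 0.75
noise /= noise.sum()

dim, lr, k, window = 16, 0.05, 5, 2
W_in = rng.normal(0, 0.1, (len(vocab), dim))   # target-word embeddings
W_out = rng.normal(0, 0.1, (len(vocab), dim))  # context-word embeddings

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for epoch in range(10):
    for pos, w in enumerate(kept):
        for c in kept[max(0, pos - window): pos + window + 1]:
            if c == w:
                continue
            wi, ci = idx[w], idx[c]
            # One positive pair plus k negatives drawn from the noise dist.
            targets = [ci] + list(rng.choice(len(vocab), size=k, p=noise))
            labels = np.array([1.0] + [0.0] * k)
            vecs = W_out[targets]
            scores = sigmoid(vecs @ W_in[wi])
            grad = (scores - labels)[:, None]
            # Note: duplicate negative indices collapse under fancy-index
            # assignment; acceptable for a sketch.
            W_out[targets] -= lr * grad * W_in[wi]
            W_in[wi] -= lr * (grad * vecs).sum(axis=0)

print(W_in.shape)  # (vocab_size, dim)
```

Sub-sampling shrinks the effective corpus by discarding very frequent words, and negative sampling replaces the full softmax over the vocabulary with k binary logistic updates, which is what makes implementations like the ones listed above fast.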