apply to big data? #7

Open
maodouchen opened this issue Aug 16, 2018 · 1 comment
@maodouchen

Does this apply to big data, e.g. 500 million values?

@dt-rush

dt-rush commented Nov 19, 2018

Consider how the internal data structure (an RB tree of centroids) grows with N and M, where

N = number of input values
M = range of values

If you look at the implementation or the paper, you'll see that, at a simplified level, a data point that falls within a certain distance of an existing centroid is absorbed into it, adding to its weight; otherwise a new centroid is created, or two existing centroids may be merged.
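
For illustration only, here is a toy Python sketch of that simplified rule. It is not the actual implementation: the real t-digest bounds centroid sizes by quantile and keeps the centroids in an RB tree, whereas this sketch uses a made-up fixed distance threshold `d` and a plain list.

```python
class ToyDigest:
    """A deliberately simplified sketch of the absorb-or-create rule above.
    NOT the real t-digest: `d` is an invented absorption threshold, and the
    centroids live in a plain list rather than an RB tree."""

    def __init__(self, d=1.0):
        self.d = d              # maximum distance at which a point is absorbed
        self.centroids = []     # list of [mean, weight] pairs

    def add(self, x):
        # find the nearest existing centroid (linear scan here; the real
        # implementation uses the RB tree for logarithmic lookup)
        nearest = min(self.centroids, key=lambda c: abs(c[0] - x), default=None)
        if nearest is not None and abs(nearest[0] - x) <= self.d:
            # absorb the point: fold it into the centroid's weighted mean
            mean, weight = nearest
            nearest[0] = (mean * weight + x) / (weight + 1)
            nearest[1] = weight + 1
        else:
            # otherwise the point starts a new centroid
            self.centroids.append([x, 1.0])
```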

So, basically, the number of centroids scales with M rather than N. Given that M is fixed, you get roughly constant storage, because individual data points are never stored, only centroid summaries.

You can always run a test to confirm your suspicions about whether a data structure is usable for large input ;)
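
For example, a quick check along those lines, using the toy sketch above (the class, threshold, and value range are all made up for illustration): feed it an increasing number of samples drawn from a fixed range and watch whether the centroid count plateaus instead of growing with N.

```python
import random

random.seed(0)
for n in (1_000, 10_000, 100_000):
    td = ToyDigest(d=1.0)
    for _ in range(n):
        # M is fixed: every value falls in [0, 100)
        td.add(random.uniform(0, 100))
    # the centroid count should hover around M / d regardless of n
    print(f"{n:>7} values -> {len(td.centroids)} centroids")
```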

It's also generally a safe bet that online algorithms (those that process a stream of data) are pursued precisely because storage scaling with input size is a problem at scale, so an online algorithm will usually come with some space guarantee.
