Parallelize generate_hash() with Rayon #162

Open
abonander opened this issue Jul 8, 2019 · 3 comments

abonander commented Jul 8, 2019

This should be pretty trivial actually, lifting the Rng::gen() call out of try_generate_hash() here: https://github.com/sfackler/rust-phf/blob/6e1f6ac9b1f917089a4501ccb32f4f477799e39c/phf_generator/src/lib.rs#L40

Then, instead of the loop, we could do rng.sample_iter(Standard).par_bridge().find_map_first(...) in https://github.com/sfackler/rust-phf/blob/6e1f6ac9b1f917089a4501ccb32f4f477799e39c/phf_generator/src/lib.rs#L22

find_map_first lets us keep determinism by ensuring the return value is the same as it would have been if we iterated sequentially (as long as we're using a deterministic RNG): https://docs.rs/rayon/1.1.0/rayon/iter/trait.ParallelIterator.html#method.find_map_first

Maybe we could have a parallelism heuristic so we don't spin up a thread pool if we don't need it; perhaps we should remain single-threaded unless the number of entries is over a certain threshold or we go a certain amount of time without solving the PHF.
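A minimal sketch of that heuristic (both constants and the function names here are invented for illustration, not tuned values):

```rust
use std::time::{Duration, Instant};

// Hypothetical thresholds: stay single-threaded unless the input is large
// up front, or the sequential search has already burned its time budget
// without solving the PHF.
const ENTRY_THRESHOLD: usize = 10_000;
const TIME_BUDGET: Duration = Duration::from_millis(100);

fn should_go_parallel(n_entries: usize, search_started: Instant) -> bool {
    n_entries > ENTRY_THRESHOLD || search_started.elapsed() > TIME_BUDGET
}

fn main() {
    let started = Instant::now();
    // Small input that solves immediately: no thread pool.
    println!("{}", should_go_parallel(100, started));
    // Large input: pay for the thread pool up front.
    println!("{}", should_go_parallel(50_000, started));
}
```

The time-based arm means small-but-unlucky inputs still eventually get the parallel search, while the common case never touches Rayon.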

I figure this should also be an optional feature, although I'm debating whether this would be on or off by default.

@abonander abonander changed the title Speedup generate_hash() with Rayon Parallelize generate_hash() with Rayon Jul 8, 2019

abonander commented Jul 11, 2019

I originally figured I could get away with something like this:

pub fn generate_hash<H: PhfHash>(entries: &[H]) -> HashState {
    use rand::distributions::Standard;
    use rand::rngs::SmallRng;
    use rand::{Rng, SeedableRng};
    use rayon::prelude::*;

    SmallRng::seed_from_u64(FIXED_SEED)
        .sample_iter(Standard)
        .par_bridge()
        .find_map_first(|keys| hash_entries(entries, keys))
        .expect("failed to solve PHF")
}

However, this doesn't work: H needs to be Sync (so it can be referenced from multiple threads at once). I was thinking I'd instead do this:

pub fn generate_hash<H: PhfHash>(entries: &[H]) -> HashState {
    use rayon::prelude::*;

    SmallRng::seed_from_u64(FIXED_SEED)
        .sample_iter(Standard)
        .map(|keys| hash_entries(entries, keys))
        .par_bridge()
        .find_map_first(try_make_displacements)
        .expect("failed to solve PHF")
}

But the iterator type still needs to be Send, because .par_bridge() requires it. Even after implementing the equivalent with Crossbeam scoped threads, it turns out that try_displace() isn't the slow part of this; hashing is (which I had a hunch about anyway). I would like to be able to do this transparently, but that would require specialization to make it nice. So instead, we should probably just have a separate version that exposes the Sync bound for callers that can use it.

@abonander
Collaborator Author

It turns out my initial assumption was faulty; I assumed the bottleneck was the loop in generate_hash(), i.e. that it took many attempts to find a displacement series that works. However, with some experimentation it turns out that calculating the displacements themselves is what takes the longest, and the loop in generate_hash() usually succeeds on its first iteration. Parallelizing the outermost loop would only help if many displacement attempts failed. Parallelizing the displacement loops themselves would be difficult since they share the entire hashtable as state.
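The shared-state problem can be illustrated with a simplified std-only sketch (this is not phf's actual code; the bucket layout and slot arithmetic are invented): each bucket picks the first displacement whose slots are all free, then claims those slots, so iteration k mutates state that iteration k+1 reads, and the loop carries a sequential dependency.

```rust
use std::collections::HashSet;

// Simplified displacement placement: for each bucket of hashes, find the
// first displacement `d` such that every slot `(h + d) % table_len` is
// still free, then mark those slots occupied. The shared `occupied` set is
// what makes each iteration depend on all previous ones.
fn place_buckets(buckets: &[Vec<usize>], table_len: usize) -> Option<Vec<usize>> {
    let mut occupied = HashSet::new();
    let mut displacements = Vec::with_capacity(buckets.len());
    for bucket in buckets {
        let d = (0..table_len).find(|&d| {
            bucket
                .iter()
                .all(|&h| !occupied.contains(&((h + d) % table_len)))
        })?;
        for &h in bucket {
            occupied.insert((h + d) % table_len);
        }
        displacements.push(d);
    }
    Some(displacements)
}

fn main() {
    // The second bucket's valid displacement depends on which slots the
    // first bucket already claimed.
    let buckets = vec![vec![0], vec![0, 1]];
    println!("{:?}", place_buckets(&buckets, 4)); // Some([0, 1])
}
```

Running the buckets concurrently would need the `occupied` set to be shared and mutable across threads, which is why a lock-free or speculative scheme would be required rather than a straightforward parallel loop.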

@derekdreery

This does feel like one of those times when parallelism is hard. Maybe you could do something with lock-free data structures? In any case, fixing the algorithm's complexity is massively more impactful, and you've already done that successfully :).
