Comparison to alternatives is misleading #185

Open
qdequele opened this issue Aug 3, 2023 · 1 comment


qdequele commented Aug 3, 2023

For example, additional sort orders require duplicate indices in both Algolia and Meilisearch, which increases costs and memory requirements.

While this was once true for Algolia, it no longer is, and it was never the case with Meilisearch.

Based on the project's documented limitations, it seems to be geared for small dataset sizes, and specifically for cases where high availability is not a requirement. Since it does not have multi-node clustering or node-node replication, it is not production-ready yet.

In assessing the readiness of a product, one could argue that it hinges on having users in production who experience satisfactory service quality. By that measure, we currently have a user base of over 20,000 individuals using the open-source edition, along with over 1,000 users on our cloud platform. Most of these users actively run the software in live environments, often handling substantial datasets. Furthermore, we would appreciate it if you could clarify your definition of "large datasets." Notably, we have clients using Meilisearch for datasets exceeding 100 million documents, which is a considerable scale for most users. We would appreciate your perspective on this matter.

jasonbosco (Member) commented

While this was once true for Algolia, it no longer is, and it was never the case with Meilisearch.

Happy to fix this.

Furthermore, we would appreciate it if you could clarify your definition of "large datasets." Notably, we have clients using Meilisearch for datasets exceeding 100 million documents.

This was written when issues about Meilisearch not being able to handle updates as quickly as first-time indexing were common. It sounds like that has been fixed recently? Happy to update.

In assessing the readiness of a product

I'm assuming this was in reference to the "non-production-ready" wording. I disagree with the definition that production-readiness is based on how many users are using the product.

Instead, production-readiness has historically been described as "does this service have a mechanism to avoid being a single point of failure in a stack?" That usually translates to running multiple instances of the service, either in an active-passive configuration or in a distributed multi-node setup, so that even when one node of the service goes down, the others continue servicing traffic.
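To make that concrete, here is a minimal, generic sketch of client-side failover across several nodes of a search service. The node URLs, port, and the `search` helper are hypothetical and not tied to any particular product's API; the only point it illustrates is that with more than one node, a request can still be served when one node is down.

```python
# Minimal sketch of client-side failover across a multi-node setup.
# The node URLs below are hypothetical placeholders, not real endpoints.
import requests

NODES = [
    "http://search-node-1:8080",
    "http://search-node-2:8080",
    "http://search-node-3:8080",
]

def search(path, params, timeout=2):
    """Try each node in turn and return the first successful response."""
    last_error = None
    for node in NODES:
        try:
            resp = requests.get(f"{node}{path}", params=params, timeout=timeout)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:
            # This node is unreachable or erroring; fall through to the next one.
            last_error = exc
    raise RuntimeError("No node could service the request") from last_error
```

With a single node, the same failure would have made the service entirely unavailable; in practice this failover logic usually lives in a load balancer or the official client libraries rather than application code.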
