
Issue in scaling Hazelcast for High Load Scenarios in Node.js Microservices #1509

Open

rajeesab opened this issue Oct 4, 2023 · 0 comments
Hello,

We are currently using Hazelcast in our Node.js microservices architecture to store and retrieve data. However, we've encountered an issue under high load: when multiple users log in and we run load tests, RAM and CPU usage increase significantly, and the cache does not deliver the expected performance benefit.

To address this, we're considering the use of multiple instances of Hazelcast within a cluster. Specifically, we're looking at creating a Hazelcast cluster with multiple instances (Instance 1, 2, ...n) all belonging to the same cluster.

Our primary goal is to improve performance and resource utilization under high-load conditions.

We would appreciate any guidance, best practices, or recommendations on how to effectively scale Hazelcast for such scenarios. Are there any better solutions or strategies to address this issue?

Thank you in advance.

Detailed Explanation:

We want to use multiple Hazelcast instances to address memory management challenges in our environment.
Our specific use case involves a Hazelcast cluster with two instances, Instance A and Instance B, which are part of the same Hazelcast cluster.
Our requirements are as follows:

  1. When we store a key-value pair ('key1', 'value1') into a distributed map using Instance A, we want Hazelcast to make an internal decision about which instance should store this data. This may involve storing it on Instance A.
  2. Regardless of whether we retrieve the value associated with 'key1' using Instance A or Instance B, Hazelcast should seamlessly route the request to the instance that contains the data (in this case, Instance A), ensuring that we consistently obtain 'value1' as the result.
  3. In the event that we shut down or remove Instance A, we expect Hazelcast to intelligently redistribute the data to ensure ongoing availability. Subsequently, we should still be able to access 'key1' and retrieve 'value1' using Instance B.

We are keen to gather insights and best practices on how to effectively configure Hazelcast to meet our memory management requirements.
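To make the three requirements above concrete, here is a minimal sketch of how we imagine exercising them with the Node.js client. It assumes two Hazelcast members of the 'dev' cluster are already running locally on ports 5701 and 5702 (those addresses, the map name 'sharedMap', and the two-client setup are illustrative assumptions, not something from our production config). As we understand it, Hazelcast partitions IMap entries by key hash, so either client should see the same entry regardless of which member owns it.

```javascript
'use strict';
const { Client } = require('hazelcast-client');

(async () => {
  // Two independent clients; in practice each could live in a different microservice.
  // Each is seeded with one member address; the smart client discovers the rest.
  const clientA = await Client.newHazelcastClient({
    clusterName: 'dev',
    network: { clusterMembers: ['127.0.0.1:5701'] }, // assumed local member A
  });
  const clientB = await Client.newHazelcastClient({
    clusterName: 'dev',
    network: { clusterMembers: ['127.0.0.1:5702'] }, // assumed local member B
  });

  // Requirement 1: write through client A. Hazelcast chooses the owning
  // member by hashing the key, independent of which client performed the put.
  const mapA = await clientA.getMap('sharedMap');
  await mapA.put('key1', 'value1');

  // Requirement 2: read through client B. The request is routed to the
  // partition owner, so the same value comes back.
  const mapB = await clientB.getMap('sharedMap');
  console.log('Read via client B:', await mapB.get('key1'));

  // Requirement 3 is handled on the member side: with IMap backup-count >= 1,
  // a backup replica is promoted when the owning member shuts down, so 'key1'
  // remains readable through the surviving member.

  await clientA.shutdown();
  await clientB.shutdown();
})().catch((err) => console.error('Error occurred:', err));
```

The key design point, as we understand it, is that clients do not need to know where a key lives; routing to the partition owner is transparent, and durability across member loss depends on the backup count configured on the members.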

I came across the following code snippet on the Hazelcast website, and I'm wondering whether this 'clusterMembers' configuration will be effective in addressing our specific challenge.

'use strict';
const { Client } = require('hazelcast-client');

(async () => {
  try {
    // Connect to the 'dev' cluster via the listed member addresses.
    const configNode1 = {
      clusterName: 'dev',
      network: {
        clusterMembers: [
          '127.0.0.1:5701',
          '127.0.0.2:5701'
        ]
      },
    };
    const clientNode1 = await Client.newHazelcastClient(configNode1);

    // Put and get through the same distributed map.
    const mapNode1 = await clientNode1.getMap('mapNode1');
    await mapNode1.put('key1', 'value1');
    const value1 = await mapNode1.get('key1');
    console.log('Value from Node 1:', value1);

    await clientNode1.shutdown();
  } catch (err) {
    console.error('Error occurred:', err);
  }
})();
