[Bug]: fatal "InvalidStateError: readyState not OPEN" error after losing connection #3519
I decided to look into the code behind this so I could fix it myself, but it looks like this issue was already solved in 2714e1e, as the refactored startHeartbeating method in Shard.ts now checks the socket's readyState (through a call to …).
The next migration is v19, which is still in development, so it's not released yet, but there is a per-commit version. If you want to update to v19 there are some changes that aren't well documented at the moment (like desired properties), but if you face any issue you can ask on our Discord server (https://discord.gg/ddeno).
Trying to use the latest version (19.0.0-next.6ad4e1d) of the …

(For the moment I've made a fork of the repo published on deno.land at 18.0.1 with my attempt at a fix.)
I will look into the npm error. As for the version before, it's because of desired properties: you need to be explicit about which properties you want on the Message object, the Interaction object, and so on, via bot.transformers.desiredProperties.
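To illustrate the opt-in model behind bot.transformers.desiredProperties, here is a minimal, self-contained sketch. The field names and the transform function below are assumptions for illustration only, not Discordeno's exact API:

```typescript
// Hypothetical sketch of an opt-in "desired properties" model: every field
// defaults to off, and a transformer only copies the fields you enabled.
// The field names and transformMessage are illustrative assumptions.
const desiredProperties = {
  message: { id: false, content: false, author: false } as Record<string, boolean>,
};

// Opt in to exactly the fields your bot reads.
desiredProperties.message.id = true;
desiredProperties.message.content = true;

// The transformer strips everything that was not requested.
function transformMessage(raw: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [key, wanted] of Object.entries(desiredProperties.message)) {
    if (wanted && key in raw) out[key] = raw[key];
  }
  return out;
}
```

The point of the design is that fields you never opted into simply don't exist on the transformed object, which is why code written against v18's fully-populated objects breaks until you enable the properties it reads.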
I've actually just run into a new, similar error when waking up my computer while using Discordeno, and it seems to be in parts of the code that are unchanged in v19:
The relevant parts of send.ts (in v18; v19 still has this same code, refactored as-is into Shard.ts): discordeno/gateway/shard/send.ts Lines 3 to 27 in c522cd9
It seems like when …

Here are the places where the offlineSendQueue is resolved (v19 code, also identical here to v18): discordeno/packages/gateway/src/Shard.ts Lines 429 to 450 in 687c29d
My guess is that the socket can be closed at this point, possibly because of …: discordeno/packages/gateway/src/Shard.ts Lines 340 to 343 in 687c29d
I'm not sure if …

Also, I noticed that the offlineSendQueue is never emptied. The current code is a memory leak, though a very minor one, because I think already-called Promise resolvers don't keep references to anything. And, kind of a nitpick, but an offlineSendQueue.forEach call would be a little better than an offlineSendQueue.map call if you don't want a new result array to be returned from the call.

Here's a suggested fix which clears the queue. The resolve() calls don't do any real work synchronously; they only cause each promise to run its callbacks later in a microtask, so there's no risk that more items get added to the offlineSendQueue between the time we resolve everything in it and the time we empty it.

```diff
   case 'RESUMED':
+    if (!this.isOpen()) return
     this.state = ShardState.Connected
     this.events.resumed?.(this)
     // Continue the requests which have been queued since the shard went offline.
-    this.offlineSendQueue.map((resolve) => resolve())
+    this.offlineSendQueue.forEach((resolve) => resolve())
+    this.offlineSendQueue.length = 0
     this.resolves.get('RESUMED')?.(packet)
     this.resolves.delete('RESUMED')
     break
   case 'READY': {
+    if (!this.isOpen()) return
     // Important for future resumes.
     const payload = packet.d as DiscordReady
     this.resumeGatewayUrl = payload.resume_gateway_url
     this.sessionId = payload.session_id
     this.state = ShardState.Connected
     // Continue the requests which have been queued since the shard went offline.
     // Important when this is a re-identify
-    this.offlineSendQueue.map((resolve) => resolve())
+    this.offlineSendQueue.forEach((resolve) => resolve())
+    this.offlineSendQueue.length = 0
```
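The timing claim above (resolving every queued resolver and then clearing the array in the same synchronous turn is safe, because .then callbacks only run in a later microtask) can be demonstrated with a self-contained snippet:

```typescript
// Demonstrates that resolving a promise does not run its .then callbacks
// synchronously: the queue can be resolved and emptied in one synchronous
// turn, and every continuation still observes the already-cleared queue.
const offlineSendQueue: Array<() => void> = [];
const order: string[] = [];

const p = new Promise<void>((resolve) => offlineSendQueue.push(resolve));
p.then(() => order.push("continuation"));

// Resolve everything, then clear the queue, all in the same synchronous turn.
offlineSendQueue.forEach((resolve) => resolve());
offlineSendQueue.length = 0;
order.push("queue cleared");
// "continuation" is only pushed later, in a microtask.
```

This is why nothing can be appended to offlineSendQueue between the forEach and the `length = 0` line: no user code runs in between.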
At this point you made me notice another problem: if the events we call make synchronous, blocking calls, wouldn't that also cause the promise to resolve while the shard could have been closed by Discord? Because if so, then there is also that to account for.
When …

If there's a risk of that, or we want the code to be robust against that kind of thing, then offlineSendQueue needs to be not a list of promise resolvers that are all resolved together at once, but a list of callbacks that are executed sequentially, with a fresh isOpen() check before each one starts, so that if one callback calls socket.close() the subsequent callbacks stay queued:

```ts
class Shard {
  socket: WebSocket | undefined;

  isOpen() {
    return this.socket?.readyState === WebSocket.OPEN;
  }

  #runWhenOnlineCallbacks: Array<() => void> = [];

  #executeRunWhenOnlineCallbacks() {
    while (this.#runWhenOnlineCallbacks.length) {
      if (!this.isOpen()) break;
      const callback = this.#runWhenOnlineCallbacks.shift()!;
      callback();
    }
  }

  runWhenOnline<T>(callback: () => T, highPriority: boolean): Promise<T> {
    return new Promise((resolve) => {
      const queuedCallback = () => {
        resolve(callback());
      };
      if (highPriority) {
        // Higher priority requests get added at the beginning of the array.
        this.#runWhenOnlineCallbacks.unshift(queuedCallback);
      } else {
        this.#runWhenOnlineCallbacks.push(queuedCallback);
      }
    });
  }

  async send(message: ShardSocketRequest, highPriority: boolean): Promise<void> {
    await this.runWhenOnline(() => {
      this.socket!.send(JSON.stringify(message));
    }, highPriority);
  }

  async discordMessage(message) {
    // ...
    // some awaits etc.
    if (message.type === 'RESUME') {
      this.#executeRunWhenOnlineCallbacks();
    }
    // ...
  }
}
```
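To show concretely that the per-callback isOpen() check keeps later callbacks queued when an earlier one closes the socket, here is a self-contained toy version of the pattern, with a mock socket in place of a real WebSocket (all names here are illustrative, not Discordeno's API):

```typescript
// Toy version of the "run callbacks while open" pattern sketched above.
// A mock socket stands in for a real WebSocket.
class MockSocket {
  readyState: "OPEN" | "CLOSED" = "OPEN";
  close() { this.readyState = "CLOSED"; }
}

class OnlineQueue {
  socket = new MockSocket();
  #callbacks: Array<() => void> = [];

  isOpen() { return this.socket.readyState === "OPEN"; }
  enqueue(cb: () => void) { this.#callbacks.push(cb); }

  flush() {
    // Re-check isOpen() before each callback, so a callback that closes the
    // socket leaves the rest of the queue intact for the next reconnect.
    while (this.#callbacks.length) {
      if (!this.isOpen()) break;
      this.#callbacks.shift()!();
    }
  }

  get pending() { return this.#callbacks.length; }
}

const q = new OnlineQueue();
const ran: number[] = [];
q.enqueue(() => ran.push(1));
q.enqueue(() => { ran.push(2); q.socket.close(); }); // simulates socket.close() mid-flush
q.enqueue(() => ran.push(3)); // should stay queued
q.flush();
```

After flush(), only the first two callbacks have run and the third remains queued, which is exactly the behavior the all-at-once resolver approach cannot give you.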
Runtime: Deno
Runtime Version: 1.42.0
Version: 18.0.1

Describe the bug
Sometimes when I leave a Discord bot running on my computer, let the computer fall asleep, and then wake it back up, I find that the bot has exited with this error output:
This doesn't always happen when my bot is running while my computer falls asleep. Sometimes it works fine.
I have not looked into Discordeno's code, but having previously developed programs using WebSockets and run into the same "readyState not OPEN" error, I know it can be caused by calling .send() on a WebSocket that is still connecting to a server, or whose connection has already been closed. I think it's likely that Discordeno has a heartbeat timer that can call .send() while the connection is closed and not yet reopened, and/or while a previously closed connection is still reconnecting. The fix would be either to make the heartbeat function check the current state of the WebSocket before sending, or to shut the heartbeat timer down entirely once the WebSocket has been closed and restart it once a new connection has been established.
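A minimal sketch of the first proposed fix, a heartbeat tick that checks the socket's state before sending. The function and interface names here are made up for illustration; only the readyState constant and the heartbeat payload shape (Discord gateway opcode 1 with the last sequence number) come from the WebSocket spec and Discord's gateway docs:

```typescript
// Hedged sketch: guard the heartbeat send on the socket's readyState instead
// of assuming the connection is OPEN. heartbeatTick and SocketLike are
// illustrative names, not Discordeno's actual API.
const OPEN = 1; // WebSocket.OPEN per the WHATWG WebSocket spec

interface SocketLike {
  readyState: number;
  send(data: string): void;
}

function heartbeatTick(socket: SocketLike | undefined, sequence: number | null): boolean {
  // Skip this beat entirely if the socket is missing, still CONNECTING (0),
  // or already CLOSING (2) / CLOSED (3); reconnect logic restarts the timer.
  if (socket?.readyState !== OPEN) return false;
  socket.send(JSON.stringify({ op: 1, d: sequence })); // Discord heartbeat opcode
  return true;
}
```

The alternative fix (stopping the timer on close and restarting it on open) avoids the per-tick check but needs careful lifecycle handling so a timer is never left running against a dead socket.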
What should've happened?
Discordeno should always be able to handle a connection closing and reopening without a fatal error.
Code to reproduce the bug