Hello.
I was in the middle of a test and noticed that the SS22 server's RAM usage was much higher than that of the other servers.
Server 1 : xray-core v1.8.9 : Shadowsocks2022 (tcp+udp)
Server 2 : xray-core v1.8.9 : Vless-tcp-tls+nginx
Server 3 : xray-core v1.8.9 : Vless-tcp-reality-vision
The uptimes match and the server load is identical (network load and user counts are equal across the three servers).
Is SS22's RAM usage always this much higher than that of the other VLESS combinations?
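For reference, a minimal sketch of what a Server 1 style inbound typically looks like; the port, method, and key below are placeholders, not taken from the report:

```json
{
  "inbounds": [
    {
      "port": 8388,
      "protocol": "shadowsocks",
      "settings": {
        "method": "2022-blake3-aes-128-gcm",
        "password": "REPLACE_WITH_BASE64_PSK",
        "network": "tcp,udp"
      }
    }
  ]
}
```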
Just like with Trojan, the RAM fills up and the process restarts automatically due to OOM (Out of Memory).
I use VLESS to fall back to several modes, such as Trojan over WS, gRPC, and TCP:
Nginx -> Vless TLS / Non TLS -> Trojan [ WS / gRPC / TCP ]
I don't know how to fix it. I've tested older versions but still get OOM.
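A minimal sketch of one way such a VLESS fallback chain is commonly configured (simplified to a WS path fallback plus a default TCP fallback; the UUID, path, and ports are placeholders):

```json
{
  "inbounds": [
    {
      "port": 443,
      "protocol": "vless",
      "settings": {
        "clients": [{ "id": "REPLACE_WITH_UUID" }],
        "decryption": "none",
        "fallbacks": [
          { "path": "/trojan-ws", "dest": 10001 },
          { "dest": 10002 }
        ]
      },
      "streamSettings": { "security": "tls" }
    }
  ]
}
```

Here requests matching the `/trojan-ws` path are handed to a Trojan-over-WebSocket inbound on port 10001, and everything else falls through to a plain Trojan TCP inbound on port 10002.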
The SS2022 implementation currently in Xray is based on code from sing, and it seems a bit buggy on the Xray side (for example, server-side UDP). v2fly also implemented SS2022 a while ago; please test whether theirs has the OOM problem, and if it doesn't, we can simply port it over.
I was just thinking that XUDP's Global ID doesn't read the SS2022 UDP session ID. Looking back at it, VLESS-to-VLESS doesn't seem to pass the Global ID through either, so that still needs to be written.