
Socket leakage #48

Open
huiyiqun opened this issue Jan 17, 2016 · 8 comments

Comments

@huiyiqun

I'm using snimpy as a client to gather SNMP data, scheduling the tasks with apscheduler and a thread pool.

Recently, I found that our process had run out of file descriptors. While debugging, I found many UDP ports open even when no SNMP task was running.

I'm curious why these UDP ports must be kept open.

What matters more is that the number of open ports is slowly increasing.

I'm monitoring about 70 devices with snimpy once every half hour, but there are 100 open UDP ports now. Every half hour this number increases by one or more. A few days ago, my process exited with "too many open files" after running for three days.
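For anyone hitting the same symptom, here is a minimal sketch (Linux only, using the `/proc` interface) of how I check the fd and socket counts of a running process; the function names are my own, not part of snimpy:

```python
import os

def count_open_fds(pid="self"):
    # Count all open file descriptors of a process (Linux /proc interface).
    return len(os.listdir(f"/proc/{pid}/fd"))

def count_open_sockets(pid="self"):
    # Count fds whose link target is a socket; leaked UDP ports show up here.
    fd_dir = f"/proc/{pid}/fd"
    count = 0
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd was closed between listdir() and readlink()
        if target.startswith("socket:"):
            count += 1
    return count
```

Sampling these two numbers every half hour is enough to see whether the leak tracks the polling cycles.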

@vincentbernat
Owner

Which Snimpy version are you using? Are you using SNMPv2 or SNMPv3? Is the threadpool reusing threads or just spawning new ones?

@huiyiqun
Author

I updated snimpy from 0.8.2 to 0.8.8 several hours ago.

SNMPv2c.

The threadpool reuses threads.

@huiyiqun
Author

I have to update the information I provided:

The number of open UDP ports is no longer increasing. I'm not sure whether the snimpy upgrade is the reason.

What's more: I have 100 worker threads and now 100 open ports. Maybe there is some relation between the two numbers?

@vincentbernat
Owner

Before 0.8.6, Snimpy was reinitializing the whole SNMP stack for each manager. I am not surprised that it could have been the source of your problem. Now, Snimpy does this initialization only once per thread. So, it is expected that you have one open port for each thread (is that a problem for you?).
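The per-thread pattern described above can be sketched with `threading.local()`; here `make_session` is a hypothetical stand-in for snimpy's internal SNMP-stack initialization, not a real snimpy API:

```python
import threading

_tls = threading.local()

def make_session():
    # Stand-in for the expensive per-thread SNMP stack initialization
    # (in Snimpy >= 0.8.6 this also binds one UDP socket per thread).
    return object()

def get_session():
    # Reuse one session per thread instead of one per manager, so a pool
    # of N worker threads holds exactly N sessions (and N UDP ports).
    if not hasattr(_tls, "session"):
        _tls.session = make_session()
    return _tls.session
```

With this pattern the port count is bounded by the thread count, which matches the 100 threads / 100 ports observation above.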

@huiyiqun
Author

It's a problem, but not a serious one.

Is it possible to release unused ports? After all, fds are limited.

@huiyiqun
Author

As far as I can see, manually releasing the resource would also be acceptable.

@vincentbernat
Owner

It's possible, but it conflicts with another bug. Snimpy relies on PySNMP through its high-level interface (the "command generator"). This interface is not thread-safe, so we need one instance of it in each thread. However, initializing this interface is memory- and CPU-hungry. Moreover, it seems it's not possible to reclaim all the resources that were allocated (hence the port leak you observed, plus the memory leak in #33). So, I don't see a way around that.

The default FD limit is 1024, so with 100, you seem to be safe.
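You can check your actual per-process limit with the standard `resource` module (Unix only); on most Linux systems the soft limit is 1024 unless raised:

```python
import resource

# (soft, hard) file-descriptor limits for the current process.
# With ~100 per-thread sockets against a typical soft limit of 1024,
# there is still plenty of headroom.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
```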

Otherwise, maybe you can use a dedicated (smaller) threadpool for SNMP?
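A sketch of that suggestion, assuming the standard `concurrent.futures` pool and a hypothetical `poll_device` task (with snimpy, the `Manager` would be created inside the worker):

```python
from concurrent.futures import ThreadPoolExecutor

# A dedicated, smaller pool for SNMP work caps the number of worker
# threads, and therefore the number of per-thread UDP ports, at max_workers.
snmp_pool = ThreadPoolExecutor(max_workers=10, thread_name_prefix="snmp")

def poll_device(host):
    # Hypothetical polling task; in real code this would build a snimpy
    # Manager for `host` and read the desired OIDs.
    return host

futures = [snmp_pool.submit(poll_device, h) for h in ("r1", "r2", "r3")]
results = [f.result() for f in futures]
```

Since the pool reuses its threads, 70 devices polled through a 10-worker pool would hold at most 10 ports open instead of 100.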

In the future, I may switch back to NetSNMP because of bugs like this (notably for SNMPv3, where I need a command generator for each manager), but also for performance.

@huiyiqun
Author

Well, thanks for your gracious work. I will keep an eye on snimpy.

