If we run 500 scans, pymeasure crashes.
In the device manager, it says that the Python memory usage is several gigabytes.
This seems to be consistent with what we see on other apparatuses elsewhere in my lab, which makes me suspect that it's an issue with pymeasure rather than the specific code for the individual experiments.
Each scan file only takes up 11 kB, so the quantity of data wouldn't be an issue even if all of the scans were in RAM at the same time. Maybe there's some cache for the graphs in the Qt GUI window?
I also suspect that plotting all the graphs is too much.
In my measurement software, pyqtgraph (the plotting library, which pymeasure also uses) can't keep up with plotting new data once I have too many data points, especially if I plot them as dots (graphically, small circles) instead of a line.
I agree, though it seems that even if you clear all the graphs, the data is still cached somewhere: after all the scans finished, the memory usage did not change after hitting "clear all".
It's possible, but it would mean that the separate programs used in different parts of the lab all made the same mistake.
Is there an easy way to check that?
I would try replacing the experiment-specific part with an emulation that generates a similar amount of data, and then check whether you see the same problem. Or use the tutorial example, possibly with some modifications, to reproduce the issue.
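One way to sketch that check without any instruments: replace the scan with a stand-in that generates a similar amount of data, and compare traced memory with and without retaining results, using Python's standard-library `tracemalloc`. This is a minimal illustration, not pymeasure code; `fake_scan`, `measure_growth`, and the thresholds are made up for the example.

```python
import tracemalloc

def fake_scan(n_points=1000):
    """Stand-in for one experiment scan: emits a similar amount of data."""
    return [(i, i * 0.5) for i in range(n_points)]

def measure_growth(n_scans=100, keep_results=True):
    """Run n_scans fake scans and return peak traced memory in bytes.

    keep_results=True holds every scan in a list, emulating a GUI that
    caches each curve; keep_results=False discards each scan immediately.
    """
    tracemalloc.start()
    cache = []
    for _ in range(n_scans):
        data = fake_scan()
        if keep_results:
            cache.append(data)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

leaky = measure_growth(keep_results=True)
clean = measure_growth(keep_results=False)
# If peak memory scales with the number of scans only when results are
# retained, the leak is in whatever keeps references to old curves.
print(leaky > 5 * clean)
```

If the retained run grows with scan count while the discarding run stays flat, the problem is reference retention (e.g. cached plot data) rather than the data volume per scan.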