Thanks for the example. Yes, this is slow, and it is related to part 1 of this issue. The reason is that creating the group dataframes is expensive, something pandas itself goes out of its way to avoid. I plan to look at this problem after the next release, v0.4.0.
Currently, how is the groupby done? Would using pandas's inherent groupby machinery help? That would mean translating `group_by >> summarise` to `pd.groupby.agg`, and `group_by >> mutate` to `pd.groupby.assign`.
It is the straightforward (and naive) way: a loop over the group dataframes. For each group, calculate the summary statistic, then concatenate the individual summaries into the final result. So far this is the same process for all verbs that involve grouped data, i.e. mutate, create, do, arrange and summarise.
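That loop can be sketched roughly like this (`naive_grouped_summarise` is a hypothetical stand-in for the library's internals, not its actual code):

```python
import pandas as pd

def naive_grouped_summarise(df, group_col, func):
    """Naive approach: fully materialise each group dataframe,
    summarise it, then concatenate the per-group results."""
    pieces = []
    # The expensive step: iterating the groupby builds a new
    # dataframe for every group.
    for key, group_df in df.groupby(group_col):
        row = func(group_df)
        row[group_col] = key
        pieces.append(row)
    return pd.concat(pieces, ignore_index=True)

df = pd.DataFrame({"g": ["a", "a", "b"], "x": [1.0, 3.0, 5.0]})
result = naive_grouped_summarise(
    df, "g", lambda d: pd.DataFrame({"mean_x": [d["x"].mean()]})
)
```

The cost is dominated by constructing `group_df` for every group, regardless of how cheap `func` itself is.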
I do not know how easy it would be to translate summarise to pd.groupby.agg, because summarise is the more flexible function, i.e. you can do this:
summarise(c="mean(a+b)", d="mean(np.sin(a+b))")
I do not think that is possible with groupby.agg!
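For comparison, `agg` functions only ever see one column at a time, so in plain pandas a cross-column expression like `mean(a + b)` has to be either precomputed into a temporary column or pushed through the slower `groupby.apply` (a minimal sketch, with made-up data):

```python
import pandas as pd

df = pd.DataFrame({
    "g": ["x", "x", "y"],
    "a": [1.0, 2.0, 3.0],
    "b": [4.0, 5.0, 6.0],
})

# Option 1: precompute the expression into a temporary column,
# then aggregate that column with agg's fast path.
pre = df.assign(tmp=df["a"] + df["b"]).groupby("g")["tmp"].mean()

# Option 2: groupby.apply, which does build each group dataframe
# and so pays the same per-group cost summarise does.
via_apply = df.groupby("g")[["a", "b"]].apply(
    lambda d: (d["a"] + d["b"]).mean()
)
```

Both give the same answer, but neither is a direct `agg` call on the original columns.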
The real problem is that completing a pandas groupby operation to obtain the group dataframes is very slow. groupby.agg labours to avoid it: where possible it substitutes its own cython functions to compute the aggregates, e.g. if your function is np.mean, pandas uses a private, smarter implementation to compute the mean of the groups, whereby it does not fully partition the data.
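A small illustration of that substitution: passing `"mean"` (or `np.mean`) lets pandas dispatch to its cython fast path, while an equivalent lambda forces the generic per-group path. The values are identical; only the execution strategy differs:

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "b", "a", "b"],
                   "x": [1.0, 2.0, 3.0, 4.0]})
gb = df.groupby("g")["x"]

# Recognised aggregation: pandas dispatches to its cython mean,
# never materialising the per-group data as separate objects.
fast = gb.agg("mean")

# Opaque callable: pandas must call it once per group,
# handing it each group's values.
slow = gb.agg(lambda s: s.mean())
```

On large inputs with many groups, the first form is typically much faster even though both compute the same statistic.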
I think part of the solution will be to recognise the simple uses of summarise and translate those to groupby.agg.
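One possible shape for such a translation, sketched with a hypothetical `translate_summarise` helper (not part of the library) that only recognises the simple `func(column)` pattern and raises for anything more complex, which would fall back to the slow per-group path:

```python
import re
import pandas as pd

# Hypothetical: match expressions of the simple form "func(column)".
SIMPLE = re.compile(r"^(\w+)\((\w+)\)$")

def translate_summarise(df, group_col, **exprs):
    """Translate simple summarise expressions into a single
    groupby.agg call using pandas named aggregation."""
    agg_spec = {}
    for out_name, expr in exprs.items():
        m = SIMPLE.match(expr.replace(" ", ""))
        if m is None:
            # e.g. "mean(a+b)" would land here and need the slow path.
            raise NotImplementedError(f"no fast path for {expr!r}")
        func, col = m.groups()
        agg_spec[out_name] = (col, func)
    return df.groupby(group_col).agg(**agg_spec).reset_index()

df = pd.DataFrame({"g": ["a", "a", "b"], "x": [1.0, 3.0, 5.0]})
out = translate_summarise(df, "g", avg_x="mean(x)")
```

The key point is that the recognised cases never touch individual group dataframes; pandas handles them internally.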
Why is groupby + summarize so slow compared to plain pandas? Please find the reproducible example below.