Doctests for RMSE and MeanPairwiseDistance #2307
Conversation
# create default evaluator for doctests
def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

default_evaluator = Engine(process_function)
I have added a global for the default evaluator that @vfdev-5 was referring to in this comment.
I also felt we could mention it somewhere... What would be the right place to mention it, if necessary?
Thanks :)
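For context, a doctest built on this shared evaluator would look roughly like the sketch below. The tensors and the 'rmse' attach name are illustrative, not taken verbatim from the PR:

import torch
from ignite.engine import Engine
from ignite.metrics import RMSE

# shared default evaluator, as in the setup above
def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

default_evaluator = Engine(process_function)

# attach the metric and run on a single (y_pred, y) batch
metric = RMSE()
metric.attach(default_evaluator, 'rmse')
preds = torch.tensor([1.0, 2.0, 3.0, 4.0])
target = preds * 0.75
state = default_evaluator.run([[preds, target]])
print(state.metrics['rmse'])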
@DevPranjal Thank you. I think we already discussed that point; the idea is to have simple, copy/paste-able pieces of code, even if a lot of code could be reused.
@vfdev-5 what do you think?
@sdesrozis I do not see the point in cluttering API examples with other "unrealistic" code just to make them executable.
We can, however, publicly display doctest_global_setup to users so that they can reuse it if they need reproducible code.
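For reference, doctest_global_setup is the sphinx.ext.doctest option holding a Python snippet that runs before every doctest block. A minimal sketch of how the shared evaluator could live there (the conf.py contents are illustrative, not the PR's actual configuration):

# conf.py (sketch), assuming 'sphinx.ext.doctest' is in extensions
doctest_global_setup = '''
import torch
from ignite.engine import Engine

def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

default_evaluator = Engine(process_function)
'''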
The point is that the user can't copy/paste the code to test it. I thought we wanted to provide users with easily reproducible examples.
Anyway, let's choose whether we hide things and let's deploy 😊
I know, but it means for this metric we'll have the following:
import torch
from torch import nn, optim
from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *

manual_seed(666)

def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

default_evaluator = Engine(process_function)

# --- API example
metric = MeanPairwiseDistance(p=4)
metric.attach(default_evaluator, 'mpd')
preds = torch.Tensor([
    [1, 2, 4, 1],
    [2, 3, 1, 5],
    [1, 3, 5, 1],
    [1, 5, 1, 11]
])
target = preds * 0.75
state = default_evaluator.run([[preds, target]])
print(state.metrics['mpd'])
Half of the snippet is just preparation code.
We are talking about roughly 30 lines of code, and using a lambda function we could reduce this. I'm not sure it's worth hiding rather than exposing.
Anyway, that's a matter of taste.
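As a rough illustration of the lambda shortening (my reading of the suggestion, not code from the PR): since process_function just returns the batch unchanged, the evaluator can be built in one line:

from ignite.engine import Engine

# the batch is already the (y_pred, y) pair, so pass it through unchanged
default_evaluator = Engine(lambda engine, batch: batch)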
LGTM, thanks @DevPranjal!
* Add doctests for RMSE and MeanPairwiseDistance
* Add doctests for RMSE and MeanPairwiseDistance
* Make metric name consistent
Addresses #2265
Description: Added doctests for RMSE and MeanPairwiseDistance
Check list: