
Interpretation of calculateScore Function. #45

Open
wienans opened this issue Jan 22, 2022 · 2 comments

@wienans commented Jan 22, 2022

Hi koide,

thank you for your great work. I am using your hdl_localization together with this library and wanted to get a bit more information from the result about "how good" the alignment is.

In the process I came across your calculateScore function and PCL's TransformationProbability function (renamed to likelihood in newer versions).
If I understood the comments in the PCL code and in yours correctly, both should implement Magnusson 2009, eq. (6.9).
In your case you use the negative log likelihood, whereas PCL uses the likelihood, right?

I then used the calculateScore function on the aligned point cloud that you feed into https://github.com/koide3/hdl_localization/blob/1cdef711d66fdb4ec8969623dd2568eb3f45ce25/apps/hdl_localization_nodelet.cpp#L441 , which I hope is the correct usage.
But I ran into trouble interpreting the result.

Your comment on the function suggests lower equals better, but I am not sure this really applies. For visually good alignments I got scores of 0.2-0.3, but if I moved the robot a bit the score could also drop close to zero.
For no alignment at all (out of the map) the result is 0, which is no problem, but I was wondering what negative values may mean, since I was only familiar with the negative log likelihood producing positive results.
At first I only got negative results when I completely misplaced the robot in the map, which seemed to make sense, but then I also got negative values while driving with correct localization results.

Maybe you can help me interpret the results of the scoring function, and especially why it becomes negative in the first place.

thank you in advance,
Sven

@koide3 (Owner) commented Jan 24, 2022

Hi @wienans ,

What the function calculates is not the true negative log likelihood but an approximation of it, which can be negative. If you want to keep it positive, you can remove gauss_d3_, which adds a constant offset to the score function (see the description just below eq. (6.9)).

```cpp
double score_inc = -gauss_d1_ * e_x_cov_x - gauss_d3_;
```

Anyway, I think this metric is not useful for measuring the "goodness" of a registration result, because it cannot take voxel-point correspondence changes into account. For example, if the source cloud is far away from the target cloud, no points will have a valid corresponding voxel, and this metric, which calculates a "point-to-voxel distance", doesn't make sense.

@wienans (Author) commented Jan 24, 2022

Hi @koide3,
Thanks for your answer.
Regarding your last example, I think all of the metrics I tested sadly fail in that case, e.g. FitnessScore and TransformationProbability.

My goal is actually to filter out this problem. I found that if the inlier fraction drops due to unmapped objects, the scan matcher sometimes fails to find the correct (or even a partly correct) solution. It behaves as if the velocity instantly dropped to zero: the next result is very close to the previous one, and then it slowly drifts to the side. This is not specific to the DIRECT1 method; I could trigger it with DIRECT7 too.
