
Problem with the “full context embeddings” implementation #4

Open
cultivater opened this issue Sep 4, 2018 · 3 comments
Comments

@cultivater

cultivater commented Sep 4, 2018

First, thank you for your code.
However, I am puzzled by your “full context embeddings” implementation. In the paper, g' and f' are computed by different procedures, but here you just stack g' and f' and feed them into a single LSTM.
Perhaps this is why the accuracy is low on miniImageNet?
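For reference, the f' embedding in the Matching Networks paper (Vinyals et al., 2016, Appendix A.2) is an LSTM that attends over the support-set embeddings for a fixed number of "processing" steps. Below is a minimal PyTorch sketch of that attention LSTM; the class and argument names are illustrative (not from this repository), and it concatenates the readout r to the input of the LSTM cell, which is a common approximation of the paper's formulation where r is appended to the hidden state:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLSTM(nn.Module):
    """Sketch of the f' fully conditional embedding from Matching Networks
    (Vinyals et al., 2016): an LSTM with read-attention over the support-set
    embeddings g(S). Names here are hypothetical, not from this repo."""

    def __init__(self, dim, steps=3):
        super().__init__()
        self.steps = steps
        # input at each step = f(x) concatenated with the readout r
        self.cell = nn.LSTMCell(input_size=2 * dim, hidden_size=dim)

    def forward(self, f_x, g_support):
        # f_x: (batch, dim) query embeddings; g_support: (n_support, dim)
        batch, dim = f_x.shape
        h = torch.zeros(batch, dim)
        c = torch.zeros(batch, dim)
        r = torch.zeros(batch, dim)
        for _ in range(self.steps):
            h, c = self.cell(torch.cat([f_x, r], dim=1), (h, c))
            h = h + f_x                                  # skip connection to f(x)
            a = F.softmax(h @ g_support.t(), dim=1)      # attention over support set
            r = a @ g_support                            # readout fed into next step
        return h
```

The key point is that f' depends on the whole support set through the attention readout, which is exactly what a plain LSTM over stacked features does not capture.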

@sunbear616

I have the same question. I don't know why, but most PyTorch implementations do not implement both g' and f' as in the paper; they just use g, I think.
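For completeness, the g' embedding in the paper (Appendix A.1) is a bidirectional LSTM run over the support set, with a skip connection back to g(x_i). A minimal sketch of that idea in PyTorch, with illustrative names (none of these come from this repository), might look like:

```python
import torch
import torch.nn as nn

class FullContextSupportEncoder(nn.Module):
    """Sketch of the g' full-context support embedding from Matching Networks
    (Vinyals et al., 2016): a bidirectional LSTM over the support set with a
    skip connection, g'(x_i) = h_fwd_i + h_bwd_i + g(x_i).
    Names here are hypothetical, not from this repo."""

    def __init__(self, dim):
        super().__init__()
        self.lstm = nn.LSTM(input_size=dim, hidden_size=dim,
                            bidirectional=True, batch_first=True)

    def forward(self, g_support):
        # g_support: (n_support, dim) -- the support set treated as one sequence
        out, _ = self.lstm(g_support.unsqueeze(0))      # (1, n_support, 2*dim)
        h_fwd, h_bwd = out.squeeze(0).chunk(2, dim=1)   # forward/backward halves
        return h_fwd + h_bwd + g_support                # skip connection to g
```

So g' embeds each support example conditioned on the whole support set, while f' (the attention LSTM) embeds the query conditioned on the support set; they are two separate modules, not one shared LSTM.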

@sunbear616

Did you implement this part? May I see your code?

@AugusYaoI
Copy link

I want to ask the same question. I doubt that this code is consistent with the paper, and I hope the author can answer this.
