
Part2_Music_Generation: model prediction inputs #112

Open
maple24 opened this issue Aug 7, 2022 · 0 comments

This may come a bit late, but it is also a good chance to thank Alex and Ava for this great course.

Here is my question:

When I went through the Music_Generation code, the part below was quite confusing to me. Only one character is passed to the model at each step; although that input is updated every iteration, the previous characters seem to be missing from it. (I think this may also be part of the reason why the generated songs are often invalid.)

# Pass the prediction along with the previous hidden state
# as the next inputs to the model
input_eval = tf.expand_dims([predicted_id], 0)

So I save the initial input and concatenate each output onto it to form the next input. This makes more sense to me, and the results start to be much better, but I am not sure whether I did something wrong or whether there are better ways, such as passing the previous state as the next initial state.

output_eval = tf.expand_dims([predicted_id], 0)
input_eval = tf.concat([input_eval, output_eval], 1)
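For comparison, here is a sketch of what the concatenation version computes, again with a hypothetical toy model rather than the TensorFlow one: the model is treated as stateless and re-reads the entire growing sequence at every step, so each step does work proportional to the sequence length so far, whereas a stateful single-character loop does constant work per step.

```python
# Toy sketch of the concatenation approach: a STATELESS predictor that
# must re-read the whole growing sequence on every step. On this toy
# rule it produces the same tokens as the stateful loop, just with more
# work per step.

def stateless_predict(sequence):
    # same toy rule as before: next token depends on the full history
    return sum(sequence) % 5

input_eval = [1]                  # seed sequence
for _ in range(4):
    predicted_id = stateless_predict(input_eval)
    input_eval = input_eval + [predicted_id]   # concat output onto input

print(input_eval)                 # -> [1, 1, 2, 4, 3]
```

If the concatenation version really does fix the output quality, one possible explanation is that the model's state was being reset between calls in the original loop, in which case feeding the full sequence restores the missing history by brute force.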
