The algorithm generates the sentence using beam search built on a priority queue (line 96). The while loop runs until the top-k sentences have been generated; a sentence is considered finished once the model outputs the EOS token (line 103). On each iteration we pop the hypothesis with the highest probability (line 108) and pass it through the decoder, conditioned on its history (line 121).
Given the beam width, we push the new hypotheses onto the queue and move on to the next most probable path.
In line 147 we traverse the queue backwards to reconstruct the most probable sentences.
Could you please describe beam search decoding in more detail?