Your RAFT paper is truly fantastic. It addresses the issues we've encountered when deploying generative AI applications on enterprise data. I have two questions regarding the data used in RAFT:
How many data samples did you use to fine-tune Llama 2?
Could you provide some examples of the real data you used to train Llama 2? Specifically:
Example of data usage: P% of data: Q + D* + D2 + ... + Dk → A*
Example of data usage: (1 − P)% of data: Q + D1 + D2 + ... + Dk → A* (when there is no oracle document, what does A* look like? I can't imagine it's simply an answer like "no information to answer")
Answering these questions would greatly assist me in training our LLM. Thank you very much.
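To make the two data mixtures above concrete, here is a minimal sketch of how such a RAFT-style training example could be assembled. This is a hypothetical helper, not the authors' code; the function name, the `p` and `k` parameters, and the prompt layout are all assumptions based on the P% / (1 − P)% split described in the question.

```python
import random

def build_raft_example(question, oracle_doc, distractor_docs, answer,
                       p=0.8, k=4, rng=random):
    """Assemble one RAFT-style training example (illustrative sketch only).

    With probability p, the oracle document D* is included alongside
    k - 1 distractors; otherwise the context holds k distractors only.
    In both cases the target completion is the same gold answer A*.
    """
    if rng.random() < p:
        # P% of data: Q + D* + distractors -> A*
        docs = [oracle_doc] + rng.sample(distractor_docs, k - 1)
    else:
        # (1 - P)% of data: Q + distractors only -> A*
        docs = rng.sample(distractor_docs, k)
    rng.shuffle(docs)  # so the oracle's position carries no signal
    prompt = "\n\n".join(docs) + "\n\nQuestion: " + question
    return {"prompt": prompt, "completion": answer}
```

Under this reading, the answer A* is unchanged in the no-oracle case, which trains the model to produce the gold answer even when retrieval returns only distractors, rather than emitting a "no information" refusal.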