How do we know that the data an LLM is being trained on is accurate? One approach is to borrow the discriminator from Generative Adversarial Networks (GANs), which scores how plausible an input is (in the image-generation setting, a GAN discriminator judges whether a generated human face really looks like a human face). Could a discriminator work on text data, using logic, reasoning, or a knowledge graph to detect what is true?
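To make the idea concrete, here is a minimal sketch of such a "discriminator" in pure Python, assuming claims can be expressed as (subject, relation, object) triples. A real GAN-style discriminator would be a trained neural classifier over text; this toy version simply checks membership in a hand-built knowledge graph, and all triples and function names below are hypothetical illustrations.

```python
# Toy "discriminator" for factual claims: instead of a learned network,
# it scores a claim by checking it against a small knowledge graph.
# All triples here are illustrative, not a real dataset.

KNOWLEDGE_GRAPH = {
    ("Paris", "capital_of", "France"),
    ("Water", "composed_of", "H2O"),
}

def discriminate(subject: str, relation: str, obj: str) -> float:
    """Return a truth score in [0, 1] for a (subject, relation, object) claim.

    A trained discriminator would output a learned probability; this sketch
    returns 1.0 for triples present in the knowledge graph and 0.0 otherwise.
    """
    return 1.0 if (subject, relation, obj) in KNOWLEDGE_GRAPH else 0.0

print(discriminate("Paris", "capital_of", "France"))   # supported claim -> 1.0
print(discriminate("Paris", "capital_of", "Germany"))  # unsupported claim -> 0.0
```

In a full system the hard part is the gap this sketch skips: extracting triples from free text and training the discriminator on contrastive true/false examples rather than exact-match lookup.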