Is there an existing issue for this?
Type of Change
Brand new page
URL of existing page
No response
Context for content change
Do we need a robots.txt to stop AI crawlers from training on VC content? We have a lot of content here and are adding stuff constantly.
We also have a list of members' names and socials, plus now approximate locations. Do we need to disallow OpenAI and Bard from training on what our members are writing?
Proposed solution
I could write a robots.txt file that tells these crawlers not to read our files.
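For context, robots.txt is just a plain-text file of per-crawler rules served at the site root. A minimal sketch of what this could look like, assuming we start with OpenAI's documented GPTBot user-agent token:

```
# robots.txt — sketch only
# Block OpenAI's training crawler from the whole site
User-agent: GPTBot
Disallow: /
```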
Resources that can help
No response
Collaborators
No response
Code of Conduct
I've read the Code of Conduct and understand my responsibilities as a member of the Virtual Coffee community
@ClJarvis In general, we do want robots to crawl the site so that we show up on Google, etc. However, I hadn't really thought about AI. If you can find some documentation on how to do that (tell OpenAI and Bard not to crawl while allowing other bots), I'd definitely consider this.
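For what it's worth, both vendors publish opt-out tokens, so this can be done without touching regular search bots: OpenAI's training crawler identifies itself as GPTBot, and Google honors a Google-Extended token that only controls Bard / Vertex AI training (Googlebot and Search indexing are unaffected). A sketch of a robots.txt along those lines:

```
# Sketch: block AI-training crawlers only; all other bots stay allowed

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Google's AI-training token (Bard / Vertex AI); does not affect Googlebot or Search
User-agent: Google-Extended
Disallow: /

# Default: every other crawler may index everything
User-agent: *
Disallow:
```

Common Crawl's CCBot could be added the same way if we also want to opt out of datasets built on Common Crawl. Either way, robots.txt is advisory, so it only stops crawlers that choose to honor it.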
I have updated robots.txt on my own sites to forbid the AI crawlers. I would recommend it because content on VC properties is copyrighted. The VC Code is licensed under Creative Commons, so I personally would not recommend contributing to any AI unless we intentionally want to.