calico-1226

Highlights

  • Pro

Organizations

@PKU-Alignment
Pinned

  1. PKU-Alignment/safe-rlhf (Public)

    Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

    Python 1.2k 106

  2. PKU-Alignment/omnisafe (Public)

    OmniSafe is an infrastructure framework for accelerating safe reinforcement learning (SafeRL) research.

    Python 867 126

  3. PKU-Alignment/beavertails (Public)

    BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).

    Makefile 85 3