RAID is the largest and most challenging benchmark for machine-generated text detectors. (ACL 2024)
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
[ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
AIShield Watchtower: Dive Deep into AI's Secrets! 🔍 Open-source tool by AIShield for AI model insights & vulnerability scans. Secure your AI supply chain today! ⚙️🛡️
A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [IJCV2023]
A reading list for large models safety, security, and privacy.
Official source code for the paper "Exploring Effective Data for Surrogate Training Towards Black-box Attack", accepted at CVPR 2022
Quadratic Upper Bound Loss for Robust Adversarial Training
a collection of small machine learning projects
Adversary Emulation Framework
The Security Automation Toolkit
Official implementation of Black-box Universal Adversarial Attacks with Bayesian Optimization.
PAL: Proxy-Guided Black-Box Attack on Large Language Models
This project evaluates the robustness of image classification models against adversarial attacks using two key metrics: Adversarial Distance and CLEVER. The study employs variants of the WideResNet model, including a standard and a corruption-trained robust model, trained on the CIFAR-10 dataset. Key insights reveal that the CLEVER Score serves as…
FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids
A list of recent papers about adversarial learning
Generate adversarial patches against YOLOv5 🚀
PyTorch implementation of adversarial attacks [torchattacks].
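As a minimal illustration of the kind of attack such libraries implement, here is a sketch of the Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression model in numpy. The weights, input, and `fgsm_perturb` helper are illustrative assumptions, not part of any of the listed repositories:

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: take a step of size eps in the sign direction of the loss gradient."""
    return x + eps * np.sign(grad)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression model (weights chosen arbitrarily for illustration).
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1  # true label

# Gradient of the binary cross-entropy loss w.r.t. the input x is (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

x_adv = fgsm_perturb(x, grad_x, eps=0.1)

# The perturbed input should have higher loss, i.e. lower p for label y = 1.
p_adv = sigmoid(w @ x_adv)
assert p_adv < p
```

Libraries such as torchattacks wrap the same idea (and iterative variants like PGD) behind attack classes that take a model and return perturbed batches.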
DPLL(T)-based Verification tool for DNNs
🎯 Enhanced Adversarial Patch: Extends adversarial attacks on TartanVO model with a new loss function, rotation attack, and momentum optimizer