Adversarial testing and robustness framework for AI models with 25 attacks (character/word/semantic perturbations, prompt injection, jailbreak, extraction, inversion), defenses (detection/filtering/sanitization), certified robustness metrics, and attack composition.
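As an illustration of the attack families listed above — and only an illustration, not this package's API — a character-level perturbation can be as simple as randomly swapping adjacent characters in the input. The function name and parameters below are hypothetical:

```python
import random

def char_swap(text: str, rate: float = 0.1, seed: int = 0) -> str:
    """Generic character-level perturbation: randomly swap adjacent
    characters at the given rate. Illustrative only — not the API of
    the package described on this page."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

# A robustness harness would feed both the original and the perturbed
# input to a model and compare predictions.
print(char_swap("classify this sentence", rate=0.3))
```

Word- and semantic-level perturbations work the same way at coarser granularity (synonym substitution, paraphrasing), while prompt injection and jailbreak attacks target the instruction channel rather than the input text itself.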

Dependency Config

Dependency snippets are available for mix.exs, rebar.config, Gleam, and erlang.mk.
Package Details

Downloads (chart: last 30 days, all versions)

- this version: 84
- yesterday: 0
- last 7 days: 8
- all time: 302

Last Updated: Dec 29, 2025

License: MIT

Build Tools: mix

Publisher: nshkrdotcom