
LLM Testing

VERIFIED by community

(0 reviews)
66,319 installs
Updated Mar 2026

Description

Provides curated prompts to test LLM security, bias, privacy, alignment, and robustness for authorized AI safety and red team assessments.

Security Analysis

⚠️ Warning: 63/100

Open Source

Code is publicly available for audit.

Community Verified

Reviewed by the ClawHub community.

User Reviews

No ratings yet

No reviews yet.

Community Signal

ClawHub Score: 2.82 / 5.00
📥 Installs: 66,319
🔄 Last Update: Mar 21, 2026
🟢 Actively maintained (9d ago)
