Superagent

 »  AI Agents  »  SDK  »  Superagent  URL Share on

 Repository   6,496  958
PIVOTED from agent framework to AI safety/guardrails — protects AI apps against prompt injections, data leaks
ai-safety guardrails prompt-injection data-leak-prevention pivoted
TypeSDK
MaturityBeta
Status Maintained

Key Features

Plugin System Guardrails Self-Hosted

Community Feedback

  Strengths

  • Important AI safety niche
  • Open-source
  • Low-latency (50-100ms)
  • No data leaves environment

  Weaknesses

  • Pivoted significantly from original agent framework concept
  • Smaller community
  • Documentation may lag behind current state

Superagent Details

OrganizationSuperagent AI
Organization TypeCompany
CategorySDK
SubcategorySingle agent
DeploymentSDK/Framework
Primary LanguageTypeScript
LicenseMIT
Commercial UseUnrestricted
GitHub Stars6,496
GitHub Forks958
Release CadenceIrregular
MaturityBeta
Pricing ModelFree
Free TierFully free and open-source (MIT)
Self-Hosted FreeYes
Cost Modelfree + LLM costs
Community SizeSmall-medium (6.5k stars)
Community ActivityModerate
SentimentMixed
GPU RequiredNo
Research Date2026-03-24

Use Cases

When to Use

Best for: Adding safety guardrails to AI agent applications

Avoid when: Looking for a general agent framework — it pivoted

  Back to Agent Directory
Our Social Media →  
Original data from HuggingFace, OpenCompass and various public git repos.
Check out Ag3ntum — our secure, self-hosted AI agent for server management.
Release v20260328a