Llmworks

Status: active

Llmworks is an LLM security and testing platform designed to evaluate and stress-test large language model deployments. The frontend is built with React, TypeScript, and Vite, while the backend relies on Supabase for data persistence and authentication. The platform integrates both OpenAI and Anthropic APIs, enabling security testing workflows that can target multiple model providers from a single interface.
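A multi-provider setup like this is typically built around a common client interface so that a single probe can be dispatched to each provider. The following is a minimal sketch of that idea; the names (ModelProvider, StubProvider, runProbe) are illustrative assumptions, not taken from the Llmworks codebase, and the stub stands in for real OpenAI/Anthropic clients.

```typescript
// Hypothetical provider-agnostic client interface, sketched to show how
// one security probe could target multiple model providers at once.
// All names here are assumptions, not from the Llmworks repo.

interface CompletionRequest {
  model: string;
  prompt: string;
}

interface CompletionResponse {
  provider: string;
  text: string;
}

interface ModelProvider {
  name: string;
  complete(req: CompletionRequest): Promise<CompletionResponse>;
}

// A stub provider standing in for a real OpenAI or Anthropic client.
class StubProvider implements ModelProvider {
  constructor(public name: string) {}
  async complete(req: CompletionRequest): Promise<CompletionResponse> {
    return { provider: this.name, text: `[${this.name}] echo: ${req.prompt}` };
  }
}

// Dispatch one security-test prompt to every configured provider.
async function runProbe(
  providers: ModelProvider[],
  prompt: string,
): Promise<CompletionResponse[]> {
  return Promise.all(
    providers.map((p) => p.complete({ model: "default", prompt })),
  );
}

const providers: ModelProvider[] = [
  new StubProvider("openai"),
  new StubProvider("anthropic"),
];
```

In a real deployment each StubProvider would wrap the corresponding vendor SDK behind the same interface, so the testing workflows stay provider-neutral.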

Originally developed as part of the nexus-framework ecosystem, Llmworks has since been separated into a standalone project with its own repository at alawein/llmworks. The architecture supports containerized deployment via Docker, making it straightforward to run in isolated environments. Production observability is handled through Sentry monitoring for error tracking and performance telemetry.
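The note does not include the actual Docker configuration; a minimal sketch for containerizing a Vite/React frontend might look like the multi-stage build below. Stage names, paths, and the nginx serving layer are assumptions, not details from the repository.

```dockerfile
# Build stage: compile the Vite/React app (paths are illustrative).
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve stage: ship only the static build output behind nginx.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

A multi-stage build like this keeps Node and the build toolchain out of the final image, which suits the isolated environments the Docker setup targets.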

Testing infrastructure includes Playwright-based end-to-end tests that exercise the full application stack, from the browser layer down to API interactions. Python rounds out the stack, covering backend tooling and scripts alongside the TypeScript/React/Vite UI and the Supabase data layer.
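An end-to-end test of the kind described might be shaped like the sketch below, driving the UI in a browser and asserting on the API traffic it triggers. The route, selectors, and endpoint path are hypothetical placeholders, not taken from the Llmworks test suite.

```typescript
// Hypothetical Playwright e2e spec; run with `npx playwright test`.
// URL, selectors, and the /api/runs endpoint are assumptions.
import { test, expect } from "@playwright/test";

test("security test run completes against a configured provider", async ({ page }) => {
  // Browser layer: load the app and start a test run from the UI.
  await page.goto("http://localhost:5173/");
  await page.getByRole("button", { name: "New test run" }).click();

  // API layer: wait for the backend call the run triggers and check it.
  const response = await page.waitForResponse(
    (r) => r.url().includes("/api/runs") && r.ok(),
  );
  expect(response.status()).toBe(200);

  // UI layer: confirm the completed run is reflected on screen.
  await expect(page.getByText("Run complete")).toBeVisible();
});
```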

Llmworks references the Alembiq project (repo: neper), which focuses on the complementary concern of LLM training, alignment, and evaluation. Together, the two projects cover the lifecycle from model preparation through deployment security validation.