
AI Red-Team Cybersecurity Pentest Playbook Author
Upwork
Remote
Posted 15 hours ago
About
We need an experienced AI/LLM security professional to create a reusable boilerplate red-team playbook that our team can apply as we deploy and update new AI systems. This engagement is documentation and knowledge transfer only – no penetration testing or live attacks will be performed.

Objectives
- Provide a complete set of templates, checklists, and instructions so our internal security team can conduct AI red-team and penetration tests in the future.
- Align all content with the OWASP Top 10 for LLM Applications and MITRE ATLAS adversarial tactics.
- Deliver a reporting framework and sample reports so our team can produce professional outputs for executives and auditors.

Deliverables & Acceptance Criteria

Red-Team Boilerplate Playbook
- Markdown format.
- Covers environment scoping, data classification, threat mapping, and planning for LLM-specific testing.
- Includes at least 10 mapped techniques drawn from the OWASP Top 10 and MITRE ATLAS, with tool suggestions and references.
- Provides the level of detail a qualified tester would need to execute a 2-week AI penetration test without further guidance.

Report Templates (2 Examples)
Two fully drafted Markdown reports:
- Technical Report – deep-dive findings, methodology, and remediation guidance.
- Executive/Board Summary – risk posture, high-level findings, and business impact.
Each template should include sample tables, scoring rubrics, and evidence sections.

Knowledge-Transfer Package
- All templates, spreadsheets, and supporting documents.
- A live, recorded walkthrough session (approx. 1–2 hours) explaining structure, usage, and how to adapt the playbook for new systems.

Documentation Quality
- Clear, concise, and ready to drop into an internal wiki or Git repository.
- Cross-references to the relevant OWASP/ATLAS techniques for every test case.
- Each section reviewed for completeness so the internal team can start testing immediately.

Qualifications
- 5+ years in offensive security / red teaming, with specific experience in AI/LLM security.
- Proven ability to produce professional security documentation and reporting templates.
- Familiarity with common AI security tools (e.g., Garak, LLM Guard, fuzzing frameworks) and their role in a pentest plan.

Engagement Details
- Type: Fixed-price or milestone-based.
- Duration: ~4–6 weeks total, including the live walkthrough.
- Language: English.
- Collaboration: GitHub or similar for document delivery; NDA required.

Application Instructions
Please provide:
- A brief summary of your AI/LLM security and technical-writing experience.
- Samples of previous security playbooks, runbooks, or red-team documentation (sanitized as needed).
- A high-level outline of how you would structure the playbook and report templates.

Important: This engagement is purely documentation and knowledge transfer. No live testing or attacks on external systems will be performed.
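For applicants, a sketch of what one mapped playbook entry could look like. The OWASP identifier (LLM01: Prompt Injection) and the ATLAS technique (AML.T0051, LLM Prompt Injection) are drawn from the published OWASP LLM Top 10 and MITRE ATLAS lists; the field layout itself is only a suggested structure, not a prescribed format:

```markdown
## Technique: Prompt Injection

- OWASP LLM Top 10: LLM01 – Prompt Injection
- MITRE ATLAS: AML.T0051 – LLM Prompt Injection
- Suggested tooling: Garak prompt-injection probes; manual adversarial prompts
- Scope notes: which model endpoints, system prompts, and data sources are in scope
- Test procedure: step-by-step instructions a qualified tester can follow
- Evidence to capture: prompts used, model responses, timestamps, screenshots
- Severity rubric: scoring criteria tied to the reporting framework
```

Each of the 10+ mapped techniques would follow the same template, so entries cross-reference cleanly into both report templates.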