
    SEBI's AI Cyber Advisory: What Regulated Entities Must Do When AI Tools Like Mythos Hunt Vulnerabilities

    SEBI's May 2026 circular flags a new class of risk — emerging AI tools (e.g. Mythos) that find and potentially exploit vulnerabilities at speed and scale. Here's what the advisory mandates, who it applies to, and the 10-point control list from Annexure-A.

    What This Advisory Is

    On May 5, 2026, the Securities and Exchange Board of India (SEBI) issued circular HO/13/19/12(1)2026-ITD-1 — an advisory titled "Advisory on Emerging Advanced Artificial Intelligence (AI) Tools for Vulnerability Detection (like Mythos)".

    The trigger: a new generation of AI-driven tools (the circular names Claude Mythos as an example) can now identify — and potentially exploit — vulnerabilities in production systems at a speed and scale that traditional vulnerability-assessment (VA) workflows weren't designed to defend against. SEBI treats this as a systemic risk to the securities-market ecosystem, not just an individual-firm risk, because participants in that ecosystem are deeply interconnected and interdependent.

    This is issued under Section 11(1) of the SEBI Act, 1992 — meaning it's binding, not advisory in the casual sense.

    Who It Applies To

    The scope is broad. Every regulated entity (RE) in the Indian securities market is in scope, including:

    • Market Infrastructure Institutions (MIIs) — stock exchanges, clearing corporations, depositories
    • Qualified RTAs (QRTAs) and other Registrars to an Issue and Share Transfer Agents
    • Custodians, Depository Participants, DDPs
    • Mutual Funds / AMCs, AIFs, VCFs, Collective Investment Schemes
    • Portfolio Managers, Investment Advisers, Research Analysts
    • Stock Brokers, Merchant Bankers, Bankers to an Issue, SCSBs
    • Credit Rating Agencies, Debenture Trustees, KYC Registration Agencies

    If you're a third-party application vendor providing COTS solutions to any of the above, the advisory expects MIIs and depositories to direct you to perform comprehensive AI-risk assessments and implement mitigations as part of empanelment.

    Why This Matters: AI as a Force Multiplier on Both Sides

    SEBI's framing is careful. The advisory acknowledges three distinct risk surfaces introduced by AI-driven vulnerability identification tools:

    • Heightened exploitation risk — the same AI that finds bugs in your security audit can find them for an attacker, faster.
    • Data confidentiality concerns — sending production systems through an AI vulnerability scanner can leak sensitive code, configurations, or data into the AI provider's environment.
    • Application integrity and reliability of outputs — AI-generated findings can hallucinate, miscategorize severity, or miss exploit chains a human auditor would catch. Acting on faulty outputs is itself a risk.

    The cascading-failure concern is explicit: because market participants are interdependent, a breach at one node can ripple. That's why the response is centralized rather than left to individual firms.

    The cyber-suraksha.ai Task Force

    SEBI has constituted a task force named cyber-suraksha.ai (contact: project-cyber-suraksha.ai@sebi.gov.in), comprising representatives from MIIs, QRTAs, QREs, and related stakeholders. Its mandate has four pillars:

    1. Risk examination & uniform mitigation strategy — closely study the cybersecurity risks posed by AI-based models and devise a common mitigation playbook across regulated entities.
    2. Information sharing — threat intelligence, best practices for vulnerability management, use cases, and incident-response playbooks.
    3. Priority incident reporting — cyber incidents, significant attack vectors, and vulnerability information must be reported on a priority basis through this task force to strengthen the securities-market posture.
    4. Third-party vendor review — review the cyber-security posture of third-party application service providers, including empaneled vendors.

    A task-force meeting with MIIs and QRTAs has already been convened to review risks from AI platforms like Mythos. The output of that consultation is captured as Annexure-A of the advisory, summarized below.

    Annexure-A: The 10-Point Mitigation Checklist

    The bulk of the advisory's operational substance is in Annexure-A. Ten controls, paraphrased for clarity:

    1. Patch immediately, virtual-patch the rest. Update all operating systems and applications with the latest patches to mitigate identified/known vulnerabilities. Where a patch isn't available, use virtual patching (WAF rules, IPS signatures) as an interim defensive measure.
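    To make the virtual-patching idea concrete: a deny rule at the WAF/IPS layer can block a known exploit signature until the real patch ships. This is a minimal Python sketch of the concept, not a production WAF rule; the signatures below (a Log4Shell-style lookup string and a crude path-traversal probe) are illustrative examples only.

```python
import re

# Illustrative-only signatures for a virtual patch: requests matching any of
# these patterns are dropped at the perimeter until the vendor patch lands.
BLOCKED_PATTERNS = [
    re.compile(r"\$\{jndi:", re.IGNORECASE),   # Log4Shell-style lookup string
    re.compile(r"\.\./\.\./"),                 # crude path-traversal probe
]

def allow_request(payload: str) -> bool:
    """Return False if the payload matches a blocked exploit signature."""
    return not any(p.search(payload) for p in BLOCKED_PATTERNS)
```

    Real deployments would express the same logic as WAF rules or IPS signatures; the point is that the block is applied in front of the vulnerable system, buying time until patching.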

    2. Regular Vulnerability Assessment & audits. Run VA using both conventional and AI-based tools where appropriate, plus security audits on a regular/continuous basis — aligned to SEBI's existing Cyber Security and Cyber Resilience Framework (CSCRF).

    3. Engage third-party vendors on patch cadence. Push your third-party vendors (including empaneled application vendors providing COTS) to release timely patches. Exchanges and depositories shall direct vendors to comprehensively assess AI-led vulnerability-detection risks and implement safeguards: patches, VAPT, continuous monitoring, system hardening.

    4. Change Management with teeth. Every change — even "minor" ones — must include full documentation, impact analysis, structured review, rigorous testing, and secure deployment. The intent is operational resilience and system stability; the implicit warning is that small changes are how AI-discoverable regressions creep in.

    5. API Security.

    • Maintain an up-to-date inventory of all APIs and the applications consuming them.
    • Enforce strong authentication and authorization — least-privilege, end-user identity verification, restricted information transfer.
    • Apply API rate limiting and throttling to prevent and detect abuse.
    • Permit API connections strictly on a whitelist basis: deny anything not explicitly approved.
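    A per-client token bucket is one common way to implement the throttling and whitelist controls above. The client IDs and limits here are hypothetical; this is a sketch of the pattern, not a drop-in gateway:

```python
import time
from dataclasses import dataclass, field

# Hypothetical whitelist of API client IDs permitted to connect at all.
ALLOWED_CLIENTS = {"settlement-engine", "risk-feed"}

@dataclass
class TokenBucket:
    """Token-bucket limiter: refills at `rate` tokens/sec, bursts to `capacity`."""
    rate: float
    capacity: float
    tokens: float = 0.0
    last: float = field(default_factory=time.monotonic)

    def __post_init__(self):
        self.tokens = self.capacity  # start full so a fresh client can burst

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

_buckets: dict[str, TokenBucket] = {}

def admit(client_id: str) -> bool:
    """Whitelist check first (deny by default), then per-client throttle."""
    if client_id not in ALLOWED_CLIENTS:
        return False
    bucket = _buckets.setdefault(client_id, TokenBucket(rate=5, capacity=10))
    return bucket.allow()
```

    In production the same shape usually lives in the API gateway rather than application code, but the deny-by-default-then-throttle ordering is the part that matters.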

    6. SOC Monitoring — including the low-priority alerts.

    • Rigorous day-to-day monitoring of systems and networks. Examine low-priority alerts, not just high-priority ones — AI-driven attacks often hide in the noise.
    • Implement SOAR playbooks integrated with SIEM, properly tested before rollout.
    • The Market SOC (M-SOC) — established by NSE and BSE — is the centralized 24×7 monitoring platform for the securities market. All eligible REs not yet onboarded must expedite onboarding.
    • MIIs must run awareness and handholding workshops to make that M-SOC onboarding actually happen.
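    One concrete way to act on the "examine low-priority alerts" point: aggregate low-severity alerts by source and escalate sources that are unusually noisy, since high-rate automated probing often generates only low-severity events individually. A hypothetical sketch over SIEM-exported (source, severity) pairs, with the threshold chosen arbitrarily for illustration:

```python
from collections import Counter

def escalate_noisy_sources(alerts, threshold=50):
    """Flag sources emitting many *low*-severity alerts in a window.

    Per-alert triage would ignore each of these events; counting them per
    source surfaces the high-volume scanning pattern hiding in the noise.
    `alerts` is an iterable of (source, severity) tuples.
    """
    low_counts = Counter(src for src, sev in alerts if sev == "low")
    return sorted(src for src, n in low_counts.items() if n >= threshold)
```

    In a real SOC this runs as a scheduled SIEM query or SOAR playbook step over a sliding time window rather than an in-memory list.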

    7. Risk Assessment that includes AI as a scenario. SEBI's CSCRF already mandates periodic risk assessment of REs and their third-party providers. The advisory adds: assessments must include scenario-based testing that explicitly considers AI-model capability as a risk vector — both attacker AI and defender AI failure modes.

    8. System hardening with Zero Trust. Adopt secure configurations, disable unnecessary services and default accounts, enforce least-privilege, and move toward Zero Trust Network Architecture (ZTNA) to minimize attack surface.

    9. Asset Inventory and SBOM. Periodically update the asset inventory and the Software Bill of Materials (SBOM) for all critical applications — explicitly including the open-source stack. AI tools weaponize known-vulnerable transitive dependencies faster than humans can; you can't defend what you can't enumerate.
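    Control 9 presumes you can actually enumerate your components. A minimal sketch that reads a CycloneDX-JSON SBOM (the format common generators emit) and lists (name, version) pairs for diffing against vulnerability feeds; field names follow the CycloneDX JSON schema, and the sample data in the test is invented:

```python
import json

def list_components(sbom_json: str):
    """Enumerate (name, version) pairs from a CycloneDX-JSON SBOM.

    The flat component list is what you diff against vulnerability feeds:
    you can't defend a transitive dependency you haven't enumerated.
    """
    sbom = json.loads(sbom_json)
    return sorted(
        (c.get("name", "?"), c.get("version", "?"))
        for c in sbom.get("components", [])
    )
```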

    10. IT-committee guidance and a long-term AI plan. MIIs and REs must seek guidance from their IT committees on mitigating AI-led VA risks. All REs must prepare a long-term plan for the use of AI in:

    • Detection
    • Autonomous / agentic mitigation
    • Risk recalibration for AI-accelerated threats
    • AI-augmented SOC transformation
    • Continuous vulnerability management with AI tools

    How to Operationalize This (Securion's Read)

    Most of the 10 controls aren't new — they're CSCRF hygiene. What's new is the explicit recognition that AI-based attackers and defenders both rewrite the cadence. A few practical implications:

    • Assume your VA cadence is too slow. Quarterly VA was defensible when attackers also worked on human timescales. Against AI tooling that can scan thousands of endpoints continuously, it isn't. Move toward continuous VA with proper rate-limiting and change-window discipline.
    • SBOM is no longer optional. If you can't generate an SBOM for your critical applications today, that's an audit finding waiting to happen. Tooling that emits the CycloneDX or SPDX formats (Syft is one common generator) is table stakes.
    • M-SOC onboarding is the fastest visible deliverable. If your firm is M-SOC-eligible and not onboarded, expect this to be the first thing inspected. The advisory explicitly says "expedite."
    • Vendor risk paperwork has to mention AI now. Your existing TPRM questionnaires need an AI-tooling section: "Do you use AI vulnerability detection tools? On whose data? With what controls on output handling and data residency?"
    • Document your AI usage policy. The "long-term plan" requirement in point 10 is going to be asked for. Even a one-page statement of intent — what AI you'll use defensively, what you won't allow attackers' AI to do unchallenged — is better than nothing.
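    To make the vendor-questionnaire point above auditable rather than aspirational, the AI-tooling section can live as structured data so responses are collected and gap-tracked programmatically. The question keys and wording below are hypothetical, a sketch of the idea rather than SEBI-prescribed text:

```python
# Hypothetical AI-tooling section of a TPRM questionnaire, kept as data so
# vendor responses can be validated and outstanding gaps reported.
AI_TOOLING_QUESTIONS = [
    ("ai_va_tools_used", "Do you use AI vulnerability detection tools, and which ones?"),
    ("data_scope", "Whose data (yours, ours, production copies) do those tools process?"),
    ("output_controls", "What controls govern handling and retention of tool outputs?"),
    ("data_residency", "Where is the data processed and stored (jurisdictions, providers)?"),
]

def unanswered(responses: dict) -> list[str]:
    """Return the question keys a vendor has not yet answered."""
    return [key for key, _ in AI_TOOLING_QUESTIONS if not responses.get(key)]
```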

    If you handle GRC for a regulated entity, treat this as a board-level briefing item. The IT-committee guidance line in point 10 is SEBI explicitly pushing AI risk up to governance level rather than leaving it as an IT-team problem.

    Read This Alongside

    SEBI is explicit: this advisory must be read in conjunction with applicable SEBI circulars — most importantly the Cyber Security and Cyber Resilience Framework (CSCRF) — and any subsequent updates issued by SEBI from time to time.

    • SEBI CSCRF — the parent framework for cybersecurity controls across REs.
    • SEBI circular on cyber-incident reporting — timelines and channels for breach notification.
    • RBI's framework on outsourcing of IT services — where vendor scope overlaps with banking-related custody/clearing.
    • CERT-In directions of 2022 — log-retention and incident-reporting obligations that apply in parallel to all entities operating in India.

    For the original circular, see SEBI's website (the document referenced here is the May 5, 2026 circular signed by Mamata Roy, Deputy General Manager, IT Department).

    Bottom Line

    "AI-driven vulnerability identification has introduced new dimensions of risk for Regulated Entities… speed and scale… data confidentiality, application integrity, and reliability of outputs."

    That sentence — straight from the circular — is the regulatory framing every CISO at an Indian securities-market firm should be able to quote by Q3 2026. The Annexure-A checklist is what your next internal audit will look for. Start with the gaps you already know exist: patch latency, third-party scope, M-SOC onboarding, SBOM completeness. The rest is plumbing.