• CRYPTO-GRAM, March 15, 2026 Part1

    From TCOB1 Security Posts@21:1/229 to All on Wednesday, April 08, 2026 11:26:17

    Crypto-Gram
    March 15, 2026

    by Bruce Schneier
    Fellow and Lecturer, Harvard Kennedy School
    schneier@schneier.com
    https://www.schneier.com

    A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

    For back issues, or to subscribe, visit Crypto-Gram's web page.

    Read this issue on the web

    These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.

    ** *** ***** ******* *********** *************
    In this issue:

    If these links don't work in your email client, try reading this issue of Crypto-Gram on the web.

    The Promptware Kill Chain
    Side-Channel Attacks Against LLMs
    AI Found Twelve New Vulnerabilities in OpenSSL
    Malicious AI
    Ring Cancels Its Partnership with Flock
    On the Security of Password Managers
    Is AI Good for Democracy?
    Poisoning AI Training Data
    LLMs Generate Predictable Passwords
    Phishing Attacks Against People Seeking Programming Jobs
    Why Tehran?s Two-Tiered Internet Is So Dangerous
    LLM-Assisted Deanonymization
    On Moltbook
    Manipulating AI Summarization Features
    Hacked App Part of US/Israeli Propaganda Campaign Against Iran
    Israel Hacked Traffic Cameras in Iran
    Claude Used to Hack Mexican Government
    Anthropic and the Pentagon
    New Attack Against Wi-Fi
    Jailbreaking the F-35 Fighter Jet
    Canada Needs Nationalized, Public AI
    iPhones and iPads Approved for NATO Classified Data
    Academia and the "AI Brain Drain"
    Upcoming Speaking Engagements

    ** *** ***** ******* *********** *************
    The Promptware Kill Chain

    [2026.02.16] Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on ?prompt injection,? a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term ?promptware.? In a new paper, we, the authors, propose a structured seven-step ?promptware kill chain? to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

    The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

    In our model, the promptware kill chain begins with Initial Access. This is where the malicious payload enters the AI system. This can happen directly, where an attacker types a malicious prompt into the LLM application, or, far more insidiously, through ?indirect prompt injection.? In the indirect attack, the adversary embeds malicious instructions in content that the LLM retrieves (obtains in inference time), such as a web page, an email, or a shared document. As LLMs become multimodal (capable of processing various input types beyond text), this vector expands even further; malicious instructions can now be hidden inside an image or audio file, waiting to be processed by a vision-language model.

    The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input -- whether it is a system command, a user?s email, or a retrieved document -- as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.

    But prompt injection is only the Initial Access step in a sophisticated, multistage operation that mirrors traditional malware campaigns such as Stuxnet or NotPetya.

    Once the malicious instructions are inside material incorporated into the AI?s learning, the attack transitions to Privilege Escalation, often referred to as ?jailbreaking.? In this phase, the attacker circumvents the safety training and policy guardrails that vendors such as OpenAI or Google have built into their models. Through techniques analogous to social engineering -- convincing the model to adopt a persona that ignores rules -- to sophisticated adversarial suffixes in the prompt or data, the promptware tricks the model into performing actions it would normally refuse. This is akin to an attacker escalating from a standard user account to administrator privileges in a traditional cyberattack; it unlocks the full capability of the underlying model for malicious use.

    Following privilege escalation comes Reconnaissance. Here, the attack manipulates the LLM to reveal information about its assets, connected services, and capabilities. This allows the attack to advance autonomously down the kill chain without alerting the victim. Unlike reconnaissance in classical malware, which is performed typically before the initial access, promptware reconnaissance occurs after the initial access and jailbreaking components have already succeeded. Its effectiveness relies entirely on the victim model?s ability to reason over its context, and inadvertently turns that reasoning to the attacker?s advantage.

    Fourth: the Persistence phase. A transient attack that disappears after one interaction with the LLM application is a nuisance; a persistent one compromises the LLM application for good. Through a variety of mechanisms, promptware embeds itself into the long-term memory of an AI agent or poisons the databases the agent relies on. For instance, a worm could infect a user?s email archive so that every time the AI summarizes past emails, the malicious code is re-executed.

    The Command-and-Control (C2) stage relies on the established persistence and dynamic fetching of commands by the LLM application in inference time from the internet. While not strictly required to advance the kill chain, this stage enables the promptware to evolve from a static threat with fixed goals and scheme determined at injection time into a controllable trojan whose behavior can be modified by an attacker.

    The sixth stage, Lateral Movement, is where the attack spreads from the initial victim to other users, devices, or systems. In the rush to give AI agents access to our emails, calendars, and enterprise platforms, we create highways for malware propagation. In a ?self-replicating? attack, an infected email assistant is tricked into forwarding the malicious payload to all contacts, spreading the infection like a computer virus. In other cases, an attack might pivot from a calendar invite to controlling smart home devices or exfiltrating data from a connected web browser. The interconnectedness that makes these agents useful is precisely what makes them vulnerable to a cascading failure.

    Finally
    --- FMail-lnx 2.3.2.6-B20251227
    * Origin: TCOB1 A Mail Only System (21:1/229)