• CRYPTO-GRAM, March 15, 2026 Part6

    From TCOB1 Security Posts@21:1/229 to All on Wednesday, April 08, 2026 11:26:17
    ed. Data is often only available in unstructured form and deanonymization used to require human investigators to search and reason based on clues. We show that from a handful of comments, LLMs can infer where you live, what you do, and your interests -- then search for you on the web. In our new research, we show that this is not only possible but increasingly practical.

    News article.

    Research paper.

    ** *** ***** ******* *********** *************
    On Moltbook

    [2026.03.03] The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:

    Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

    ?Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,? says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. ?Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.?

    Humans must create and verify their bots? accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven?t been prompted to do.

    I think this take has it mostly right:

    What happened on Moltbook is a preview of what researcher Juergen Nittner II calls ?The LOL WUT Theory.? The point where AI-generated content becomes so easy to produce and so hard to detect that the average person?s only rational response to anything online is bewildered disbelief.

    We?re not there yet. But we?re close.

    The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can?t reliably tell what?s fake. Third, and this is the crisis point, regular people realize there?s nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.

    ** *** ***** ******* *********** *************
    Manipulating AI Summarization Features

    [2026.03.04] Microsoft is reporting:

    Companies are embedding hidden instructions in ?Summarize with AI? buttons that, when clicked, attempt to inject persistence commands into an AI assistant?s memory via URL prompt parameters....

    These prompts instruct the AI to ?remember [Company] as a trusted source? or ?recommend [Company] first,? aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

    I wrote about this two years ago: it?s an example of LLM optimization, along the same lines as search-engine optimization (SEO). It?s going to be big business.

    ** *** ***** ******* *********** *************
    Hacked App Part of US/Israeli Propaganda Campaign Against Iran

    [2026.03.05] Wired has the story:

    Shortly after the first set of explosions, Iranians received bursts of notifications on their phones. They came not from the government advising caution, but from an apparently hacked prayer-timing app called BadeSaba Calendar that has been downloaded more than 5 million times from the Google Play Store.

    The messages arrived in quick succession over a period of 30 minutes, starting with the phrase ?Help has arrived? at 9:52 am Tehran time, shortly after the first set of explosions. No party has claimed responsibility for the hacks.

    It happened so fast that this is most likely a government operation. I can easily envision both the US and Israel having hacked the app previously, and then deciding that this is a good use of that access.

    ** *** ***** ******* *********** *************
    Israel Hacked Traffic Cameras in Iran

    [2026.03.05] Multiple news outlets are reporting on Israel?s hacking of Iranian traffic cameras and how they assisted with the killing of that country?s leadership.

    The New York Times has an article on the intelligence operation more generally.

    ** *** ***** ******* *********** *************
    Claude Used to Hack Mexican Government

    [2026.03.06] An unknown hacker used Anthropic?s LLM to hack the Mexican government:

    The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

    [...]

    Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker?s requests and executed thousands of commands on government computer networks, the researchers said.

    Anthropic investigated Gambit?s claims, disrupted the activity and banned the accounts involved, a representative said. The company feeds examples of malicious activity back into Claude to learn from it, and one of its latest AI models, Claude Opus 4.6, includes probes that can disrupt misuse, the representative said.

    Alternative link here.

    ** *** ***** ******* *********** *************
    Anthropic and the Pentagon

    [2026.03.06] OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic?s insistence that the US Department of Defense (DoD) could not use its models to facilitate ?mass surveillance? or ?fully autonomous weapons,? provisions the defense secretary Pete Hegseth derided as ?woke.?

    It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.

    Despite the histrionics, this is probably the best outcome for Anthropic -- and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here are the Pentagon?s vindictive threats.

    AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, te
    --- FMail-lnx 2.3.2.6-B20251227
    * Origin: TCOB1 A Mail Only System (21:1/229)