<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Posts on Silicon Bubble</title><link>https://siliconbubble.com/posts/</link><description>Recent content in Posts on Silicon Bubble</description><generator>Hugo -- 0.157.0</generator><language>en-us</language><lastBuildDate>Sun, 01 Mar 2026 12:00:00 -0400</lastBuildDate><atom:link href="https://siliconbubble.com/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>Anthropic’s Claude Was Used in the Iran Strikes. The Fallout Is Already Here.</title><link>https://siliconbubble.com/posts/anthropic-claude-iran-strikes/</link><pubDate>Sun, 01 Mar 2026 12:00:00 -0400</pubDate><guid>https://siliconbubble.com/posts/anthropic-claude-iran-strikes/</guid><description>&lt;p&gt;According to reports in the Wall Street Journal, the U.S. military crossed a major threshold this week: it used Anthropic’s Claude AI to assist with &amp;ldquo;strategic decision-making&amp;rdquo; and operational planning during the recent strikes on Iran.&lt;/p&gt;
&lt;p&gt;For two years, Silicon Valley has sold large language models as friendly assistants that write your emails and debug your Python code. Now they are officially processing battlefield intelligence, and the military is embracing the technology in the name of &amp;ldquo;operational efficiency&amp;rdquo;.&lt;/p&gt;</description></item></channel></rss>