Real problems worth solving

Browse frustrations, pains, and gaps that founders could tackle.

When a submarine is disabled on the ocean floor, the crew's survival depends on rescue arriving before their air supply runs out — typically 5-7 days with emergency life support. The US Navy's Submarine Rescue Diving and Recompression System (SRDRS) can theoretically rescue crew from submarines down to 2,000 feet, but the system must be transported by aircraft to the nearest port, loaded onto a vessel of opportunity, transited to the distressed submarine's location, and then deployed — a process that takes 72+ hours under ideal conditions. In remote operating areas like the Western Pacific, transit time alone could consume most of the crew's survivable window.

The Titan submersible implosion in 2023 brought public attention to deep-sea rescue limitations, but military submarine rescue faces distinct challenges. A disabled submarine on the bottom may be at an angle, buried in silt, or leaking radiation from a damaged reactor — all of which complicate rescue vehicle mating with the escape trunk. The rescue vehicle must dock with a hatch on the submarine's hull while both are sitting on an uneven seabed, potentially in strong currents, at depths where human divers cannot operate.

The real gap is that modern submarines routinely operate at depths well beyond their rescue-rated depth. The actual operating depth of US submarines is classified, but it is publicly known that they operate significantly deeper than the 2,000-foot rescue limit. If a submarine is disabled at or near its maximum operating depth, no existing rescue system can reach it. The crew's only option is the SEIE (Submarine Escape Immersion Equipment) suit, the successor to the Steinke hood, for individual free ascent — which is itself limited to roughly 600 feet and carries extreme risk of decompression sickness and hypothermia.

The problem persists because building a rescue vehicle rated to full submarine operating depth is an extraordinary engineering challenge. The pressures involved — hundreds of atmospheres — require pressure vessels and mechanical systems of extreme complexity and cost. The rescue vehicle must not only survive these pressures but perform precise docking maneuvers at them. International submarine rescue cooperation (like the NATO ISMERLO system) helps with coordination but does not solve the fundamental depth and time limitations.

Structurally, submarine rescue is an afterthought in submarine design and procurement. The overwhelming design priority is operational capability — speed, stealth, weapons capacity, sensor performance. Escape and rescue features add weight, complexity, and cost to boats that are already at the limits of their design envelopes. The low probability of a submarine casualty (relative to the certain daily need for operational capability) means rescue systems chronically lose the resource competition within the submarine enterprise.

defense

The Columbia-class ballistic missile submarine (SSBN) program is the US Navy's top acquisition priority, designed to replace the 14 Ohio-class SSBNs that have provided continuous at-sea nuclear deterrence since the 1980s. The first Columbia (SSBN-826) must be delivered by 2027 and conduct its first deterrent patrol by 2031 to prevent a gap in the nuclear deterrent as Ohio-class boats reach the end of their extended service lives. The program has essentially zero schedule margin — any significant delay means a period where the US cannot maintain the required number of SSBNs on patrol.

A gap in SSBN deterrence patrols would be an unprecedented event in the nuclear age. Since 1960, the US has maintained continuous at-sea nuclear deterrence, ensuring that no first strike by an adversary could eliminate America's ability to retaliate. Even a brief gap — a few months where one fewer SSBN is available — would be noticed by adversary intelligence services and could alter their strategic calculations. The psychological and deterrent value of guaranteed second-strike capability is the foundation of strategic stability.

The Columbia program is already under pressure. The missile tube module, built in collaboration with the UK for their Dreadnought-class SSBN, has experienced manufacturing challenges. The integrated power system, which uses an electric-drive propulsion train (a first for US SSBNs), introduces new technology risk. The construction workforce overlaps with Virginia-class production at Electric Boat, creating direct labor competition between the two programs.

The problem persists because the Ohio-class boats were designed for a 30-year service life that was already extended to 42 years — they cannot be extended further without unacceptable reactor and hull risks. The Columbia program was delayed repeatedly in the 2010s due to budget constraints (sequestration) and design optimism, consuming the schedule margin that originally existed. Every year of delay in the 2010s became a year of unrecoverable risk in the 2030s.

At the structural level, the US chose to let its SSBN industrial base atrophy after the Ohio class was complete in 1997, going nearly 30 years without building a single ballistic missile submarine. The knowledge, tooling, workforce, and supplier base for SSBN construction had to be substantially rebuilt from scratch. This is the inevitable consequence of a boom-bust procurement model applied to the most complex and consequential weapons system in the arsenal.

defense

Despite advances in communications technology, submarines must still come to periscope depth (approximately 60 feet) to communicate with shore commands, receive operational orders, transmit intelligence, and update navigation systems via GPS. At periscope depth, the submarine is dramatically more vulnerable: it can be detected by radar, visual observation, magnetic anomaly detection aircraft, and even satellite imagery. The mast and periscope create radar cross-sections, and the submarine's wake can be visible from above. Every trip to periscope depth is a calculated risk.

The operational impact is severe because modern anti-submarine warfare forces specifically hunt for submarines at periscope depth. Maritime patrol aircraft like the P-8 Poseidon carry radar systems optimized to detect periscope-sized targets, and adversary satellites increasingly have the resolution and revisit rates to spot a submarine's mast or wake. During a high-threat transit or combat operation, the submarine commander faces an impossible choice: stay deep and lose communications (potentially missing critical orders including nuclear launch authorization), or come shallow and risk detection and attack.

Extremely low frequency (ELF) and very low frequency (VLF) radio can reach submarines at depth, but these systems carry only one-way, extremely low-bandwidth messages — essentially a bell-ringer telling the submarine to come to periscope depth for the real message. Submarine-launched communication buoys offer partial solutions but create their own detection risks and have limited bandwidth. Laser communication through seawater is theoretically promising but remains in early research stages and is limited by water clarity and depth.

The problem persists because the physics of radio propagation through seawater are fundamentally unfavorable. Seawater is highly conductive, and electromagnetic waves attenuate exponentially with depth and frequency. Only extremely low frequencies penetrate to operational depths, and those frequencies can carry only a few characters per minute. There is no known technology that can provide broadband, two-way communication to a submarine at depth without the submarine revealing its position in some way.

This is ultimately a physics problem masquerading as an engineering problem. The submarine community has worked around it for decades with operational procedures and risk acceptance, but as adversary detection capabilities improve, the vulnerability window at periscope depth grows more dangerous. The gap between what commanders need (real-time, high-bandwidth, two-way comms at depth) and what physics allows (one-way, minimal-bandwidth messages via ELF/VLF) has not narrowed meaningfully in 40 years.
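
The attenuation argument can be made concrete with the standard skin-depth formula for a conductor, δ = 1/√(πfμσ). Below is a minimal sketch, assuming a typical seawater conductivity of about 4 S/m (the exact value varies with salinity and temperature):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, H/m
SIGMA = 4.0                # assumed seawater conductivity, S/m

def skin_depth_m(freq_hz: float) -> float:
    """Depth at which field amplitude falls to 1/e (about 8.7 dB of loss)."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * SIGMA)

for label, f in [("ELF (76 Hz)", 76.0), ("VLF (20 kHz)", 20e3), ("HF (10 MHz)", 10e6)]:
    d = skin_depth_m(f)
    print(f"{label:>12}: skin depth {d:8.2f} m, attenuation {8.686 / d:8.3f} dB/m")
```

At ELF the field loses under 1 dB per meter of depth, at VLF a few dB per meter, and at HF on the order of 100 dB per meter, which is why only ELF/VLF bell-ringer signals reach a submarine at operational depth.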

defense

Modern submarine sonar suites — including the BQQ-10 on Virginia-class boats — generate enormous volumes of acoustic data from towed arrays, hull-mounted hydrophones, and wide-aperture arrays. A single towed array can produce terabytes of raw data per day. Sonar technicians must process this data to detect, classify, and track contacts in real time, but the volume increasingly exceeds human cognitive bandwidth. Critical contacts can be buried in noise for hours before being identified, and the problem worsens in littoral (shallow water) environments where biological noise, shipping traffic, and acoustic reverberation dramatically increase the background clutter.

Missed or delayed contact detection in a combat environment can be fatal. If an adversary submarine or torpedo is not detected in time, the crew has seconds to minutes to react. In the 2005 USS San Francisco grounding, the sonar team had data that could have indicated shoaling water, but it was not processed and correlated in time. The cognitive load on sonar operators during high-threat operations is unsustainable — 6-hour watches staring at sonar displays with intermittent contacts buried in noise leads to attention fatigue and missed detections.

AI and machine learning could theoretically automate much of the initial detection and classification, but submarine combat systems operate on classified networks with severely constrained computing hardware. The submarine's combat system processors were designed years before modern ML inference engines, and upgrading them requires extensive shock qualification, electromagnetic compatibility testing, and nuclear safety certification. You cannot simply install a GPU cluster on a submarine the way you would in a data center.

The problem persists because the submarine combat system acquisition cycle is measured in decades while sonar processing requirements grow annually. The Navy's Acoustic Rapid COTS Insertion (ARCI) program was designed to accelerate technology refresh, but even ARCI updates take years to develop, test, certify, and deploy. Meanwhile, adversary submarines are getting quieter, requiring even more sophisticated signal processing to detect — creating a growing gap between what the sonar suite collects and what the crew can meaningfully analyze.

Structurally, the certification and security requirements for submarine combat systems create a technology lag of 5-10 years behind the commercial state of the art. Every piece of hardware and software that goes on a submarine must survive shock testing (simulating depth charge attacks), operate silently (no fans or spinning disks that create acoustic signatures), and meet stringent TEMPEST and cybersecurity requirements. These are legitimate requirements, but they collectively ensure that submarine crews are always fighting with yesterday's processing technology against today's data volumes.
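
The data-volume claim is easy to sanity-check. A back-of-envelope sketch follows, with the channel count, sample rate, and bit depth as illustrative assumptions rather than the specifications of any fielded array:

```python
# Raw acoustic data volume from one towed array. All figures below are
# assumed round numbers for illustration, not real system parameters.
channels = 200            # hydrophone channels (assumed)
sample_rate_hz = 20_000   # per-channel sampling rate (assumed)
bytes_per_sample = 3      # 24-bit samples

bytes_per_day = channels * sample_rate_hz * bytes_per_sample * 86_400
print(f"{bytes_per_day / 1e12:.2f} TB/day per array")  # ~1.04 TB/day
```

Even these modest assumptions yield roughly a terabyte per day from a single array, before beamforming products and the rest of the sensor suite are added.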

defense

The US Navy is developing the Conventional Prompt Strike (CPS) weapon, a submarine-launched hypersonic boost-glide missile, for deployment on Virginia-class and future Columbia-class submarines. The fundamental engineering challenge is fitting a hypersonic glide vehicle and its booster into a submarine's vertical launch system (VLS) tubes, which were designed decades ago for Tomahawk cruise missiles. The CPS missile is significantly larger and generates extreme thermal loads during boost phase that stress the launch tube and surrounding hull structure in ways Tomahawk never did.

This matters because hypersonic weapons are considered essential for defeating advanced integrated air defense systems that can intercept conventional cruise missiles and ballistic missiles. Without submarine-launched hypersonic capability, the Navy loses its ability to conduct prompt conventional strikes from a survivable, covert platform. Surface ships launching hypersonic missiles must operate within range of adversary anti-ship missiles, negating much of the weapon's value. The submarine is the only platform that can get close enough to launch while remaining hidden.

The Virginia Payload Module (VPM), an additional hull section being added to Block V Virginia-class boats, was designed partly to accommodate larger weapons including CPS. But integration has proven far more complex than anticipated. The thermal management system needed to protect the submarine during launch, the gas management system to handle boost motor exhaust in a confined space, and the structural reinforcements required have driven costs and timelines well beyond initial estimates. The first CPS-capable submarine delivery has slipped multiple times.

The problem persists because submarine weapon interfaces are among the most constrained engineering environments in existence. Every cubic inch inside a submarine is already allocated, and adding new systems means removing or relocating existing ones. Unlike surface ships or aircraft, submarines cannot simply bolt on external launchers — everything must fit inside the pressure hull and function at depth. The hydrodynamic, structural, and safety requirements for submarine-launched weapons are orders of magnitude more demanding than land or air launch.

At root, this is a problem of legacy platform geometry constraining next-generation weapons. Submarine hulls are designed and built on 10-15 year timelines; weapon systems evolve faster. The result is a perpetual mismatch where new weapons must be crammed into platforms designed for the previous generation's munitions.

defense

Submarine stealth has historically relied on reducing acoustic signatures — quieting machinery, shaping hulls, and using sound-absorbing coatings. Modern US and allied submarines are so quiet that passive sonar detection at meaningful ranges is extremely difficult. However, quantum sensing technologies — particularly superconducting quantum interference devices (SQUIDs) and nitrogen-vacancy center magnetometers — threaten to detect submarines through their magnetic signatures rather than their acoustic ones. China has invested heavily in quantum magnetometry research with explicit anti-submarine applications.

If quantum magnetic anomaly detection (MAD) achieves operational ranges of even a few kilometers, the entire calculus of submarine warfare changes. The strategic value of the submarine force rests on the assumption that a submerged submarine is effectively invisible. Ballistic missile submarines provide second-strike nuclear deterrence precisely because an adversary cannot find and destroy them preemptively. If that assumption breaks, the most survivable leg of the nuclear triad becomes vulnerable, fundamentally destabilizing the strategic balance.

The near-term threat is not a single magic sensor but a network of distributed quantum sensors — potentially deployed on the seabed, on unmanned underwater vehicles, or on surface ships — that collectively achieve detection capabilities no single sensor could. China's ocean observation networks and its extensive seabed sensor programs in the South China Sea suggest this distributed approach is already being pursued.

The problem persists because countermeasures against magnetic detection are fundamentally harder than countermeasures against acoustic detection. You can make a submarine quieter by isolating vibrations and redesigning machinery, but you cannot easily eliminate the magnetic signature of a 7,000-ton steel vessel with a nuclear reactor. Degaussing (reducing a ship's magnetic field) helps but is imperfect and degrades over time. The physics of magnetic signatures create an asymmetry that favors the sensor over the hider.

Structurally, the US submarine community has optimized for acoustic stealth for decades and has institutional resistance to treating non-acoustic detection as an existential threat. Research funding for magnetic signature reduction and quantum countermeasures has lagged behind the threat. The classified nature of the problem also limits the academic and commercial research base that could contribute solutions.
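
The sensor-versus-hider asymmetry follows from dipole physics: a magnetic signature falls off with the cube of range, not exponentially. A rough sketch follows, assuming a hull magnetic moment of 10^5 A·m² purely for illustration (real signatures are classified, and degaussing reduces them):

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def axial_dipole_field_tesla(moment_a_m2: float, range_m: float) -> float:
    """Peak on-axis field of a magnetic dipole: B = mu0 * m / (2 * pi * r^3)."""
    return MU0 * moment_a_m2 / (2 * math.pi * range_m ** 3)

moment = 1e5  # A*m^2 -- assumed order of magnitude for a large steel hull
for r in (100, 500, 1_000, 5_000):
    b = axial_dipole_field_tesla(moment, r)
    print(f"r = {r:5d} m: B = {b * 1e15:14.1f} fT")
```

Against sensors with femtotesla-class noise floors, raw sensitivity stops being the limiter at ranges of a few kilometers; the practical limits come from geomagnetic background noise and platform motion, which is precisely what distributed sensor networks help average out.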

defense

Attack submarine (SSN) deployments typically last 6-7 months with limited port calls, while ballistic missile submarine (SSBN) deterrent patrols are shorter — roughly 70-90 days — but with zero port calls. Crew members live in a sealed steel tube with no natural light, no fresh air, minimal personal space, and zero contact with family beyond occasional brief email-like messages. A 2022 Naval Health Research Center study found that submariners report clinically significant psychological distress at rates roughly 50% higher than surface sailors.

The immediate human cost is severe: elevated rates of depression, anxiety, sleep disorders, and relationship breakdown. Submarine divorce rates are among the highest in the military. Crew members describe the psychological toll of spending months unable to see the sun, unable to call home when a family member is sick, and sleeping in bunks stacked three high in spaces smaller than a closet. Junior enlisted sailors on Virginia-class boats often hot-rack, sharing a bunk with another sailor on the opposite watch rotation.

Operationally, mental health degradation directly threatens mission effectiveness. A submarine crew of 130-140 people operates a nuclear reactor, handles weapons systems, and conducts intelligence operations in contested waters. Cognitive impairment from poor sleep, chronic stress, or undiagnosed depression can lead to errors with catastrophic consequences. The 2021 USS Connecticut grounding in the South China Sea, which caused significant damage and injuries, was attributed partly to navigation team fatigue and complacency.

The problem persists because the submarine force has a deeply embedded culture of stoicism that treats mental health concerns as weakness. Seeking help is career-ending in practice if not in policy — a submariner who is diagnosed with certain psychological conditions can lose their submarine qualification, effectively ending their career path. This creates powerful incentives to hide symptoms rather than seek treatment. Embedded mental health professionals are impractical given space constraints on submarines.

Structurally, the submarine force cannot reduce deployment lengths because there are not enough boats and crews to maintain required coverage with shorter rotations. The fleet shortfall means each boat deploys more frequently, creating a vicious cycle where the same crews are ground down harder because there are too few of them. Until the fleet grows or mission requirements shrink — neither of which is happening — the human cost will continue to compound.

defense

Under the AUKUS Pillar 1 agreement, the United States committed to selling 3-5 Virginia-class submarines to Australia in the early 2030s as a bridge until the jointly designed SSN-AUKUS boat is ready in the 2040s. The problem is that the US industrial base cannot build enough Virginia-class submarines to meet the US Navy's own requirements, let alone provide boats to Australia. Selling Virginias to Australia means the US Navy's attack submarine fleet drops even further below its 66-boat target.

This creates an impossible trilemma: the US can meet its own fleet requirements, deliver submarines to Australia, or maintain the current build rate — but not all three simultaneously. Congressional opposition is already mounting, with multiple members stating publicly that no US submarines should be transferred until the Navy's own shortfall is addressed. If the Virginia transfers slip or are cancelled, Australia's submarine capability faces a critical gap between the retirement of its aging Collins-class boats and the arrival of SSN-AUKUS.

The strategic consequences extend beyond bilateral relations. AUKUS is the centerpiece of Indo-Pacific alliance architecture, and failure to deliver submarines would fundamentally undermine allied confidence in US security commitments at precisely the moment when China's naval expansion demands stronger deterrence. Australia has committed over $200 billion (AUD) to the submarine program and is restructuring its entire defense industrial base around it. A delivery failure would be catastrophic for the alliance.

The problem persists because AUKUS was negotiated as a geopolitical commitment before the industrial base capacity to fulfill it was secured. The agreement assumed that shipyard investments and workforce growth would accelerate production, but those improvements have consistently lagged projections. There is no contractual penalty or enforcement mechanism if the US simply cannot produce enough boats — the physics of submarine construction set the real timeline, not diplomatic agreements.

At its core, this is a problem of political timelines outrunning industrial reality. Nuclear submarine construction has irreducible lead times measured in years, and no amount of political will can compress the time it takes to train a nuclear welder or qualify a reactor compartment. The AUKUS timeline was set by diplomats, but it will be delivered — or not — by shipyard workers.

defense

US Navy nuclear submarines require engineered refueling overhauls (EROs) roughly midway through their service lives to replace spent nuclear fuel and perform extensive maintenance. These overhauls are performed exclusively at the four public naval shipyards (Portsmouth, Norfolk, Puget Sound, and Pearl Harbor), and they routinely run months or even years behind schedule. As of 2024, submarines in maintenance have accumulated over 3,700 days of idle time beyond their planned return-to-fleet dates.

Every day a submarine sits in extended maintenance is a day it cannot deploy. The submarine force already faces a shortfall against combatant commander demand signals, so maintenance delays directly translate to operational gaps. A submarine that was supposed to return to the fleet in 18 months but takes 30 months instead means an entire deployment cycle is missed — affecting deterrence patrols, intelligence collection, and theater anti-submarine warfare posture. The crews assigned to those boats also lose proficiency sitting pierside rather than operating at sea.

The root cause is a cascading resource bottleneck at the public shipyards. These facilities were built during World War II and the early Cold War, and many dry docks and work areas have not been substantially modernized. When one overhaul runs late, it blocks the dry dock for the next submarine in the queue, creating a domino effect across the entire maintenance schedule. The Shipyard Infrastructure Optimization Program (SIOP) is a $21 billion, 20-year plan to modernize these yards, but meaningful capacity improvements are years away.

The problem persists because nuclear maintenance work cannot be outsourced to private shipyards — federal law and the Navy's own nuclear regulator, Naval Reactors (the military counterpart of the civilian Nuclear Regulatory Commission), restrict nuclear propulsion work to naval shipyards with specific certifications. This creates an inelastic bottleneck that cannot be relieved by throwing money at private contractors. Additionally, the public shipyards compete for the same scarce nuclear-qualified workforce as the construction yards, and federal pay scales make it difficult to retain experienced workers who can earn more in the private nuclear sector.

Fundamentally, the Navy designed a fleet around a maintenance schedule that assumed shipyard throughput would improve, but decades of deferred infrastructure investment made things worse instead. The result is a vicious cycle: delayed maintenance reduces fleet readiness, which increases operational tempo on the remaining boats, which accelerates their wear, which increases future maintenance demand.

defense

The United States has only two shipyards capable of building nuclear submarines: Huntington Ingalls Industries in Newport News, Virginia, and General Dynamics Electric Boat in Groton, Connecticut. Between them, they currently produce roughly 1.2 to 1.4 Virginia-class submarines per year, well below the Navy's stated requirement of two per year to maintain a 66-boat attack submarine fleet. The Navy's own projections show the attack submarine fleet dropping to as few as 46 boats by the mid-2030s.

This matters because the submarine force is the single most survivable leg of the nuclear triad and the primary tool for undersea intelligence, surveillance, and reconnaissance. Every boat short of the requirement means gaps in combatant commander coverage — fewer submarines available to track adversary ballistic missile submarines, fewer available to launch Tomahawk strikes, and fewer available to support special operations. The Pacific theater alone demands more attack submarines than the entire fleet will soon have.

The workforce is the binding constraint. Submarine construction requires highly specialized welders, pipefitters, and nuclear-qualified technicians who take 3-5 years to train to full productivity. Both shipyards lost thousands of skilled workers during post-Cold War drawdowns in the 1990s and have struggled to rebuild. Electric Boat alone needs to hire roughly 18,000 workers over the next decade while competing with commercial employers offering comparable wages without security clearance requirements or mandatory overtime.

The problem persists because nuclear submarine construction is a monopsony — the US Navy is the only customer, so the industrial base scales to match Navy procurement budgets, not strategic need. When Congress cut submarine procurement in the 1990s peace dividend, the workforce and supplier base shrank accordingly. Rebuilding that capacity takes a decade or more, and there is no commercial market to sustain it in the interim. The AUKUS agreement to build SSN-AUKUS boats for Australia adds further pressure to an already overstretched industrial base.

Structurally, the two-shipyard model creates a single point of failure with no surge capacity. Unlike aircraft or vehicle manufacturing, you cannot simply open a third submarine yard — the specialized facilities, tooling, and workforce represent billions in capital investment and years of regulatory qualification. The result is a slow-motion crisis where strategic requirements outpace industrial reality.
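
The arithmetic connecting build rate to fleet size is simple: in steady state, fleet size is roughly build rate times service life. A sketch follows, assuming an approximately 33-year service life (a round-number assumption consistent with public Virginia-class figures):

```python
# Steady-state fleet size ~= build rate x service life, assuming boats
# retire on schedule and none are lost or transferred.
service_life_years = 33  # assumed approximate service life

for build_rate in (1.2, 1.4, 2.0):
    print(f"{build_rate:.1f} boats/yr -> ~{build_rate * service_life_years:.0f}-boat fleet")
```

A 1.2-1.4 boat-per-year rate supports a fleet in the low-to-mid 40s, which is roughly where the Navy's mid-2030s projection lands; holding 66 boats requires sustaining the full two-per-year rate indefinitely.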

defense

Military tactical communications software, including radio firmware, encryption updates, mission command applications, and electronic warfare libraries, can only be updated through wired connections at fixed facilities. A forward-deployed company operating 50+ km from its base cannot receive software patches, threat library updates, or critical vulnerability fixes for its radios, EW systems, and battle management tools. The update process requires physically bringing each device to a secure facility, connecting it to a SIPR terminal, downloading the update, and verifying the installation.

This means that when a critical vulnerability is discovered in a tactical radio's firmware, or when the enemy deploys a new electronic warfare technique that requires an updated threat library, forward units continue operating with the compromised or outdated software for days or weeks until they can rotate back to base. During the 2022 Army Cyber Command exercise, it took an average of 14 days from patch release to full deployment across a brigade combat team's tactical systems, compared to the 24-72 hour patching timelines that commercial enterprises maintain.

The operational consequence is that adversaries can exploit known vulnerabilities in fielded systems faster than defenders can patch them. If a signals intelligence unit intercepts a new adversary waveform, the EW library update that would allow friendly systems to detect and jam it cannot reach the frontline units that need it most. Similarly, when a zero-day vulnerability is found in a mission command application, every fielded instance remains exploitable until physically touched by a technician, creating a window of exposure measured in weeks.

This persists because the military's software certification and distribution infrastructure was built for garrison environments with reliable wired networks. The Army Software Logistics Center and equivalent organizations certify updates, sign them cryptographically, and push them to repositories that are only accessible from fixed facilities. Over-the-air software distribution for classified systems faces the same spectrum and bandwidth limitations that constrain all tactical communications, plus additional security certification requirements that no program has fully satisfied.

The structural barrier is that the DoD's software assurance process treats every update as a potential supply chain attack vector, requiring extensive testing and certification before distribution. This caution is justified given the consequences of compromised military software, but the resulting process is so slow and facility-dependent that it creates a different security vulnerability: the inability to patch known flaws in a tactically relevant timeframe. The tension between supply chain security and rapid patching has no institutional owner empowered to make the tradeoff.
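
Notably, the cryptographic verification step itself is cheap and can run entirely offline at the receiving device; the bottleneck is the certification and distribution pipeline, not the math. Below is a minimal sketch of checking a signed update bundle against a pre-provisioned Ed25519 public key, using the Python `cryptography` library. The file names and key-handling scheme are hypothetical, not any DoD system:

```python
# Sketch: offline verification of a signed update bundle at the edge.
# Illustrative only -- trust anchors, rollback protection, and transport
# are all out of scope and would dominate a real design.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_update(pubkey_raw: bytes, bundle: bytes, signature: bytes) -> bool:
    """Return True only if the bundle carries a valid Ed25519 signature."""
    try:
        Ed25519PublicKey.from_public_bytes(pubkey_raw).verify(signature, bundle)
        return True
    except InvalidSignature:
        return False

# Hypothetical usage with files delivered over any untrusted transport:
# with open("patch.bin", "rb") as f, open("patch.sig", "rb") as s:
#     ok = verify_update(TRUSTED_PUBKEY, f.read(), s.read())
```

The point of the sketch is that end-to-end signing means the delivery path does not need to be trusted, which is the usual argument for allowing distribution over whatever links a forward unit already has.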

defense

The fiber optic backbone infrastructure on most major U.S. military installations was installed in the late 1990s during the first wave of base network modernization. These single-mode fiber runs, many using outdated connector standards and lacking modern splicing techniques, now fail at rates that would be unacceptable for any commercial ISP. Fort Liberty (formerly Bragg), one of the Army's most critical power-projection platforms, averages 2-3 fiber cuts per week from aging cables, corroded splice closures, and construction crews hitting unmapped conduits.

Each fiber cut takes the base network operations center 4-12 hours to locate and repair because as-built documentation from the 1990s installations is incomplete or inaccurate. Technicians must physically trace cables through underground conduit to find the break. During the outage, every building on the affected segment loses SIPRNet and NIPRNet connectivity. A single cut at Fort Liberty in 2022 took down network access for the 82nd Airborne Division headquarters for 9 hours during a no-notice deployment readiness exercise.

The cascading impact is that commanders cannot trust their garrison network to be available when they need it most. Deployment orders, personnel tracking, logistics requests, and intelligence products all flow over these aging fiber links. When the network goes down, units revert to phone calls and physical runners to push information, exactly the 20th-century methods that digital networks were supposed to replace. The unpredictability of outages means that every headquarters maintains manual backup procedures that consume staff time and attention even when the network is functioning.

This persists because military construction (MILCON) funding prioritizes visible infrastructure like barracks, training facilities, and motor pools. Underground fiber optic cable, which nobody sees, competes poorly for limited MILCON dollars. Installation network modernization projects are funded through the Defense Information Systems Agency (DISA) or service-specific IT budgets, which are chronically underfunded compared to weapons systems. A fiber refresh for a single major installation costs $50-100 million, and there are over 400 DoD installations worldwide.

The root cause is that the DoD treats installation networks as facilities infrastructure rather than warfighting capability. Fiber optic cables on a base are managed by the Directorate of Public Works alongside roads and plumbing, not by the signal community that depends on them. This organizational misalignment means that the people who suffer from network outages have no authority over the budget to fix the cables, and the people who control the facilities budget do not experience the operational impact of outages.
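
Part of the 4-12 hour repair time is fault localization, which standard OTDR (optical time-domain reflectometer) gear reduces to arithmetic: the instrument times the reflection from the break and converts it to distance along the fiber. A sketch with a typical group index follows; note that this gives cable distance, not conduit route, which is why accurate as-built maps still matter:

```python
# OTDR fault localization: distance = (c / n) * round_trip_time / 2
C = 299_792_458   # speed of light in vacuum, m/s
N_GROUP = 1.468   # typical group index for single-mode fiber at 1550 nm

def break_distance_m(round_trip_s: float) -> float:
    """Distance along the fiber to a reflective fault."""
    return (C / N_GROUP) * round_trip_s / 2

print(f"{break_distance_m(49e-6) / 1000:.1f} km")  # a 49 us echo -> ~5.0 km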

defense

The FCC's ongoing reallocation of mid-band spectrum (3.1-3.55 GHz) for commercial 5G services directly conflicts with military radar and communications systems that have operated in these bands for decades. The AN/SPN-43 aircraft marshaling radar used on Navy carriers, multiple Air Force airborne early warning radars, and Army ground-based air defense radars all operate in the 3.1-3.55 GHz range. The DoD has been directed to vacate or share these frequencies to make room for commercial 5G; the 3.45 GHz auction alone raised roughly $22 billion, on top of the $81 billion that carriers paid for the adjacent C-band (3.7-3.98 GHz).

The immediate problem is that these radar systems cannot simply be retuned to different frequencies. Their transmitters, receivers, antennas, and signal processing are physically designed for specific bands. Moving a radar to a new frequency requires redesigning the hardware, which for military systems means a new acquisition program with 5-10 year timelines and billions in costs. Meanwhile, the commercial 5G buildout is happening now, creating interference zones around military installations and operating areas.

The downstream impact is that Navy carriers approaching ports with active 5G networks experience interference that degrades their precision approach radars, forcing pilots to use less accurate landing systems in poor weather. Air Force ranges co-located near growing cities find their radar testing increasingly constrained. The Army's Patriot and THAAD radar systems, which operate in nearby bands, face potential interference as 5G buildouts expand near military bases.

This persists because spectrum is governed by two separate authorities with conflicting mandates. The FCC maximizes commercial spectrum value and the revenue from auctions. The NTIA advocates for federal spectrum needs but lacks veto power over FCC auctions. When the C-band auction generated $81 billion, the economic pressure to reallocate military spectrum became politically irresistible. DoD was offered relocation funds but the money is insufficient to redesign and replace entire radar families within the commercial buildout timeline.

The structural issue is that spectrum in the mid-band is finite and the commercial demand for 5G bandwidth is enormous. Military systems were allocated these bands in an era when commercial wireless demand was negligible. Now that mid-band spectrum fetches on the order of a dollar per MHz-POP at auction (tens of billions of dollars per band), the economic incentive to push military systems out overwhelms the national security argument for keeping them. There is no institutional framework for valuing military spectrum usage in economic terms that can compete with commercial auction prices.
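
The economics are stark when reduced to the industry's standard $/MHz-POP metric. A sketch using round numbers for the C-band auction follows (bandwidth and population coverage are approximate):

```python
# Rough $/MHz-POP for the C-band auction (Auction 107), using round numbers.
gross_proceeds = 81e9   # dollars
bandwidth_mhz = 280     # C-band spectrum sold (3.7-3.98 GHz)
pops = 306e6            # approximate population covered

print(f"${gross_proceeds / (bandwidth_mhz * pops):.2f} per MHz-POP")  # ~$0.94
```

Roughly a dollar per MHz-POP, multiplied across hundreds of megahertz and hundreds of millions of people, is how a single auction reaches $81 billion — a sum no military relocation budget competes with.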

defense

U.S. military electronic warfare (EW) units are prohibited from transmitting jamming signals on American soil because the Federal Communications Commission regulates all electromagnetic spectrum usage domestically, including on military installations. The Communications Act of 1934 makes it illegal to willfully interfere with licensed radio communications. While narrow exemptions exist for specific test ranges like White Sands, the vast majority of Army posts, where EW units are based and train daily, cannot conduct live jamming exercises.

This means that EW soldiers and officers deploy to combat having never actually jammed an enemy signal in a realistic training environment. They study theory, run simulations on laptops, and practice antenna placement, but they have never experienced the feedback loop of detecting a signal, targeting it, jamming it, and assessing the effect. It is the equivalent of training infantry soldiers who have never fired a live round. An Army EW officer at the National Training Center described it as showing up to a gunfight having only played laser tag.

The consequence is that when EW capabilities are most needed, against near-peer adversaries with sophisticated communications and radar systems, U.S. EW operators lack the muscle memory and practical experience to employ their systems effectively. During exercises at JMRC in Germany, where some jamming is permitted under host-nation agreements, U.S. EW units consistently underperform because operators are encountering live electromagnetic effects for the first time. Mistakes in the training environment are learning opportunities; mistakes in combat mean the enemy's communications stay up and their fires remain accurate.

This problem persists because the FCC's statutory authority over spectrum is absolute within U.S. borders, and the political cost of changing the law is perceived as higher than the military readiness risk. Every time DoD has proposed expanded jamming authorities on military ranges, commercial spectrum users, broadcasters, and cellular carriers lobby against it, fearing interference with their signals. The National Telecommunications and Information Administration, which manages federal spectrum, has been unable to broker a compromise.

The root cause is that spectrum governance in the United States was designed for a peacetime commercial economy, not for wartime military readiness. The 1934 Act predates electronic warfare as a concept. There is no legal framework that balances commercial spectrum protection with the military's need to train on the electromagnetic capabilities it will use in combat. Other nations, notably Russia and China, train their EW forces extensively in realistic jamming environments on their own territory, creating a readiness gap that widens every year.

defense

The military's MILSATCOM constellation provides approximately 40 Gbps of total throughput for all DoD users worldwide. Studies of actual traffic on classified networks like SIPRNet over satellite links show that 90-95% of bandwidth is consumed by administrative traffic: email attachments, PowerPoint briefings, SharePoint synchronization, and video teleconferences for staff meetings. The remaining 5-10% is available for actual tactical data such as sensor feeds, fire missions, and intelligence products.

This means that a platoon in contact trying to push a drone video feed to their battalion TOC is competing for bandwidth with a general's 80-slide quarterly training review. The tactical traffic almost always loses because priority-of-service settings on military routers are rarely configured correctly, and when they are, staff officers at higher echelons override them to ensure their VTCs work smoothly. A 2020 Army study found that during combat training center rotations, tactical units received an average of 256 Kbps of effective bandwidth, roughly equivalent to a 2003-era DSL connection, while division and corps headquarters consumed megabits for administrative functions.

The operational impact is that the Army's vision of sensor-to-shooter data flows measured in seconds is physically impossible on the current network. A full-motion video feed requires 2-4 Mbps. Intelligence products with imagery can be 50-100 MB each. When these must traverse a satellite link already saturated by staff email, they queue for minutes or hours. Commanders then revert to voice-only reporting, losing the data richness that modern sensors provide.

This persists because there is no institutional mechanism to enforce bandwidth discipline. Every headquarters believes its administrative traffic is essential. The network operations centers that manage bandwidth allocation report to signal officers who are outranked by the staff principals demanding VTC connectivity. Technically, quality-of-service policies exist, but they require constant manual tuning and are overridden by senior officers who call the NOC and demand priority for their traffic.

The structural cause is that the military built its enterprise IT infrastructure (email, SharePoint, VTC) to run on the same network as its warfighting systems, and then gave the enterprise side no bandwidth constraints. In the commercial world, companies separate corporate IT from operational technology networks. The DoD merged them onto a single backbone and then wonders why PowerPoint crowds out targeting data.
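
The queueing claim reduces to transfer-time arithmetic. A sketch comparing the ~256 Kbps effective tactical rate cited above against a 4 Mbps uncontended link (product sizes are illustrative):

```python
# Transfer time for representative products at two effective data rates.
def transfer_time_s(size_bytes: float, rate_bps: float) -> float:
    return size_bytes * 8 / rate_bps

for name, size in [("10 MB imagery annex", 10e6), ("75 MB intel product", 75e6)]:
    print(f"{name}: {transfer_time_s(size, 256e3) / 60:5.1f} min at 256 Kbps, "
          f"{transfer_time_s(size, 4e6):4.0f} s at 4 Mbps")
```

A 75 MB product takes about 39 minutes at 256 Kbps versus 2.5 minutes at 4 Mbps, and a 2-4 Mbps full-motion video stream does not fit in 256 Kbps at all: it is not slow, it is impossible.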

defense

Military encrypted communications require cryptographic keys loaded onto radios via fill devices like the AN/PYQ-10 Simple Key Loader (SKL). Distributing new keys across a brigade combat team of 4,000+ soldiers requires physical transport of fill devices to every radio operator at every echelon, a process that takes 48-72 hours under ideal conditions. In contested environments where courier movement is restricted, key distribution can take a week or more.

The immediate pain is that key rollovers, which doctrine says should happen regularly to maintain security, are operationally so disruptive that units delay them as long as possible. Many units in garrison operate on the same COMSEC keys for 30-60 days instead of the prescribed shorter periods. In deployment, key changes are timed to operational pauses because the unit essentially goes communications-dark during the transition as radios are taken offline to load new keys.

The catastrophic risk is that a single lost or captured SKL compromises every key stored on it, which can include keys for an entire battalion or brigade. When a fill device goes missing, every radio net that used those keys must be considered compromised, requiring an emergency rekey of potentially thousands of radios. The 2017 theft of an SKL from a vehicle in Germany triggered a multi-week emergency rekey across an entire division, consuming over 10,000 person-hours and degrading operational readiness during a critical NATO exercise.

This persists because the military's key management infrastructure was designed in the 1980s and assumes physical distribution via trusted couriers. Over-the-air rekeying (OTAR) exists but is not universally fielded, works unreliably in contested electromagnetic environments, and itself requires an initial key exchange that circles back to physical distribution. The NSA controls COMSEC key generation and distribution timelines, and their processes are optimized for security compliance rather than operational speed.

Structurally, the tension between information security and operational agility has no institutional resolution mechanism. The NSA's equities prioritize zero compromise, which means physical control of keying material. Operational commanders' equities prioritize speed and flexibility. There is no authority that can balance these competing demands and mandate an over-the-air or network-based key distribution system that is both secure enough for NSA and fast enough for combat operations.

defense

Type-1 encrypted tactical radios such as the AN/PRC-163 and AN/PRC-152A operate in the VHF/UHF bands (30-512 MHz) and rely on line-of-sight propagation. The moment a dismounted soldier enters a concrete or rebar-reinforced building, signal attenuation of 20-40 dB effectively kills the radio link. The encrypted sync handshake that must be re-established when signal returns takes 8-15 seconds, during which the soldier has zero communication with their squad leader or command element.

In urban combat, where 80% of future conflicts are projected to occur, this means soldiers clearing a building floor-by-floor are communicating blind. A team leader on the third floor cannot talk to their squad leader outside. The platoon leader cannot reach the company commander to report contact, request fire support, or coordinate casualty evacuation. Units compensate by posting relay soldiers at windows and doorways, pulling trigger-pullers off the fighting line to serve as human repeaters.

The consequences compound rapidly. Without communications inside buildings, units cannot coordinate simultaneous breach of multiple rooms, leading to sequential clearing that is slower and more dangerous. Friendly units on different floors cannot deconflict fires, increasing fratricide risk. In the 2004 Battle of Fallujah and the 2016-2017 Battle of Mosul, after-action reviews repeatedly identified in-building communication failure as a primary cause of coordination breakdowns and friendly fire incidents.

This persists because the physics of VHF/UHF propagation through dense urban structures are fundamentally hostile to these frequencies. The military knows this but has not fielded a ubiquitous in-building communication solution. Mesh networking radios like the MANET-based PRC-167 exist but are not yet at scale, and they still struggle with multi-floor penetration. Commercial LTE-based solutions work but are not approved for classified traffic.

The root cause is a procurement system that tests and certifies radios in open-terrain environments. Operational testing at Aberdeen or Yuma proves radios work at range in the desert but never validates performance inside a concrete apartment block. The test criteria do not include urban penetration metrics, so radios that fail in exactly the environment where they will be used continue to pass acceptance testing and get fielded.
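
A simplified link budget shows how quickly penetration loss closes a link that works fine outdoors. The sketch below adds a per-wall penalty to free-space path loss; the transmit power, receiver sensitivity, and the 30 dB per reinforced wall or floor are assumed round numbers drawn from the 20-40 dB range cited above:

```python
import math

def fspl_db(dist_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 32.44 + 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz)

tx_dbm, freq_mhz, dist_km = 30.0, 400.0, 0.3  # 1 W UHF radio at 300 m (assumed)
sensitivity_dbm = -110.0                       # assumed receiver threshold

for walls in (0, 1, 2, 3):  # reinforced walls/floors in the path
    rx = tx_dbm - fspl_db(dist_km, freq_mhz) - walls * 30.0
    print(f"{walls} walls: rx {rx:7.1f} dBm, margin {rx - sensitivity_dbm:6.1f} dB")
```

Under these assumptions, two reinforced walls or floors leave only a few dB of margin and a third kills the link outright, which matches the floor-by-floor experience described above.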

defense

A standard Army Signal Company deploys Satellite Transportable Terminals (STTs) and tropospheric scatter systems that require 4-6 hours to set up, align, and establish network connectivity. These terminals emit detectable electromagnetic signatures the moment they begin transmitting. Near-peer adversaries like Russia and China have demonstrated the ability to geolocate and target active emitters within 15-20 minutes using electronic intelligence and counter-battery radar linked to precision strike.

This means that a Signal team spends an entire work shift erecting a communications node that an adversary can destroy before it completes its first full data sync. The unit that depends on this node, typically a brigade or division headquarters, loses its primary command-and-control link just as operations begin. The Signal soldiers then must either set up again at a new location, consuming another 4-6 hours plus movement time, or the supported unit must operate in communications blackout.

The operational impact is devastating for the Army's concept of multi-domain operations, which assumes continuous high-bandwidth connectivity between echelons. Division and corps headquarters need to move every 2-4 hours to avoid being targeted, but their communications infrastructure cannot keep up. Commanders face an impossible choice: stay connected but stationary and get killed, or stay alive but disconnected and lose the ability to coordinate subordinate units.

This problem persists because military satellite terminals were designed for the counterinsurgency era, where the threat of precision strike against rear-area communications nodes was negligible. The Taliban could not geolocate and strike a satellite terminal. The entire equipment fielding pipeline, from STT to WIN-T to the newer Integrated Tactical Network, optimized for bandwidth and reliability, not for rapid displacement and low probability of intercept.

The structural cause is that communications equipment acquisition cycles run 10-15 years from requirement to fielding. The requirements for current terminals were written in the early 2010s against a COIN threat. By the time the equipment reached units, the threat had shifted to near-peer adversaries with sophisticated electronic warfare and long-range precision fires. There is no mechanism to rapidly update fielded hardware to meet the current threat, and the replacement programs are themselves on decade-long timelines.

defense

Consumer-grade GPS jammers available online for under $50 can deny L1-band GPS signals within a 5-10 km radius. While military receivers use encrypted M-code on the L1 and L2 bands, the transition to M-code-capable receivers is incomplete. Thousands of fielded military systems, from JDAMs to Blue Force Trackers, still depend on legacy GPS signals that are trivially jammed. A truck driver's illegal jammer near Newark Airport in 2013 disrupted the FAA's ground-based augmentation system daily for months before being caught.

The immediate consequence is that units lose position, navigation, and timing (PNT) data during operations. Without GPS, precision-guided munitions become unguided, dismounted infantry cannot call accurate grid coordinates for fire missions, and logistics convoys get lost. During exercises against near-peer adversary simulations at the National Training Center, units that lost GPS saw a 40-60% degradation in fires accuracy and a doubling of navigation errors leading to fratricide risk.

This cascades into a deeper problem: GPS is not just navigation but the military's primary timing source. When GPS timing is jammed, frequency-hopping radios lose synchronization and drop off the net. Blue Force Tracking systems cannot update positions. Encrypted data links that depend on precise time-of-day keys fail to authenticate. A single $50 jammer does not just deny navigation; it degrades the entire digital command-and-control fabric.

The problem persists because the M-code GPS receiver rollout has been delayed repeatedly. The Ground-Based GPS Receiver Application Module (GB-GRAM) program was supposed to field M-code receivers starting in 2018 but has slipped to the late 2020s for full fielding. Meanwhile, the military continues purchasing platforms and munitions with legacy GPS receivers because M-code modules are not yet available in quantity. The installed base of vulnerable receivers grows larger every year.

Structurally, the Pentagon treats GPS modernization as a space-segment program (formerly Air Force Space Command, now the Space Force) but the users are across all services. No single program executive officer owns the end-to-end problem from satellite constellation to user receiver. Each platform program office makes independent decisions about which GPS receiver to install, and most choose the cheapest legacy option to stay within budget.
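
The jammer threat is link-budget arithmetic: the GPS signal arrives at roughly -130 dBm, so even a 1 W jammer enjoys an enormous power advantage. A sketch using free-space path loss at the L1 frequency follows (the jammer power and propagation model are simplifying assumptions):

```python
import math

def fspl_db(dist_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 32.44 + 20 * math.log10(dist_km) + 20 * math.log10(freq_mhz)

GPS_SIGNAL_DBM = -130.0   # nominal received GPS power at the surface
L1_MHZ = 1575.42
jammer_eirp_dbm = 30.0    # 1 W jammer (assumed)

for d_km in (1, 5, 10):
    j = jammer_eirp_dbm - fspl_db(d_km, L1_MHZ)
    print(f"{d_km:2d} km: jammer {j:6.1f} dBm, J/S = {j - GPS_SIGNAL_DBM:5.1f} dB")
```

Receivers typically lose lock once the jammer-to-signal ratio exceeds a few tens of dB, a threshold this sketch exceeds comfortably even at 10 km; terrain masking and antenna nulling help in practice, but the baseline asymmetry is why a $50 device can deny a 5-10 km radius.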

defense

The U.S. Army, Navy, Air Force, and Marine Corps each operate tactical radio systems built on incompatible waveform standards. An Army squad using AN/PRC-163 radios cannot natively communicate with a Marine fire team on AN/PRC-158 without going through a gateway or relay, even when they are standing 50 meters apart in the same battlespace. Joint operations require dedicated liaison officers whose sole job is to bridge these communication gaps manually.

This matters because in a firefight, seconds determine survival. When a ground unit needs close air support from a different branch, the request must traverse multiple radio nets, get retransmitted through gateway devices, and often requires voice relay by a human operator. The Government Accountability Office found that during joint exercises, cross-service communication delays averaged 3-7 minutes for routine coordination messages. In combat, a 3-minute delay in calling for fire support or medevac can mean casualties bleed out or friendly positions get overrun.

The deeper pain is that this kills the entire concept of Joint All-Domain Command and Control (JADC2), the Pentagon's flagship strategy for fighting near-peer adversaries. If a soldier cannot talk to a sailor without a $200,000 gateway box and a specialist to operate it, then the vision of seamless multi-domain operations is fiction. Commanders end up falling back to deconflicting by time and space rather than truly integrating forces, which surrenders the speed advantage that JADC2 is supposed to deliver.

This persists because each service branch controls its own acquisition budget and has decades of institutional investment in proprietary waveform families. The Army's SINCGARS heritage, the Navy's LINK-16 ecosystem, and the Air Force's HAVE QUICK series each have massive installed bases and training pipelines. No service wants to abandon its waveform investment, and the Joint Tactical Networking Center lacks the authority to force convergence. Defense contractors also profit from selling branch-specific radios and then selling gateway products to bridge them.

The structural root cause is that the DoD acquisition system funds communications programs by service rather than by joint capability. Until procurement authority for tactical radios is centralized or a true software-defined radio standard is mandated and enforced with teeth, each branch will continue optimizing for its own needs and interoperability will remain an afterthought bolted on with expensive middleware.

defense

Non-nuclear electromagnetic pulse weapons — including flux compression generators (FCGs), vircators (virtual cathode oscillators), and high-power microwave (HPM) devices — can be built for $5,000-50,000 using components available on the open market. Detailed design information has been published in open scientific literature since the 1990s (Prishchepenko, 1995; Benford, 2007). These weapons can disable electronics within a 100-meter to 1-km radius without any nuclear material, making them attractive to terrorists and non-state actors.

Despite this, international export controls on EMP weapon components are essentially nonexistent for non-nuclear designs. The Wassenaar Arrangement covers some HPM components but enforcement is inconsistent, and key components like explosively driven power supplies are dual-use (mining, demolition). A competent electrical engineer with $20,000 and access to published literature could build a vehicle-mounted EMP device capable of disabling a data center, hospital, or airport terminal from a parking lot.

This persists because arms control frameworks were designed around nuclear weapons and missiles — the delivery systems of Cold War EMP. Non-nuclear EMP weapons do not contain fissile material, are not covered by the Nuclear Non-Proliferation Treaty, and fall outside the Missile Technology Control Regime (MTCR) because they are not missile delivery systems. The U.S. has no domestic law specifically criminalizing the construction of a non-nuclear EMP weapon. The gap between weapon accessibility and regulatory frameworks widens every year as component costs fall and designs improve.

defense

Modern vehicles contain 70-150 electronic control units (ECUs) managing everything from engine timing and braking (ABS/ESC) to power steering and airbag deployment. These ECUs are connected by CAN bus networks with no electromagnetic shielding beyond basic EMC compliance. No automaker tests vehicles against EMP field strengths, and no NHTSA regulation requires it. A HEMP event or localized EMP weapon could simultaneously disable every vehicle in a metro area.

The immediate consequence is not just immobility — it is danger. A vehicle traveling at 70 mph that loses its ECU loses power steering, ABS, electronic stability control, and engine power simultaneously. Drive-by-wire throttle and electric power steering have no mechanical backup in most vehicles manufactured after 2015. Mass multi-vehicle accidents on highways would occur at the moment of the pulse, with emergency vehicles themselves disabled. Subsequent emergency response is paralyzed because ambulances, fire trucks, and police vehicles use the same unshielded electronics.

This persists because automakers design to UNECE Regulation 10 (EMC for vehicles), which tests at field strengths of 30-200 V/m, roughly 250-1,700x below the ~50 kV/m E1 field of a HEMP event. Adding military-grade shielding to every ECU would add $500-2,000 per vehicle and 20-50 lbs of weight, which conflicts with fuel efficiency mandates. NHTSA has never issued a notice of proposed rulemaking on EMP. The auto industry's position is that EMP is a national defense issue, not a vehicle safety issue.

automotive

Extra-high-voltage (EHV) transformers — the 345 kV to 765 kV units that form the backbone of the U.S. bulk power system — are custom-built to order. There is no domestic U.S. manufacturer of EHV transformers above 345 kV; nearly all are made by Siemens (Germany), ABB (Switzerland), Hyundai (South Korea), or TBEA (China). Each unit takes 12-18 months to manufacture, weighs 200-800 tons, and requires specialized railcars (Schnabel cars, of which fewer than 30 exist in North America) for transport.

If a HEMP event destroys even 9 of the roughly 2,000 EHV transformers in the U.S. — which the EMP Commission considered a conservative scenario — the replacement timeline extends to 3-5 years because manufacturers cannot surge production. During that period, entire regions lose bulk power transmission. You cannot reroute around missing EHV transformers the way you reroute internet traffic; the physics of power flow means load must be shed, resulting in rolling or permanent blackouts for millions.

This persists because the transformer supply chain was optimized for efficiency, not resilience. Domestic manufacturing was offshored decades ago because foreign producers offered 20-40% cost savings. Congress authorized a Strategic Transformer Reserve in the FAST Act (2015), but as of 2025, the reserve contains fewer than 10 units — enough to replace routine failures, not a coordinated attack. The economics of maintaining surge manufacturing capacity for a once-in-a-century event are unfavorable for any single company.

energy+20 views

Municipal water treatment and wastewater systems across the U.S. are controlled by SCADA (Supervisory Control and Data Acquisition) systems built on unshielded PLCs, RTUs (remote terminal units), and radio/cellular telemetry links. EPA regulations and AWWA (American Water Works Association) standards have addressed cybersecurity since 2018 (America's Water Infrastructure Act) but contain zero requirements for electromagnetic pulse resilience. An EMP event that damages SCADA controllers means water treatment plants cannot monitor chlorine levels, adjust pH, manage pump stations, or detect contamination. Raw sewage backup begins within hours as lift stations fail. Within 2-3 days without treated water, a city faces a public health emergency — hospitals cannot sterilize, dialysis centers shut down, and fire departments lose hydrant pressure. The EPA's own worst-case scenarios assume grid failure with SCADA intact; the possibility that SCADA itself is destroyed is not modeled. This persists because water utilities are mostly municipal, operating on razor-thin budgets funded by ratepayers who resist rate increases. The average U.S. water utility serves fewer than 10,000 people and has an annual capital budget under $1 million. EMP hardening of a SCADA system costs $100,000-500,000 — often more than the utility's entire annual technology budget. The EPA has explicitly chosen not to regulate EMP resilience, deferring to DHS sector-specific guidance that is advisory only.

infrastructure+20 views

A growing market of consumer "EMP protection" devices — surge protectors, whole-home EMP shields, Faraday bags — is sold to preparedness-minded consumers with claims of HEMP protection. Most of these devices are essentially MOV (metal oxide varistor) surge protectors or SPDs (surge protective devices) that clamp on microsecond timescales. The E1 component of a HEMP event rises to its peak in roughly 5 nanoseconds — two to three orders of magnitude faster than these devices can respond (see the sketch below). Consumers spend $200-2,000 on devices that provide a false sense of security. When they rely on these devices in an actual EMP scenario — whether from a HEMP, a non-nuclear EMP weapon, or a severe geomagnetic storm — their electronics are destroyed anyway. Worse, the false confidence discourages people from taking actually effective measures like maintaining analog backups, storing spare electronics in military-spec shielded enclosures, or hardening their homes' electrical entry points with proper waveguide-below-cutoff techniques. This persists because there is no FTC enforcement against EMP protection claims — the event has never occurred, so no consumer can prove the product failed. Companies cite MIL-STD-188-125 compliance without undergoing actual military testing. The prepper market is driven by fear, not engineering literacy, and Amazon's review system cannot distinguish between "this arrived in nice packaging" and "this would survive 50 kV/m E1."
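The timescale mismatch is easy to make concrete with the unclassified IEC 61000-2-9 double-exponential model of the E1 waveform. A minimal sketch; the 1-microsecond clamp response is an assumed round number for an MOV-based SPD, not a measured device spec:

```python
import numpy as np

# Unclassified IEC 61000-2-9 double-exponential model of the HEMP E1 field:
# E(t) = E0 * k * (exp(-a*t) - exp(-b*t)), with published standard parameters.
E0, k = 50e3, 1.3    # peak field (V/m) and normalization constant
a, b = 4.0e7, 6.0e8  # decay and rise rates (1/s)

t = np.linspace(0.0, 2e-6, 800_001)  # 0 to 2 microseconds
E = E0 * k * (np.exp(-a * t) - np.exp(-b * t))
dt = t[1] - t[0]

i_pk = int(np.argmax(E))
print(f"peak: {E[i_pk]/1e3:.1f} kV/m at {t[i_pk]*1e9:.1f} ns")

# 10-90% rise time on the leading edge (E is monotonic up to the peak)
rise = E[: i_pk + 1]
t10 = t[np.searchsorted(rise, 0.1 * E[i_pk])]
t90 = t[np.searchsorted(rise, 0.9 * E[i_pk])]
print(f"10-90% rise time: {(t90 - t10)*1e9:.2f} ns")

# Cumulative fluence (integral of E^2): how much of the pulse has already
# passed before an assumed 1-microsecond clamp response engages.
fluence = np.cumsum(E**2) * dt
print(f"fluence delivered by 50 ns: {100*fluence[t <= 50e-9][-1]/fluence[-1]:.1f}%")
print(f"fluence delivered by 1 us:  {100*fluence[t <= 1e-6][-1]/fluence[-1]:.1f}%")
```

The pulse peaks in about 5 ns and is essentially over within tens of nanoseconds; by the time a microsecond-class clamp reacts, there is nothing left to clamp.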

consumer+10 views

Modern financial exchanges, payment networks, and banking systems depend on GPS-disciplined oscillators for the microsecond-precision timestamps required by SEC Rule 613 (the Consolidated Audit Trail) and MiFID II. GPS receivers are among the most EMP-vulnerable components in any system — their received signals arrive at roughly -130 dBm, below the ambient noise floor — and even a modest EMP or directed electromagnetic weapon could blind or spoof them across a metro area. If GPS timing fails across New York or Chicago, the Consolidated Audit Trail cannot sequence trades, the National Securities Clearing Corporation cannot reconcile settlements, and the Fedwire Funds Service loses synchronization. Exchanges must halt because they cannot prove trade ordering. The SEC mandates 50-microsecond timestamp accuracy; without GPS, the holdover oscillators in data centers drift past that budget within hours (see the sketch below), making compliance impossible. A trading halt cascading across equities, options, and futures would freeze trillions in liquidity. This persists because the financial industry treats GPS as free, reliable infrastructure — like gravity. Alternative timing sources (eLoran, fiber-optic White Rabbit, chip-scale atomic clocks) exist but cost $50,000-500,000 per installation and are not required by any regulator. FINRA and the SEC mandate timestamp accuracy but do not mandate GPS-independent backup timing. The assumption is that GPS is a military-protected utility that will always be available.
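A back-of-envelope holdover calculation shows how quickly the timestamp budget is consumed once GPS discipline is lost. A minimal sketch assuming worst-case linear drift; the stability figures are representative catalog-class numbers, not specs for any particular product:

```python
# How long a free-running oscillator stays inside a timestamp budget
# after losing GPS discipline, assuming worst-case linear drift:
# offset(t) = (df/f) * t.
BUDGET = 50e-6  # seconds, the 50-microsecond accuracy figure cited above

oscillators = {
    "TCXO (~1e-7 fractional frequency)": 1e-7,
    "OCXO (~1e-9 fractional frequency)": 1e-9,
    "chip-scale atomic clock (~1e-11)":  1e-11,
}

for name, df_over_f in oscillators.items():
    holdover_s = BUDGET / df_over_f
    print(f"{name}: ~{holdover_s/3600:,.1f} h "
          f"until the {BUDGET*1e6:.0f} us budget is spent")
```

An ordinary TCXO blows the budget in minutes and a good OCXO in about half a day; only atomic holdover buys weeks, which is exactly the $50,000-500,000 upgrade no regulator requires.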

finance+20 views

Modern backup generators — the ones hospitals, 911 centers, data centers, and water treatment plants rely on — use electronic engine control modules (ECMs), automatic transfer switches (ATSs), and programmable logic controllers (PLCs) to start and manage power transfer. These electronic components are highly susceptible to EMP-induced voltage spikes. An EMP event that knocks out the main grid would simultaneously damage the backup generators meant to provide emergency power. The consequence is catastrophic: the entire premise of backup power — that it activates when the grid fails — collapses. A hospital loses mains power AND its generators in the same instant. Patients on ventilators, in surgery, or requiring dialysis die. Data centers lose both primary and backup power, causing cascading failures across cloud services, banking, and communications. The backup system that every emergency plan assumes will work becomes the single point of failure. This persists because generator manufacturers design for common failure modes (storms, grid outages, equipment failure) — none of which involve a simultaneous electromagnetic assault on the generator's own electronics. Hardening a generator's control electronics adds $15,000-50,000 per unit, and facility managers cannot justify the cost for an event outside their risk models. Joint Commission (formerly JCAHO) hospital accreditation and Uptime Institute data center certifications do not include EMP survivability requirements.

healthcare+20 views

There is no consumer-facing rating, label, or standard that indicates how resilient a device — phone, laptop, router, car ECU — is to electromagnetic pulse events. UL, FCC, and CE certifications test for electromagnetic compatibility (EMC) to prevent devices from interfering with each other, but they do not test for survivability under the intense field strengths of an EMP (50 kV/m for HEMP E1, versus the 3-10 V/m typical of commercial radiated-immunity testing). This means consumers, businesses, hospitals, and local governments have zero ability to make informed purchasing decisions about EMP resilience. A hospital cannot specify "EMP-rated" ventilators. A city cannot procure "EMP-rated" traffic control systems. When people buy Faraday bags or "EMP-proof" enclosures on Amazon, there is no standard to verify the manufacturer's claims — most are untested marketing. The structural reason is that EMP resilience testing is destructive and expensive. You cannot test a $1,200 phone for EMP survivability without destroying it, and manufacturers have no market incentive to invest in a rating for an event most consumers do not consider. The closest existing benchmarks are military specifications like MIL-STD-188-125, written for hardened fixed facilities and backed by threat data that is classified or export-controlled, which makes civilian adoption impractical.
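The scale of the gap can be expressed as required shielding effectiveness. A minimal sketch; the immunity levels are typical IEC 61000-4-3 values, and the 80 dB comparison point is the floor commonly cited from MIL-STD-188-125:

```python
import math

# Shielding effectiveness (SE) needed to bring an assumed 50 kV/m HEMP E1
# field down to levels commercial gear is actually immunity-tested against:
# SE(dB) = 20 * log10(E_incident / E_inside).
E_incident = 50e3             # V/m, nominal unclassified E1 peak
for E_inside in (10.0, 3.0):  # V/m, common IEC 61000-4-3 immunity levels
    se_db = 20 * math.log10(E_incident / E_inside)
    print(f"to reach {E_inside:>4.0f} V/m inside the enclosure: "
          f"{se_db:.0f} dB of shielding")
```

The answer, roughly 74-84 dB, is the same order as the 80 dB floor used in military facility hardening, and far beyond anything a consumer certification verifies.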

consumer+20 views

The U.S. has only a handful of EMP simulators — most notably the ATLAS-I (Trestle) facility at Kirtland AFB, which was decommissioned in 2004 and partially dismantled. The remaining simulators at Patuxent River and White Sands can produce partial E1, E2, or E3 waveforms, but none can generate a simultaneous full-spectrum HEMP pulse (E1 + E2 + E3) at the field strengths a real nuclear detonation at 200-400 km altitude would produce. This means the Department of Defense cannot fully validate whether hardened military systems actually survive a real HEMP event. Testing is done piecewise — E1 susceptibility in one chamber, E3 in another — and the results are then stitched together with models. But the coupling effects of simultaneous E1/E2/E3 pulses interacting with complex modern electronics (multilayer PCBs, nanometer-scale semiconductors) are nonlinear and cannot be accurately modeled. Any "EMP-hardened" label on military equipment therefore rests on incomplete testing, which means we do not actually know whether critical command-and-control systems would survive a real attack. This persists because building a full-spectrum HEMP simulator would cost an estimated $500 million or more, and the Comprehensive Nuclear-Test-Ban Treaty (CTBT) norm discourages any test that could be perceived as nuclear weapons development. Congress has repeatedly deferred funding. The last time ATLAS-I was fully operational was 1991.

defense+10 views

There is no widely adopted, enforceable certification standard for EMP hardening of large power transformers (LPTs) used in civilian electrical grids. The IEEE and IEC publish guidelines, but compliance is voluntary and almost no utility commissions require EMP resilience as a procurement criterion. This matters because LPTs are the backbone of high-voltage transmission. The slow E3 component of a high-altitude EMP (HEMP) drives quasi-DC currents through long transmission lines, the same mechanism as the geomagnetically induced currents (GICs) of a severe solar storm, and could push dozens of these transformers into saturation and thermal failure simultaneously; a rough scaling sketch follows below. Each LPT weighs 200-400 tons, costs $3-10 million, and takes 12-18 months to manufacture — with a global production queue that is already backlogged. If 20+ transformers fail at once in a region, there are no spares. The affected area faces months or years without grid power, cascading into water treatment failures, hospital shutdowns, food spoilage, and mass displacement. The reason this persists is economic: utilities operate under rate-of-return regulation and cannot justify the 15-30% cost premium for hardened transformers to public utility commissions that evaluate procurement on lowest cost. No regulator wants to mandate a cost increase for an event that has never happened on U.S. soil. The result is a collective action problem — everyone assumes someone else will pay for resilience, and no one does.
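A rough scaling sketch shows why the induced currents matter; every number here is an illustrative assumption chosen for order-of-magnitude purposes, not a grid study:

```python
# Quasi-DC current driven through a transformer neutral by the slow E3
# component: I ~ (geoelectric field x line length) / loop resistance.
e_field_v_per_km = 10.0   # assumed E3-class geoelectric field, V/km
line_length_km   = 100.0  # assumed length of an EHV transmission line
loop_resistance  = 2.0    # ohms: line + windings + grounding, assumed

gic_amps = e_field_v_per_km * line_length_km / loop_resistance
print(f"quasi-DC neutral current: ~{gic_amps:.0f} A")
# Large transformers can begin half-cycle saturation at tens of amps of
# quasi-DC or less, so even these round numbers sit far above that level.
```

Saturation drives harmonic heating in the windings and structural parts, which is the failure mode that destroys the transformer outright rather than merely tripping it offline.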

energy+20 views