Real problems worth solving

Browse frustrations, pains, and gaps that founders could tackle.

Military drone swarm concepts depend on two things: GPS for navigation and datalinks for inter-drone communication and coordination. Both are among the first things a peer adversary will jam. Russia demonstrated comprehensive GPS jamming in Ukraine starting in 2022, and communications jamming is a standard capability for any modern military. When a drone swarm loses GPS and comms simultaneously, the swarm's autonomous coordination algorithms — which assume reliable positioning and messaging — degrade catastrophically. Individual drones in a jammed swarm cannot maintain formation, cannot deconflict flight paths (risking mid-air collisions), cannot distribute targets among themselves, and cannot report back to the human operator. The swarm does not gracefully degrade into individually competent drones — it becomes a collection of confused aircraft flying on dead reckoning with no shared awareness. In testing, loss of GPS and comms simultaneously caused swarm mission effectiveness to drop by 60-80%, with some drones loitering aimlessly and others returning to base. The structural cause is that swarm algorithms were developed in benign test environments where GPS and comms are taken for granted. Academic swarm research — which most military programs build on — assumes perfect communication and positioning. Retrofitting resilience to denied environments requires fundamentally different algorithmic approaches: visual-inertial navigation instead of GPS, mesh networking with anti-jam waveforms instead of standard datalinks, and decentralized decision-making that works with intermittent or zero connectivity. These alternatives exist in research labs but are immature, computationally expensive (hard to run on small drone processors), and have not been integrated into any fielded swarm system. Programs like the Air Force's Collaborative Combat Aircraft are beginning to address this, but testing in realistic jamming environments remains extremely limited.
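
To make the algorithmic gap concrete, here is a minimal sketch (Python, with invented gains, ranges, and drone counts) of the decentralized style of coordination the research literature proposes: each drone steers using only neighbors its own sensors can see, so losing the datalink degrades coordination gradually instead of catastrophically.

```python
import numpy as np

def local_flocking_step(pos, vel, dt=0.1, sense_range=50.0, avoid_range=10.0):
    """One decentralized update: each drone uses only neighbors its onboard
    sensors can see (camera/radar), with no GPS and no datalink. pos and vel
    are (N, 2) arrays in a local frame, e.g. from visual-inertial odometry,
    which drifts over time but needs no external signal."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dist > 0) & (dist < sense_range)
        if not nbrs.any():
            continue                                  # isolated: dead reckoning
        align = vel[nbrs].mean(axis=0) - vel[i]       # match neighbor headings
        close = nbrs & (dist < avoid_range)
        repel = (pos[i] - pos[close]).sum(axis=0)     # deconflict flight paths
        new_vel[i] = vel[i] + dt * (0.5 * align + 0.2 * repel)
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(0)
pos, vel = rng.uniform(0, 100, (8, 2)), rng.normal(0, 2, (8, 2))
for _ in range(100):
    pos, vel = local_flocking_step(pos, vel)
print("velocity spread after 100 steps:", vel.std(axis=0).round(2))
```

The hard parts this sketch ignores are exactly the ones named above: target allocation and operator reporting under the same locality constraint, and fitting vision-based sensing onto small-drone processors.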


Modern electronic warfare (EW) systems use machine learning to detect, classify, and respond to enemy radar and communications signals in real time. The problem is that the electromagnetic spectrum in a combat zone is extraordinarily congested — friendly radios, civilian signals, atmospheric noise, and enemy emissions all overlap. Current ML-based EW classifiers generate false positive rates of 10-30% in contested electromagnetic environments, meaning the system incorrectly identifies friendly or neutral signals as threats. Each false positive triggers a countermeasure response — jamming, chaff deployment, or evasive maneuver — that wastes limited resources and disrupts friendly operations. A ship that deploys chaff against a false alarm has fewer countermeasures available for a real attack. An aircraft that breaks formation to evade a non-existent missile loses tactical position. Over the course of a multi-day engagement, accumulated false positives degrade combat effectiveness more than the enemy's actual electronic attacks. This persists because EW training data is fundamentally scarce and non-representative. You cannot collect realistic adversary radar signatures without being in an actual conflict or conducting extremely expensive red-team exercises. Training data from peacetime exercises does not capture the density and complexity of wartime electromagnetic environments. Additionally, adversaries deliberately design their emissions to be ambiguous — making their radar look like civilian signals or friendly systems — which is specifically intended to exploit ML classifiers' weaknesses. The physics of the problem means that improving detection sensitivity inevitably increases false positives, and no amount of algorithmic sophistication can fully resolve signals that are intentionally designed to be confusing.
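
The sensitivity/false-alarm tension is visible in even a toy model. Suppose classifier scores for benign and threat emitters follow overlapping Gaussians (parameters invented for illustration); then no threshold gives both high detection and low false alarms:

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma=1.0):
    """Cumulative distribution of a Gaussian."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

MU_BENIGN, MU_THREAT = 0.0, 1.5     # heavily overlapping score distributions

for thresh in (0.5, 1.0, 1.5, 2.0):
    pd = 1 - norm_cdf(thresh, MU_THREAT)     # probability of detection
    pfa = 1 - norm_cdf(thresh, MU_BENIGN)    # probability of false alarm
    print(f"threshold {thresh:.1f}: Pd = {pd:.2f}, Pfa = {pfa:.2f}")
```

In this toy picture, accepting a ~31% false-alarm rate buys an 84% detection rate, while pushing false alarms down to ~2% costs two-thirds of the detections. An adversary who shapes emissions to look benign is effectively sliding the threat distribution toward the benign one, which degrades every threshold simultaneously.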


When a defense contractor delivers an AI model to the government, the government often cannot independently verify the model's performance because the training pipeline — the specific data, preprocessing steps, hyperparameters, and compute environment — lives on the contractor's classified network and is not fully transferable. Even when the government receives the trained model weights, they cannot retrain or fine-tune the model without reconstructing the entire pipeline. This creates a dangerous dependency. If the contractor goes bankrupt, loses key personnel, or the government wants to switch vendors, years of model development become a black box that nobody else can maintain or improve. It also means the government cannot independently audit whether the model was trained correctly, whether the training data was representative, or whether reported accuracy metrics are honest. You are deploying lethal systems based on performance claims you cannot verify. The root cause is that the DoD acquisition system was not designed for software deliverables. Hardware programs deliver a physical product that can be inspected and tested. AI programs should deliver reproducible pipelines, but contracts rarely specify this requirement, and even when they do, the classified computing environments (SCIFs, air-gapped networks, specialized GPU clusters) are not standardized across contractors. Contractor A's pipeline runs on their internal classified cloud with specific library versions that do not exist on Contractor B's network or the government's test infrastructure. The DoD's Platform One and Party Bus initiatives attempt to standardize DevSecOps environments, but adoption is slow and most AI programs predate these platforms.
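
The contractual fix is conceptually simple even if the classified-environment logistics are not: require a machine-readable provenance record with every delivered model. A minimal sketch of what such a record could capture (field names and the data path are invented for illustration):

```python
import hashlib, json, platform, subprocess, sys
from pathlib import Path

def pipeline_manifest(data_dir: str, hyperparams: dict) -> dict:
    """Sketch of a provenance record deliverable alongside model weights:
    enough for an independent party to check whether the training pipeline
    could be reconstructed at all."""
    h = hashlib.sha256()
    for f in sorted(Path(data_dir).rglob("*")):   # fingerprint the exact data
        if f.is_file():
            h.update(f.read_bytes())
    return {
        "training_data_sha256": h.hexdigest(),
        "hyperparameters": hyperparams,
        "python_version": sys.version,
        "platform": platform.platform(),
        # pin every library version so another site can rebuild the environment
        "packages": subprocess.run(
            [sys.executable, "-m", "pip", "freeze"],
            capture_output=True, text=True,
        ).stdout.splitlines(),
    }

# "training_data/" is a placeholder path for illustration
print(json.dumps(pipeline_manifest("training_data/", {"lr": 1e-4}), indent=2))
```

None of this is exotic; the gap is that contracts rarely demand it, and no two classified environments can currently execute the same manifest.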


When an autonomous weapon system kills a civilian, who is legally responsible? The commander who authorized its deployment? The operator who activated it? The software engineer who wrote the targeting algorithm? The general who approved the requirements? Under current international humanitarian law — primarily the Geneva Conventions and their Additional Protocols — there is no clear answer, because the law was written assuming a human makes every lethal decision. This is not an abstract legal debate. It determines whether war crimes prosecutions are possible, whether victims can seek redress, and whether military commanders will actually deploy these systems. If nobody is accountable, the deterrent effect of international humanitarian law collapses. If everyone is accountable, no commander will authorize deployment because the legal risk is unlimited. The result is a paralyzing ambiguity that benefits adversaries who do not care about legal frameworks while constraining those who do. The UN Convention on Certain Conventional Weapons (CCW) has been discussing autonomous weapons since 2014 — over a decade — and has produced no binding agreement. The Group of Governmental Experts (GGE) meets annually and issues vague consensus statements about the importance of human control without defining what that means operationally. Russia and the United States resist binding regulations because they are investing heavily in autonomous systems, while smaller nations push for a ban they know will not pass. The diplomatic process is structurally designed to produce inaction because CCW decisions require consensus, and major military powers will never consent to binding constraints on their own weapons programs.


Machine learning models degrade over time as the real world diverges from training data — a phenomenon called model drift. In commercial settings, companies monitor model performance continuously and retrain when accuracy drops. In military deployments, AI systems are often fielded to austere environments — forward operating bases, ships at sea, deployed aircraft — where there is no connection to a monitoring dashboard, no MLOps pipeline, and no easy way to push model updates. This means a computer vision model that was 95% accurate in testing could silently degrade to 80% accuracy in the field as the adversary changes tactics, seasons change the visual environment, or sensor degradation alters image quality. The operators on the ground have no way to know the model's confidence has dropped. They continue to trust its outputs because it was certified before deployment, not realizing that certification is a point-in-time snapshot, not a guarantee of ongoing performance. The structural reason is that military acquisition treats AI software like traditional hardware: you test it, certify it, field it, and forget it until the next upgrade cycle (typically 2-5 years). The commercial MLOps practices of continuous monitoring, A/B testing, and automated retraining have no equivalent in the military procurement process. The DoD's Chief Digital and AI Office (CDAO) has published guidance on MLOps, but the field units that actually deploy these systems lack the bandwidth, expertise, and infrastructure to implement it. There is no 'AI maintenance crew' equivalent to the mechanics who maintain physical equipment.
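
Part of this is addressable without connectivity: a model can ship with a statistical fingerprint of the scores it produced at certification and compare it against what it sees in the field, flagging drift with no labels and no network link. A sketch using the population stability index (distributions and thresholds are illustrative):

```python
import numpy as np

def psi(reference, current, bins=10):
    """Population stability index between the score distribution captured at
    certification and the one observed in the field. Runs entirely on the
    edge device: no labels, no connectivity."""
    edges = np.quantile(reference, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    ref = np.histogram(reference, edges)[0] / len(reference)
    cur = np.histogram(current, edges)[0] / len(current)
    ref, cur = np.clip(ref, 1e-6, None), np.clip(cur, 1e-6, None)
    return float(np.sum((cur - ref) * np.log(cur / ref)))

rng = np.random.default_rng(0)
cert = rng.beta(8, 2, 5000)     # confidence scores during certification
field = rng.beta(5, 3, 5000)    # seasons, tactics, and sensor wear have shifted
print(f"PSI = {psi(cert, field):.2f} (rule of thumb: above ~0.25 = major drift)")
```

This does not tell the operator the model is wrong, only that the world no longer resembles the test data, which is precisely the warning a point-in-time certification cannot provide.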


Joint All-Domain Command and Control (JADC2) is the Pentagon's vision for connecting every sensor to every shooter across all military branches. The core promise is that an Air Force satellite could detect a target and pass it directly to a Navy destroyer for engagement in seconds. In practice, the Army, Navy, Air Force, and Marines each use different data link standards, message formats, and coordinate systems that do not interoperate without translation layers. The real-world impact is that data fusion — combining sensor feeds from different platforms into a coherent picture — requires manual intervention by human operators who re-key information from one system to another. In a recent JADC2 exercise, it took over 20 minutes to pass a time-sensitive target from detection to engagement across service boundaries. Against a peer adversary, that target has moved, and the engagement window has closed. People die because systems cannot talk to each other. This persists because each military service has spent decades and billions of dollars building its own command-and-control infrastructure — the Army's IBCS, the Air Force's ABMS, the Navy's Project Overmatch — and none of them were designed to interoperate from the ground up. Retrofitting interoperability onto legacy systems is technically difficult and politically explosive, because it requires one service to adopt another's standards or everyone to adopt a new standard, which means admitting your current system is inadequate. The DoD has designated JADC2 as a priority since 2019, but the services continue to fund their own stovepipe programs because their budgets and bureaucratic power depend on owning their own systems.
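
The translation problem is mundane but unforgiving, because every field, unit, and encoding differs per system. A sketch of what a normalization shim does for just two hypothetical track feeds (the field names, units, and message shapes here are invented; real interface control documents run to hundreds of pages):

```python
from dataclasses import dataclass

@dataclass
class Track:
    lat_deg: float
    lon_deg: float
    alt_m: float
    t_unix: float

# Hypothetical feed A: degrees, meters, Unix seconds.
def from_service_a(msg: dict) -> Track:
    return Track(msg["lat"], msg["lon"], msg["alt_m"], msg["time"])

# Hypothetical feed B: semicircles (a Link 16-style angle encoding where
# 2**31 semicircles = 180 degrees), feet, and milliseconds.
def from_service_b(msg: dict) -> Track:
    return Track(
        msg["lat_semicircles"] * (180.0 / 2**31),
        msg["lon_semicircles"] * (180.0 / 2**31),
        msg["alt_ft"] * 0.3048,
        msg["time_ms"] / 1000.0,
    )

a = from_service_a({"lat": 36.1, "lon": 127.5, "alt_m": 9144.0, "time": 1.7e9})
b = from_service_b({"lat_semicircles": 430_730_000, "lon_semicircles": 1_521_110_000,
                    "alt_ft": 30_000, "time_ms": 1.7e12})
print(a, b, sep="\n")
```

With n incompatible systems you need on the order of n^2 such shims, each hand-written and hand-validated against a classified interface spec, which is why a human operator re-keying coordinates is still often the de facto translation layer.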


Researchers have demonstrated that a printed patch — essentially a sticker — costing less than a dollar to produce can cause state-of-the-art object detection models to misclassify or entirely ignore military vehicles, personnel, and equipment. An adversary could plaster these patches on the roof of a tank and make it invisible to a drone's AI targeting system, or place them on a civilian bus to make it look like a military target. This is not an academic curiosity. It means that any autonomous targeting system that relies on deep learning-based computer vision can be systematically defeated by an adversary who understands the model architecture — and model architectures for common frameworks like YOLO and Faster R-CNN are publicly documented. The entire value proposition of AI-enabled ISR and autonomous weapons is undermined if the adversary can trivially manipulate what the AI sees. The problem persists because adversarial robustness and model accuracy are in fundamental tension. Techniques like adversarial training (exposing the model to adversarial examples during training) reduce clean accuracy by 5-15%, which program managers are unwilling to accept. Certified defenses that provably resist perturbations only work for small perturbation budgets and do not scale to the physical-world patch attacks that actually matter. Meanwhile, the offensive side keeps advancing — transferable attacks mean an adversary does not even need to know the exact model, just the general architecture family. DARPA's GARD program has funded defensive research since 2019 but has not produced a deployable solution.
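
The underlying math is unforgiving even in the simplest possible case. For a linear classifier, the worst-case perturbation is just the sign of the weight vector, and a tiny per-feature change produces a large score swing (a toy numpy construction, not an attack on any real model):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=1024)              # weights of a toy linear detector
x = rng.normal(size=1024)              # features of a genuine target
x += ((50.0 - w @ x) / (w @ w)) * w    # shift x so the clean score is exactly +50

# FGSM-style worst case: every feature moves eps against the score gradient,
# which for a linear model is simply w.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(f"clean score:        {w @ x:+.1f}")       # +50.0 -> classified 'target'
print(f"adversarial score:  {w @ x_adv:+.1f}")   # driven negative -> missed
print(f"max feature change: {np.abs(x_adv - x).max():.2f}")
```

Deep networks are not linear, but locally they behave similarly enough that the same one-step logic transfers; a printed patch is essentially this perturbation concentrated into a region robust to printing, lighting, and viewing angle.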


Training computer vision models for military applications — identifying missile launchers, counting vehicles at a base, detecting camouflaged positions — requires labeled training data from classified satellite and aerial imagery. But every person who touches that data needs a Top Secret/SCI clearance, which takes 12-18 months to obtain and costs the government $5,000-$15,000 per investigation. You cannot outsource this to Amazon Mechanical Turk or Scale AI's general workforce. This creates a crippling bottleneck. Commercial AI companies can label millions of images cheaply using global crowdsourced labor. Military AI programs are stuck with a tiny pool of cleared analysts who are already overworked doing operational intelligence work, not data labeling. The result is that military AI models are trained on orders of magnitude less labeled data than their commercial counterparts, which directly translates to worse performance. The problem persists because the classification system was designed for documents and communications, not for the machine learning era where you need tens of thousands of labeled examples to train a single model. Declassifying the imagery is not an option because the resolution and collection patterns reveal satellite capabilities. Using synthetic data is an incomplete workaround because it introduces domain gap — models trained on synthetic images underperform on real-world imagery. The NGA and NRO have explored 'write-to-release' classification policies to make more data available at lower classification levels, but bureaucratic inertia and risk aversion mean most imagery stays locked at TS/SCI.


Modern hypersonic missiles and swarming drone attacks compress engagement timelines to seconds. A hypersonic glide vehicle traveling at Mach 5+ gives a ship-based air defense system roughly 15-30 seconds from detection to impact. Current human-in-the-loop doctrine requires a human operator to authorize lethal engagement, but the cognitive process of identifying the threat, assessing rules of engagement, and confirming authorization takes longer than the available window. The consequence is stark: either the human becomes a rubber stamp who reflexively approves every recommendation (defeating the purpose of human oversight), or the system misses the engagement window and the ship is hit. Neither outcome is acceptable. This is not a hypothetical — the USS Vincennes incident in 1988 showed what happens when humans are pressured into split-second lethal decisions. The crew shot down Iran Air Flight 655, killing 290 civilians, because the compressed timeline made thoughtful deliberation impossible. This problem persists because international humanitarian law and DoD Directive 3000.09 require "appropriate levels of human judgment" for lethal force, but they do not define what that means when physics makes meaningful human judgment temporally impossible. The policy community and the engineering community talk past each other: lawyers write doctrine assuming humans can always meaningfully intervene, while engineers know the math does not work. Nobody wants to be the person who officially says humans cannot be in the loop for certain engagements, because the political and legal consequences of that admission are enormous.
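
The timeline claim is mostly geometry: a surface radar cannot see a low flyer until it crosses the radar horizon, and at hypersonic speed the remaining distance converts to seconds. A back-of-envelope using the standard 4/3-Earth horizon approximation (mast height and altitudes are illustrative; real detection comes later still, after track establishment and classification):

```python
from math import sqrt

def radar_horizon_km(h_radar_m: float, h_target_m: float) -> float:
    """4/3-Earth-radius horizon: d_km ~= 4.12 * (sqrt(h1_m) + sqrt(h2_m))."""
    return 4.12 * (sqrt(h_radar_m) + sqrt(h_target_m))

MACH5 = 5 * 343.0     # ~1715 m/s, using the sea-level speed of sound
MAST = 20.0           # shipboard radar height in meters, illustrative

for alt in (50, 100, 1000):    # threat altitude on final approach, meters
    d_km = radar_horizon_km(MAST, alt)
    print(f"target at {alt:>4} m: horizon {d_km:5.1f} km -> "
          f"{d_km * 1000 / MACH5:4.0f} s from first sight to impact")
```

A threat hugging the surface gives roughly half a minute from geometric first sight to impact, before any human process even begins, which is the arithmetic behind the rubber-stamp dilemma.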


Autonomous target recognition (ATR) systems used in military drones and missiles are validated against curated image datasets that represent known threat profiles — specific vehicle silhouettes, radar cross-sections, and infrared signatures. But adversaries constantly modify their equipment, use decoys, or operate civilian-looking vehicles. There is no reliable way to test whether an ATR system will correctly classify a target it has never seen in training. This matters because a misclassification in combat is not a software bug you can patch later — it is a dead civilian or a missed enemy launcher. The 2003 Patriot fratricide incidents, where the system shot down friendly aircraft, demonstrated what happens when recognition logic encounters edge cases outside its training envelope. Two decades later, the fundamental validation problem remains unsolved. The reason this persists is structural: you cannot build a test dataset for threats that do not yet exist. Military acquisition programs require systems to pass acceptance tests against defined threat libraries, but those libraries are backward-looking by definition. The DoD's Test and Evaluation community has flagged this repeatedly — the 2023 DOT&E annual report noted that AI-enabled systems lack adequate test infrastructure — but the acquisition process still demands pass/fail certification against static benchmarks. Nobody has figured out how to certify a system's performance against unknown unknowns, so programs either waive the requirement or test against outdated scenarios and call it good enough.
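
One partial mitigation is to make the system report unfamiliarity instead of silently guessing. The crudest version flags any input whose top softmax confidence is low (thresholds and data are illustrative; out-of-distribution detection remains an open research problem, and decoys are designed to defeat exactly these signals):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def flag_unfamiliar(logits, threshold=0.7):
    """True where the classifier should abstain: a low maximum softmax
    probability suggests the input lies outside the training envelope."""
    return softmax(logits).max(axis=-1) < threshold

rng = np.random.default_rng(2)
known = rng.normal(0, 1, (5, 4)); known[:, 0] += 4.0  # strong class-0 signatures
novel = rng.normal(0, 1, (5, 4))                      # never-seen signature type
print(flag_unfamiliar(np.vstack([known, novel])))     # 5x False, then mostly True
```

Abstention does not solve the validation problem, but it converts an unknown unknown into an explicit refusal that a certification process, and a human operator, can at least reason about.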


The explosion of commercial mega-constellations — Starlink (6,000+ satellites), OneWeb (630+), and planned Amazon Kuiper (3,200+) — has dramatically increased radio frequency interference in the bands adjacent to military satellite communications. Military SATCOM systems operating in Ku-band and Ka-band increasingly experience co-channel and adjacent-channel interference from commercial constellations, degrading signal quality and reducing effective data throughput by 10-30% in operationally critical areas. Military operators at deployed tactical terminals report increasing instances where they must reduce data rates or switch frequencies to maintain link quality. This matters because modern military operations generate enormous data volumes. A single MQ-9 Reaper drone streams full-motion video at 3-5 Mbps continuously; a battalion headquarters may require 50-100 Mbps of SATCOM bandwidth. When interference degrades throughput by 30%, it's not just slower downloads — it means commanders lose real-time video feeds during time-critical operations, intelligence analysts cannot access imagery during targeting cycles, and forward units lose connectivity during firefights when they need it most. The compounding problem is that the U.S. military is itself increasingly dependent on leased commercial SATCOM bandwidth. Over 80% of military SATCOM traffic rides on commercial transponders. The interference that degrades military-dedicated satellites also degrades the commercial satellites that the military leases, creating a double impact. There is no 'clean' frequency to retreat to because the entire orbital environment is getting noisier. This problem persists because spectrum allocation is governed by the International Telecommunication Union (ITU), which operates on a first-come, first-served coordination basis. Commercial operators file spectrum rights years in advance for their mega-constellations, and the ITU process does not give military systems priority. The FCC in the U.S. has approved mega-constellation spectrum licenses without requiring detailed interference analysis with military systems, in part because the military's own spectrum usage is classified and cannot be submitted to a public regulatory process. The structural deadlock is that the military cannot publicly disclose its frequency plans, antenna patterns, and terminal locations — all of which would be needed to conduct a proper interference coordination study — because that information is operationally sensitive. Commercial operators, meanwhile, have no obligation to coordinate with systems they don't know exist. The result is an unmanaged spectrum environment where commercial and military systems interfere with each other and neither side has the information to fix it.
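
The throughput figures follow from basic link arithmetic: interference power adds to noise power, and capacity falls with the combined ratio. A simplified Shannon-capacity illustration (bandwidth and ratios are representative values, not measurements of any real link):

```python
from math import log2

def undb(x_db: float) -> float:
    """Convert decibels to a linear power ratio."""
    return 10 ** (x_db / 10)

BW_HZ = 36e6      # one transponder's bandwidth, representative
CN_DB = 12.0      # carrier-to-noise ratio with no interference

clean = BW_HZ * log2(1 + undb(CN_DB))
for ci_db in (30, 20, 10):   # carrier-to-interference ratio
    # Noise and interference powers add: 1/(C/(N+I)) = 1/(C/N) + 1/(C/I)
    cnir = 1 / (1 / undb(CN_DB) + 1 / undb(ci_db))
    cap = BW_HZ * log2(1 + cnir)
    print(f"C/I = {ci_db:>2} dB: {cap / 1e6:5.1f} Mbps "
          f"({100 * (1 - cap / clean):4.1f}% below clean)")
```

In this toy link, interference 10 dB below the carrier costs roughly 30% of throughput, the top of the degradation range cited above.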


Military officers assigned to Space Force acquisition programs — the people who manage billion-dollar satellite development contracts — typically rotate to new assignments every 18-24 months, following the standard military career progression model. A satellite program from contract award to first launch takes 7-12 years. This means a single satellite program will be managed by 4-6 different program managers in sequence, each arriving with no institutional knowledge and departing just as they begin to understand the technical and contractual complexities of their system. This matters because contractor teams exploit the rotation. When a new program manager arrives, the contractor team — which has been on the program for years — effectively 'resets' the relationship, re-explaining technical decisions in ways that favor contractor interests, renegotiating informal agreements made with the previous PM, and exploiting the new PM's unfamiliarity with the contract's history. Former program managers report that contractors kept 'two sets of books' — one for the current PM and one reflecting the actual program status — knowing that no single PM would be around long enough to catch the discrepancy. The cost to the taxpayer is enormous. GAO has repeatedly cited program manager turnover as a root cause of cost overruns in major space programs. The Space Based Infrared System (SBIRS) exceeded its original budget by over $10 billion and ran roughly a decade behind schedule; OCX, the GPS ground control replacement, exceeded its budget by over $2 billion. In each case, GAO identified leadership instability and loss of institutional knowledge as contributing factors. This persists because the military promotion system rewards breadth of experience over depth. An officer who stays on one program for 8 years will be passed over for promotion compared to a peer who held four different assignments demonstrating 'leadership breadth.' The incentive structure explicitly penalizes the deep technical expertise that complex satellite programs require. The Space Force has discussed creating a specialized acquisition career track, but implementing it requires changing promotion board guidance, which requires congressional authorization and cultural buy-in from senior military leaders who themselves succeeded under the rotation model. The fundamental conflict is between the military's generalist leadership development model, designed for an era when officers commanded interchangeable infantry units, and the reality that managing a cutting-edge satellite program requires years of specialized technical and contractual knowledge that cannot be replaced by reading briefing slides during a two-week transition.


Northrop Grumman's Mission Extension Vehicle (MEV) has successfully docked with and extended the life of commercial GEO communications satellites (Intelsat-901 in 2020, Intelsat 10-02 in 2021), demonstrating that on-orbit satellite servicing is technically feasible today. Yet no U.S. military satellite has ever been refueled, repaired, or upgraded on orbit. Military satellites are still designed as expendable assets with a fixed fuel load that determines their operational lifespan, after which a multi-billion-dollar satellite becomes space junk. This matters because fuel is the single most common life-limiting factor for military GEO satellites. A WGS (Wideband Global SATCOM) satellite has a design life of 14 years, primarily constrained by its hydrazine propellant supply for station-keeping. When the fuel runs out, the satellite must be raised to a graveyard orbit and decommissioned, even if its transponders, solar arrays, and processors are still functioning perfectly. Replacing a WGS satellite costs approximately $424 million and takes years to build and launch. Refueling could extend its useful life by 5-10 years for a fraction of that cost. The warfighting impact is that in a conflict, adversaries could accelerate fuel depletion by forcing repeated evasive maneuvers. If an adversary satellite conducts repeated close approaches to a U.S. military satellite, the defender must burn fuel to maneuver away each time. Without refueling capability, this becomes a war of attrition that the U.S. loses — the adversary can exhaust the defender's fuel supply through harassment alone, effectively achieving a soft kill without firing a weapon. The problem persists because military satellites are designed by contractors under fixed requirements that are locked in a decade before launch. Adding a refueling port or docking interface to a military satellite requires changing those requirements, which triggers a new design review, new testing, and new certifications — a process that adds 2-3 years and hundreds of millions to the program cost. Program managers, measured on cost and schedule, have no incentive to accept that risk for a capability that won't be needed until the satellite is 10+ years old. At the deepest level, the acquisition system does not account for lifecycle costs that span multiple budget cycles. The program office that builds the satellite is different from the organization that will operate it 15 years later. There is no mechanism to credit today's program with future savings from refuelability, so the rational choice under current incentives is always to omit the refueling interface and build the satellite cheaper today.
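
The economics reduce to rocket-equation arithmetic. With typical public figures (all illustrative; real budgets are design-specific and often classified), a modest fuel delivery buys years of station-keeping:

```python
from math import exp

ISP = 220.0        # seconds, typical hydrazine monopropellant
G0 = 9.81          # m/s^2
DV_YEAR = 50.0     # m/s per year of GEO station-keeping, a common public figure
SAT_MASS = 3000.0  # kg, roughly WGS-class, illustrative

# Tsiolkovsky: propellant burned to deliver DV_YEAR of delta-v each year
kg_per_year = SAT_MASS * (exp(DV_YEAR / (ISP * G0)) - 1)
print(f"~{kg_per_year:.0f} kg of hydrazine buys one more year on station")
print(f"a 500 kg refueling delivery buys ~{500 / kg_per_year:.0f} more years")
```

Roughly 70 kg per year, so a 500 kg top-up maps onto the 5-10 year extension cited above; against a ~$424 million replacement, even an expensive servicing mission clears the bar. The blocker is the missing docking and refueling interface, not the math.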


When the U.S. Space Force detects a threat to an allied nation's satellite — such as a suspicious close approach by a Russian or Chinese spacecraft — it often cannot share that information in a timely manner because the data originates from classified sensors and resides on networks that allied partners cannot access. The Combined Space Operations Center (CSpOC) at Vandenberg SFB maintains data sharing agreements with Five Eyes nations and a handful of other allies, but the actual sharing process requires manual review, downgrade decisions, and retransmission through separate channels that can take days or weeks. This matters because space is inherently a coalition domain. Allied nations operate satellites that the U.S. military depends on — France's Syracuse military communications satellites, Japan's QZSS positioning augmentation, the UK's Skynet MILSATCOM system. If the U.S. detects a threat to a French military satellite but cannot share that information for 72 hours, France cannot maneuver its satellite to avoid a potential attack. By the time the warning arrives, the damage may already be done. The second-order effect is that allies are building their own space surveillance capabilities independently, creating redundant systems that don't interoperate. France fielded its GRAVES radar, the UK invested in its own Space Operations Centre, Japan built its SSA system — all partially because they cannot rely on timely U.S. data sharing. This fragmentation means the free world's collective space surveillance picture has gaps that adversaries can exploit by maneuvering in coverage seams between national systems. The structural cause is the U.S. classification system itself. Space surveillance data is classified at multiple levels — some at SECRET, some at TS/SCI, some under Special Access Programs — and the rules for downgrading vary by sensor, target, and consumer. No single authority can approve a real-time downgrade across all these compartments. The result is that an analyst at CSpOC who sees a threat on their screen must submit requests to multiple classification authorities before sharing anything, even with the closest allies. This problem will not resolve itself because the intelligence community's default posture is to protect sources and methods, while the operational community's need is to share warnings quickly. These two imperatives are fundamentally in tension, and the bureaucratic structures that manage classification were designed for a world where intelligence was consumed by a small number of senior decision-makers, not shared in real time with dozens of allied satellite operators.


Space Force operators who would direct orbital engagements in a conflict have no high-fidelity simulation environment where they can practice offensive and defensive space operations against a thinking adversary. The primary training tool, the Space Flag exercise held twice yearly at Schriever SFB, uses scripted scenarios on networks that cannot accurately model the real-time orbital mechanics of satellite maneuvers, debris propagation, or electromagnetic interference. Operators describe it as 'PowerPoint wargaming' — they discuss what they would do rather than executing commands in a realistic simulation. This matters because space warfare, if it occurs, will unfold on timescales of hours to days with consequences that are irreversible. Unlike air combat, where a pilot can fly hundreds of training sorties before seeing real action, a Space Force operator may face their first real conjunction threat or jamming event with zero hands-on practice. The physics of orbital mechanics are deeply unintuitive — a maneuver that seems logical (burn toward your target) actually takes you to a different orbit. Without extensive simulation training, operators will make physics errors under the stress of combat that waste precious propellant or expose satellites to unnecessary risk. The downstream effect is that combatant commanders cannot trust that their space operators can execute complex orbital maneuvers under pressure. This lack of confidence means that offensive space options are unlikely to be presented or selected in a crisis, effectively removing an entire domain from the joint warfighting toolkit. The U.S. has invested over $20 billion per year in military space, but the human operators who would employ those assets in conflict get less training time than a commercial airline pilot. The problem persists because building a realistic orbital combat simulator requires combining orbital mechanics engines, RF propagation models, cyber attack/defense simulations, and intelligence feeds into a single integrated environment — a software integration challenge that crosses multiple classification levels and contractor boundaries. Each satellite program has its own proprietary command-and-control software, and no single simulation can replicate all of them. The Space Force's acquisition of the Kobayashi Maru training system is a step forward, but it remains limited to unclassified scenarios and a narrow set of satellite types. At the root, the Space Force is the newest military branch (established 2019) and inherited its training infrastructure from Air Force Space Command, which treated space as a support function rather than a warfighting domain. Building a warfighter training culture from scratch — with realistic simulators, recurring exercises, and a competitive red team — requires sustained investment that competes with hardware procurement for a limited budget.
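
The counterintuitive physics is easy to state and easy to get wrong under stress: burning prograde, 'toward' a target ahead of you in the same orbit, raises your orbit and lengthens your period, so you fall behind. A vis-viva check with standard constants:

```python
from math import pi, sqrt

MU = 3.986e14             # Earth's gravitational parameter, m^3/s^2
R_GEO = 42_164_000.0      # GEO orbital radius, m

def period(a: float) -> float:
    """Orbital period for semi-major axis a."""
    return 2 * pi * sqrt(a**3 / MU)

v_circ = sqrt(MU / R_GEO)                  # ~3075 m/s circular speed
v_new = v_circ + 10.0                      # 10 m/s burn 'toward' the target
a_new = 1 / (2 / R_GEO - v_new**2 / MU)    # vis-viva: new semi-major axis

print(f"apogee rises ~{2 * (a_new - R_GEO) / 1e3:.0f} km on the far side of the orbit")
print(f"period grows ~{period(a_new) - period(R_GEO):.0f} s per revolution, "
      f"so the 'faster' satellite drifts backward relative to its target")
```

An operator who has internalized this only from briefing slides will burn propellant chasing a target that recedes; a simulator is where that lesson is supposed to become reflex.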


Ground control segments for several U.S. military satellite constellations still run on software architectures designed in the 1990s and early 2000s, including legacy operating systems and custom middleware with known cybersecurity vulnerabilities. The GPS Operational Control Segment (OCS) ran on systems that required security waivers because they could not meet current DoD cybersecurity standards. The replacement system, OCX, has been delayed by over a decade and has exceeded its original budget by more than $2 billion. This matters because the ground segment is the most attackable part of any space system. You don't need an ASAT missile to disable a satellite — you can hack its ground station. In February 2022, in the opening days of Russia's full-scale invasion of Ukraine, the Russian GRU-linked group Sandworm compromised Viasat's KA-SAT ground infrastructure, knocking out satellite internet for thousands of Ukrainian military and civilian users simultaneously. If a similar attack targeted U.S. military satellite ground stations, adversaries could inject false commands, corrupt ephemeris data, or simply take constellations offline. The operational pain is that satellite operators must maintain dual systems — keeping legacy ground stations running because the replacement isn't ready, while also patching and defending software that was never designed for an adversarial cyber environment. Operators at Schriever Space Force Base describe running satellite commands through interfaces that look like 1990s DOS terminals, with manual processes that take hours for tasks that modern systems could automate in seconds. Human error rates increase with antiquated interfaces, and one wrong command can send a billion-dollar satellite into an unrecoverable spin. This persists because satellite ground systems are procured as part of the satellite program itself, which means ground software requirements are locked in 10-15 years before the system reaches initial operating capability. By the time a ground segment is deployed, its software architecture is already a generation behind commercial state of the art. The OCX replacement for GPS ground control was contracted in 2010 and won't be fully operational until approximately 2026 — by which point its core architecture will already be 16 years old. The deeper structural issue is that the DoD treats ground segments as afterthoughts to the satellite hardware. Roughly 80% of a satellite program's budget goes to the space vehicle and launch; the ground segment gets what's left. This creates a perverse incentive where contractors gold-plate the satellite but deliver the minimum viable ground system, knowing that ground segment upgrades will be funded as separate, lower-priority programs years later.


The U.S. Space Surveillance Network (SSN) can catalog objects in geosynchronous orbit (GEO, ~36,000 km altitude) but cannot reliably detect or characterize small maneuvers by adversary satellites in that regime. Russia's Luch/Olymp-K satellites have been observed parking themselves next to U.S. and allied military communications satellites, but these proximity operations are typically detected days or weeks after they occur, through ground-based telescopes that can only observe GEO objects during specific lighting conditions near dawn and dusk. This matters because GEO is where the U.S. parks its most critical and expensive military assets: missile warning (SBIRS), nuclear command and control (AEHF), and wideband military communications (WGS). A single AEHF satellite costs over $6 billion. If a Russian inspector satellite parks 50 kilometers away and the U.S. doesn't detect it for two weeks, that adversary satellite could have already mapped the AEHF's antenna patterns, intercepted side-lobe emissions, or positioned itself for a future kinetic or electronic attack. The operational consequence is that Space Force commanders cannot provide accurate, real-time threat assessments for their most valuable assets. When STRATCOM or INDOPACOM asks 'is my SATCOM link secure?' the honest answer is 'we don't know, because we can't see what's happening near our GEO birds in real time.' This uncertainty degrades confidence in space-dependent operations and forces commanders to maintain expensive terrestrial backup communications that partially negate the advantage of having space assets in the first place. The structural cause is that space surveillance was designed to track predictable, cooperative objects — not adversary satellites conducting deliberate maneuvers to evade detection. The SSN relies heavily on a network of ground-based radars and telescopes built during the Cold War to track ballistic missile trajectories, not to characterize close-approach behavior 36,000 km away. The Space Fence radar on Kwajalein Atoll vastly improved LEO tracking but does not cover GEO. Space-based surveillance (like the GSSAP satellites) helps, but there are only a handful of GSSAP birds trying to monitor an entire orbital belt containing over 600 active satellites and thousands of pieces of debris. Upgrading to persistent GEO surveillance requires either a large constellation of space-based sensors (expensive and itself vulnerable) or a global network of advanced ground telescopes (limited by weather and geometry). Neither has been funded at the scale needed because the intelligence community classifies GEO surveillance requirements, making it difficult to build congressional support for the necessary appropriations.


Russia's November 2021 direct-ascent anti-satellite test against its own Cosmos 1408 satellite created over 1,500 trackable debris fragments in low Earth orbit, many in the same orbital band as U.S. military reconnaissance and signals intelligence satellites. These fragments will remain in orbit for 5-25 years, creating a persistent collision risk that forces satellite operators to perform costly and fuel-consuming avoidance maneuvers. China's 2007 ASAT test created over 3,500 trackable fragments, and more than 2,800 remain in orbit nearly two decades later. The immediate impact is operational: every collision avoidance maneuver burns finite onboard propellant that was budgeted for the satellite's entire 10-15 year mission. The Space Force's 18th Space Defense Squadron issues approximately 25,000 conjunction warnings per day across all operators. Each maneuver shortens a satellite's useful life by weeks or months. For a classified reconnaissance satellite that cost $3-5 billion, losing even one year of operational life represents hundreds of millions in wasted capability. Beyond individual satellites, the deeper concern is Kessler syndrome escalation in militarily critical orbits. If debris density in sun-synchronous orbits (600-900 km) reaches a tipping point, cascading collisions could render those orbits unusable for decades. These are precisely the orbits used for persistent ground surveillance — the intelligence, surveillance, and reconnaissance backbone that combatant commanders depend on for targeting and situational awareness. This problem persists because there is no enforceable international law prohibiting ASAT tests that generate debris. The 1967 Outer Space Treaty predates any serious concern with orbital debris and says nothing about it. The U.S. unilaterally committed to not testing destructive direct-ascent ASATs in 2022, but Russia and China made no such commitment. There is no verification mechanism, no enforcement body, and no consequences for generating orbital debris. At the structural level, space governance is trapped in a Cold War framework designed for two superpowers with limited space assets. Today, over 70 nations operate satellites, but the Committee on the Peaceful Uses of Outer Space (COPUOS) operates on consensus, meaning any single nation can block binding rules. The nations most likely to conduct ASAT tests are the same ones with veto power in governance forums.
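
The propellant attrition is blunt arithmetic. With an assumed delta-v budget and typical per-maneuver costs (all numbers invented for the sketch), routine dodging visibly shortens mission life:

```python
DV_BUDGET = 300.0     # m/s reserved for the whole mission, assumed
DV_OPS_YEAR = 20.0    # m/s per year of routine orbit maintenance, assumed
DV_DODGE = 0.5        # m/s per collision-avoidance maneuver, typical order

for dodges_per_year in (2, 12, 52):
    drain = DV_OPS_YEAR + dodges_per_year * DV_DODGE
    print(f"{dodges_per_year:>2} dodges/yr -> ~{DV_BUDGET / drain:4.1f} years of usable life")
```

Going from a couple of avoidance maneuvers a year to a weekly one roughly halves usable life in this sketch, which is how a debris field converts into lost capability without ever striking anything.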


If an adversary destroys or disables a critical U.S. military satellite — say, a SBIRS missile-warning bird or a Wideband Global SATCOM node — the Space Force cannot launch a replacement in less than 12-18 months under current operations. The fastest the U.S. has ever gone from launch order to orbit was the Victus Nox Tactically Responsive Space demonstration in 2023, which lifted off roughly 27 hours after receiving its launch order, and even that required months of pre-staging for a single small payload. Meanwhile, China demonstrated the ability to surge-launch 11 missions in 22 days in late 2023. This gap matters because modern warfare assumes persistent space-based capabilities. If China destroys two of the five SBIRS geosynchronous satellites with a direct-ascent ASAT weapon, the U.S. loses missile warning coverage over the entire Pacific theater. Combatant commanders would have roughly 6-8 minutes less warning of an incoming ballistic missile strike — the difference between intercepting a warhead and absorbing a hit on a carrier strike group. The cascading effect is devastating: without missile warning, Aegis destroyers must keep radars in continuous 360-degree search mode, which burns through their SPY radar components at 3x the normal rate. This reduces the lifespan of each ship's combat system from years to months. The fleet cannot sustain combat operations in a contested environment without space-based early warning. The structural reason this persists is that U.S. launch infrastructure was built for peacetime cadence. Vandenberg and Cape Canaveral have fixed pad counts, each requiring weeks of reconfiguration between launches. Rocket manufacturing is optimized for cost-per-kilogram, not speed-to-orbit. United Launch Alliance and SpaceX build to commercial schedules, not wartime surge requirements. There is no 'hot standby' rocket sitting fueled on a pad waiting for a launch order. The defense industrial base lacks both the physical infrastructure and the contractual mechanisms to surge. Responsive launch requires pre-positioned rockets, pre-integrated payloads, and pre-cleared range schedules — none of which exist today because no one in the acquisition chain has the authority or budget to maintain idle capacity 'just in case.'


GPS III satellites currently in orbit lack the ability to receive over-the-air software updates to their signal-generation payloads. When adversaries develop new jamming or spoofing waveforms — which Russia and China do on roughly 18-month cycles — the only countermeasure is to launch an entirely new satellite with updated hardware. Each GPS III satellite costs approximately $529 million and takes 3-5 years from contract to orbit. This matters because GPS underpins not just military navigation but precision-guided munitions, drone operations, and logistics synchronization across every branch of the U.S. military. When Russia jammed GPS signals across northeastern Europe during NATO exercises in 2024, allied forces lost precision strike capability for hours. Troops on the ground reverted to paper maps and compass navigation, which slowed convoy movements by 40% and introduced friendly-fire risk from unguided weapons. The deeper pain is that the entire kill chain — from target identification to weapon guidance to battle damage assessment — assumes persistent, accurate GPS. When that assumption breaks, commanders lose confidence in their own weapons, pilots abort sorties, and artillery units cannot fire for fear of hitting civilians. The tactical paralysis cascades up to operational and strategic levels. This problem persists because the GPS program was designed in the 1970s-80s around a hardware-centric philosophy where satellites were built to last 15 years without modification. The Space Force inherited this architecture and its procurement contracts, which specify fixed signal structures years before launch. Reprogrammable payloads exist in prototype form (Lockheed's NTS-3 experiment), but transitioning from prototype to operational constellation requires renegotiating multi-billion-dollar contracts with entrenched prime contractors who profit from the current build-new-satellite model. In the first place, the defense acquisition system treats satellites as one-time capital expenditures rather than software platforms requiring continuous updates. Until the procurement model shifts to treat space assets like smartphones — expecting regular firmware updates — every new jamming technique will enjoy a multi-year window of effectiveness against U.S. forces.


U.S. Cyber Command and the National Security Agency are co-located at Fort Meade and share a dual-hatted commander, workforce, and infrastructure. This 'dual-hat' arrangement was intended to leverage NSA's intelligence capabilities for Cyber Command's operational missions. In practice, it creates a persistent organizational conflict: NSA's mission is to collect intelligence by maintaining persistent access to adversary networks, while Cyber Command's mission is to disrupt adversary operations by acting on those same networks. These missions are fundamentally at odds -- taking action on a network typically burns the access that intelligence collection depends on. This matters because this tension paralyzes decision-making at the operational level. When a Cyber National Mission Team identifies a Russian hacking group operating inside a U.S. critical infrastructure network, they face a choice: disrupt the adversary now (Cyber Command mission) or continue monitoring to understand the full scope of the operation (NSA mission). This debate can take days or weeks to resolve through the interagency process, during which the adversary continues operating. The real-world consequence was visible in the response to Russian interference in the 2016 election. Intelligence agencies had access to Russian operations but debated for months about whether and how to act, partly because action would reveal collection capabilities. By the time decisions were made, the damage was done. Similar tensions have played out in responses to Chinese espionage campaigns, where the intelligence value of watching the adversary competes with the operational imperative to stop them. At the working level, individual operators who serve both missions experience this as impossible prioritization. The same analyst who discovers a vulnerability in an adversary network must decide whether to report it as an intelligence opportunity or as a target for offensive operations. Their career advancement may depend on which organization -- NSA or CYBERCOM -- their rating chain falls under, biasing their recommendations. The structural cause is that Congress and the executive branch have repeatedly deferred the decision to separate NSA and Cyber Command into fully independent organizations. The dual-hat arrangement was supposed to be temporary when Cyber Command was established in 2009, but bureaucratic inertia, cost concerns, and genuine operational dependencies have kept it in place. Multiple studies have recommended separation, but each time the decision is delayed because separating two organizations that share 30,000+ personnel, billions in infrastructure, and decades of institutional culture is enormously complex.


When a critical vulnerability is discovered in software used by the Department of Defense, the patch cannot simply be applied. Every software change to a DoD system must go through the Security Technical Implementation Guide (STIG) process, which requires testing the patch against the system's security baseline, verifying it does not break functionality, and obtaining authorization from the system's Authorizing Official. For weapons systems, this includes additional operational testing. The result is that patches that take commercial organizations days to deploy take DoD weeks or months. This matters because adversaries are weaponizing vulnerabilities within hours of public disclosure. The Cybersecurity and Infrastructure Security Agency (CISA) tracks Known Exploited Vulnerabilities (KEV) and issues binding operational directives requiring federal agencies to patch within specific timeframes -- typically 14 days for critical vulnerabilities. DoD systems routinely miss these deadlines. When a Chinese or Russian hacking group begins exploiting a vulnerability on Day 1, and DoD cannot patch until Day 45, there is a 44-day window during which military systems are knowingly vulnerable. The downstream effect is that DoD network defenders must resort to compensating controls -- firewall rules, network segmentation, monitoring -- to protect unpatched systems. These workarounds consume defensive resources, add complexity that creates its own vulnerabilities, and degrade system performance. Operators on the ground experience this as systems that are slow, unreliable, and locked down to the point of being difficult to use for their intended purpose. The problem is especially acute for weapons systems with embedded software. Patching the fire control computer on an F-35 requires regression testing the entire weapons system to ensure the patch does not affect targeting, navigation, or flight safety. This testing can take months and costs millions of dollars, creating a perverse incentive to defer patches indefinitely. The structural cause is that the DoD's Risk Management Framework (RMF) treats every software change as a potential security event requiring re-evaluation. This made sense when software updates were infrequent and manual, but it is incompatible with the modern reality of continuous vulnerability disclosure and rapid adversary exploitation. The process conflates change management (ensuring patches don't break things) with security authorization (ensuring the system meets security requirements), making both slower.


U.S. military cyber operators train on cyber ranges -- simulated network environments -- that are years behind the actual networks they must attack or defend. The Persistent Cyber Training Environment (PCTE) and service-specific ranges like the Army's Cyber Battle Lab use virtualized networks that approximate adversary infrastructure, but these environments are static snapshots that do not reflect the constantly changing configuration of real-world targets. An operator who trains for six months on a simulated Chinese telecom network arrives at their unit to find the actual target network has been reconfigured three times since the training scenario was built. This matters because cyber operations are exquisitely sensitive to environmental details. A penetration technique that works against Windows Server 2016 fails against 2019. An exploit that succeeds when a firewall rule is configured one way fails when it is changed. Training on the wrong environment builds false confidence and muscle memory for scenarios that do not exist in reality. The operational cost is that operators arriving at their units require months of additional on-the-job training before they are mission-capable. During this ramp-up period, they consume the time of experienced operators who must mentor them, reducing the unit's overall capacity. A Cyber National Mission Team that should have 39 fully qualified operators might have only 25 who can actually execute operations, with the rest still learning the real environment. Attempts to build more realistic ranges run into classification problems. Accurate representations of adversary networks contain intelligence about those networks that is classified at TS/SCI or above. Building a training range at that classification level restricts who can access it, where it can be located, and how it can be maintained. Most training ranges operate at the Secret level or below, which means they cannot accurately represent the targets operators will face. The structural cause is that the intelligence community and the operational community have different equities. Intelligence agencies want to protect sources and methods by restricting access to target network details. Operational commanders want their operators to train on the most realistic environment possible. There is no mechanism to efficiently declassify or sanitize target network intelligence for training use, so ranges default to generic, unclassified approximations.


The DoD's classified networks (SIPRNet, JWICS) operate in isolation from the commercial internet by design. This air gap protects them from direct attack but creates a critical problem: threat intelligence generated by commercial cybersecurity firms, CERTs, and allied nations cannot flow into classified defensive systems in real time. When CrowdStrike, Mandiant, or Microsoft publishes indicators of compromise (IOCs) for an active Chinese hacking campaign, those indicators must be manually reviewed, reformatted, classified at the appropriate level, and then loaded into classified defensive tools. This process takes hours to days. This matters because modern cyber attacks move in minutes. A nation-state adversary who has compromised a defense contractor's unclassified network can pivot to classified enclaves through trusted connections, VPN tunnels, or supply chain access. If the indicators of compromise for that adversary's tools are available commercially but not yet loaded into SIPRNet's defensive sensors, the attack succeeds during the gap. The real-world pain is that DoD cyber defenders often learn about attacks on their own networks from commercial security firms and news reports rather than from their own defensive tools. The Cyber Protection Teams monitoring SIPRNet may be watching for yesterday's threats while today's attack walks past their sensors. This information latency means that classified networks are paradoxically less well-defended than many commercial networks that receive threat feeds in real time. Attempts to solve this with cross-domain solutions (CDS) have been slow and limited. Every CDS must go through a years-long accreditation process, and each one only handles specific data formats and classification levels. The result is a patchwork of narrow pipelines rather than a broad, real-time threat intelligence flow. The structural cause is that the classification system was designed to prevent information from leaking out of classified networks, not to enable information to flow in. The security architecture assumes that anything entering a classified network could be a Trojan horse, so every inbound data flow requires extensive review. This defensive posture made sense for documents and files but is fundamentally incompatible with the speed requirements of real-time cyber defense.
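
The ingestion step itself is trivial software; the cost is the accreditation of the path it rides on. A sketch of normalizing a commercial indicator feed into a uniform record that a cross-domain guard could be built to accept (the feed schema and field names are invented; the IP shown is from a reserved documentation range):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Indicator:
    ioc_type: str         # 'sha256', 'ipv4', 'domain', ...
    value: str
    source: str
    classification: str   # marking assigned at ingest

def ingest_commercial_feed(feed_json: str, marking: str = "UNCLASSIFIED"):
    """Normalize a commercial IOC feed (hypothetical schema) into uniform
    records. The code is a few lines; accrediting the cross-domain pipeline
    that would carry it takes years."""
    return [Indicator(item["type"], item["indicator"],
                      item.get("vendor", "unknown"), marking)
            for item in json.loads(feed_json)]

feed = '[{"type": "ipv4", "indicator": "203.0.113.7", "vendor": "ExampleSec"}]'
print([asdict(i) for i in ingest_commercial_feed(feed)])
```

Each new feed format means another shim like this, and each shim means another accreditation cycle, which is how the patchwork of narrow pipelines accumulates.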


When U.S. Cyber Command or NSA deploys an offensive cyber tool -- an exploit, implant, or access technique -- against a target, the useful lifespan of that tool is measured in months, not years. Adversaries, security researchers, and antivirus companies discover and patch vulnerabilities rapidly. The Shadow Brokers leak in 2016-2017 exposed NSA's elite hacking tools, many of which were immediately weaponized by criminal and state actors (WannaCry, NotPetya) and then patched by vendors, burning years of development effort. This matters because developing a reliable offensive cyber capability costs millions of dollars and months or years of effort. A zero-day exploit for a modern operating system requires teams of reverse engineers, vulnerability researchers, and exploit developers working for 6-18 months. When that capability is burned after a single use -- or worse, before use due to a leak -- the entire investment is lost and the operation must start over. The operational consequence is that Cyber Command faces a constant tension between using capabilities and preserving them. Commanders are often reluctant to authorize offensive operations because doing so consumes irreplaceable tools. This creates a paradox where the U.S. builds cyber weapons it is afraid to use, while adversaries who care less about stealth (like Russia and Iran) use theirs freely. The result is that the U.S. has the world's most sophisticated cyber arsenal but often cannot employ it at the speed of operational need. The deeper pain is that there is no cyber equivalent of restocking ammunition. When an infantry unit fires its bullets, the supply chain can manufacture and deliver more identical rounds. When a cyber unit burns a zero-day, there is no way to produce another one for the same target on a predictable timeline. Each capability is artisanal and one-of-a-kind. This persists because the vulnerability discovery-to-patch cycle has compressed from years to days. Bug bounty programs, automated fuzzing, and machine learning-assisted vulnerability detection mean that the same bugs U.S. operators discover are independently found by others faster than ever. The structural advantage once held by well-funded intelligence agencies is eroding as commercial offensive security firms and adversary states invest in the same techniques.


A qualified cybersecurity professional who accepts a job at NSA, Cyber Command, or a defense contractor requiring TS/SCI clearance will wait an average of 12-18 months before they can actually begin work. During this period, they either sit idle in an unclassified role, take a different job, or simply withdraw their acceptance. The Defense Counterintelligence and Security Agency (DCSA) processes over 2 million background investigations annually but has a persistent backlog that spikes to 200,000+ cases. This matters because the cybersecurity job market moves in weeks, not months. A candidate who accepts a cleared position in January may receive three competing offers from Google, CrowdStrike, or Amazon by March -- none of which require a clearance or a wait. Studies show that 20-30% of candidates withdraw during the clearance process, and these are disproportionately the most skilled candidates who have the most options. The downstream effect is that defense cyber organizations are systematically selecting for candidates who have fewer alternatives rather than the most talented. The people willing to wait 18 months tend to be those without better offers. Meanwhile, the most capable hackers and engineers -- the ones who would have the greatest impact on national security -- are the least willing to endure the wait because they are the most in demand. Once cleared, the problem compounds. Cleared cyber workers are effectively locked into the defense ecosystem because their clearance is their most valuable career asset, creating a two-tier labor market where cleared mediocrity is valued above uncleared excellence. The structural reason is that the clearance investigation process was designed for a Cold War era when the primary concern was loyalty and foreign contacts. The process has been digitized but not fundamentally redesigned. Investigators still conduct in-person interviews with references and neighbors, verify employment history manually, and check databases sequentially rather than in parallel. DCSA's Trusted Workforce 2.0 initiative promises continuous vetting to replace periodic reinvestigations, but the initial investigation bottleneck remains unsolved.

defense+20 views

Modern military software systems depend on thousands of open-source libraries and third-party components, many of which are maintained by anonymous individuals or small teams in foreign countries. A single military application can have 500+ transitive dependencies, and no one in the program office has audited more than a fraction of them. The SolarWinds attack in 2020 demonstrated that a single compromised vendor update could infiltrate the networks of the Pentagon, DHS, and the Treasury Department simultaneously. This matters because adversary intelligence services have recognized software supply chains as a high-leverage attack vector. Rather than attacking hardened military networks directly, they can compromise a single widely-used library and wait for it to be pulled into defense systems through routine updates. The XZ Utils backdoor discovered in 2024 showed that a patient adversary can spend years building trust in an open-source project before inserting a backdoor. The operational impact is that military systems could be compromised before they are ever deployed. If a backdoor exists in a cryptographic library used by a satellite communications system, every message sent through that system is potentially readable by the adversary. If a compromised logging library phones home from a classified network, the adversary gets real-time visibility into military operations. Software Bills of Materials (SBOMs) are supposed to solve this, but in practice they are generated once at delivery and never updated. The software keeps pulling new dependencies, and the SBOM becomes stale within weeks. Even when SBOMs exist, program offices lack the tools and personnel to analyze them for risk. The structural cause is that the defense acquisition system was designed to vet hardware suppliers (checking if a chip fab is in a friendly country) but has no equivalent process for software dependencies. DFARS and NIST 800-171 require cybersecurity practices from prime contractors, but those requirements do not cascade effectively to the anonymous maintainers of open-source libraries that prime contractors bundle into their deliverables.
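
The staleness problem is mechanical and therefore checkable. A minimal sketch, assuming CycloneDX-style JSON SBOMs (a top-level `components` array of `name`/`version` entries) and hypothetical file names, of diffing the SBOM delivered at acceptance against one regenerated from today's build:

```python
import json

def load_components(sbom_path: str) -> dict[str, str]:
    """Return {component name: version} from a CycloneDX-style JSON SBOM."""
    with open(sbom_path) as f:
        sbom = json.load(f)
    return {c["name"]: c.get("version", "?") for c in sbom.get("components", [])}

delivered = load_components("sbom_at_delivery.json")  # hypothetical path
current = load_components("sbom_today.json")          # regenerated per build

added = sorted(set(current) - set(delivered))
removed = sorted(set(delivered) - set(current))
changed = sorted(n for n in set(delivered) & set(current)
                 if delivered[n] != current[n])

print(f"{len(added)} new, {len(removed)} removed, {len(changed)} version-changed")
for name in added:
    print(f"  NEW (never vetted): {name}=={current[name]}")
for name in changed:
    print(f"  CHANGED: {name} {delivered[name]} -> {current[name]}")
```

Nothing here is technically hard; what is missing is a process that regenerates the SBOM on every build and a person in the program office accountable for acting on the diff.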

defense+20 views

U.S. Cyber Command's Cyber Mission Force was designed around 133 teams -- 13 National Mission Teams for defending critical infrastructure, 68 Cyber Protection Teams for defending DoD networks, and others for combat support. In 2022, Congress authorized growth to 147 teams. The problem is that the pipeline for producing offensive cyber operators (those who conduct network exploitation and attack) cannot keep up with demand. The training pipeline at the Joint Cyber Training Center takes 6-12 months, and the attrition rate during training is roughly 30%. This matters because offensive cyber operations require highly specialized skills that take years to develop. An operator needs deep knowledge of specific target networks, operating systems, languages, and tooling. When an experienced operator leaves -- whether to the private sector, to a staff assignment, or to another service -- their institutional knowledge of specific adversary networks walks out with them. The replacement starts from scratch. The operational consequence is that Cyber Command must constantly triage which operations to staff. If there are 20 validated targets but only enough operators for 8, the other 12 go unaddressed. This means adversary command-and-control infrastructure stays online, espionage operations continue unimpeded, and pre-positioned access for contingency operations atrophies because no one is maintaining the implants. The deeper problem is that military career structures actively work against building deep cyber expertise. The Army, Navy, and Air Force all require mid-career officers to rotate through command and staff positions to be competitive for promotion. A cyber officer who spends 3 years becoming expert in Chinese telecommunications networks will be pulled away to command a company or serve on a general's staff -- duties that have nothing to do with their cyber skills. By the time they return, their technical knowledge is stale and their target access has been burned. This persists because the military promotion system was built for maneuver warfare leaders, not technical specialists. Despite creating cyber-specific career fields, the services have not fully separated the cyber promotion track from the traditional officer career model. The few officers who try to stay technical are passed over for promotion by peers with broader career experiences.
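
Some steady-state arithmetic, using the pipeline figures above together with an assumed force size and turnover rate (both notional, chosen only to show the shape of the problem):

```python
FORCE_SIZE = 2000        # assumed billets to keep filled
ANNUAL_TURNOVER = 0.15   # assumed fraction of operators leaving per year
TRAIN_ATTRITION = 0.30   # ~30% wash out in training (cited above)
PIPELINE_MONTHS = 9      # midpoint of the 6-12 month pipeline

graduates_needed = FORCE_SIZE * ANNUAL_TURNOVER           # per year
intake_needed = graduates_needed / (1 - TRAIN_ATTRITION)  # per year
seats_occupied = intake_needed * PIPELINE_MONTHS / 12     # average at any time

print(f"graduates needed: {graduates_needed:.0f}/yr")
print(f"intake needed:    {intake_needed:.0f}/yr")
print(f"training seats:   {seats_occupied:.0f} occupied on average")
```

The lever this makes visible: every point of turnover or training attrition multiplies through the whole pipeline, which is why retention policy matters more than recruiting volume.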

defense+20 views

The majority of the Pentagon's major weapons systems -- F-15s, Abrams tanks, DDG-51 destroyers, Patriot missile batteries -- were designed and fielded before cybersecurity was a design consideration. These platforms have serial buses, unencrypted data links, default passwords, and no intrusion detection. They were built on the assumption that physical security (being on a military base or in a war zone) was sufficient protection. This matters because these systems are now being networked together in ways their designers never anticipated. The Army's Project Convergence and the Joint All-Domain Command and Control (JADC2) initiative require legacy platforms to share data across networks. Every time an engineer connects a 1990s-era fire control system to a modern IP network, they create an attack surface that the original system has zero ability to defend. The real pain is that a single compromised legacy subsystem can cascade across an entire kill chain. If an adversary can tamper with the targeting data on a legacy radar feed, every downstream system that consumes that data -- missile batteries, fighter aircraft, command posts -- acts on corrupted information. Operators have no way to verify data integrity because the original system has no authentication or checksums. Retrofitting cybersecurity onto legacy weapons is prohibitively expensive and often technically impossible. The Government Accountability Office found that some systems use custom processors and proprietary software that the original manufacturers no longer support. Rewriting the software would require re-certifying the entire weapons system, a process that costs hundreds of millions of dollars and takes 5-10 years. The structural reason this persists is that the DoD acquisition system treats cybersecurity as a separate compliance requirement rather than an engineering constraint. Program managers are incentivized to hit cost and schedule milestones, and cybersecurity testing is typically the last gate before fielding -- the point at which schedule pressure is highest and willingness to fix problems is lowest.
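
Where full retrofit is impossible, one commonly discussed workaround is a "bump-in-the-wire" gateway that authenticates the legacy feed without touching the legacy system itself: a box in front of the sender tags each message, and a box in front of each consumer verifies it. A minimal sketch, with a hypothetical frame layout and key-provisioning story (a real design must also solve key management, timing, and re-certification):

```python
import hmac, hashlib, struct

KEY = b"demo-key-from-out-of-band-provisioning"  # hypothetical shared key

def seal(seq: int, payload: bytes) -> bytes:
    """Prepend an 8-byte sequence number, append an HMAC-SHA256 tag."""
    header = struct.pack(">Q", seq)
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    return header + payload + tag

def open_sealed(frame: bytes, last_seq: int) -> bytes:
    """Verify the tag and reject replays; raise on any failure."""
    header, payload, tag = frame[:8], frame[8:-32], frame[-32:]
    expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("integrity check failed: possible tampering")
    (seq,) = struct.unpack(">Q", header)
    if seq <= last_seq:
        raise ValueError("replayed or out-of-order frame")
    return payload

frame = seal(1, b"TGT 38.8977N 77.0365W")  # notional radar track message
print(open_sealed(frame, last_seq=0))
```

This gives downstream consumers the integrity and replay checks the original system lacks, at the cost of two gateways per link and a key-distribution problem.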

defense+10 views

The Department of Defense cannot fill its cyber workforce positions. As of 2024, over 30,000 cyber-related billets across the military services and defense agencies remain vacant, and the gap has persisted for over a decade despite dozens of initiatives, bonuses, and pipeline programs. This matters because every unfilled position represents a network, system, or mission that is either undefended or defended by someone doing double duty. When a Cyber Protection Team has 60% manning, the remaining operators are triaging which systems to monitor and which to ignore. That means adversaries like China's Volt Typhoon can dwell in critical infrastructure networks for months or years without detection, because there simply aren't enough analysts watching the dashboards. The downstream effect is that commanders lose confidence in their networks and start reverting to manual, analog processes -- paper maps instead of digital C2 systems, phone calls instead of chat, physical couriers instead of email. This slows decision-making in combat by hours or days, which in a peer conflict against China or Russia could be the difference between winning and losing an engagement. The problem persists structurally because DoD compensation cannot compete with private sector salaries. A GS-12 cyber analyst makes $90,000-$110,000; the same person commands $180,000-$250,000 at a defense contractor or $300,000+ at a tech company. Retention bonuses of $20,000-$60,000 barely close the gap. Meanwhile, the security clearance process takes 6-18 months, during which candidates accept other offers. The military also insists on broad career progression (command tours, staff assignments) that pulls skilled operators away from keyboards for years at a time. The root cause is that the federal pay scale was designed for an industrial-age workforce and has never been fundamentally restructured for digital-age talent competition. Congress has authorized various special pay authorities, but the bureaucratic overhead of using them means most hiring managers default to standard GS scales.

defense+20 views