BSL-3 and BSL-4 pathogen samples are routinely shipped between research institutions internationally using commercial carriers (FedEx, DHL) in packaging rated under IATA's two infectious-substance categories (Category A, UN 2814, for cultures of the most dangerous agents; Category B, UN 3373, for most other specimens). The carrier knows the package contains a biological substance but not which pathogen: that information lives in the shipper's documentation, not the carrier's manifest. A lost or diverted package containing H5N1, Ebola, or engineered pathogen samples would be indistinguishable from any other biological shipment until opened. This persists because the IATA Dangerous Goods Regulations classify all infectious substances into those two broad categories without requiring pathogen-specific tracking, and creating a real-time pathogen tracking system would require coordination between shippers, carriers, customs agencies, and receiving institutions in every country.
Community biology labs (Genspace NYC, BioCurious, Counter Culture Labs) are member-funded spaces where hobbyists conduct genetic engineering experiments. Some work with BSL-2 organisms (E. coli K-12 derivatives, yeast expression systems) under voluntary safety guidelines, with no federal inspection or licensing requirement. A biohacker ordering CRISPR kits from Addgene and working in a community lab faces fewer regulatory requirements than a university undergraduate doing the same work in an institutional lab. This persists because the NIH Guidelines for Research Involving Recombinant or Synthetic Nucleic Acid Molecules apply only to institutions receiving NIH funding, and community labs receive none, placing them entirely outside the biosafety regulatory framework.
Startups engineering microbes for agriculture (nitrogen fixation), materials (spider silk), and bioremediation release engineered organisms into the environment with minimal regulatory oversight. The EPA's TSCA review for engineered microbes is a 90-day Microbial Commercial Activity Notice (the microbial analogue of a premanufacture notice), not an environmental impact assessment. Once released, engineered organisms can horizontally transfer genes to wild populations through mechanisms that are poorly understood. This persists because TSCA was designed for chemical substances, not self-replicating organisms, and updating the regulatory framework for synthetic biology has stalled in Congress for a decade.
AlphaFold and RoseTTAFold can predict protein structures from amino acid sequences with atomic accuracy, and inverse folding models (ProteinMPNN) can design novel amino acid sequences that fold into specified structures. Combined, these tools enable designing proteins with targeted toxic functions (enzyme inhibitors, ion channel blockers, receptor agonists) from scratch, without needing a natural toxin as a starting template. The small-molecule analogue has already been demonstrated: a 2022 study showed that an AI drug discovery model, with its toxicity objective inverted, could generate 40,000 candidate chemical warfare agents in 6 hours. This persists because the same AI tools that enable life-saving drug discovery also enable toxin design, and there is no way to restrict the dual-use capability without crippling legitimate pharmaceutical research.
An AI targeting model trained on 6 months of adversary behavior performs well until the adversary changes tactics -- new vehicle types, different movement patterns, altered communication methods. The model's accuracy degrades silently because neural networks fail confidently (high confidence on wrong classifications) rather than flagging uncertainty. Operators continue receiving target recommendations without knowing the model's accuracy has dropped from 90% to 40%. This persists because out-of-distribution detection for military AI is an unsolved research problem -- no deployed system can reliably detect when it is operating outside its training distribution and alert the operator that its recommendations should not be trusted.
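A minimal sketch of the confident-failure problem, using a toy linear classifier and synthetic data (nothing here resembles a deployed system): the model's max-softmax confidence stays high on shifted data, so confidence alone gives the operator no degradation signal.

```python
# Toy sketch: why max-softmax confidence fails as a degradation alarm.
# A linear classifier trained on one "tactic" distribution stays highly
# confident when the distribution shifts. All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# In-distribution training data: two well-separated classes.
X_train = np.vstack([rng.normal(-2, 1, (500, 2)), rng.normal(2, 1, (500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

# Fit a simple logistic model via gradient steps.
W = np.zeros((2, 2))
for _ in range(500):
    p = softmax(X_train @ W)
    W -= 0.5 * X_train.T @ (p - np.eye(2)[y_train]) / len(y_train)

# Out-of-distribution inputs: the "adversary changed tactics".
X_ood = rng.normal(0, 1, (1000, 2)) + np.array([0.0, 6.0])

print(f"mean confidence in-distribution:     {softmax(X_train @ W).max(axis=1).mean():.2f}")
print(f"mean confidence out-of-distribution: {softmax(X_ood @ W).max(axis=1).mean():.2f}")
# Both confidences are high: the model emits no signal that the OOD
# inputs lie outside anything it was trained on.
```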
Cloud lab platforms (Emerald Cloud Lab, Strateos) allow researchers to remotely design and execute wet lab experiments through a web interface, with robotic systems performing the physical work. A user can order experiments involving pathogenic organisms without the physical access controls, training requirements, and biosafety oversight that an in-person lab requires. The cloud lab's compliance depends on its internal policies, not the user's institutional biosafety committee. This persists because cloud lab regulation falls into a gap -- the user's institution assumes the cloud lab handles biosafety, the cloud lab assumes the user has institutional approval, and no regulatory body oversees the interface between the two.
A coalition AI targeting system that works across US, UK, and Australian forces needs to be trained on combined intelligence data from all three nations. But each nation's intelligence is classified at levels that prohibit sharing with allies -- US TS/SCI data cannot be shared with UK systems without specific bilateral agreements that take years to negotiate. This means each coalition partner runs its own AI targeting model trained on its own subset of data, producing different target recommendations from the same sensor feed. This persists because intelligence classification is a matter of national sovereignty, AI model training requires raw data access (not just finished intelligence products), and no Five Eyes framework exists for sharing AI training datasets across classification boundaries.
There are approximately 60 BSL-4 labs worldwide (handling the most dangerous pathogens) across 30 countries, but no international body has authority to inspect them. Each country self-certifies its labs' biosafety compliance. The US CDC inspects domestic BSL-4 labs, but labs in China, Russia, India, and elsewhere operate under whatever standards their national government sets. After the COVID-19 origin controversy, calls for an international inspection regime have gone nowhere. This persists because biosafety is considered a matter of national sovereignty, and countries operating BSL-4 labs (often for biodefense research they want to keep secret) refuse external inspection for the same reason they refuse nuclear inspections -- sovereignty and intelligence protection.
PLA doctrine explicitly positions AI-enabled decision-making speed as a strategic advantage over adversaries constrained by human deliberation. China's military AI development aims for 'intelligent warfare' where AI systems compress the observe-orient-decide-act loop beyond human reaction time. This creates a strategic stability problem: if one side deploys AI targeting that acts in seconds and the adversary's response requires minutes of human review, the human-review side faces a permanent speed disadvantage. This persists because AI targeting speed is an arms race dynamic -- whichever side slows down for human oversight loses the engagement, creating pressure for all sides to minimize human involvement.
CRISPR-based gene drives can force a genetic modification through an entire wild population within 10-20 generations, potentially eliminating malaria-carrying mosquito species. But mosquitoes are food for bats, birds, fish, and dragonflies -- removing them from the ecosystem could trigger trophic cascades that collapse dependent species. No ecological model can predict the full cascade because the interaction networks are too complex. Once released, a gene drive cannot be recalled. This persists because gene drive research is funded by global health organizations (Gates Foundation) focused on malaria deaths (600K/year), creating institutional pressure to deploy before ecological risk is fully characterized. The irreversibility of release means the first real-world experiment is also the final one.
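The 10-20 generation figure follows from a standard deterministic spread model. A minimal sketch, assuming random mating, no fitness cost, and a hypothetical germline conversion efficiency c (real drives also face resistance alleles, which this omits):

```python
# Minimal deterministic gene drive spread model. In heterozygotes, a
# fraction c of wild-type alleles convert to drive alleles, so with
# random mating the next-generation drive frequency is
#   p' = p^2 + 2*p*(1-p)*(1+c)/2 = p * (1 + c*(1-p))
def generations_to_fixation(p0=0.01, c=0.95, threshold=0.99):
    p, gens = p0, 0
    while p < threshold:
        p = p * (1 + c * (1 - p))
        gens += 1
    return gens

# From a 1% release, a high-efficiency drive nears fixation in about
# 10 generations -- consistent with the 10-20 generation claim, and
# why release is effectively irreversible.
print(generations_to_fixation())  # ~10 at c=0.95
```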
Studies from aviation and medical AI show that when an AI system recommends an action, human operators accept the recommendation 92-97% of the time, even when the recommendation contradicts their own assessment. Applied to military targeting, this means an operator who sees an AI-flagged target is psychologically predisposed to approve the strike even if their own analysis suggests the target may be civilian. The 'human in the loop' becomes a formality. This persists because automation bias is a fundamental cognitive phenomenon -- humans defer to systems they perceive as more capable -- and no military training program has developed effective countermeasures to automation bias in targeting decisions.
After the COVID-19 lab leak debate, the US updated its policy on enhanced potential pandemic pathogen (ePPP) research in 2024, but compliance relies entirely on researchers self-reporting whether their work meets the ePPP definition. No federal agency audits lab notebooks, reviews experimental protocols, or independently assesses whether research meets the gain-of-function threshold. A researcher who believes their work is exempt simply does not report it. This persists because the enforcement framework treats biosafety as a researcher ethics issue rather than a regulatory compliance issue, and the NIH funding mechanism provides no budget for independent lab auditing.
AI-powered pattern-of-life analysis ingests weeks of drone surveillance footage and cell phone metadata to identify individuals whose behavior patterns match 'militant' signatures (meeting locations, movement times, phone contacts). But in dense urban conflict zones, civilians who happen to follow similar patterns -- a baker who wakes at 4am and drives to multiple locations, a doctor who visits the same injured people -- get flagged as targets. The AI cannot distinguish between correlation and causation in behavioral patterns. This persists because pattern-of-life algorithms optimize for pattern matching, not causal understanding, and the training data labels (who is actually a militant) come from prior intelligence that itself may be wrong, creating a feedback loop of confirmation bias.
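A toy illustration of the correlation problem, with entirely hypothetical features and thresholds: a signature matcher scores surface similarity and has no notion of why a pattern exists, so the baker lands above the flagging threshold.

```python
# Toy pattern-of-life matcher (invented features, not any real system):
# similarity to a learned signature flags a civilian whose schedule
# happens to resemble it, because nothing encodes causation.
import numpy as np

# Features: [wake hour, distinct sites visited/day, phone contacts/day]
signature     = np.array([4.0, 6.0, 15.0])   # learned "militant" pattern
baker         = np.array([4.0, 7.0, 12.0])   # pre-dawn bread deliveries
office_worker = np.array([7.5, 2.0, 25.0])

def match_score(x, sig):
    # Inverse-distance similarity in feature space (arbitrary toy choice).
    return 1.0 / (1.0 + np.linalg.norm(x - sig))

FLAG_THRESHOLD = 0.2
for name, person in [("baker", baker), ("office_worker", office_worker)]:
    score = match_score(person, signature)
    print(f"{name}: score={score:.2f} flagged={score > FLAG_THRESHOLD}")
# The baker is flagged on pure pattern similarity; the features carry
# no information about the causal reason for the pattern.
```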
Desktop DNA printers (e.g., Syntax Bio, DNA Script) now cost under $20K and can synthesize gene-length sequences in-house, completely bypassing the commercial synthesis companies that perform biosecurity screening. A researcher, biohacker, or malicious actor with a benchtop synthesizer can print any DNA sequence -- including select agent sequences -- with no screening, no reporting, and no regulatory oversight. This persists because DNA synthesis regulation is voluntary (International Gene Synthesis Consortium guidelines are not law), the US has no federal regulation of DNA synthesis equipment sales, and the biosecurity screening framework was built around the assumption that synthesis would remain centralized at commercial providers.
AI-based automatic target recognition (ATR) systems using computer vision are vulnerable to adversarial patches -- physical patterns painted on rooftops or vehicles that cause the neural network to misclassify a hospital as a military headquarters or vice versa. Researchers have demonstrated that a 2x2 meter adversarial patch visible from overhead imagery can cause >90% misclassification rates in standard object detection models. A defender could paint patches on civilian infrastructure to protect it, or an attacker could paint patches on military targets to make them appear civilian. This persists because adversarial robustness remains an unsolved problem in computer vision -- there is no neural network architecture that is provably immune to adversarial inputs, and the attack/defense asymmetry means patches are cheap to create but expensive to defend against.
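A heavily simplified sketch of the published adversarial-patch technique (Brown et al., 2017) against a small untrained CNN, purely to show the mechanics: the patch pixels are the only trainable parameters, and gradient ascent pushes every image toward the attacker's chosen class. Nothing here resembles a real ATR model.

```python
# Toy adversarial-patch optimization against a random stand-in model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(                      # stand-in "classifier"
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(), nn.Linear(8 * 16, 2),
)
model.eval()

images = torch.rand(16, 3, 32, 32)          # toy "overhead imagery"
patch = torch.zeros(3, 8, 8, requires_grad=True)   # only trainable tensor
opt = torch.optim.Adam([patch], lr=0.05)
target = torch.full((16,), 1)               # class the attacker wants

for _ in range(200):
    x = images.clone()
    x[:, :, 12:20, 12:20] = torch.sigmoid(patch)   # paste patch, keep [0,1]
    loss = nn.functional.cross_entropy(model(x), target)
    opt.zero_grad(); loss.backward(); opt.step()

x = images.clone(); x[:, :, 12:20, 12:20] = torch.sigmoid(patch)
flipped = (model(x).argmax(1) == 1).float().mean()
print(f"fraction classified as attacker's class: {flipped:.2f}")
# One fixed patch flips nearly all images: the attack is input-agnostic,
# which is what makes physical patches practical.
```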
Commercial DNA synthesis companies (Twist, IDT, GenScript) screen orders against known pathogen genome sequences to prevent bioweapon synthesis. But LLMs trained on biology literature can suggest novel gene modifications that produce toxic proteins or enhance transmissibility without matching any known pathogen sequence in the screening database. The screening catches known threats but is blind to novel designs. This persists because sequence screening uses exact and fuzzy matching against a static database of ~60 known threat agents, while the space of possible dangerous sequences is combinatorially vast and the LLM can explore it at zero marginal cost.
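A toy sketch of why database matching has this blind spot, using a made-up fragment and a hypothetical k-mer screen (no vendor's actual pipeline): anything sharing subsequences with a database entry is flagged, and a sequence-novel design passes by construction.

```python
# Toy k-mer screening against a hypothetical threat database.
def kmers(seq, k=12):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

THREAT_DB = {
    "agent_x_fragment": "ATGGCTAGCTAGGATCCGGTACCTGAGCTCGAA",  # made-up
}

def screen(order, k=12, threshold=0.2):
    """Flag an order if enough of its k-mers hit any database entry."""
    order_kmers = kmers(order, k)
    for name, ref in THREAT_DB.items():
        overlap = len(order_kmers & kmers(ref, k))
        if order_kmers and overlap / len(order_kmers) >= threshold:
            return f"FLAG: resembles {name}"
    return "PASS"

# A near-copy of a known entry is caught; a novel sequence passes,
# whatever its function -- novelty is the structural blind spot.
print(screen("ATGGCTAGCTAGGATCCGGTACCTGAGCTCGAA"))  # FLAG
print(screen("TTAACCGGTTAACCGGTTAACCGGTTAACCGG"))   # PASS
```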
When an autonomous targeting system identifies and engages a target that turns out to be civilian, the accountability chain is unclear: the software developer wrote the algorithm years before deployment, the commander authorized autonomous mode but didn't approve the specific target, the operator was monitoring but the system acted faster than human intervention. No individual made the specific decision to strike. International humanitarian law requires individual criminal responsibility for unlawful attacks, but distributed AI decision-making diffuses responsibility across so many actors that no one is legally culpable. This persists because IHL was written for human combatants making discrete decisions, and no legal framework has been adopted for accountability in human-machine teaming where the 'decision' is an emergent property of the system.
HMN Technologies (formerly Huawei Marine) builds approximately 25% of the world's new submarine cable systems, including cables serving Southeast Asia, Africa, and Latin America. The optical amplifiers and network management systems HMN installs along these cables run proprietary firmware that the cable operator cannot audit. A firmware backdoor could enable remote traffic interception or cable disabling. Western intelligence agencies have warned about this risk, but HMN's prices are 20-30% below SubCom and NEC, and developing nations cannot afford the premium for Western-built cables. This persists because no international standard mandates firmware transparency or third-party security auditing for submarine cable equipment, and HMN's pricing advantage reflects Chinese government subsidies that Western manufacturers cannot match.
AI targeting systems are trained on historical engagement data -- past targets that were struck and their characteristics. If historical targeting disproportionately struck certain building types, neighborhoods, or demographic patterns, the AI learns to replicate that bias. The training data is classified, so no independent review can assess whether the model perpetuates discriminatory targeting. This persists because military AI development happens behind classification barriers that prevent the adversarial auditing, bias testing, and red-teaming that commercial AI systems undergo. The organizations building these systems have no institutional incentive to discover bias that would slow deployment.
The 1884 Convention for the Protection of Submarine Telegraph Cables prohibits cable cutting in peacetime but explicitly exempts wartime. The Geneva Conventions and Law of Armed Conflict have no specific provision protecting submarine cables as civilian infrastructure (unlike hospitals or water treatment). A belligerent can legally sever an adversary's cables as a military objective. This means the backbone of the global internet and financial system has less legal protection than a hospital. This persists because when these treaties were drafted, cable cutting was a minor inconvenience affecting telegraph traffic, not an existential threat to a nation's economy. No diplomatic effort to create a new cable protection treaty has gained traction because major powers want to preserve cable cutting as a wartime option.
Traditional kill chains (F3EAD: Find, Fix, Finish, Exploit, Analyze, Disseminate) take hours and involve multiple human decision points. AI-enabled kill chains compress this to minutes by automating target identification, weapon-target pairing, and engagement sequencing. But the Law of Armed Conflict requires proportionality and distinction assessments that cannot be performed in minutes -- a JAG officer reviewing a target packet needs the context of civilian population, structural analysis, and proportionality calculation that takes 30-60 minutes per target. This persists because military doctrine incentivizes speed ('decision advantage'), and AI compression of the kill chain creates pressure to compress the legal review that was designed for a slower operational tempo.
Global financial transactions (SWIFT, FX trading, settlement systems) depend on submarine cables for inter-exchange connectivity, and network architects assume N+2 redundancy. But at geographic chokepoints (Strait of Malacca, Red Sea, English Channel), all of the "redundant" cables pass through a corridor narrow enough that a single event (earthquake, sabotage, anchor drag) can sever them simultaneously: the 2006 Taiwan earthquake cut 8 cables at once and disrupted Asian financial markets for weeks. This persists because chokepoint routes are the shortest and therefore cheapest to build, and financial regulators do not audit the physical diversity of their members' underlying telecom infrastructure.
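The missing audit is mechanically simple. A sketch, treating each cable as a polyline of (lat, lon) waypoints and flagging "redundant" pairs that pass within a shared-risk radius; the coordinates, names, and 50 km radius below are illustrative assumptions, not real routes.

```python
# Physical-diversity audit sketch: detect route pairs sharing a corridor.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def shared_corridor(route_a, route_b, radius_km=50):
    """True if any waypoints of the two routes fall within radius_km."""
    return any(haversine_km(p, q) < radius_km for p in route_a for q in route_b)

routes = {  # hypothetical waypoints converging on a single strait
    "cable_1": [(1.3, 103.8), (2.5, 101.0), (6.0, 95.0)],
    "cable_2": [(1.2, 103.9), (2.6, 101.2), (6.2, 95.3)],
}
names = list(routes)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if shared_corridor(routes[names[i]], routes[names[j]]):
            print(f"shared-risk pair: {names[i]} / {names[j]}")
# N+2 redundancy on paper collapses to N when every route shares a corridor.
```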
The IDF's Gospel (Habsora) AI system reportedly generates 100+ bombing targets per day in Gaza, but human analysts reviewing each target for legal compliance and civilian harm assessment spend an average of 20 seconds per target before approving strikes. At that speed, 'human in the loop' is a rubber stamp, not a genuine review. Analysts describe feeling pressure to approve targets at the rate the AI generates them or be seen as bottlenecking operations. This persists because AI target generation speed is unconstrained while human review capacity is fixed, and no doctrine specifies a minimum review time per target. The operational pressure to maintain strike tempo incentivizes faster approval, not more careful review.
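Back-of-envelope arithmetic makes the gap concrete, combining the 20-second figure here with the 30-60 minute legal-review estimate cited two entries earlier; the 8-hour shift length is an assumption.

```python
# Review-bottleneck arithmetic from the figures in the text.
targets_per_day = 100
legal_review_min = 45          # midpoint of the 30-60 min estimate
observed_review_s = 20

hours_needed = targets_per_day * legal_review_min / 60
hours_spent = targets_per_day * observed_review_s / 3600
print(f"analyst-hours/day for genuine review: {hours_needed:.0f}")
print(f"analysts needed at 8 h/shift:         {hours_needed / 8:.1f}")
print(f"hours actually spent at 20 s/target:  {hours_spent:.2f}")
# ~75 analyst-hours required vs ~0.56 spent: a ~135x shortfall, which is
# what "rubber stamp" means in staffing terms.
```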
Commercial shipping causes roughly 70% of all submarine cable faults, chiefly through dragged anchors and fishing gear. Ships routinely anchor over cable routes because their navigation systems do not display cable positions, and cable protection zones (where anchoring is prohibited) are poorly enforced. A single container ship anchor can sever multiple co-located cables in a cable corridor; the 2024 Red Sea cable damage was reportedly caused by a Houthi-attacked ship dragging its anchor across a cable corridor. This persists because the IMO has no mandatory requirement for ships to integrate cable route data into their navigation systems, cable operators have no authority to enforce no-anchor zones in international waters, and AIS vessel tracking is not linked to cable route databases to provide real-time alerts.
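The missing AIS-to-cable-database link is a small join. A sketch under stated assumptions (invented cable waypoints, a 2 km protection buffer, and a 0.5-knot "possibly anchored" speed cutoff): alert when a near-stationary vessel sits inside a cable corridor.

```python
# Sketch of an AIS / cable-route proximity alert. All data is invented.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

CABLE_WAYPOINTS = [(15.0, 42.0), (15.5, 41.5), (16.0, 41.0)]  # hypothetical
BUFFER_KM = 2.0
ANCHOR_SPEED_KNOTS = 0.5   # near-stationary => possibly anchored

def check_vessel(ais_msg):
    """ais_msg: dict with mmsi, lat, lon, sog (speed over ground, knots)."""
    if ais_msg["sog"] > ANCHOR_SPEED_KNOTS:
        return None
    for wlat, wlon in CABLE_WAYPOINTS:
        if haversine_km(ais_msg["lat"], ais_msg["lon"], wlat, wlon) < BUFFER_KM:
            return f"ALERT: vessel {ais_msg['mmsi']} anchored in cable corridor"
    return None

print(check_vessel({"mmsi": 123456789, "lat": 15.01, "lon": 42.01, "sog": 0.1}))
```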
Climate change is opening Arctic shipping and cable routes (e.g., the Far North Fiber project) that could reduce latency between Asia and Europe by 30-40%. But these routes pass through or near Russian-influenced waters: the Northern Sea Route runs along Russia's coast, and even Northwest Passage routes like Far North Fiber must transit the narrow Bering Strait. An Arctic cable severed in Russian waters requires Russian permission for repair ships to enter, permission that could be withheld during a conflict. This persists because Arctic cable routes are commercially attractive (shorter distance means lower latency, which is worth a premium to financial traders), and the companies building them prioritize commercial returns over geopolitical risk. No Arctic routing avoids Russian-influenced waters entirely.
A nation-state with submarine capability can tap an undersea fiber optic cable by bending the fiber to leak light without cutting it, reading the data through photonic tapping techniques that introduce signal loss below the noise floor of the cable's optical monitoring. The tapped cable continues operating normally, and the cable operator has no indication that data is being intercepted. This persists because fiber monitoring systems (OTDR, optical time-domain reflectometry) are built to detect breaks and sharp bends, not the sub-0.1 dB loss of a sophisticated photonic tap, and no commercial cable operator deploys the quantum-level monitoring that could in principle detect such taps.
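Toy numbers show why no alarm threshold can catch the tap: its insertion loss sits inside routine measurement noise. All values below are illustrative assumptions, not real link budgets.

```python
# Why threshold-based loss monitoring misses a sub-0.1 dB tap.
import numpy as np

rng = np.random.default_rng(1)
baseline_loss_db = 20.0      # nominal end-to-end span loss (assumed)
noise_sigma_db = 0.15        # routine measurement/thermal variation (assumed)
tap_loss_db = 0.05           # hypothetical sophisticated photonic tap
alarm_threshold_db = 0.5     # plausible break/bend alarm setting (assumed)

before = baseline_loss_db + rng.normal(0, noise_sigma_db, 1000)
after = baseline_loss_db + tap_loss_db + rng.normal(0, noise_sigma_db, 1000)

print(f"mean loss shift from tap: {after.mean() - before.mean():.3f} dB")
print(f"alarms fired post-tap: {(after - baseline_loss_db > alarm_threshold_db).sum()}")
# The ~0.05 dB shift is ~3x smaller than routine noise and 10x below the
# alarm threshold; what the monitoring reliably detects is a break,
# which costs tens of dB.
```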
Russian oceanographic vessels (Yantar, Admiral Vladimirsky) have been repeatedly observed loitering near NATO submarine cable routes, likely mapping cable positions for potential wartime sabotage. Under UNCLOS, oceanographic research in international waters is legal, and no treaty prohibits surveying cable locations. NATO cannot prevent this mapping activity because freedom of navigation in international waters is a principle that NATO itself depends on. This persists because the legal framework governing undersea cables (1884 Convention, UNCLOS Articles 112-115) was written for telegraph cables and grants no nation the right to establish exclusion zones around submarine cables in international waters.
Many loitering munition prototypes from defense startups use commercial drone components (motors, electronic speed controllers, flight controllers, cameras) that originate from Chinese manufacturers, creating NDAA Section 848 compliance problems when the weapon transitions to production. Replacing each Chinese component with an NDAA-compliant alternative adds 40-60% to BOM cost and 6-12 months of development time as engineers redesign around parts with different specifications. This persists because the commercial drone ecosystem was built by Chinese manufacturers who achieved 80%+ market share in critical components, and the defense-grade alternatives that do exist were designed for $500K missiles, not $30K expendable munitions.
US military Switchblade training uses simulators with perfect GPS signal and uncontested datalinks. In Ukraine, operators report that real-world GPS jamming degrades terminal guidance accuracy from roughly 1 m CEP (circular error probable) to 10-30 m, making precision strikes on vehicle-sized targets unreliable. Operators trained in clean environments freeze when their first real engagement occurs in a jammed environment and the weapon behaves nothing like the simulator. This persists because adding realistic electronic warfare conditions to simulators requires classified EW threat models that are not released to training commands, and live-fire training in GPS-denied environments is prohibited at most US ranges because jamming interferes with civilian GPS users in adjacent areas.
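The operational impact of that CEP degradation follows from the standard circular-normal miss model, P(impact within r) = 1 - 2^(-(r/CEP)^2). A quick check, assuming a ~2.5 m effective target radius for a vehicle (the CEP figures come from the text):

```python
# Hit probability vs CEP under the standard circular-normal model.
def p_hit(r_target_m, cep_m):
    return 1 - 2 ** (-(r_target_m / cep_m) ** 2)

for cep in (1, 10, 30):
    print(f"CEP {cep:>2} m -> P(hit vehicle) = {p_hit(2.5, cep):.3f}")
# CEP  1 m -> ~0.99; CEP 10 m -> ~0.04; CEP 30 m -> ~0.005
# Jamming turns a near-certain hit into a ~1-in-25 (or worse) shot,
# which is why operators say the weapon behaves nothing like the sim.
```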
Submarine cables terminate at cable landing stations: typically unremarkable coastal buildings protected by commercial security guards and chain-link fences. These stations are where fiber transitions from undersea to terrestrial networks, making them single points of failure for entire regions; an attacker with a truck bomb could sever a nation's international connectivity at the physical layer. Most landing stations sit at publicly known locations (visible on Google Maps) with no military-grade perimeter security. This persists because landing stations were built decades ago, when physical attack on telecom infrastructure was not considered a credible threat, and retrofitting hardened facilities at every coastal landing point would cost billions that no commercial operator will spend.