Real problems worth solving

Browse frustrations, pains, and gaps that founders could tackle.

The EPA, OSHA, NIOSH, and CDC have all declined to set a threshold limit value (TLV) for indoor mold spore concentrations. The EPA explicitly states it does not recommend mold testing standards for consumer use. This means there is no legally defensible number — no 'safe level' of mold spores per cubic meter — that a tenant can point to when demanding remediation, that a buyer can cite when backing out of a purchase, or that a court can use to determine liability.

This matters because without a threshold, every mold dispute devolves into a battle of expert witnesses. A tenant's industrial hygienist says 2,000 spores/m³ of Aspergillus is dangerous; the landlord's expert says it is within normal range. Neither is wrong, because there is no standard. Judges and juries have no framework to evaluate these claims. The result is that tenants with legitimate mold exposure spend $5,000-$15,000 on legal and testing fees just to reach a settlement, while landlords with no actual mold problem face nuisance suits they cannot cheaply disprove. The absence of a standard hurts both sides.

This vacuum persists because mold's health effects are highly individual — a spore count that triggers asthma in one person may be harmless to another. The EPA has stated that establishing a numerical standard would create a false sense of precision. But the practical consequence is that the entire mold dispute ecosystem — insurance claims, tenant complaints, real estate transactions, and litigation — operates without any objective measuring stick, forcing every case into expensive, subjective expert testimony.


Most standard homeowner insurance policies contain a mold remediation sublimit — a buried cap, typically between $1,000 and $10,000 — that applies even when the mold resulted from a covered peril like a burst pipe. The homeowner discovers the sublimit only after filing a claim, at which point they learn their policy will cover perhaps $5,000 of a $25,000 remediation job. That cap has to cover testing, professional removal, containment, and reconstruction. A moderate-sized project can exhaust a $10,000 sublimit before the rebuild phase even begins.

This matters because the homeowner is now stuck choosing between paying $15,000-$20,000 out of pocket or leaving mold in the walls. Most families do not have $20,000 in liquid savings. So they either take on debt, do a partial remediation that leaves hidden contamination behind walls, or attempt DIY removal that spreads spores to unaffected areas. A partial fix means the mold returns within months, and a second claim gets denied entirely because the insurer classifies it as pre-existing.

The reason this trap persists is that insurance companies added mold sublimits and exclusions after the 'mold explosion' of the early 2000s — notably after a $32 million jury verdict in the Ballard v. Fire Insurance Exchange case in Texas in 2001. Insurers responded by quietly inserting sublimits into renewals. Homeowners rarely read sublimit schedules, and agents almost never explain them at point of sale. There is no federal or state requirement to prominently disclose mold sublimits, so the gap between expected coverage and actual coverage remains invisible until the moment you need it most.


The Long-Term Care Ombudsman Program — the federally mandated system that advocates for nursing home and assisted living residents — has lost more than half its volunteer hours since 2016, dropping from over 600,000 annual volunteer hours to fewer than 300,000 in 2024. The ratio of residents to ombudsmen has worsened from roughly 1 ombudsman per 350 beds in 2016 to 1 per 600 beds in 2024. The entire national system is supported by approximately 2,044 paid staff and 3,598 licensed volunteers — for a population of over 2.5 million long-term care residents. In 2024, the program investigated over 205,000 complaints, a record high.

This collapse matters because the ombudsman is often the only independent person who enters a facility, talks to residents without staff present, and can escalate problems to regulators. Residents who are cognitively impaired, physically dependent on staff for daily care, or afraid of retaliation cannot effectively advocate for themselves. Families who visit weekly see a curated version of the facility. State inspectors come roughly once a year. The ombudsman is the persistent, recurring, independent set of eyes. When there is one ombudsman covering 600 beds across multiple facilities, the practical result is that each facility gets visited rarely, complaints take weeks to investigate, and systemic problems — the kind that emerge only from pattern recognition across multiple visits — go undetected.

The volunteer workforce collapsed for interconnected reasons. The COVID-19 pandemic locked ombudsmen out of facilities for months, and many older volunteers (the core of the program) never returned. Recruiting replacements is difficult because the role requires state certification, background checks, and significant training — all for an unpaid position. The paid staff are government employees in state agencies with frozen budgets and competing priorities. The GAO identified staffing shortfalls, investment limitations, and increasingly complex resident needs as the primary threats to the program. Congress has not meaningfully increased Older Americans Act funding (which funds the ombudsman program) to match the growing resident population or the growing complexity of complaints, which now include private-equity ownership disputes, Medicaid billing issues, and psychotropic medication concerns that require expertise beyond what most volunteers have.


An HHS Office of Inspector General audit found that CMS did not accurately report deficiency data on Care Compare — the federal government's primary nursing home comparison tool — for 67 out of 100 sampled nursing homes. The inaccuracies included health deficiencies for 34 facilities, fire safety deficiencies for 52 facilities, and emergency preparedness deficiencies for 2 facilities. Care Compare is the website families, hospital discharge planners, and ombudsmen use to evaluate and compare nursing homes. It is the authoritative federal source. And for an estimated two-thirds of facilities, it is wrong.

This matters because families making nursing home placement decisions are operating under extreme time pressure and emotional stress. Their parent has been hospitalized, the discharge planner says they cannot go home, and the family has 48-72 hours to choose a facility. They go to Care Compare, look at star ratings and deficiency histories, and choose the facility that appears safest. If Care Compare says a facility has no fire safety deficiencies when it actually has three, that family is making a life-and-death decision based on false information. They cannot independently verify deficiency data — they would need to request state survey records and read hundreds of pages of inspection reports, which is not feasible in a 48-hour placement window. The entire value proposition of Care Compare is that it provides an accessible, accurate summary. When it fails at accuracy, it fails at its only job.

This inaccuracy persists because of a fragmented data pipeline. State survey agencies conduct inspections and enter findings into the federal ASPEN system. CMS then extracts and processes this data for display on Care Compare. The OIG found errors at multiple points in this pipeline: surveyor data entry errors, extraction errors, and processing errors. There is no automated reconciliation between what surveyors documented and what Care Compare displays. CMS does not audit the accuracy of its own public-facing data. The system was designed decades ago and has been patched rather than rebuilt. Meanwhile, 107,000 complaints were filed about nursing homes in fiscal year 2024 alone — complaints that generate inspection findings that may or may not make it accurately into the system families rely on.


Three out of five dementia residents in institutional settings wander, and when they successfully leave the facility unsupervised — an event called "elopement" — roughly one-third of those incidents end in death. Since 2018, more than 2,000 people have wandered away from assisted living and memory care units or been left unattended outside, with nearly 100 dying. About 72% of residents who elope once will try again. Forty-five percent of elopement claims are filed within 48 hours of admission — meaning the highest-risk period is when staff know the resident the least.

The consequences of elopement are catastrophic and fast. A person with moderate-to-severe dementia who exits a facility often cannot state their name, remember where they live, or recognize that they are in danger. They walk into traffic, fall into water features, wander into wooded areas in extreme heat or cold, or simply walk until they collapse from exhaustion or dehydration. The window for safe recovery is narrow — hours, not days. When a facility does not notice a resident is missing (which is common during shift changes, mealtimes, or nighttime checks performed at 2-hour intervals), the delay can be fatal. For families, the promise of a "secure memory care unit" was the entire reason they chose institutional care over home care — and discovering that their parent walked out of a locked unit and died of hypothermia in a parking lot is a betrayal that no apology or settlement can address.

This problem persists because elopement prevention requires both technology and staffing, and most facilities underinvest in both. Door alarms exist but are frequently propped open, disabled during high-traffic periods, or set to a tone that blends into the ambient noise of a busy unit. Wander-guard systems (ankle or wrist bracelets that trigger alarms at exits) require consistent battery replacement and proper placement — and residents with dementia remove them. GPS tracking is rarely used because of cost and privacy concerns. The fundamental staffing issue remains: with CNA-to-resident ratios of 1:13 or worse, there is no one continuously watching the resident who is pacing the hallway at 2 AM. The 80% of elopement claims that involve "chronic wanderers" represent residents whose wandering behavior was known and documented but not adequately addressed in their care plan.


Assisted living facilities advertise a base monthly rate — the national median is $5,900/month — but families routinely discover thousands of dollars in additional charges after their loved one has already moved in. Common hidden fees include: medication management charged per pill per administration (five prescriptions can mean five separate charges multiple times daily), "care level" reassessments that bump residents to higher-cost tiers based on opaque criteria, community fees of $1,000-$5,000 at move-in, charges for guests joining meals, laundry fees, transportation fees, and "coordination fees" charged when families bring in outside caregivers. One facility added a surprise $5,000 overnight health aide fee to a monthly bill and withdrew it via direct debit before the family could object.

This pricing opacity creates a devastating trap for families. By the time the true cost becomes clear — often 2-3 months after move-in — the resident has been uprooted from their home, adjusted to a new environment, formed relationships with staff, and may be too fragile for another move. The emotional and physical cost of relocating an elderly person with dementia is enormous and measurable ("transfer trauma" increases mortality risk). So families pay the inflated rate rather than move their parent again. Some facilities exploit this lock-in further: they penalize residents for using outside therapists or private-duty aides (which would be cheaper than in-house services), charging "monitoring fees" or threatening eviction. The financial burden falls disproportionately on families: Medicare does not cover assisted living, and Medicaid covers it in some states through waivers but with long waitlists.

This persists because there are no federal disclosure requirements for assisted living pricing. Unlike hospitals (which must now publish chargemasters) or nursing homes (which have Medicare-regulated rate structures), assisted living facilities can structure their pricing however they want. No state requires standardized all-in pricing disclosure. The industry's sales process is designed around the base rate — it is the number on the brochure, the number the sales director quotes on the tour — and the add-on fees emerge only in the fine print of a contract signed under time pressure when a parent has just been discharged from the hospital and needs placement within days.


Unlike nursing homes, assisted living facilities have no federal quality standards, no federal inspection requirements, and no standardized data collection. Over 1 million Americans live in these facilities, many of them increasingly frail and cognitively impaired, but oversight is entirely left to states — and states do it inconsistently at best. Maryland inspected only 25.7% of facilities in one recent year (improving to 55.6% in 2024). A 2018 GAO report found that most state Medicaid agencies did not even track "critical incidents" affecting Medicaid beneficiaries in assisted living. In January 2024, witnesses at a Senate hearing testified that inadequate training, improper communication from operators and state agencies, and limited data collection contribute to subpar conditions.

The absence of federal oversight means families have no reliable way to compare facilities. There is no Care Compare equivalent for assisted living. There are no star ratings, no standardized quality metrics, no public inspection reports in most states. A family choosing between two assisted living facilities in different states is comparing entities governed by completely different rules — different staffing requirements, different training mandates, different definitions of what "assisted living" even means (states use terms like "residential care," "adult care home," and "personal care home" interchangeably). A facility that would fail inspection in one state might be fully compliant in another.

This regulatory vacuum persists because assisted living was historically positioned as a housing option, not a healthcare setting. The industry lobbied to maintain this distinction precisely because it meant lighter regulation and lower costs. But the resident population has changed dramatically: people entering assisted living today are older, sicker, and more cognitively impaired than the population these facilities were designed to serve. Senators Warren, Gillibrand, and Wyden called for a GAO investigation into state oversight of assisted living in 2024, and proposed federal legislation (the ASSISTED in Assisted Living Act) would have created an advisory council and voluntary reporting — but even that modest bill stalled. The industry's trade groups argue that federal oversight would increase costs and reduce the supply of beds, which may be true but leaves residents in a setting where no one is systematically checking whether they are safe.


Private equity firms own between 5% and 13% of U.S. nursing homes, and the data on what happens after acquisition is grim. A 2021 JAMA Health Forum study found mortality rates were 10% higher in PE-owned facilities compared to non-PE homes. The share of one-star facilities (CMS's lowest quality rating) among PE-owned homes doubled from 10% in 2019 to 21% in 2024, while five-star facilities halved from 28% to 14% over the same period. PE ownership is consistently linked to higher deficiency citations, increased hospitalization rates, and worse clinical outcomes.

The mechanism is straightforward: private equity's business model depends on short-term profit extraction over a 3-5 year hold period. Firms acquire facilities, sell the real estate to a separate affiliated entity, then charge the operating company inflated rent. They create management companies, staffing agencies, and supply companies — all related parties — and funnel facility revenue through them at above-market rates. An estimated 40 cents of every dollar paid to related parties is profit extraction. The money flows out of the facility and into the PE firm's returns, while the operating company — now burdened with debt from the leveraged acquisition and paying inflated costs to affiliates — cuts the only variable cost it can: staff. Fewer CNAs per shift, more agency temps who do not know the residents, deferred maintenance on the building.

This persists because CMS's ownership transparency requirements have been weak. Until recently, it was difficult to trace the beneficial ownership of nursing facilities through layers of LLCs and holding companies. PE firms structure acquisitions specifically to obscure the ownership chain. Families shopping for a nursing home on Medicare's Care Compare website see the facility's name and star rating but cannot see who actually owns it, what related-party transactions are draining revenue, or whether the facility's debt load makes adequate staffing financially impossible. The 2024 staffing rule included Medicaid institutional payment transparency requirements that would have forced disclosure of ownership and related-party transactions, but the rule was rescinded.


In 2024, CMS finalized the first-ever federal minimum staffing standards for nursing homes: 3.48 hours per resident day (HPRD) of total nursing, including at least 2.45 HPRD of nurse aide care and 0.55 HPRD of RN care. As of May 2024, only 30% of nursing homes met the nurse aide minimum. Fewer than one in five facilities met all three requirements simultaneously. Then in 2025, the federal budget reconciliation bill prohibited HHS from enforcing the rule until October 1, 2034, and CMS formally rescinded it in December 2025. The minimum staffing standard that most facilities already could not meet was eliminated before it ever took effect.

The nurse aide shortfall is the most consequential gap because nurse aides provide the vast majority of hands-on daily care: toileting, turning, feeding, bathing, dressing, and mobility assistance. When there are not enough aides, residents wait. They wait to be taken to the bathroom and end up incontinent. They wait to be repositioned and develop pressure ulcers. They wait for help eating and lose weight. They wait for someone to answer their call light — average response time is 8 minutes, but in understaffed facilities it stretches far longer. Each of these delays is individually small but cumulatively devastating: the resident loses dignity, physical function, and the will to engage. Families visit and find their parent sitting in a soiled brief, and they cannot tell whether it has been 20 minutes or 2 hours.

The structural reason this persists is economic. Medicaid pays for roughly 62% of nursing home residents, and Medicaid reimbursement rates in many states do not cover the actual cost of care. In New York, Medicaid rates cover only 75% of costs, leaving a $74-per-resident-per-day gap. Facilities cannot hire enough aides at competitive wages when the revenue per resident does not cover the cost of staffing. CNAs earn an average of $20.16/hour with a 42% annual turnover rate, meaning facilities are constantly losing and replacing their workforce. The industry lobbied successfully to kill the staffing rule, arguing it was unachievable given current reimbursement — which is true, but leaves residents in the same understaffed facilities indefinitely.


Analysis of CMS Payroll-Based Journal data shows that registered nurse staffing in nursing homes drops 42% on weekends compared to weekdays. LPN/LVN staffing drops 17%, and CNA staffing drops 9%. One in five nursing facilities has at least one weekend day per quarter with literally zero RN hours — no registered nurse on the premises at all. The staffing pattern follows a predictable weekly curve: highest Tuesday through Thursday, declining on Friday and Monday, and cratering on Saturday and Sunday.

This weekend staffing collapse has direct clinical consequences. Weekends are when acute changes in resident condition — chest pain, stroke symptoms, respiratory distress, sudden confusion — go unrecognized or are handled by staff without the clinical training to assess severity. Without an RN present, there is no one qualified to perform a clinical assessment, call a physician with a structured report, or make the judgment call between "monitor and recheck" and "call 911 now." The result is that conditions that could be treated early on a Tuesday become emergencies by Monday morning. A study in JAMDA found significant associations between daily nurse staffing levels and daily hospitalizations and ED visits — the lower the staffing, the higher the emergency utilization.

This problem persists because weekend and night differential pay in nursing homes is minimal (often $1-2/hour extra), nowhere near enough to attract staff to undesirable shifts. Facilities budget for a weekday staffing level and treat weekends as a skeleton-crew operation because admissions (their revenue-generating activity) happen Monday through Friday. CMS's 2024 minimum staffing rule would have required 24/7 RN presence, which would have directly addressed the zero-RN weekend problem, but the rule was rescinded by CMS in December 2025 before enforcement began, and Congress prohibited HHS from enforcing it until October 2034.


A 2025 HHS Office of Inspector General report found that nursing homes failed to report 43% of falls that caused major injury and hospitalization among Medicare-enrolled residents. This is not a minor paperwork gap — falls are the single most common adverse event in nursing facilities, affecting 50-75% of residents annually (a rate 2-3 times higher than community-dwelling older adults). Roughly 65,000 nursing home residents suffer hip fractures from falls each year, and one in three residents who falls will fall again within the same year.

The failure to report means the problem compounds invisibly. When a fall is not documented, the facility has no obligation to investigate root cause, update the resident's care plan, or implement prevention measures. The next fall becomes more likely, not less. In Massachusetts, the rate of falls with injury increased nearly 25% between 2018 and 2022 (from 27.9 to 34.8 per 1,000 residents). Families are not notified of unreported falls, so they cannot intervene, request care plan changes, or make informed decisions about whether the facility is safe for their loved one. For the resident, an unreported hip fracture often triggers a cascade: surgery, immobility, pressure ulcers, pneumonia, and death within 12 months.

This underreporting persists because there is no independent verification system. Facilities self-report adverse events, and the incentive is to underreport: more reported falls mean lower quality scores, more regulatory scrutiny, and potential liability. CMS relies on periodic inspections (which happen roughly once per year and are often announced in advance) to catch discrepancies, but inspectors review a sample of records and cannot audit every fall. The 2024 staffing rule that would have required 24/7 RN presence — someone with the clinical authority and training to document and investigate falls properly — was rescinded before it took effect.


A March 2026 HHS Office of Inspector General report found that nursing homes are systematically adding false schizophrenia diagnoses to residents' medical records to mask their use of antipsychotic drugs. CMS tracks antipsychotic use as a quality measure that affects a facility's star rating, but residents diagnosed with schizophrenia are excluded from that metric. So facilities game the system: staff receive electronic health record alerts flagging residents on antipsychotics who lack a schizophrenia diagnosis, and nurses are instructed to add one. At one facility, a nurse practitioner added schizophrenia diagnoses to dozens of residents' records in a single day.

This matters because antipsychotic drugs carry an FDA black-box warning that they increase the risk of death in elderly patients with dementia. These drugs are being used as chemical restraints — staff admitted to inspectors that antipsychotics were administered to "lighten the workload by quieting patients." The OIG cited a Pennsylvania facility that gave a woman over 100 years old an antipsychotic because she enjoyed caring for dolls. Roughly 250,000 nursing home residents receive antipsychotics every week, and more than one in five residents (21.3%) are on these drugs. The false diagnoses prevent families, regulators, and prospective residents from seeing the true rate of chemical restraint at a facility.

This problem persists because the incentive structure is backwards. CMS created the antipsychotic quality measure to discourage overuse, but instead created a loophole: exclude schizophrenia patients from the count. Facilities exploit this loophole because the penalty for a low star rating (fewer admissions, lower revenue) is immediate and financial, while the penalty for a fraudulent diagnosis is rare and delayed. State survey agencies lack the clinical expertise to audit psychiatric diagnoses, and the residents being drugged — elderly people with dementia — cannot advocate for themselves. The 2024 federal staffing rule that would have required 24/7 RN presence (which might have provided clinical pushback against inappropriate prescribing) was rescinded in December 2025.


In the 2008 election, a poorly designed ballot layout in East St. Louis caused more than twice as many uncounted votes as comparable precincts with clearer designs. When Ohio split the U.S. Presidential contest across two columns instead of listing candidates in one continuous column, it resulted in 50% more uncounted votes. The most infamous example remains Palm Beach County in 2000, where the butterfly ballot's confusing layout caused an estimated 2,000+ Democratic voters to mistakenly vote for Pat Buchanan and over 19,000 voters to punch multiple holes—enough to change the outcome of a presidential election. These are not ancient history artifacts: research from the MIT Election Lab, the Center for Civic Design, and Duke University's Fuqua School of Business continues to document how ballot layout choices—column splitting, contest ordering, instruction placement, font size, bubble alignment—directly cause overvotes, undervotes, and voter confusion in every election cycle.

The pain is specific and measurable. Every uncounted vote due to a design error is a voter who showed up, waited in line, and attempted to express their preference, only to have their ballot invalidated by a layout decision they had no control over. In down-ballot races where margins are thin—school board elections decided by dozens of votes, city council races decided by single digits—a ballot design that causes even a 1% increase in uncounted votes can change who governs a community. Voters do not know their ballot was not counted; they leave the polling place believing they voted successfully. There is no notification, no cure process, no recourse.

The structural reason this persists is that there are no mandatory federal or state ballot design standards in most jurisdictions. The Center for Civic Design has published best practices, and some states like California have developed poll worker training standards that include ballot layout guidance, but compliance is voluntary. Each of the roughly 10,000 election jurisdictions in the U.S. designs its own ballots, often using whatever templates their voting equipment vendor provides, modified by a county clerk who may have no training in information design, typography, or usability testing. The vendor templates themselves are designed for flexibility, not usability—they need to accommodate thousands of different election configurations, so they optimize for technical compatibility rather than voter comprehension. There is no requirement to user-test a ballot before it goes to print.


In Gillespie County, Texas, the only two full-time election workers quit, completely emptying the county's election office less than 70 days before voters were set to start casting ballots. This is not an edge case—it is an inherent vulnerability in a system where election administration is handled at the county level by offices that are often staffed by 3-5 people. The National Association of Counties reports that in most smaller and medium counties, the election official is also the county clerk, recorder, or auditor, meaning election administration is one of several duties competing for their time. When even one person leaves a 3-person office, 33% of the institutional knowledge walks out the door.

The consequences are immediate and concrete. Someone needs to know how to program the ballot definition files for the county's specific voting equipment. Someone needs to know which polling locations have accessibility issues and need temporary ramps ordered. Someone needs to know how to recruit and assign poll workers to 15 precincts and train them on the county's procedures. Someone needs to know the deadlines for ballot printing, logic and accuracy testing, and early voting setup. This knowledge is largely undocumented and lives in the heads of the people who do the work. When those people leave suddenly, the county scrambles to find replacements—often borrowing staff from neighboring counties or relying on the state election office to send emergency support—while simultaneously trying to meet immovable statutory deadlines.

This persists because there is no national or state-level redundancy framework for election administration. Unlike the military, which has clear succession plans and documented standard operating procedures for every role, county election offices have no requirement to maintain continuity-of-operations plans, no cross-training mandates, and no centralized knowledge base. The assumption baked into the system is that the county clerk will serve for years and train a successor—but in an era of 41% annual turnover, threats, and burnout, that assumption is broken. There is no "election administration reserve corps" that can be deployed when a county's staff collapses.


Colorado state law sets the minimum compensation for election judges at $5 per day. While many Colorado counties pay more than this floor, the statute itself signals how little the role is valued in law. Across the country, poll worker compensation is a patchwork: Delaware pays a $300 stipend, Alaska pays $20/hour, New Jersey raised its minimum to $15/hour in 2024, and South Carolina's Election Commission recently requested budget increases to fund a $40 daily raise. Meanwhile, the job itself requires a minimum of 13 hours on Election Day, often starting before 6 AM and ending after 8 PM, plus mandatory training sessions that may or may not be compensated. The EAC's 2022 survey found that 54.1% of jurisdictions reported difficulty recruiting poll workers, with the problem persisting in every major election since 2018. The downstream effect is that polling places cannot open enough lines, wait times increase, and voters in high-traffic precincts—disproportionately in urban and minority communities—bear the cost. In Pennsylvania, election officials in Montgomery, Bucks, and Chester counties reported needing to appoint people to 30-50% of poll worker positions because not enough volunteers signed up. When positions go unfilled, the remaining workers are stretched thinner, mistakes increase, and the voter experience degrades. A 2024 Chester County incident demonstrated the connection directly: two inexperienced poll workers, inadequately trained, selected the wrong option when printing poll books, excluding all non-major-party voters. Over 12,000 voters were forced to cast provisional ballots as a result. The reason this problem persists is that poll worker compensation is set by state legislatures that have little incentive to raise it. The political cost of raising poll worker pay is real—it requires either tax increases or cuts to other budget items—while the benefit is diffuse and invisible (shorter lines, fewer errors, better-run elections). 
The typical poll worker is 61 or older, and 16.7% are first-timers, meaning the workforce is aging out and being replaced by people with no institutional knowledge. Younger workers who might be interested cannot afford to take a full day off work for $100-200 in compensation, especially in states without laws requiring employers to give time off for poll work.

legal · 0 views

The Electronic Registration Information Center (ERIC) is the only system that allows states to cross-reference voter registration records across state lines to identify voters who have moved, died, or are registered in multiple states. Between 2022 and 2023, nine states with Republican leadership withdrew from ERIC after it became a target of conspiracy theories: Louisiana, Alabama, Florida, Missouri, West Virginia, Iowa, Ohio, Texas, and Virginia. These states collectively represent tens of millions of registered voters whose records are now maintained without cross-state verification. The immediate consequence is that these states' voter rolls are becoming less accurate over time, not more. Alabama and Missouri took months after leaving ERIC to develop alternative plans for cleaning their voter rolls, and the plans they came up with are less rigorous than what ERIC provided. Without cross-state data sharing, a voter who moves from Texas to California may remain on the Texas rolls indefinitely, which feeds exactly the kind of "dead voters" and "double registration" narratives that motivated leaving ERIC in the first place. It is a self-fulfilling prophecy: states left a system that prevented list accuracy problems because of conspiracy theories about list accuracy problems, and now they actually have list accuracy problems. Meanwhile, the remaining ERIC member states (24 plus DC) lose the data contribution from those nine states, degrading the system's effectiveness for everyone. This problem persists because voter roll maintenance in the U.S. is decentralized by design—there is no national voter registration database, and the Constitution gives states primary authority over elections. ERIC was a voluntary workaround to this structural fragmentation, built as a nonprofit consortium. 
But because participation is voluntary, states can leave whenever they want, and because ERIC became politically coded, the decision to stay or leave became a partisan signal rather than a technical one. One alternative system that some states explored was knocked offline when its server was attacked, illustrating how difficult it is to build reliable cross-state election infrastructure outside of established institutions.
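The core capability that leaves with ERIC is cross-state record matching. A minimal sketch of the idea in Python, with the caveat that the record fields and the exact-match key here are illustrative assumptions only: ERIC's real matching also draws on DMV and Social Security death data and tolerates typos and name variants.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Registration:
    state: str
    first: str
    last: str
    dob: str  # YYYY-MM-DD

def match_key(r):
    # Naive blocking key: normalized name plus date of birth. This only
    # illustrates the shape of the problem; real matching is fuzzier.
    return (r.first.strip().lower(), r.last.strip().lower(), r.dob)

def cross_state_duplicates(rolls):
    """Return pairs of registrations in different states sharing a key."""
    groups = {}
    for r in rolls:
        groups.setdefault(match_key(r), []).append(r)
    pairs = []
    for group in groups.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                if group[i].state != group[j].state:
                    pairs.append((group[i], group[j]))
    return pairs

rolls = [
    Registration("TX", "Maria", "Lopez", "1980-04-02"),
    Registration("CA", "Maria", "Lopez", "1980-04-02"),
    Registration("TX", "James", "Kim", "1975-11-30"),
]
print(len(cross_state_duplicates(rolls)))  # 1, the TX/CA pair
```

Without a shared consortium, no single state can run even this naive pass, because each state only holds one side of the pair.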

legal · 0 views

In February 2025, CISA withdrew federal support for the Election Infrastructure Information Sharing and Analysis Center (EI-ISAC), which had grown from 25 analysts in 2018 to over 135 regional experts who conducted more than 1,000 vulnerability scans of election systems. In March 2025, CISA cut $10 million in funding for the broader Multi-State ISAC. The proposed fiscal year 2026 budget would eliminate CISA's Election Security Program entirely—$39.6 million and 14 positions. Between February and November 2025, an estimated 1,000 CISA employees were terminated, including specialists who directly assisted state election officials. The impact falls hardest on rural and small counties that relied on free EI-ISAC services because they cannot afford cybersecurity on their own. In Washington state alone, 15 of 39 counties were designated as "cyber-underserved" by the EI-ISAC. These are counties where the election office might be two people who also handle recording, licensing, and vital records. They do not have an IT security team. They were using free EI-ISAC tools for network monitoring, vulnerability scanning, and threat intelligence briefings. Now those tools are gone, and these same counties are still targets for ransomware gangs and foreign intelligence services. Arizona Secretary of State Adrian Fontes said election officials are "effectively flying blind" without CISA's assistance. Maine Secretary of State Shenna Bellows said the federal government "pulled the rug out from under" officials who relied on free EI-ISAC services. The structural problem is that election cybersecurity was never funded as a permanent line item. It was built on top of CISA as a program that could be expanded or eliminated by executive branch decisions, with no dedicated congressional appropriation. 
County election offices have budgets set by county commissioners who are balancing roads, jails, and social services—a $50,000-per-year cybersecurity contract is a hard sell when the last bridge inspection failed. The result is that election cybersecurity for thousands of small jurisdictions depended on a single federal program that turned out to be politically vulnerable, and when it was cut, there was no fallback.

legal · 0 views

In Florida, mail ballots cast by Hispanic voters face a rejection risk 2.6 times that of white voters. In North Carolina, the rejection risk for Black voters is three times that of white voters. In Georgia, Asian voters' mail ballots are flagged at nearly three times the rate of white voters' ballots. These are not small margins on a rare event—in the 2016 and 2018 elections combined, over 750,000 mail ballots were rejected nationally, with non-matching signatures accounting for about a third of all rejections. Each rejected ballot is a citizen who went through the process of requesting, completing, and mailing a ballot, only to have their vote silently discarded. A 2026 study published in State Politics & Policy Quarterly used a controlled experiment where volunteers were randomly assigned to sign either a Latino-coded or White-coded name, ensuring that the racialized name was not linked to the signer's actual background. This eliminated the possibility that cultural or demographic differences in handwriting could explain the gap. The result: signature evaluators were significantly more likely to reject Hispanic-named signature pairs compared to White-named ones. The study concluded that evaluator bias—not voter-side factors—is the primary driver of racial disparities in mail ballot signature verification. A separate study in the American Journal of Political Science found that election workers are more likely to wrongly reject valid ballots than to correctly reject invalid ones, meaning the system's error rate tilts toward disenfranchisement. This problem persists because signature verification is inherently subjective—there is no national standard for what constitutes a "match," and training for the workers who make these calls varies wildly by jurisdiction. Some counties give evaluators a few hours of training; others give them days. Some use automated signature matching software as a first pass; others rely entirely on human judgment. 
The ballot curing process—which lets voters fix a rejected ballot—exists in only 33 states, and the rules differ so dramatically (Arizona says "make reasonable efforts to contact," California requires notification "a minimum of eight days prior to certification") that whether your vote counts after a false rejection depends heavily on where you live.

legal · 0 views

An investigation by the Houston Chronicle found that of 701 voting locations in Harris County, Texas—the third most populous county in the nation, home to 4.7 million people—just two were fully compliant with the Americans with Disabilities Act. Three hundred of those 701 sites were not only noncompliant but could not be made accessible even through temporary modifications like portable ramps or signage. Of the 68 early voting sites, not a single one was fully compliant. This is not a Harris County anomaly: a GAO study of 178 polling places nationwide found that 60% had potential barriers to entry and 65% had problems with voting apparatus accessibility for wheelchair users. In Detroit and 14 surrounding suburbs, 84% of voting locations had barriers for voters with disabilities. The downstream impact is that voters with disabilities simply do not vote. The U.S. Census Bureau's Current Population Survey consistently shows a turnout gap of 6-7 percentage points between voters with and without disabilities. That gap represents millions of citizens. For a wheelchair user who arrives at their assigned polling place and finds three steps with no ramp, the options are: attempt to find the curbside voting setup (if one exists and if a poll worker is stationed outside), travel to a different location that may or may not be accessible, or give up. Many give up. This is not a hypothetical—it is a pattern documented in DOJ enforcement actions, GAO audits, and disability rights litigation across dozens of jurisdictions. The structural reason this persists is that polling places are typically hosted in buildings the county does not own—churches, community centers, schools, VFW halls—and the county has no authority or budget to renovate them. When an election official surveys sites, they are choosing from whatever buildings are available in each precinct, and in many precincts there is no fully accessible option. 
The ADA requires "program access," meaning the county must make the voting program accessible, but the enforcement mechanism is reactive: the DOJ investigates after a complaint is filed, not proactively. Counties that lack staff to conduct accessibility audits—which is most of them—do not know how bad their compliance is until someone sues.

legal · 0 views

In 2024, 41% of local chief election officials turned over—the highest rate in at least 25 years. In the year following the 2024 election alone, 53 chief local election officials in Western states left their jobs, nearly matching the 55 who departed after the 2020 election. These are not interchangeable bureaucrats; they are the people who know how to configure ballot templates for their county's specific equipment, how to recruit poll workers from local civic organizations, which polling locations have ADA issues that require temporary ramps, and how to troubleshoot the county's 15-year-old tabulator when it jams at 2 AM on election night. The human cost is severe and specific. In Shasta County, California, the Clerk and Registrar of Voters retired in 2024 after nearly two decades, citing heart failure from job-related stress. Her successor resigned less than a year later for similar health reasons. In Clark County, Nevada, the longtime Registrar reported that threats escalated to the point that police were checking on his home hourly and that his family was also targeted. These are not isolated incidents—a Brennan Center survey found that one in three election officials has experienced threats, harassment, or abuse. The people being driven out are disproportionately the most experienced ones, the ones who have run enough elections to know what can go wrong. The structural reason this persists is that election administration in the U.S. was designed for an era when it was an unglamorous, apolitical clerical function. Salaries are set by county pay scales that treat the election clerk the same as any other department head, despite the fact that the job now involves death threats, 80-hour weeks during election season, cybersecurity responsibilities, and intense public scrutiny. There is no federal certification, career ladder, or professional development pipeline for election officials. 
When an experienced official leaves, the replacement is often a political appointee or the next person willing to take the job—and they are learning on the fly how to run an election that serves hundreds of thousands of voters.

legal · 0 views

In the November 2024 election, 44.6 million registered voters lived in jurisdictions using principal voting equipment that was first fielded more than 10 years ago. The expected lifespan of electronic voting machine core components is 10-20 years, and experts say most systems are closer to the 10-year end. Some counties are still running equipment purchased in the early 2000s—machines whose processors, memory, and operating systems are two decades behind current technology. Replacing all in-person voting equipment first fielded in 2014 or earlier would cost approximately $203 million nationally; replacing equipment that is no longer even manufactured costs another $150 million. The real pain is not just that the machines are old—it is what happens when they fail. Aging touchscreens become unresponsive, causing longer wait times at the polls. Thermal printers jam. Battery backups degrade, making machines vulnerable to power interruptions. Memory card readers fail mid-count. When a machine breaks on Election Day, there is often no spare available because the manufacturer discontinued that model years ago. The county clerk scrambles to consolidate voters onto fewer machines, lines grow, and some voters leave without voting. In a close local race, a few hundred voters who gave up in line can change the outcome. This problem persists because there is no sustained federal funding mechanism for election infrastructure. The last major federal investment was the $380 million in HAVA grants in 2018, and before that, the 2002 HAVA appropriation. Election officials cannot plan multi-year equipment replacement cycles because they never know if or when federal money will arrive. County budgets, meanwhile, are competing election equipment against road repairs, law enforcement, and public health—and a $3 million voting system replacement in a county with a $50 million total budget is a massive capital expenditure that requires years of political will to approve. 
So the machines keep aging.

legal · 0 views

ES&S, Dominion Voting Systems, and Hart InterCivic together provide voting equipment to roughly 92% of the U.S. voting population. ES&S alone controls about 50% of the market. This oligopoly formed after the Help America Vote Act of 2002 drove a wave of acquisitions—Dominion bought Sequoia Voting Systems and Premier Election Solutions (formerly Diebold), while ES&S absorbed competitors until forced to divest on antitrust grounds. The reason this matters is not abstract market theory—it directly drives up costs and slows down innovation for the counties that actually buy the equipment. When only three vendors exist, a county negotiating a $2 million voting system replacement has almost no leverage. Vendors can charge premium prices for maintenance contracts, replacement parts, and software updates because the switching costs are enormous: a new system requires retraining every poll worker, redesigning every ballot template, and re-integrating with the county's election management software. Counties are effectively locked in for a decade or more once they choose a vendor. The structural reason this persists is the EAC's federal certification process. There are only two accredited testing laboratories (VSTLs) in the entire country, and each vendor can only have one system under federal certification at a time. Certification campaigns routinely take 12-18 months and cost millions of dollars. A startup building a better, more secure, or more accessible voting system faces years of testing and millions in costs before it can sell a single unit. This barrier to entry is why the market consolidated in the first place and why it stays consolidated—the certification regime that was designed to ensure security has the side effect of freezing out competition and locking counties into expensive, aging systems from a handful of incumbents.

legal · 0 views

An AI-based analysis of 2,647,471 cancer research publications flagged 261,245 papers -- 9.87% -- as probable paper mill products. Separate estimates across biomedical research put the paper mill share between 2% and 20% of published papers, with one researcher warning that within a decade, more than half of annually published studies could be fraudulent. Wiley's own screening tool found that up to 1 in 7 submissions to hundreds of its journals showed signs of paper mill activity. The entities producing these papers are large, resilient, and growing rapidly, according to a 2025 PNAS study. In cancer research specifically, the consequences of paper mill contamination are not abstract. Oncologists designing clinical trials rely on published preclinical data to determine which drug targets, biomarkers, and treatment combinations to test in humans. If 10% of the preclinical literature is fabricated, researchers are building clinical trial designs on a foundation where roughly 1 in 10 supporting studies is fake. A Phase II clinical trial costs $10-50 million; a Phase III trial costs $50-300 million. Each trial designed around fabricated preclinical evidence wastes years of patient enrollment time, millions in funding, and -- most critically -- exposes cancer patients to experimental treatments justified by data that never existed. Failed trials also create 'negative evidence' that discourages future investigation of approaches that might actually work, because the field incorrectly concludes that the preclinical rationale was sound but the biology did not translate. Paper mills thrive because the incentive structure of academic publishing rewards publication volume, and the detection infrastructure cannot keep pace with production volume. Paper mills use AI to generate novel-looking western blot images, flow cytometry plots, and statistical tables that pass automated screening. They rotate author names, affiliations, and email domains to avoid pattern detection. 
They exploit the special issue model to bypass rigorous editorial oversight. And they serve a real market demand: researchers in systems where career advancement requires a minimum number of publications per year will pay $1,000-5,000 per paper to a mill rather than risk career stagnation. Until the incentive to publish-or-perish is decoupled from the incentive to fabricate, paper mills will continue to scale faster than detection tools.
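The name-and-email rotation described above is exactly what metadata screening tries to catch. One simple fingerprint, sketched here as an assumption rather than any publisher's actual tool, is a contact email that reappears under different author names across unrelated submissions:

```python
from collections import defaultdict

def flag_reused_emails(submissions, min_names=2):
    """Flag contact emails that appear under two or more distinct author
    names across submissions: one simple paper-mill fingerprint among the
    many signals real screening tools combine. `submissions` is an
    illustrative (manuscript_id, author_name, email) export."""
    names_by_email = defaultdict(set)
    mss_by_email = defaultdict(list)
    for ms_id, name, email in submissions:
        key = email.strip().lower()
        names_by_email[key].add(name.strip().lower())
        mss_by_email[key].append(ms_id)
    return {e: mss_by_email[e] for e, names in names_by_email.items()
            if len(names) >= min_names}

subs = [
    ("MS-1", "Wei Zhang", "corr.author01@example.com"),
    ("MS-2", "Li Chen",   "corr.author01@example.com"),  # same inbox, new name
    ("MS-3", "Ana Souza", "a.souza@university.example"),
]
print(flag_reused_emails(subs))  # {'corr.author01@example.com': ['MS-1', 'MS-2']}
```

Mills defeat any single heuristic like this by rotating domains too, which is why detection has to combine many weak signals and still lags production.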

education · 0 views

In 2025, Clarivate suppressed impact factors for 20 journals due to excessive self-citation and citation stacking -- a significant escalation from 4 suppressions in 2023 and 17 in 2024. Citation manipulation takes multiple forms: editors explicitly demand that authors add 6-8 references to recent articles in the editor's own journal as a condition of acceptance, with no scholarly justification offered. Citation cartels form where editors of different journals agree to mutually inflate each other's citation counts. And individual authors engage in systematic self-citation rings, citing their own prior work regardless of relevance. The damage goes far beyond inflated metrics. Impact factors directly determine where researchers submit their best work, which journals libraries subscribe to, and how tenure committees evaluate candidates. When a journal's impact factor is artificially inflated through coercive citation, researchers submit papers there believing it to be a high-quality venue. Their work is then associated with a journal that is subsequently flagged for manipulation. Libraries pay subscription fees calibrated to a fraudulent metric. And hiring committees comparing candidates from different fields use impact factors as a cross-disciplinary yardstick, meaning a candidate who published in a citation-manipulated journal appears more productive than a candidate who published in an honest one. The metric that was supposed to measure quality becomes a tool for gaming quality. Coercive citation persists because it is nearly impossible to prove and carries no individual penalty. When an editor emails an author saying 'please consider citing recent work in our journal,' the line between legitimate editorial suggestion and coercive manipulation is subjective. Authors comply because refusing risks rejection. The author cannot report the behavior without jeopardizing their submission. 
Even when Clarivate suppresses a journal's impact factor, the suppression is temporary (typically one year), the manipulating editors face no personal consequences, and the journal can resume normal operations once the metric recovers. There is no equivalent of a 'ban' for editors caught manipulating citations, and no whistleblower protection for authors who report coercive demands.
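The metric behind these suppressions reduces to a simple ratio: of the citations a journal receives, what share comes from the journal itself? A minimal sketch, noting that Clarivate's actual screening uses multi-year citation windows and distortion thresholds that are beyond this illustration:

```python
def self_citation_share(citations, journal):
    """Share of citations received by `journal` that originate from
    `journal` itself. `citations` is a list of
    (citing_journal, cited_journal) pairs."""
    received = [(src, dst) for src, dst in citations if dst == journal]
    if not received:
        return 0.0
    return sum(1 for src, _ in received if src == journal) / len(received)

# Purely illustrative citation pairs.
cites = [("J-A", "J-A"), ("J-A", "J-A"), ("J-B", "J-A"), ("J-A", "J-B")]
print(self_citation_share(cites, "J-A"))  # 2 of the 3 received are self-citations
```

The ratio is easy to compute; the hard part, as the entry explains, is proving that any individual coerced citation was coerced.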

education · 0 views

Studies of retraction timelines reveal a stark disparity: when junior researchers are implicated in misconduct, the median time from publication to retraction is 22 months. When senior researchers are implicated, the median balloons to 79 months -- over 6.5 years. The overall average time to retraction across all cases is approximately 33 months (about 2.7 years). During every month that a fraudulent paper remains in the literature, it accumulates citations, informs other research, and may influence clinical or policy decisions. The asymmetry is not just a statistical curiosity -- it reflects a structural power imbalance with real consequences. A senior researcher who fabricated data in a highly cited paper has 6.5 years of continued citations, continued grant funding justified by those citations, continued supervision of graduate students building on fabricated findings, and continued influence on the field's direction. Their graduate students and postdocs, whose careers depend on the senior researcher's reputation and whose own papers cite the fraudulent work, are trapped: they cannot publicly challenge their mentor without destroying their own careers, and they cannot build on the work without perpetuating the fraud. By the time the retraction finally arrives, the senior researcher may have retired with full honors, while the junior researchers who depended on them face a contaminated publication record. The disparity persists because the retraction process depends on institutional investigations, and institutions have powerful incentives to protect senior faculty. A university that investigates a prominent, well-funded professor risks losing grant overhead revenue, damaging its rankings, and generating negative press. Investigations are conducted by committees of the accused's peers and colleagues, creating inherent conflicts of interest. 
Journals defer to these institutional investigations rather than acting independently, and institutions slow-walk investigations because the costs of confirming fraud (reputational damage, returned grant money, legal liability) far exceed the costs of delay (which are externalized onto the scientific community at large). Junior researchers, by contrast, have no institutional protection, no political capital, and no leverage to delay proceedings.

education · 0 views

In December 2024, Finland's Publication Forum (JUFO) downgraded all 193 MDPI journals to its lowest level 0 rating -- the same category as popular press articles and publications without peer review. This means Finnish researchers who publish in any MDPI journal receive zero points toward funding applications and career evaluations. The Chinese Academy of Sciences followed suit in its 2025 revision, removing all MDPI journals from its recommended list. The trigger was MDPI's industrial-scale use of special issues: the publisher went from 388 special issues in 2013 to nearly 40,000 in 2021, averaging 500 special issues per journal. In 2023, the International Journal of Molecular Sciences alone had 4,216 open special issues -- 11.5 closing every single day. The immediate victims are the tens of thousands of researchers who published in MDPI journals in good faith. A Finnish postdoc who published three papers in an MDPI journal in 2023, believing it was a legitimate indexed venue, now finds those publications are worth zero in funding evaluations. Their competitors who published in non-MDPI journals of similar quality have an insurmountable advantage. Researchers in fields where MDPI journals were dominant venues (sustainability science, environmental engineering, materials science) face a retroactive devaluation of years of published work. The papers are not retracted -- they still exist -- but they carry the institutional stigma of a publisher that an entire country's funding system has declared untrustworthy. The structural problem is that the open-access APC model creates a direct financial incentive to maximize the number of published papers, and special issues are the most efficient mechanism to do so. Each special issue generates a call for papers that produces submissions, each submission generates APC revenue upon acceptance, and guest editors are incentivized to accept rather than reject because their CV benefits from a successfully completed special issue. 
MDPI's model is the logical endpoint of a system where the publisher is paid per paper published rather than per paper rejected. Quality control is a cost that reduces revenue. The market rewards volume, and MDPI optimized for volume more aggressively than any competitor.

education · 0 views

As of late 2025, the Retraction Watch Hijacked Journal Checker maintained by researcher Anna Abalkina lists over 400 journals that have been hijacked -- meaning scammers created fraudulent websites that clone the appearance, title, ISSN, and metadata of legitimate journals. In late 2024, a new wave of hijacking targeted journals from Elsevier, Springer Nature, and other major publishers with remarkable fidelity. A company called 'Springer Global Publication' was linked to many of these scams. Papers submitted to hijacked journals are published on the fake site, often using recycled content, and assigned fake DOIs (starting with '16' or '20' instead of the legitimate '10' prefix). The victims are researchers who believe they have published in a legitimate, indexed journal. They pay APCs (often $500-2,000), list the publication on their CV, and cite it in grant applications -- only to discover months or years later that their paper appeared on a fraudulent website that has no connection to the real journal. For researchers in countries where publication counts directly determine salary, promotion, and funding, a hijacked journal publication can mean the difference between career advancement and stagnation. The paper is not indexed, not discoverable, and not citable. The money is gone. And the researcher may face institutional suspicion of having knowingly published in a fraudulent venue, even though they were the victim of a sophisticated scam. Hijacked journals persist because the domain name system and web infrastructure make it trivially easy to create convincing clones. When a legitimate journal lets its domain registration lapse, scammers buy it. When a journal uses a generic web template, scammers copy it. There is no centralized registry that authoritatively maps journal titles and ISSNs to verified URLs. ISSN.org, Crossref, and publisher websites each maintain separate records that are not cross-referenced in real-time. 
Researchers have no single source of truth to verify whether the submission portal they are looking at is the real one. And the scammers operate across jurisdictions where enforcement is impractical.
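One of the few checks a researcher can run today is syntactic: every DOI registered through the official system begins with the "10." directory indicator (the DOI syntax is standardized as ISO 26324, and Crossref registrant prefixes run 4 to 9 digits). A quick screen catches the crude "16." and "20." fakes described above, though passing it proves nothing, since a fake DOI can be well-formed; the stronger test is whether the DOI actually resolves at doi.org.

```python
import re

# Registered DOIs look like "10.<registrant>/<suffix>"; the "10." directory
# indicator is fixed by the DOI syntax standard.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def plausible_doi(doi: str) -> bool:
    """Cheap syntactic screen only; a well-formed DOI can still be fake."""
    return bool(DOI_RE.match(doi.strip()))

print(plausible_doi("10.1016/j.example.2024.001"))  # True
print(plausible_doi("16.1001/fake.2024.77"))        # False
```

A verified registry mapping journal titles and ISSNs to canonical URLs, the missing piece this entry describes, would let such a check go beyond syntax.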

education · 0 views

A January 2025 analysis of citation data from Clarivate's Web of Science Core Collection found that 58.28% of all citations to retracted papers occurred after the retraction was issued. Some retracted papers actually received more citations post-retraction than pre-retraction. A separate study found that of 88 papers citing retracted work, 39 drew conclusions that would be substantially weakened if the retracted papers were removed from the analysis -- yet journals flagged only 4 of these 39 weakened studies. A 1998 JAMA investigation found that 94% of 299 citations to retracted articles in MEDLINE did not note the retraction, and decades later, the problem has barely improved. This is not an abstract bibliometric curiosity. When a retracted paper on Alzheimer's drug mechanisms continues to be cited as valid evidence in grant applications, clinical trials are designed around fabricated findings. When a retracted nutrition study is cited in dietary guidelines, public health policy is built on fraud. The citing authors are not malicious -- they simply have no practical way to know that a paper they read and cited has since been retracted. Google Scholar, PubMed, and most reference managers provide inconsistent or absent retraction notifications. A researcher building a literature review encounters the paper, sees it was published in a reputable journal, cites it, and moves on. The retraction notice exists as a separate publication that almost nobody reads. The structural failure is that the scholarly communication system treats publication and retraction as separate, disconnected events. There is no technical standard that propagates retraction status to every copy of a paper across every database, repository, and reference manager. Crossmark exists but is not universally implemented. Publishers have no obligation to notify authors who cited a retracted paper. The citing authors have no obligation to issue corrections. 
And journal editors reviewing new submissions have no automated tool that checks whether any of the references in a manuscript have been retracted. The entire chain from retraction to downstream correction is manual, voluntary, and therefore almost never completed.
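The missing "automated tool" could start as something very small: screening a manuscript's reference DOIs against a retraction dataset before review. A hedged sketch, where the CSV column name follows the openly available Retraction Watch export but should be treated as an assumption and adapted to the file actually in hand:

```python
import csv

def load_retracted_dois(path):
    """Build a lookup of retracted DOIs from a CSV export. The column
    name 'OriginalPaperDOI' is an assumption based on the Retraction
    Watch dataset; adjust it for the export you are using."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["OriginalPaperDOI"].strip().lower()
                for row in csv.DictReader(f)
                if row.get("OriginalPaperDOI")}

def flag_retracted_references(reference_dois, retracted):
    """Return the subset of a manuscript's reference DOIs found in the
    retraction lookup."""
    return [d for d in reference_dois if d.strip().lower() in retracted]

retracted = {"10.1234/fake.2020.1"}  # stand-in for load_retracted_dois(...)
refs = ["10.1234/fake.2020.1", "10.1000/sound.2019.5"]
print(flag_retracted_references(refs, retracted))  # ['10.1234/fake.2020.1']
```

The technical barrier is this low; what is missing, as the entry argues, is any obligation to run the check and act on the result.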

education · 0 views

Between 2023 and 2024, Wiley retracted more than 11,300 papers from its Hindawi portfolio -- more retractions than any single publisher had ever issued. In 2024, Wiley shut down 19 Hindawi-branded journals entirely. The company disclosed $35-40 million in lost revenue from the cleanup. The attack vector was specific: paper mills exploited the guest-edited special issue model, where external academics are invited to curate themed collections. Paper mill operators either became guest editors themselves or bribed existing guest editors, then funneled fabricated manuscripts through compromised peer review. Hindawi's integrity team found duplicated review text, reviewers who turned in reports within hours, and systematic misuse of reviewer databases. The downstream damage extends far beyond Wiley's balance sheet. Thousands of researchers who legitimately published in these journals now have retracted or tainted publications on their CVs -- not because they committed fraud, but because they happened to publish in a journal that was simultaneously being exploited by paper mills. For early-career researchers in developing countries who published in Hindawi journals because APCs were affordable, a mass retraction on their record can be career-ending. Their papers are delisted from indexes, their citation counts drop, and their grant applications are weakened -- all through no fault of their own. They are collateral damage of a publishing model that prioritized volume over integrity. The root cause is the perverse incentive structure of the special issue model. Publishers discovered that special issues generate more submissions (and therefore more APC revenue) than regular issues. MDPI published nearly 40,000 special issues in 2021 alone. Hindawi adopted the same playbook. But special issues delegate editorial gatekeeping to guest editors who have no contractual accountability, no training in fraud detection, and no financial stake in the journal's long-term reputation. 
The publisher profits from every accepted paper; the guest editor gets a CV line; and nobody has an incentive to reject manuscripts. Paper mills recognized this as the weakest point in the system and exploited it at industrial scale.

education · 0 views

A team of scientific integrity investigators identified nearly 300 papers by Japanese physicians (the Sato-Iwamoto case) bearing clear signs of fabrication and contacted 78 journals to request retractions. Journals took action on 136 papers -- retracting 121, correcting 3, and issuing 12 expressions of concern. But 107 papers across 41 journals from 21 publishers, including Elsevier and Springer Nature, received no response whatsoever. The journals simply ghosted the whistleblowers. It took 3.5 years and a threat of sanctions from the Committee on Publication Ethics (COPE) before Elsevier finally retracted just 7 of the flagged papers. The human cost is direct and measurable. These were medical papers -- studies that inform clinical practice. Every month a fabricated paper on drug efficacy or surgical outcomes remains in the literature, it risks being incorporated into clinical guidelines, systematic reviews, and meta-analyses that doctors use to make treatment decisions. A physician in a rural hospital who looks up a treatment protocol has no way of knowing that the evidence supporting it was fabricated by a fraud ring that was identified years ago but whose papers were never retracted because the journal could not be bothered to respond to an email. The patients treated based on this evidence bear the ultimate cost of editorial inaction. The structural problem is that journals have no obligation, deadline, or penalty for responding to integrity complaints. There is no regulatory body that can compel a journal to investigate. COPE can threaten sanctions, but membership is voluntary and sanctions are toothless -- the worst outcome is being removed from COPE, which has no material financial consequence for the publisher. Journal editors are typically unpaid academics with no dedicated staff for investigations. Publishers treat integrity complaints as a cost center that generates no revenue, and the rational economic decision is to ignore them. 
The whistleblowers bear all the costs (time, reputation risk, legal exposure) while the publisher bears none of the consequences of inaction.

education · 0 views