Panic Politics maintains a hand-curated database of 99 news outlets, each classified by political lean: progressive/left, independent/center, or conservative/right. These are real outlets — newspapers, broadcast networks, digital publications, and wire services — that Americans actually read. No fringe sites. No anonymous blogs.
For every story, the AI agent queries outlets from each of the three groups directly. It does not pull random headlines and sort them afterward. It goes straight to left-leaning outlets, center outlets, and right-leaning outlets and asks: what is each group actually reporting about this story right now?
This means the three perspective columns are grounded in real coverage — not hypothetical interpretations of what each side might say. When Fox News says something, we report what Fox News said. When NPR reports something, we report what NPR reported. The sourcing is explicit and traceable.
Every morning at 6:00 AM Pacific, an automated AI agent runs through the following pipeline:
The agent scans the top US political headlines of the day using NewsAPI, identifying the most-covered stories across all outlets. Stories that appear across multiple sources — left, center, and right — are prioritized, because cross-coverage means the story matters.
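The cross-coverage prioritization step can be sketched as follows. This is a minimal illustration, not the production logic: the story keys, outlet names, and lean labels below are hypothetical, and real story matching would use fuzzier clustering than an exact topic key.

```python
from collections import defaultdict

# Hypothetical input: headlines already tagged with a story key and the
# political lean of the outlet that ran them.
headlines = [
    {"story": "budget-vote", "outlet": "Outlet A", "lean": "left"},
    {"story": "budget-vote", "outlet": "Outlet B", "lean": "center"},
    {"story": "budget-vote", "outlet": "Outlet C", "lean": "right"},
    {"story": "minor-recall", "outlet": "Outlet D", "lean": "left"},
]

def prioritize(headlines):
    """Rank stories by how many lean groups cover them; cross-coverage first."""
    leans_by_story = defaultdict(set)
    for h in headlines:
        leans_by_story[h["story"]].add(h["lean"])
    # Stories covered by all three groups sort ahead of single-group stories.
    return sorted(leans_by_story, key=lambda s: -len(leans_by_story[s]))

print(prioritize(headlines))  # ['budget-vote', 'minor-recall']
```

A story covered by all three lean groups always outranks one covered by a single group, which is exactly the "cross-coverage means the story matters" rule.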
For each story, the agent queries the 99-outlet database and pulls actual headlines and article content from left-leaning, center, and right-leaning outlets separately. Each perspective column is built from what those specific outlets are reporting — not from what the AI imagines they might say.
The AI writes three 150-180 word summaries in active reporting voice: 'CNN reports...', 'Fox News said...', 'Reuters noted...' The word 'would' is banned from all output. Every claim is attributed to a named outlet. The goal is to report what is being said, not to editorialize about it.
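Those drafting constraints lend themselves to a mechanical check. The sketch below is an assumed implementation, not the actual Panic Politics validator; the function name and rules simply mirror the constraints described above (150-180 words, no 'would', at least one attribution verb).

```python
import re

# Verbs that signal an attributed claim ("CNN reports...", "Fox News said...").
ATTRIBUTION = re.compile(r"\b(reports?|reported|said|noted)\b", re.IGNORECASE)

def validate_perspective(text: str) -> list[str]:
    """Return a list of rule violations; an empty list means the draft passes."""
    problems = []
    words = text.split()
    if not 150 <= len(words) <= 180:
        problems.append(f"word count {len(words)} outside 150-180")
    if re.search(r"\bwould\b", text, re.IGNORECASE):
        problems.append("banned word 'would' present")
    if not ATTRIBUTION.search(text):
        problems.append("no attribution verb found")
    return problems
```

Running a draft through a check like this before human review catches mechanical slips (speculative 'would' phrasing, missing attribution) so the editor can focus on substance.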
Each perspective is scored 0-4 on four criteria: factual accuracy, selective framing, rhetorical inflation, and source integrity. The score is calculated algorithmically and reviewed for consistency. Both sides of the political spectrum are held to the same standard.
After the three perspectives are written, the AI synthesizes all three into a single 120-150 word plain-English summary. This is the Panic Politics editorial voice — no sides taken, no jargon, no spin. Just the core facts, where the sides agree, where they genuinely differ, and why the story matters to ordinary people.
Completed stories are placed in a draft queue for human review before publication. The site owner reviews each story, can edit any section, and approves publication. The AI does the sourcing and drafting. A human makes the final call.
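The pipeline steps above can be strung together as a simple orchestration skeleton. Everything here is illustrative: the function names, the data shapes, and the 'pending_review' status are assumptions, and each step is stubbed where the real agent would call NewsAPI, the outlet database, or a language model.

```python
from dataclasses import dataclass, field

LEANS = ("left", "center", "right")

@dataclass
class Draft:
    story: str
    perspectives: dict = field(default_factory=dict)  # lean -> summary text
    scores: dict = field(default_factory=dict)        # lean -> 0-4 score
    synthesis: str = ""
    status: str = "in_progress"

def run_pipeline(story: str) -> Draft:
    draft = Draft(story)
    for lean in LEANS:
        coverage = pull_coverage(story, lean)               # query outlet DB
        draft.perspectives[lean] = summarize(coverage)      # draft summary
        draft.scores[lean] = score(draft.perspectives[lean])  # apply rubric
    draft.synthesis = synthesize(draft.perspectives)        # editorial summary
    draft.status = "pending_review"                         # human review queue
    return draft

# Stubs standing in for the real NewsAPI / database / model calls.
def pull_coverage(story, lean): return [f"{lean} headline about {story}"]
def summarize(coverage): return " ".join(coverage)
def score(summary): return 0
def synthesize(perspectives): return " / ".join(perspectives.values())
```

The key structural point survives even in the stub: every story exits the pipeline in a review state, never published directly.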
Every perspective is evaluated against four criteria. A perspective must fail on multiple dimensions to reach the higher end of the scale. The same rubric applies to every story, every side, every time.
Are the core claims supported by verifiable evidence? This includes statistics, quotes, and attributed statements. Fabricated or demonstrably false claims immediately push a score toward 4. Minor factual errors that do not materially change the argument are noted but weighted less heavily.
Does the perspective omit significant context that would complicate its argument? A claim can be literally true while being deeply misleading through omission. We evaluate whether a reasonable, informed reader would feel they received a complete picture of the issue.
Does the language used exceed what the evidence supports? This includes catastrophizing, loaded terminology, false equivalences, and emotional manipulation that substitutes for argument. We distinguish between passionate advocacy — which is legitimate — and language designed to bypass critical thinking.
Are sources credible, current, and used in context? Cherry-picked studies, out-of-date statistics presented as current, and sources quoted out of their original meaning all factor into this criterion. We also flag circular sourcing — where one outlet's claim becomes another's 'evidence.'
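One way to turn the four criterion ratings into a single 0-4 score, consistent with the rule that a perspective must fail on multiple dimensions to reach the high end of the scale, is sketched below. This aggregation is an illustration of that rule, not Panic Politics' published algorithm; in particular, the real rubric may weight fabricated claims under factual accuracy more heavily than this simple cap allows.

```python
def overall_score(criteria: dict) -> int:
    """Combine four 0-4 criterion ratings into one 0-4 score.

    Illustrative rule: a 3 or 4 overall requires at least two criteria
    rated 3 or higher, so a single weak dimension cannot dominate.
    """
    vals = sorted(criteria.values(), reverse=True)
    top, second = vals[0], vals[1]
    if top >= 3 and second < 3:
        return 2  # capped: only one dimension failed badly
    return top

ratings = {
    "factual_accuracy": 1,
    "selective_framing": 4,   # one badly failing dimension
    "rhetorical_inflation": 1,
    "source_integrity": 0,
}
print(overall_score(ratings))  # 2
```

Under this rule a perspective with one glaring framing problem but otherwise solid sourcing lands in the middle of the scale, while a 3 or 4 requires failures on multiple fronts.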
The five-level scale is designed to be granular enough to be useful without creating false precision. A score is not a verdict — it is a signal. Here is what each level represents.
The perspective is factually grounded, uses proportionate language, acknowledges complexity, and does not omit material context. Advocacy is present — this is a perspective, not a wire service report — but it is honest advocacy.
"The proposed budget reduces Medicaid expansion funding by 12% over five years, according to the CBO. Supporters argue this encourages state-level flexibility; critics note it will reduce coverage for an estimated 2.3 million low-income adults."
The perspective is mostly accurate but applies noticeable framing bias — emphasizing certain facts while downplaying others, or presenting one credible interpretation as the only reasonable one.
"The budget's Medicaid cuts are a pragmatic step toward fiscal sustainability. States have long asked for more flexibility, and this delivers it — while keeping the core safety net intact for the most vulnerable."
Selective use of facts is evident and material. Important counterevidence is omitted. The language consistently favors one interpretation in ways that a neutral observer would flag.
"This budget is a direct attack on working families. By gutting Medicaid, Republicans are choosing tax breaks for corporations over healthcare for children — a moral failure that will cost lives."
The perspective relies heavily on misleading framing, out-of-context statistics, or rhetorical techniques that substitute for evidence. A reader relying solely on this account would be materially misled.
"The radical socialist budget proposal would destroy 4 million jobs and collapse the American healthcare system within a decade — exactly what the far-left has always wanted."
The perspective contains demonstrably false claims, fabricated statistics, or a pattern of deception so pervasive that the overall argument cannot be salvaged by charitable interpretation.
"Internal documents prove that the budget was written by a foreign lobbying group and that the vote was rigged — something the mainstream media refuses to report because they are complicit."
After every story is analyzed from three perspectives, Panic Politics produces a fourth section: What It All Means. This is the Panic Politics editorial voice — the one place on the page where we stop reporting what others are saying and tell you directly what the story is actually about.
The summary is written to the standard of the best reporter in the room. It states the verified facts. It names where the sides genuinely agree. It identifies where they actually differ — and why. It does not hedge, does not editorialize, and does not take sides. It ends with one sentence explaining why the story matters to ordinary people living ordinary lives.
This is not conjecture. It is not opinion. It is the common-sense synthesis that a good journalist produces after reading everything — the version you would want to read if you only had two minutes and needed to actually understand what happened.
Clarity about scope is as important as the methodology itself.
A lower score does not mean a perspective is correct. It means it is more honest about what it knows and does not know.
Value judgments are not scored. We only score factual and evidentiary claims.
We are a synthesis and scoring layer. We rely on and credit the journalists who do primary reporting.
We have a process designed to counteract editorial bias, but we do not pretend to be view-free. We aim for fairness.
Angry language is not automatically spin. Calm language is not automatically honest. We score substance, not style.
Scores reflect the information available at the time of publication. Corrections are published openly — never quietly revised.
Panic Politics accepts no advertising from political campaigns, advocacy organizations, political action committees, or government entities. Operating revenue comes from reader subscriptions. No funder has any editorial input, approval rights, or advance access to scoring decisions.
The AI agent is a tool, not an editor. Every story it produces is reviewed by a human before publication. The AI handles sourcing, drafting, and initial scoring. A human approves what goes live. That is the line.
We publish our scoring rubric in full — this page — because transparency about process is the only honest basis for asking readers to trust our scores. If you believe a score is wrong, we want to hear from you. Use the contact link in the footer. We read every message.
Yes — and we are transparent about it. An AI agent sources articles from our 99-outlet database, synthesizes what each group of outlets is reporting, and produces an initial draft. A human editor reviews every story before it goes live. The AI does the research and drafting. The human makes the final call on publication.
The 99 outlets in our database were hand-selected based on readership, editorial track record, and political lean classification. We use established media bias research as a starting point and apply our own editorial judgment. The list is reviewed periodically and updated when outlets change their editorial direction.
Yes. The same four-criterion rubric — factual accuracy, selective framing, rhetorical inflation, and source integrity — is applied to every perspective on every story, regardless of which side it comes from. The left and right are held to identical standards.
Yes. A score of 4 is reserved for demonstrably false or deeply deceptive content, and that standard applies equally regardless of ideological direction. If you notice a pattern of 4s clustering on one side over time, that is worth scrutinizing — and we scrutinize it ourselves.
It is a plain-English synthesis of all three perspectives — the common-sense summary a good journalist would write after reading everything. It is produced by the AI agent using a strict prompt that bans speculation, bans the word 'would', requires present or past tense only, and demands that every claim be grounded in the actual coverage. It is reviewed by a human editor before publication.
Send us your argument. We have a formal correction process: submit your disagreement with specific evidence, and a senior editor will review it. If we find the score was in error, we publish a correction with full explanation. We do not quietly revise scores without acknowledgment.
Every story on Panic Politics is sourced from this hand-curated database of 99 outlets, classified by political lean.
Want Your Source Added?
If you believe a news outlet deserves a place in our database, send us the name, URL, and your reasoning for its political lean classification.
Submit a Source → [email protected]