Most websites in the EMF space do one thing.
They post opinions.
Or they dump studies.
Or they sell products.
Or they give generic “turn off Wi-Fi at night” advice.
RFSafe.org is different. It’s a full platform: a live research stream, a structured EMF evidence index, a large-scale SAR phone directory with comparison/ranking tooling, a kid-focused SAR visualizer, an RSS feed generator, and a guided exposure assessment that can generate an AI report.
In other words: research → tools → decisions → mitigation—all in one place.
And it’s big. The research database currently lists 6,578 papers, with 6,275 showing extraction coverage, plus 441 archive posts you can browse and search.
What follows is a feature-by-feature tour through the menu—so readers can immediately understand what this site does, why it matters, and where to start.
What rfsafe.org actually is (in one sentence)
RFSafe.org positions itself as “Machine-Enhanced Logic for EMF research & news,” combining an AI-assisted research stream with navigation tools and SAR utilities designed to make RF exposure legible to normal people—not just specialists.
1) 🏠 Home: A living “research stream” you can filter in seconds
The Home page isn’t a static homepage—it’s a live stream of EMF-related papers with one-screen filtering for:
- Effect bucket (Harm / Mixed / No effect / Unclear / Benefit / Unknown)
- Evidence strength (High → Insufficient)
- Year (1970 → 2026, plus unknown year)
Each listing shows:
- A classification label (e.g., “Harm manual” or “Harm pubmed”)
- Journal + year
- A tight summary
- Direct jump-outs to the original source (PubMed/DOI when available)
It’s fast, and it’s designed for discovery: scroll the stream, click a bucket, jump into the details.
Important note: infinite scroll requires JavaScript, but the site provides a fallback path to browse papers via the Reviewed Papers page (with pagination).
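The one-screen filtering described above can be sketched as a simple predicate over paper records. This is a minimal illustration only; the field names ("effect", "strength", "year") are guesses, not the site's actual schema.

```python
# Sketch of the Home-stream filtering logic over a list of paper dicts.
# Field names here are illustrative assumptions, not the real schema.

def filter_papers(papers, effect=None, strength=None, year=None):
    """Return papers matching every filter that was supplied."""
    result = []
    for p in papers:
        if effect is not None and p.get("effect") != effect:
            continue
        if strength is not None and p.get("strength") != strength:
            continue
        if year is not None and p.get("year") != year:
            continue
        result.append(p)
    return result

papers = [
    {"title": "A", "effect": "Harm", "strength": "Low", "year": 2024},
    {"title": "B", "effect": "No effect", "strength": "High", "year": 2024},
    {"title": "C", "effect": "Harm", "strength": "Moderate", "year": 1999},
]
print([p["title"] for p in filter_papers(papers, effect="Harm")])  # ['A', 'C']
```

Each filter narrows the stream independently, which is why clicking a bucket and then a year lands you on an exact subset.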
2) 📊 Stats: The dashboard that makes 6,578 papers understandable
If you want the “big picture” in under 60 seconds, Stats is where it happens.
Effect classification (counts + percentages)
The Stats page breaks the database into buckets (and these are clickable to browse those exact subsets):
- Harm: 1,396 (21.2%)
- Mixed: 1,871 (28.4%)
- No effect: 622 (9.5%)
- Unclear: 1,657 (25.2%)
- Benefit: 729 (11.1%)
- Unknown / no extraction: 303 (4.6%)
Evidence strength distribution
It also shows the strength distribution (again: clickable buckets):
- High: 71 (1.1%)
- Moderate: 386 (5.9%)
- Low: 3,800 (57.8%)
- Very low: 428 (6.5%)
- Insufficient: 1,590 (24.2%)
- Unknown/invalid: 303 (4.6%)
Year-by-year timeline with toggles
Stats includes a stacked counts-per-year chart with toggles (include/exclude buckets) and click-to-browse behavior. It explicitly notes that items without a publication year are excluded from the year chart.
It also publishes a year table with year-over-year deltas (at time of viewing, 2026 shows 88 papers).
Finally—and this matters—Stats includes a blunt “automated note” reminding users the classifications are navigational, not medical advice, and can be wrong.
3) # Hubs: Curated topic portals + a live tag cloud
Hubs is where the site stops being “a list” and becomes a map.
The page describes hubs as “curated, AI-maintained landing pages,” with a tag cloud generated from tagged stories (and optionally linked papers).
9 topic hubs (updated)
At time of viewing, the page shows 9 total hubs including:
- 5G / 6G Policy & Regulation (updated Feb 17, 2026)
- Cancer & Epidemiology
- Consumer Products / Shielding claims
- Mechanisms & Bioeffects (Non-thermal)
…and more.
The tag cloud: discovery at scale
The page also reports 250 tags shown, and notes that each tag opens a preview modal and can also be opened in the full archive.
If you want “show me everything related to oxidative stress / ion channels / kids / Wi-Fi / etc.” this is the fastest way to get there.
4) 📝 Reviewed Papers: The full database, filterable down to the exact slice you need
Reviewed Papers is the core directory interface: the same filter system as the Home stream, but focused on browsing the full set.
Two practical details matter:
- It explicitly supports pagination via ?page=2, ?page=3, etc. when infinite scroll is unavailable.
- It surfaces database totals (papers, extractions, archive count).
This is what researchers and journalists want: filters, scale, and predictable navigation.
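The pagination fallback is trivially scriptable, which matters for anyone crawling or archiving the directory. A minimal sketch, assuming a base listing path (the "/reviewed-papers/" path is an illustrative guess; only the ?page=N query parameter comes from the article):

```python
# Build paginated Reviewed Papers URLs using the documented ?page=N
# fallback. The base path is an assumption for illustration.

def page_url(base, page):
    """Return the listing URL; page 1 is the bare listing itself."""
    return base if page == 1 else f"{base}?page={page}"

urls = [page_url("/reviewed-papers/", n) for n in range(1, 4)]
print(urls)
# ['/reviewed-papers/', '/reviewed-papers/?page=2', '/reviewed-papers/?page=3']
```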
5) 🧪 Evidence Lab: Where research meets interpretation (and the site shows its work)
Evidence Lab is labeled: “Research notes & weekly hub briefs. Not medical advice.”
What’s interesting here is that it separates types of content:
- Some items are clearly framed as advocacy/strategy or policy arguments (and the notes say so).
- Some are scientific paper summaries (e.g., modeling studies) with uncertainties discussed.
- Some are meta-credibility discussions (e.g., a rebuttal to a rating), again clearly labeled as communications rather than new scientific evidence.
Even if a reader disagrees with the framing, the structure is valuable: it helps people distinguish “study,” “policy,” and “advocacy” content instead of mixing everything into one feed.
6) 🗂 Archive: 441 searchable posts that turn a database into a library
The Archive is a separate content layer: 441 posts with a search field at the top, and a feed of entries with categories (e.g., “Research Paper Discussions”).
This is where the platform builds memory: older research threads, policy references, and curated discussions that don’t get lost in the endless scroll of “latest.”
7) 🛠 SAR Levels: Live per-phone SAR + specs lookup (powered by public JSON)
SAR Levels is the phone lookup tool: choose a phone model and it renders SAR (W/kg) values and a “full specs breakdown.” It also explains SAR and the U.S. limit (1.6 W/kg averaged over 1g).
What makes it unusually transparent: the page tells you exactly where the data comes from—per-phone JSON files in:
/phones/<slug>/<slug>_cache.json
That kind of explicit data plumbing is rare on consumer-facing sites.
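Because the cache path pattern is published, a client can derive it from a phone's slug. A small sketch, where the slugification rule (lowercase, hyphenate) is a guess; only the /phones/<slug>/<slug>_cache.json pattern comes from the site:

```python
# Derive the per-phone cache path from a model name. The path pattern
# is published by the site; the slug rule below is an assumption.
import re

def slugify(name):
    """Lowercase, replace runs of punctuation/spaces with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", name.lower()).strip("-")

def cache_path(slug):
    return f"/phones/{slug}/{slug}_cache.json"

slug = slugify("iPhone 15 Pro")
print(cache_path(slug))  # /phones/iphone-15-pro/iphone-15-pro_cache.json
```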
8) SAR Compare: Two ways to compare phones (site tool + PWA tool)
You effectively have two SAR comparison experiences:
A) The main-site compare tool
The MEL compare page lets you pick Phone A and Phone B and it renders a live comparison from:
- /cache/sar2_cache.json
- per-phone cache JSON files
B) The SAR Compare PWA (offline-ready)
The dedicated SAR Compare tool is explicitly labeled “offline-ready,” and includes a killer feature:
You can import the dataset JSON into your browser’s localStorage, and the tool states “No upload occurs.”
That’s not just convenience—that’s privacy and portability. It also offers a “Use server dataset” mode and an “Open server JSON” link.
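Whichever surface you use, the underlying comparison is a per-position check of which phone reports the lower SAR. A minimal sketch, assuming each phone's cache JSON yields a mapping of test position to SAR in W/kg (the field names and values are illustrative, not the real sar2_cache.json schema):

```python
# Two-phone SAR comparison sketch. The "sar" mapping of test position
# to W/kg is an assumed shape, not the site's actual cache schema.

def compare(phone_a, phone_b, positions):
    """For each test position, report which phone has the lower SAR."""
    result = {}
    for pos in positions:
        a, b = phone_a["sar"][pos], phone_b["sar"][pos]
        result[pos] = "A" if a < b else "B" if b < a else "tie"
    return result

phone_a = {"name": "Phone A", "sar": {"head": 0.98, "body": 1.10}}
phone_b = {"name": "Phone B", "sar": {"head": 1.20, "body": 0.95}}
print(compare(phone_a, phone_b, ["head", "body"]))  # {'head': 'A', 'body': 'B'}
```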
9) SAR Mods: A modular “tool loader” per phone
SAR Mods is a different concept: it’s essentially a module loader that can render multiple SAR-related components for a selected phone.
The page includes a large phone selector and explains it will reload as ?phone=<slug> and “render all active modules.”
It also exposes the menu of modules (SAR hero, SAR kids viewer, SAR tests, specs viewer, related models, etc.), making it clear you’re building a composable SAR UI system—not just a single page.
10) SAR Kids: The child vs adult SAR visualizer (the “wake up” feature)
SAR Kids is one of the most emotionally effective tools on the site because it translates a technical metric into a child-relevant visual.
It describes a six-panel image system with:
- Two rows: Cellular-Only and Simultaneous (Wi-Fi + Cellular)
- Three ages per row: 5-year-old, 10-year-old, Adult
- A pill showing the measured SAR (W/kg), plus a shaded fill illustrating % of the 1.6 W/kg limit “with child weighting.”
Even readers who don’t understand SAR immediately understand relative vulnerability from the visual design.
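The shaded fill is just a percentage of the 1.6 W/kg U.S. limit. The plain calculation is below; the site's "child weighting" factor is not published in this tour, so it appears here only as an explicitly hypothetical multiplier:

```python
# SAR as a share of the U.S. limit. The child "weighting" parameter is
# hypothetical; the site's actual weighting method is not published here.

US_LIMIT_WKG = 1.6  # FCC limit, averaged over 1 g of tissue

def percent_of_limit(sar_wkg, weighting=1.0):
    """Percentage of the limit; `weighting` is a placeholder factor."""
    return round(100 * sar_wkg * weighting / US_LIMIT_WKG, 1)

print(percent_of_limit(1.2))                 # 75.0  (unweighted)
print(percent_of_limit(1.2, weighting=1.5))  # 112.5 (hypothetical child factor)
```

Even without the exact weighting, the visual logic is clear: the same handset fills a larger share of the bar for a smaller body.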
11) SAR Ranking: Six test positions, lowest-to-highest, generated from a public dataset
SAR Ranking (“RANK”) is the directory’s “leaderboard” view.
It offers:
- Brand filter + time windows (“All time / Last 3 years / Last 5 years”)
- A selector for six FCC test positions:
  - Head / body / hotspot (cellular only)
  - Head / body / hotspot (Wi-Fi + cellular)
- A note that the rankings are computed client-side from /phones/cache/all-phones.json
This is a huge differentiator: it’s not a hand-curated list; it’s reproducible from the published dataset.
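Because the dataset is public, the leaderboard logic is reproducible: filter by brand and time window, then sort ascending by SAR for the chosen position. A sketch under an assumed record layout (the real all-phones.json schema may differ):

```python
# Client-side ranking sketch: filter, then sort lowest-SAR-first.
# The record layout below is an assumption, not the real JSON schema.

def rank(phones, position, brand=None, since_year=None):
    """Lowest-SAR-first leaderboard for one FCC test position."""
    rows = [
        p for p in phones
        if position in p["sar"]
        and (brand is None or p["brand"] == brand)
        and (since_year is None or p["year"] >= since_year)
    ]
    return sorted(rows, key=lambda p: p["sar"][position])

phones = [
    {"model": "X1", "brand": "Acme", "year": 2024, "sar": {"head": 1.10}},
    {"model": "X2", "brand": "Acme", "year": 2021, "sar": {"head": 0.85}},
    {"model": "Y1", "brand": "Beta", "year": 2025, "sar": {"head": 0.60}},
]
print([p["model"] for p in rank(phones, "head")])  # ['Y1', 'X2', 'X1']
```

Anyone with the published JSON can rerun the same sort and get the same order, which is what makes the rankings auditable rather than editorial.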
12) SAR App: The “consumer mode” interface (fast, shareable, mobile-first)
The navigation links to a SAR App experience alongside the compare/rank tooling—designed as a more app-like interface for normal users who want fast results.
(Practically: “if someone’s shopping for a phone, don’t send them into research filters—send them here.”)
13) RSS Feeds: A feed generator for the entire research database
Most sites have one RSS feed.
RFSafe.org has an RSS feed map, with feeds organized and tagged by:
- effect
- evidence
- year
…and a separate tag-feeds browser.
It even notes that if a user needs a specific combination, an admin can create a custom feed via “Manage RSS.”
Example: the “Reviewed Papers (Latest)” feed shows a build timestamp (“built: 2026-02-17 23:05 UTC”).
For researchers, journalists, and advocates, this is gold: you can subscribe to exactly the slice of the literature you care about and let the database push updates to you.
14) Exposure Assessment: A guided questionnaire → prioritized action plan → optional AI report
The Exposure Assessment is designed to bridge the gap between “evidence” and “what do I do in my actual house?”
It explains the flow clearly:
- Answer questions about devices/home/environment
- It prioritizes “high-impact, low-friction changes first” (distance, duration, wired where possible)
- It produces a printable report with a prioritized action plan
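The “high-impact, low-friction first” ordering amounts to a sort over candidate actions. A minimal sketch; the action names and scores below are invented for illustration, not the assessment's real scoring model:

```python
# Prioritize actions: highest impact first, ties broken by lowest
# friction. Scores and action names are illustrative inventions.

def prioritize(actions):
    """Sort by impact descending, then friction ascending."""
    return sorted(actions, key=lambda a: (-a["impact"], a["friction"]))

actions = [
    {"name": "Wire the desktop (drop Wi-Fi)", "impact": 3, "friction": 2},
    {"name": "Move router away from bed",     "impact": 3, "friction": 1},
    {"name": "Speakerphone for long calls",   "impact": 2, "friction": 1},
]
print([a["name"] for a in prioritize(actions)])
# ['Move router away from bed', 'Wire the desktop (drop Wi-Fi)',
#  'Speakerphone for long calls']
```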
Two credibility details worth highlighting:
- It explicitly says it uses your answers “(not measurements)”, so it’s an exposure-source prioritizer, not a meter replacement.
- It includes privacy guidance about not sharing sensitive personal info in free-text fields.
And importantly: it states you must log in to generate an AI report.
15) Blog: A focused policy/research notes section (tagged + filterable)
The blog section (/rfs-blog/) is a tight, structured set of posts with:
- Search
- Sort options (newest, oldest, A→Z, random)
- Category filters
…and it currently shows 13 total posts.
This is where you publish “the argument”—policy, standards, interpretation—without burying it inside research listings.
The “hidden” value: transparency + guardrails
Across the platform, you repeatedly include guardrails:
- “Not medical advice” appears in Stats, Evidence Lab, Rankings, and the Exposure tools.
- You consistently point users back to original sources (PubMed/DOI links in listings).
- You distinguish “advocacy/policy framing” content from scientific evidence inside Evidence Lab notes.
That combination—scale + tooling + transparency—is exactly what most sites lack.
Who this platform is for (and where each person should start)
If you’re a parent:
Start with SAR Kids → then Exposure Assessment.
If you’re shopping for a safer phone:
Start with SAR Ranking → then SAR Compare (or the offline-ready PWA).
If you’re a researcher or journalist:
Start with Stats → Hubs → RSS Feeds.
If you’re doing advocacy / policy work:
Start with Evidence Lab → Archive → Blog.
Bottom line
RFSafe.org isn’t “a page with studies.”
It’s a full EMF literacy platform with:
- a 6,578-paper classified directory
- a stats dashboard that turns that scale into clarity
- hubs + tags for discovery
- a serious SAR toolchain (lookup, compare, rankings, mods, kids visualizer)
- a feed generator that lets the database push updates to your reader
- an exposure assessment that converts “research” into “what should I change first?”, with optional AI report generation behind login
If you want people to grasp the value instantly, the single best framing is:
“This is the bridge between the EMF literature and real-world decisions—especially phones, kids, and home exposure.”
