Ganzfeld Experiments: Can Sensory Isolation Prove Telepathy Is Real?

Researchers at Princeton, Edinburgh, and Lund have spent five decades running the same basic experiment: isolate someone’s senses, show them random images, and see if they can describe a picture being “sent” to them from another room. The hit rate across 200+ studies is above chance. Not by much. The question everyone fights over is whether that matters.

TL;DR: The Ganzfeld experiment is the most replicated test of telepathy in laboratory history. Across multiple meta-analyses spanning 1985 to 2024, participants correctly identify a hidden target at rates around 30-33%, well above the 25% expected by chance. Skeptics argue the effect is too small to matter and the methodology has unresolved flaws. Proponents counter that the consistency across dozens of independent labs over 40 years is itself the evidence. The latest meta-analysis (2024) used a registered report design and still found statistically significant results. Neither side has convinced the other.

What Is the Ganzfeld Experiment?

The word “ganzfeld” comes from German: “ganz” means “whole” and “Feld” means “field.” A ganzfeld is a uniform perceptual field. When your brain receives no meaningful sensory input, it starts generating its own. The technique was originally described by Gestalt psychologist Wolfgang Metzger in the 1930s as a way to study how the brain processes perception. It was not designed for parapsychology. Parapsychology found it later.

The experimental setup is deliberately simple. A participant, called the “receiver,” sits in a comfortable chair wearing halved ping-pong balls over their eyes. A red light is shone at their face. Through headphones, they hear white noise or pink noise. Within about 10 minutes, the combination of visual uniformity and auditory monotony produces a state of mild sensory deprivation. The receiver’s brain begins to fill in the gap. They report images, impressions, feelings, fragments of scenes. This is the ganzfeld state.

Meanwhile, in another room, a “sender” watches a randomly selected video clip or image. This is the target. The receiver describes whatever impressions come to mind. After 20-30 minutes, the receiver is shown four options and asked which one most closely matches what they experienced. One is the actual target. Three are decoys. By chance alone, a receiver should pick the correct target 25% of the time.

That 25% is the number everything hangs on.

If receivers consistently score above 25% across multiple studies, it means something is happening. The argument is over what that something is.
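
To see what that baseline means in practice, here is a minimal simulation (Python; it assumes nothing about the real protocol beyond the four-choice judging step) of receivers who have no information and guess at random:

```python
import random

def run_study(n_trials: int) -> float:
    """Simulate one ganzfeld study in which the receiver has no
    information and picks one of four options at random."""
    hits = sum(1 for _ in range(n_trials)
               if random.randrange(4) == 0)  # treat option 0 as the target
    return hits / n_trials

random.seed(42)
# Individual 40-trial studies fluctuate widely around 25% by chance alone.
print([round(run_study(40), 2) for _ in range(5)])
# A large pooled database, by contrast, converges tightly on 0.25.
print(round(run_study(100_000), 3))
```

A single small study can land at 30% or 18% on luck alone. The argument is about pooled databases of thousands of trials, where a stable 30-33% is much harder to attribute to chance.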

The Early Experiments: Maimonides to the Joint Communique

The first serious laboratory investigations of telepathy in a ganzfeld-like setting came from Charles Honorton at Maimonides Medical Center in Brooklyn, New York, during the 1970s. Honorton was a parapsychologist who took methodology seriously. His initial experiments followed a simple design: a target location was randomly selected from a set of possibilities, a receiver would describe the target while the experimenter recorded verbal descriptions, and an analyst would then compare descriptions to actual targets and score the matches.

Honorton’s early work was promising. His 1985 meta-analysis of 28 ganzfeld studies reported a hit rate of approximately 38%, well above the 25% chance expectation. The combined statistical significance was strong.

But Ray Hyman, a skeptical psychologist at the University of Oregon, wasn’t convinced. Hyman examined Honorton’s database and identified methodological flaws in a 1985 critical appraisal: inadequate randomization procedures, possible sensory leakage through the experimental setup, and incomplete reporting of procedures.

What happened next is one of the more unusual episodes in the history of science. Honorton and Hyman, who disagreed sharply on the interpretation of the data, collaborated on a document called the Joint Communique, published in 1986. In it, they agreed on several points: there was an overall significant effect in the ganzfeld database that could not be reasonably explained by selective reporting or multiple analysis alone. They also agreed that the existing studies had methodological problems that needed to be fixed. And they jointly specified what those fixes should be.

The Joint Communique is worth pausing on because it’s rare. Two scientists who fundamentally disagree about whether telepathy is real sat down and agreed on what a better experiment would look like. The result was the autoganzfeld.

The Autoganzfeld: Automation as a Response to Criticism

The autoganzfeld was designed to address every methodological concern Hyman had raised. At the Psychophysical Research Laboratories (PRL) in Princeton, New Jersey, Charles Honorton and his colleagues implemented a computer-automated system; Cornell psychologist Daryl Bem later collaborated with Honorton on the analysis and publication of the results. Target selection was handled by a random number generator. Target presentation to the sender was automated. The receiver’s verbal descriptions were recorded by the computer. Judging was standardized.

The autoganzfeld studies (1983-1989) produced results consistent with the earlier database. Across 11 studies run by eight different experimenters, the overall hit rate was 32%, against the 25% expected by chance. The effect size was small but statistically significant (p < .001). A later analysis of replication attempts (discussed below) found that the effect size a study achieved correlated significantly with how closely it adhered to the standard ganzfeld protocol: labs that followed the protocol more closely got better results.

Bem and Honorton published their analysis in 1994 in the Psychological Bulletin, one of psychology’s top journals. Hyman responded, noting that while the overall hit rate was significant, the effect was driven almost entirely by dynamic targets (video clips). Static targets (pictures) produced hit rates consistent with chance. This was a legitimate concern. It also showed that the science was genuinely being argued about, not simply accepted or rejected on ideological grounds.

The Milton and Wiseman Challenge

In 1999, Julie Milton and Richard Wiseman published their own meta-analysis of ganzfeld studies in Psychological Bulletin. They examined 30 studies published between 1987 and 1997, conducted after the methodological improvements specified in the Joint Communique. Their conclusion was blunt: “The authors conclude that the ganzfeld technique does not at present offer a replicable method for producing ESP in the laboratory.”

The Milton and Wiseman result carried real weight. Psychological Bulletin is a top-tier journal. Wiseman is one of the most prominent skeptical psychologists in the world. And their database included a large number of recent studies that followed the improved methodology. If the effect were real, it should have shown up here.

It didn’t. The overall hit rate in the Milton and Wiseman database was not significantly different from chance. This became one of the most cited negative results in parapsychology and gave skeptics a powerful reference point.

But the story didn’t end there.

Storm and Ertel (2001) pointed out that Milton and Wiseman’s database was not a fair test of the ganzfeld hypothesis. Many of the studies in their database didn’t actually follow the standard ganzfeld protocol. They were free-response studies that used some ganzfeld elements but not the full procedure. When Storm and Ertel restricted the analysis to studies that actually followed the standard protocol, they found a statistically significant effect.

More importantly, Storm and Ertel went on to compile a 79-study database that had a statistically significant average standardized effect size of 0.138. This was a direct challenge to the Milton and Wiseman conclusion.

The disagreement centered on a fundamental question: what counts as a “ganzfeld study”? Milton and Wiseman included any study that used sensory reduction techniques. Storm and Ertel argued that only studies following the full standard protocol should count. The answer to that question determines whether the effect exists.

The Protocol Adherence Problem

This is worth examining in detail because it gets at something important about how science works, or doesn’t work, in controversial fields.

Bem, Palmer, and Broughton (2001) conducted a careful analysis of 40 replication studies. They asked a simple question: does the degree of adherence to the standard ganzfeld protocol predict the outcome? The answer was yes. Replications that followed the standard protocol more closely produced higher effect sizes. Replications that deviated from the protocol produced lower effect sizes.

This is either evidence for the effect or evidence for methodological artifacts, depending on your starting position. If you think psi is real, protocol adherence matters because subtle deviations might disrupt whatever mechanism is at work. If you think psi doesn’t exist, protocol adherence might be a proxy for experimenter expectancy effects. Labs that want positive results follow the protocol more carefully, and their expectations bias the outcomes.

Both interpretations are consistent with the data. The data doesn’t tell you which one is correct.
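
The shape of that analysis is easy to reproduce. Here is a sketch of the kind of correlation Bem, Palmer, and Broughton computed; the adherence ratings and effect sizes below are invented for illustration, not taken from the paper:

```python
from scipy.stats import spearmanr

# Hypothetical per-study data: a protocol-adherence rating (0-10 scale)
# and the study's observed effect size. Values invented for illustration.
adherence = [9.1, 8.4, 7.7, 7.0, 6.2, 5.5, 4.8, 4.0, 3.1, 2.5]
effect = [0.25, 0.21, 0.18, 0.12, 0.15, 0.08, 0.05, 0.09, 0.01, -0.03]

rho, p_value = spearmanr(adherence, effect)
print(f"rho = {rho:.2f}, p = {p_value:.4f}")
# A positive, significant rho is the reported pattern: the closer a
# replication stuck to the standard protocol, the larger its effect.
```

Note that the correlation itself is neutral between the two readings above: it measures only that adherence and outcome move together, not why.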

Storm, Tressoldi, and Di Risio (2010): The Modern Meta-Analysis

In 2010, Lance Storm, Patrizio Tressoldi, and Lorenzo Di Risio published what was at the time the most comprehensive meta-analysis of ganzfeld studies. They analyzed 29 studies published between 1997 and 2008, all conducted after the methodological improvements specified in the Joint Communique.

The results were consistent with the earlier Bem and Honorton database. The average standardized effect size was approximately 0.14. This is a small effect by any standard. For comparison, the average effect size in social psychology is around 0.40. The ganzfeld effect is roughly one-third the size of typical findings in mainstream psychology.

But statistical significance and effect size are different things. A small effect can be real. The question is whether the effect is large enough to be meaningful and whether the methodology is sound enough to trust the result.

The Storm et al. meta-analysis was published in Psychological Bulletin, the same journal that published the Milton and Wiseman critique. This gave it a level of credibility that earlier meta-analyses in parapsychology-specific journals didn’t have. The finding was taken seriously by the field, even by skeptics who disagreed with the interpretation.

Hyman (2010) published a critique titled “Meta-analysis that conceals more than it reveals,” arguing that Storm et al. eliminated outliers arbitrarily, pooled databases that were too heterogeneous to combine, and used statistical methods that inflated the apparent significance of the results. The debate was technical and unresolved.

The 2024 Registered Report: The Strongest Test Yet

The most important development in ganzfeld research came in 2024, with the publication of a Stage 2 Registered Report meta-analysis covering more than 40 years of investigation. It was published in F1000Research after its Stage 1 protocol passed peer review, before any analysis was conducted.

Registered reports are the gold standard for reducing bias in scientific research. The researchers submit their methodology and analysis plan for peer review before they see the data. If the plan is approved, they’re guaranteed publication regardless of whether the results are positive or negative. This eliminates the most common forms of bias: p-hacking, selective reporting, and publication bias.

The 2024 meta-analysis found significant effects. The overall hit rate exceeded chance expectation. The authors concluded that “there is sufficient evidence to claim that it is possible to observe a non-conventional (anomalous) perception in a Ganzfeld environment.”

Several moderators were identified. Selected participants, people who scored well on screening tests or who had prior experience with meditation or altered states, showed an effect size almost three times larger than non-selected participants. Tasks that simulated telepathic communication showed an effect size roughly twice as large as tasks that required participants simply to guess a target.

The 2024 registered report is significant because it addressed the most common methodological criticisms of earlier meta-analyses. By committing to the analysis plan in advance, the researchers eliminated the possibility of data-dredging. The peer reviewers could verify that the methodology was sound before the results were known. And the journal was obligated to publish regardless of the outcome.

The fact that a registered report still found significant effects is either strong evidence that the effect is real or strong evidence that there’s something wrong with the ganzfeld paradigm itself. Skeptics and proponents interpret it differently.

Key Researchers

Name | Affiliation | Role | Key Contribution
Charles Honorton | Maimonides; PRL (Princeton, NJ) | Pioneer | Developed the autoganzfeld; first systematic meta-analysis (1985)
Daryl Bem | Cornell University | Researcher | Collaborated on the autoganzfeld database; co-authored the 1994 meta-analysis
Ray Hyman | University of Oregon | Skeptic | Primary methodological critic; co-author of the Joint Communique (1986)
Julie Milton | University of Edinburgh | Skeptic | Co-author of the 1999 meta-analysis showing no significant effect
Richard Wiseman | University of Hertfordshire | Skeptic | Co-author of the 1999 replication-failure meta-analysis
Lance Storm | University of Adelaide | Researcher | Led the 2010 meta-analysis; challenged the Milton & Wiseman findings
Jessica Utts | UC Irvine | Statistician | Statistical evaluation of the ganzfeld and broader psi database
Etzel Cardeña | Lund University | Meta-analyst | Published a 2018 review in American Psychologist

The Evidence For

The case for the Ganzfeld experiment rests on several pillars.

1. Statistical Consistency Across Meta-Analyses. Five major meta-analyses spanning from 1985 to 2024 have found statistically significant above-chance hit rates. The effect sizes range from approximately 0.14 to 0.20. While individually each meta-analysis can be critiqued, the pattern across all five is consistent. If the five were independent tests, the probability that all would show significant results by chance alone would be about 0.05 raised to the fifth power, roughly 3 in 10 million; they are not independent, since the databases overlap, but even so the joint probability is very small.

2. The Joint Communique. The 1986 agreement between Honorton and Hyman is unusual in science. Two researchers who disagree fundamentally about interpretation nevertheless agreed that the database showed an effect that could not be explained by selective reporting or multiple analysis alone. Hyman continued to believe there were methodological problems, but he acknowledged the statistical finding. This is not the behavior of someone who thinks the data is meaningless.

3. Protocol Adherence Correlation. The finding that effect sizes correlate with protocol adherence is important. If the effect were due to experimenter error or methodological artifacts, you would expect sloppy experiments to produce larger effects, not smaller ones. The correlation with protocol adherence suggests either a real effect that is disrupted by deviations, or that the protocol itself is creating a systematic bias. Both explanations are possible, but the data pattern is more consistent with a real effect.

4. The Registered Report. The 2024 meta-analysis used a registered report design, the strongest method for reducing bias. The analysis plan was peer-reviewed and approved before the data was analyzed. The results were still significant. This eliminates most forms of publication bias, p-hacking, and selective reporting.

5. Selected Participants. The moderator analysis showing that selected participants perform roughly three times better than non-selected participants is notable. If the effect were due to chance or methodological artifacts, there’s no reason participants with specific psychological profiles should consistently outperform others. The selected-participant finding suggests something real is being measured, even if we don’t know what it is.

6. The Consistency Problem. If the ganzfeld effect were due to noise, you would expect it to fluctuate wildly across studies. Instead, the effect sizes are remarkably consistent. Different labs, different experimenters, different countries, different decades, and the hit rate stays around 30-33%. This consistency is itself evidence. Random noise doesn’t behave this way.

The Case Against

Intellectual honesty requires presenting the strongest objections. Here’s what the skeptical literature says about the Ganzfeld experiment, and where the critique is strongest.

1. The Effect Is Tiny. The average effect size across all ganzfeld meta-analyses is approximately 0.14. This means the difference between the observed hit rate (about 30-33%) and the chance expectation (25%) is very small. A typical participant who is “receiving” telepathic information performs only marginally better than someone picking at random. This raises a serious question: even if the effect is real, does it matter? An intelligence agency that needed actionable information from remote viewing would find a 5-8% improvement over chance useless. The effect, if real, is not practically useful.

2. Sensory Leakage Has Never Been Fully Ruled Out. The autoganzfeld was designed to prevent sensory leakage, but Hyman and other critics have identified potential pathways that were not fully eliminated. Acoustic leakage from the sender’s room to the receiver’s room was investigated by Bem and Honorton, but the investigation was limited. Electromagnetic shielding was not always standard. And the possibility of experimenter cues, subtle changes in the experimenter’s behavior that are not recorded, cannot be completely eliminated in any experiment that involves a human experimenter.

3. The File Drawer Problem. Despite the 2024 registered report addressing publication bias, the earlier meta-analyses are vulnerable to the file drawer problem: studies that don’t find significant results are less likely to be published. Storm et al. (2010) addressed this statistically, arguing that it would take a large number of unpublished negative studies to bring the overall effect to non-significance. But “a large number” is a statistical estimate, not a verified fact. Nobody has audited every parapsychology lab to see how many null results were filed away. (The standard calculation behind that estimate is sketched after this list.)

4. No Mechanism. There is no known physical mechanism for telepathy. The electromagnetic fields the brain produces are far too weak to carry information beyond a few centimeters. There is no identified channel through which information could travel from the sender to the receiver. Parapsychologists have proposed quantum entanglement, morphic resonance, and other exotic mechanisms, but none of these proposals have empirical support. The absence of a mechanism doesn’t prove the effect doesn’t exist, but it does mean the effect, if real, would require a fundamental revision of physics.

5. Experimenter Expectancy Effects. Labs that want positive results might unconsciously create conditions that produce them. The correlation between protocol adherence and effect size could reflect experimenter expectancy: labs that follow the protocol more carefully are also labs that believe more strongly in psi, and their expectations could influence the outcome through subtle cues that aren’t recorded.

6. The 25% Baseline Might Be Wrong. Some researchers have questioned whether 25% is actually the correct chance expectation. If participants have systematic biases in how they evaluate targets, preferring certain types of images, for example, the expected hit rate could be slightly above 25% even without psi. This has been investigated and the effects are small, but the concern hasn’t been fully resolved.

7. Replication Is Not Universal. While the overall effect is consistent, specific labs have failed to replicate the result. Milton and Wiseman’s 1999 meta-analysis is the most prominent failure, but individual labs have also reported null results. The fact that some well-designed studies fail to find the effect while others succeed is a legitimate concern. If the effect were robust, it should show up in every well-designed study.
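
One note on point 3 above: the “large number” in the file drawer argument comes from a standard calculation, Rosenthal’s fail-safe N. A minimal sketch with invented numbers (not the actual values from any meta-analysis discussed here):

```python
def fail_safe_n(z_scores: list[float], z_crit: float = 1.645) -> float:
    """Rosenthal's fail-safe N: the number of unpublished null studies
    (mean z = 0) needed to pull a Stouffer combined z below z_crit."""
    total_z = sum(z_scores)
    return (total_z / z_crit) ** 2 - len(z_scores)

# Invented database: 30 studies, each with a modest z score of 0.65.
print(round(fail_safe_n([0.65] * 30), 1))  # ≈ 110.5 hidden nulls needed
```

The catch, as noted above, is that the output is a statement about hypothetical unpublished studies, not an audit of actual file drawers.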

The Statistics: Why 32% Is a Fight

Understanding why a 7-percentage-point difference generates decades of debate requires understanding what that difference means statistically.

In a standard Ganzfeld experiment with 40 trials, each trial has a 25% chance of hitting. The expected number of hits is 10. A hit rate of 32% on 40 trials would mean about 13 hits. The difference between 10 and 13 hits across 40 trials sounds trivial. But when you aggregate across hundreds of trials, that small difference becomes statistically significant.
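
The aggregation argument is easy to check directly. A sketch using scipy’s binomial distribution; the 3,000-trial pooled figure is illustrative, not the size of any actual database:

```python
from scipy.stats import binom

def p_at_least(hits: int, trials: int, p: float = 0.25) -> float:
    """One-tailed probability of scoring at least `hits` by chance."""
    return binom.sf(hits - 1, trials, p)

# One 40-trial study at a 32% hit rate: 13 hits.
print(p_at_least(13, 40))     # ≈ 0.18 — nowhere near significance

# The same 32% rate pooled over 3,000 trials: 960 hits.
print(p_at_least(960, 3000))  # on the order of 1e-18 — overwhelming
```

The same proportional edge that is statistically invisible in one study becomes unmistakable at scale. That is why the fight is over databases, not individual experiments.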

The effect size (measured as Cohen’s h or the standardized difference from chance) in the ganzfeld literature is approximately 0.14 to 0.20. For context, here’s how that compares to effect sizes in other areas of psychology:

Phenomenon | Effect Size | Source
Aspirin preventing heart attacks | 0.03 | Steering Committee (1989)
Ganzfeld ESP effect | 0.14 – 0.20 | Storm et al. (2010); Bem & Honorton (1994)
Psychotherapy effectiveness | 0.40 | Smith & Glass (1977)
Smoking and lung cancer | 0.20 | US Surgeon General (1964)
Average social psychology effect | 0.40 | Richard et al. (2003)

The ganzfeld effect is roughly the same size as the link between smoking and lung cancer. That’s either reassuring (it means the effect is real and significant, just like the smoking-cancer link) or alarming (it means the effect is small enough that it could be a subtle artifact). The aspirin comparison is striking: the ganzfeld effect is five to six times larger than the effect of aspirin on heart attack prevention, yet nobody questions whether aspirin works.

The difference is that aspirin has a known mechanism. Aspirin inhibits cyclooxygenase enzymes, which reduces platelet aggregation, which reduces blood clots, which reduces heart attacks. Every link in that chain is understood and independently verifiable. The Ganzfeld effect has no known mechanism. A small effect without a mechanism is much harder to believe in than a small effect with a clear mechanism.

This is the core tension in the ganzfeld literature. The statistical evidence is strong. The mechanism is absent. Both of these things are true simultaneously.
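
For readers who want to check the effect-size figures themselves: Cohen’s h for two proportions is 2·arcsin(√p1) − 2·arcsin(√p2). A quick verification against the hit rates quoted above:

```python
import math

def cohens_h(p1: float, p2: float) -> float:
    """Cohen's h, the effect size for a difference of two proportions."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Observed ganzfeld hit rates against the 25% chance baseline:
print(round(cohens_h(0.32, 0.25), 3))  # ≈ 0.155, mid-range of the quoted 0.14-0.20
print(round(cohens_h(0.33, 0.25), 3))  # ≈ 0.177, top of the 30-33% range
```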

Theoretical Perspectives

Several theoretical frameworks have been proposed to explain the Ganzfeld effect, if it exists.

Quantum Approaches. Some researchers have proposed that psi effects might operate through quantum non-locality. In quantum mechanics, measurements on entangled particles show correlations over any distance, though the no-communication theorem forbids using entanglement alone to transmit information. If the brain could somehow exploit quantum correlations, proponents argue, one brain might receive information from another. However, quantum coherence is typically destroyed by thermal noise at the scale of neurons. There’s no established way for quantum effects to survive in the warm, wet environment of the brain. Roger Penrose and Stuart Hameroff have proposed that microtubules in neurons might sustain quantum coherence, but this remains highly speculative.

Morphic Resonance. Rupert Sheldrake proposed that natural systems inherit a collective memory from previous similar systems. In this framework, the Ganzfeld effect might reflect a morphic resonance between the sender and receiver, amplified by the sensory deprivation state. Sheldrake’s hypothesis is creative but has not been empirically supported in controlled experiments. Most mainstream scientists consider it unfalsifiable.

Information-Theoretic Approaches. Some researchers have proposed that psi might reflect an information-theoretic phenomenon rather than a physical signal. In this view, the brain might be capable of accessing information that isn’t transmitted through any physical channel at all. The idea is intriguing but underspecified, and it lacks any experimental support beyond the Ganzfeld results themselves.

The Null Hypothesis. The simplest explanation is that the effect doesn’t exist. In this view, the consistent hit rates reflect subtle methodological artifacts, experimenter expectancy effects, or a combination of file drawer effects and cognitive biases in how studies are selected for publication. The absence of a mechanism supports this interpretation. The consistency of the effect is then evidence of how deeply embedded these artifacts are in the experimental paradigm.

The Honorton Interpretation. Charles Honorton proposed that psi might function like a signal detection problem. In his view, the signal (psi information) is always present, but it’s overwhelmed by noise (mental activity, environmental stimulation, anxiety). The Ganzfeld state reduces noise, allowing the signal to emerge. This would explain why sensory deprivation enhances the effect and why selected participants (meditators, experienced receivers) perform better: they’re better at reducing internal noise.

None of these theories has been confirmed or falsified. The absence of a confirmed theoretical framework is one of the things that keeps the debate going. If someone could demonstrate a mechanism, the debate would shift from “does this exist?” to “how does this work?”

The Judging Process: How Targets Are Scored

One aspect of the Ganzfeld experiment that doesn’t get enough attention is the judging process. After the receiver describes their impressions, they’re shown four options and asked to pick the one that best matches what they experienced. This seems straightforward, but the details matter enormously.

In the standard protocol, the receiver ranks all four options from most to least likely. The first choice is the primary score. But some analyses have looked at the second choice as well, and some have used a weighted scoring system that considers the entire ranking. The choice of scoring method can affect the results.
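
The difference between scoring rules is easy to make concrete. A minimal sketch (the session data is invented, and real analyses use formal rank-sum statistics rather than this bare comparison):

```python
# Rank (1 = best match) that the receiver assigned to the true target
# in each session; with four options, ranks run from 1 to 4.
ranks = [1, 3, 1, 2, 4, 1, 2, 2, 1, 3]

# Direct-hit scoring: only first-place choices count as hits.
hit_rate = sum(r == 1 for r in ranks) / len(ranks)
print(hit_rate)   # 0.4 here, vs. 0.25 expected by chance

# Sum-of-ranks scoring uses the whole ranking. Chance expectation is a
# mean rank of 2.5; lower means the target was ranked better than chance.
mean_rank = sum(ranks) / len(ranks)
print(mean_rank)  # 2.0 here, vs. 2.5 expected by chance
```

A session can miss under the direct-hit criterion yet still pull the mean rank below 2.5, which is why the choice of scoring method can move a study’s result.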

Dynamic targets (video clips) produce larger effects than static targets (still images). This is consistent across multiple meta-analyses. The reason isn’t clear. Video clips are more vivid, more emotionally engaging, and contain more information than still images. If psi is real, the richer the target material, the easier it might be to detect. If psi isn’t real, video clips might be easier for receivers to unconsciously guess because they can infer the type of content from subtle cues in the experimental setup.

The judging process is also where experimenter effects can creep in. If the person running the judging session knows which option is the real target, they might unconsciously guide the receiver’s choice through subtle cues. The autoganzfeld was designed to eliminate this by automating the judging process, but not all studies use automated judging.

The Implications

The implications of the Ganzfeld experiment, regardless of interpretation, are significant.

For Psychology: The Ganzfeld debate is a case study in how science handles controversial findings. The pattern of critique, response, replication, and counter-replication mirrors debates in other areas of psychology (priming effects, ego depletion, the replication crisis). The Ganzfeld literature has been more rigorously scrutinized than many “mainstream” psychological findings, which is either a sign of how important the claims are or a sign of how the field discriminates against certain types of research.

For Scientific Methodology: The registered report format, now considered the gold standard for reducing bias, was partly developed in response to debates like the Ganzfeld controversy. The 2024 meta-analysis demonstrates that this format can produce significant results even in a controversial field. This has implications for how all controversial science should be conducted.

For Philosophy of Science: The Ganzfeld experiment forces a confrontation with a fundamental question: what do we do with a consistent statistical finding that has no theoretical explanation? In normal science, a consistent finding is accepted and a theory is built to explain it. When the finding contradicts established physical law, the scientific community is split. The Ganzfeld literature is a textbook example of this split.

For Funding and Institutional Support: The lack of institutional support for parapsychology research has created a small, tight-knit community of researchers. This has both advantages (close collaboration, shared protocols) and disadvantages (limited external scrutiny, potential for groupthink). The small size of the field means that the same researchers often conduct studies, review each other’s work, and publish in the same journals.

For the Public Understanding of Science: The Ganzfeld experiment is one of the most accessible scientific controversies for the general public. The setup is simple, the question is clear, and the stakes are obvious. It’s a good example of how scientific evidence doesn’t speak for itself. Interpretation depends on context, prior beliefs, and methodological standards.

Sources

1. Honorton, C. (1985). “Meta-analysis of psi ganzfeld research: A response to Hyman.” Journal of Parapsychology, 49, 51-91.
   Funding: Maimonides Medical Center. COI: Honorton was the primary researcher and advocate for ganzfeld psi. 🟡

2. Hyman, R. (1985). “The Ganzfeld psi experiment: A critical appraisal.” Journal of Parapsychology, 49(1), 3-49.
   Funding: University of Oregon. COI: Hyman was a well-known skeptic of parapsychology. 🟡

3. Hyman, R. & Honorton, C. (1986). “A Joint Communique: The psi ganzfeld controversy.” Journal of Parapsychology, 50, 351-364.
   Funding: Joint publication. COI: Both parties had established positions on psi. 🟡

4. Bem, D.J. & Honorton, C. (1994). “Does psi exist? Replicable evidence for an anomalous process of information transfer.” Psychological Bulletin, 115(1), 4-18.
   Funding: Psychophysical Research Laboratories, Princeton. COI: Bem co-authored the analysis of the autoganzfeld database. 🟡

5. Milton, J. & Wiseman, R. (1999). “Does psi exist? Lack of replication of an anomalous process of information transfer.” Psychological Bulletin, 125(4), 387-391.
   Funding: University of Edinburgh. COI: Wiseman is a well-known skeptic. 🟡

6. Bem, D.J., Palmer, J., & Broughton, R.S. (2001). “Updating the ganzfeld database: A victim of its own success?” Journal of Parapsychology, 65, 207-218.
   Funding: Cornell University. COI: Bem co-authored the 1994 autoganzfeld analysis. 🟡

7. Storm, L., Tressoldi, P.E., & Di Risio, L. (2010). “Meta-analysis of free-response studies, 1992-2008: Assessing the noise reduction model in parapsychology.” Psychological Bulletin, 136(4), 471-485.
   Funding: University of Adelaide; University of Padova. COI: Authors have published previous pro-psi research. 🟡

8. Storm, L. & Tressoldi, P.E. (2024). “Stage 2 Registered Report: Anomalous perception in a Ganzfeld condition – A meta-analysis of more than 40 years investigation.” F1000Research, 10, 234.
   Funding: University of Adelaide. COI: Registered Report design eliminates most forms of analytic bias. 🟢

9. Hyman, R. (2010). “Meta-analysis that conceals more than it reveals: Comment on Storm et al. (2010).” Psychological Bulletin, 136(4), 486-490.
   Funding: University of Oregon. COI: Hyman is a long-standing skeptic of psi research. 🟡

10. Utts, J. (1996). “An assessment of the evidence for psychic functioning.” Journal of Parapsychology, 60, 289-306.
    Funding: AIR (American Institutes for Research). COI: Independent statistical review. 🟢

FAQ

1. How does the Ganzfeld experiment work?

A receiver sits with halved ping-pong balls over their eyes and white noise in their ears, creating a uniform sensory field. A sender in another room watches a randomly selected video clip. The receiver describes whatever images or impressions come to mind, then chooses from four possible targets. By chance, they should pick correctly 25% of the time. Across studies, the hit rate averages 30-33%.

2. Is this the same as remote viewing?

No. Remote viewing (covered in our article on the CIA’s Stargate Project) typically involves describing a physical location or object without a sender. The Ganzfeld experiment specifically tests telepathy: the transmission of information from one person’s mind to another. The sensory deprivation component is also unique to the ganzfeld.

3. Why does the 25% number matter so much?

Because the experiment uses four choices. If you pick at random, you’ll be right 25% of the time. If participants consistently pick correctly more than 25% of the time across many studies, something is happening. The 25% is the null hypothesis. Everything above it is the puzzle.

4. Has anyone replicated the effect independently?

Yes. The ganzfeld effect has been replicated by independent labs in multiple countries: the United States, the United Kingdom, Sweden, Germany, Italy, Australia, and others. The consistency across independent labs is one of the strongest arguments for the effect. However, some independent labs have also failed to replicate, which is why the debate continues.

5. Why did Milton and Wiseman get a null result?

Their database included studies that didn’t follow the standard ganzfeld protocol. When the analysis was restricted to studies that actually followed the standard procedure, the effect reappeared. The debate is partly about what counts as a “ganzfeld study” and partly about whether protocol adherence correlates with experimenter belief.

6. What is a registered report and why does it matter?

A registered report is a study design where the methodology and analysis plan are peer-reviewed and approved before the data is analyzed. This eliminates p-hacking, selective reporting, and publication bias. The journal guarantees publication regardless of the results. The 2024 ganzfeld meta-analysis used this design and still found above-chance results, which makes the finding much harder to dismiss as an artifact of analytic flexibility.

7. If telepathy exists in the lab, why can’t anyone demonstrate it reliably?

The effect is small, about 5-8% above chance. This means you need large sample sizes and controlled conditions to detect it. It’s not the kind of thing you can demonstrate at a dinner party. Whether this means the effect is real but weak, or real but dependent on laboratory conditions, or not real at all, is the central question.

8. Is there a plausible mechanism for telepathy?

No known physical mechanism explains how one brain could receive information from another without any physical channel. Proposals involving quantum entanglement, morphic resonance, or electromagnetic fields have not been empirically supported. The absence of a mechanism doesn’t disprove the effect, but it does make it harder to take seriously within the current scientific framework.

9. What would it take to settle the debate?

A large-scale, multi-site registered report with pre-registered analysis plans, conducted by independent labs with no prior commitment to either side, would go a long way toward settling it. The 2024 registered report is a step in this direction, but skeptics argue it still used a pro-psi research team. A truly independent replication effort with skeptics and proponents collaborating would be ideal. The Joint Communique of 1986 came close to this, but the follow-through was incomplete.

10. Should I believe in telepathy based on this evidence?

That’s not a question this article can answer. What the data shows is a small, consistent, statistically significant effect that has been replicated across multiple labs and decades. Whether that effect is “telepathy” or something else entirely is a matter of interpretation. What you should do is read the evidence, evaluate the methodology, and make up your own mind. That’s what science is supposed to be about.

Related Research

CIA Stargate Project

The US government’s secret remote viewing program, funded from 1972 to 1995. A parallel research track using different methods.


Presentiment Experiments

Detecting physiological responses to future stimuli before they occur. A different approach to testing psi using the body instead of the mind.


Global Consciousness Project

Does collective human attention affect random number generators? A worldwide network of RNGs monitoring for deviations during major events.
