What if seventy random number generators spread across the planet started producing less random results at the exact moment millions of people shared the same emotional experience? What if the effect was measurable, replicable across hundreds of events, and showed odds of one in a trillion against chance? That is the claim made by the Global Consciousness Project, a twenty-five-year experiment, founded by a researcher from Princeton’s PEAR laboratory, that monitors a worldwide network of hardware random number generators for deviations correlated with major global events.
The project was founded in 1998 by Roger D. Nelson, an experimental psychologist who spent two decades as the Coordinator of Research at one of Princeton’s most controversial laboratories, the Princeton Engineering Anomalies Research (PEAR) lab (according to Psi Encyclopedia). Its hypothesis is straightforward: if human consciousness can influence physical systems, and if many minds acting together produce a stronger effect than one mind alone, then a network of random devices running continuously around the world should register anomalies when something big enough to capture the collective attention of humanity occurs.
The data is publicly available at noosphere.princeton.edu. The methods are published. The statistical analyses have been independently critiqued and partially replicated. And the debate about what it all means is far from settled.
If a random device is truly random, it should produce 50% ones and 50% zeros over time. The Global Consciousness Project asks: do these devices drift from that 50/50 expectation at the moments when millions of people share the same experience? The data says yes. The debate is about why.
What Is the Global Consciousness Project?
The Global Consciousness Project (GCP) is a long-running parapsychology experiment that monitors random number generators around the world for statistically anomalous outputs during major global events. The project is an international collaboration of approximately 100 scientists and engineers, funded through the Institute of Noetic Sciences (according to Wikipedia), a research organization founded by Apollo 14 astronaut Edgar Mitchell in 1973.
The project’s hardware consists of approximately 70 devices, called “EGG” units (short for electrogaiagrams, a portmanteau of electroencephalogram and Gaia), distributed across host sites on every continent (according to Wikipedia). Each device contains a hardware random number generator that produces 200 random bits per second. The bits are summed into trials once per second, and the data is transmitted to a central server in Princeton, New Jersey, creating a continuous, synchronized, global database of random number sequences.
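The statistics of a single trial follow directly from this design: each one-second trial is the sum of 200 fair bits, so under the null hypothesis it has a binomial distribution with mean 100 and standard deviation sqrt(200 × 0.5 × 0.5) ≈ 7.07. A minimal Python sketch of that null model (illustrative only, not GCP code):

```python
import math
import random

BITS_PER_TRIAL = 200  # each EGG sums 200 hardware bits into one trial per second

def simulate_trial(rng):
    """Simulate one EGG trial: the sum of 200 fair random bits."""
    return sum(rng.getrandbits(1) for _ in range(BITS_PER_TRIAL))

def trial_z(trial_sum):
    """Z-score of a single trial under the null hypothesis.

    Fair bits give an expected mean of 100 and a standard
    deviation of sqrt(200 * 0.5 * 0.5) ~= 7.07 per trial.
    """
    mean = BITS_PER_TRIAL * 0.5
    sd = math.sqrt(BITS_PER_TRIAL * 0.25)
    return (trial_sum - mean) / sd

rng = random.Random(42)
trials = [simulate_trial(rng) for _ in range(1000)]
mean_trial = sum(trials) / len(trials)  # should sit near 100
```

A trial sum of 107 is less than one standard deviation from expectation; the deviations the GCP hunts for are far smaller than the second-to-second noise, which is why the analysis aggregates across devices and across time.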
The hypothesis is that events which elicit widespread emotional engagement or draw the simultaneous focused attention of large numbers of people may cause the output of these random devices to deviate from expected randomness in a statistically significant way. The project does not claim to know the mechanism. It claims only to measure the deviation and to test whether the pattern of deviations correlates with events that engage the collective mind.
The PEAR Lab: Where It Started
The Global Consciousness Project did not appear from nowhere. It grew directly from twenty years of experiments at the Princeton Engineering Anomalies Research (PEAR) laboratory, one of the most controversial research programs in the history of mainstream American science (according to Wikipedia).
The PEAR lab was founded in 1979 by Robert Jahn, a professor of aerospace engineering and dean of Princeton’s School of Engineering (according to Wikipedia). Jahn’s interest was sparked by anecdotal reports from his engineering students that their mental states seemed to affect the output of sensitive electronic equipment. Rather than dismiss the reports, Jahn did what engineers do: he built an experiment.
The PEAR lab’s primary experiment was simple. A random event generator (REG) produced truly random sequences of binary data. A human operator sat in front of the device and attempted to mentally influence the output, either toward more ones or more zeros. Over hundreds of thousands of trials conducted across two decades by dozens of operators, the PEAR lab found a small but statistically significant effect. Operators who intended for more ones got slightly more ones. Operators who intended for more zeros got slightly more zeros. The effect was tiny, approximately one bit per thousand shifted in the intended direction, but it was consistent.
The cumulative result across all PEAR experiments was a Stouffer Z-score of approximately 5.3, corresponding to odds of about one in 10 million against chance (according to Psi Encyclopedia). The effect appeared regardless of operator gender, age, or prior belief about psi. It appeared whether the operator was in the same room as the device or operating remotely. And it appeared even after the lab tightened its protocols in response to criticism.
The PEAR lab closed in 2007, not because it was debunked, but because Jahn retired and the university chose not to continue funding the program (according to Wikipedia). Princeton never formally disowned the results. It simply stopped paying for them.
From Individual to Collective
Roger Nelson was the PEAR lab’s Coordinator of Research from 1980 to 2002 (according to Psi Encyclopedia). Trained in experimental cognitive psychology, Nelson spent two decades running the same kind of REG experiments as Jahn. But in the mid-1990s, Nelson began asking a different question: if one person can slightly influence a random device, what happens when many people focus their attention on the same thing at the same time?
Nelson called this “field consciousness,” and he began testing it using portable REG devices at group events. He brought his equipment to psychotherapy sessions, theater performances, religious ceremonies, meditation groups, and sporting events (according to Psi Encyclopedia). The pattern he found was that during moments of intense collective focus, the REG outputs showed greater deviations from randomness than during ordinary periods.
The pilot studies that led to the GCP came in 1997. In January 1997, twelve independent REGs in the United States and Europe ran during a web-promoted “Gaiamind Meditation” event. In September 1997, the same twelve devices ran during and after the death of Diana, Princess of Wales, an event that produced an extraordinary outpouring of collective grief around the world (according to Psi Encyclopedia). Both events showed deviations that Nelson considered suggestive enough to justify a permanent, global network.
The Global Consciousness Project went online in August 1998.
How the Network Works
Understanding the GCP requires understanding what the devices actually do and how the data is analyzed. The description above is simplified; the details are where both the evidence and the criticism live.
Each EGG unit contains a commercially available hardware random number generator, a device that produces random bits from physical processes such as quantum tunneling or thermal noise, not from software algorithms. This is an important distinction. Software-based random number generators are actually pseudorandom: they produce sequences that look random but are fully determined by an initial seed value. Hardware RNGs produce genuinely unpredictable outputs from physical processes that are, as far as anyone can tell, truly random.
Each device records 200 bits per second, summed into one trial per second. With 70 devices running simultaneously, the network produces approximately 70 trials per second, or roughly 6 million trials per day. The data streams to a server at Princeton, where it is archived and made available for analysis (according to the official GCP site).
The Three-Step Protocol
The GCP uses a formal experimental protocol designed to prevent the kind of after-the-fact data selection that critics worry about. The protocol has three steps.
Step one: An event is formally registered in advance. The registration specifies the event’s start time, end time, and the calculation algorithm to be used. Events are selected based on a criterion: they must be expected to engage the emotions or focused attention of large numbers of people. Events are registered before or at the time they occur, not after the data is examined. The formal registry is publicly accessible (according to noosphere.princeton.edu).
Step two: After the event window closes, the data for that period is extracted from the database and analyzed using the pre-specified algorithm. The primary analysis computes a Z-score, which measures the degree of deviation from the expected random behavior. A Z-score of zero means the data behaved exactly as expected by chance. A positive Z-score means the data was less random than expected. A negative Z-score means it was more random than expected.
The GCP primarily looks for deviations in two measures: the mean of the accumulated trials (did the numbers tend higher or lower than expected?) and the variance (did the numbers vary more or less than expected?). The most common test is a Stouffer method, which combines the Z-scores from all individual devices into a single composite Z-score for the event.
Step three: The individual event Z-score is combined with all previous event Z-scores to produce a cumulative result. This cumulative result is the key number. Any single event can produce a significant deviation by chance. The question is whether the pattern of deviations across hundreds of events is itself non-random.
The Cumulative Result
As of 2026, the GCP has registered over 500 formal events. The cumulative Stouffer Z-score across all registered events is approximately 7.25 (according to Psi Encyclopedia). A Z-score of 7.25 corresponds to a probability of approximately 4 × 10⁻¹³, or roughly one in two and a half trillion. This is the headline number that proponents cite as evidence that something real is happening.
To put this in context: in standard scientific research, a result is considered statistically significant if the probability of it occurring by chance is less than 5 percent (p less than 0.05). A Z-score of 7.25 is not just significant. It is astronomically significant. If these were drug-trial statistics, they would clear the conventional significance bar many times over.
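The conversion from Z-score to probability is just the tail of the normal distribution, and can be checked with Python’s standard library. Note that the roughly 4 × 10⁻¹³ figure cited for Z = 7.25 corresponds to counting both tails; the one-tailed probability is about half that.

```python
import math

def z_to_p_one_tailed(z):
    """Upper-tail probability P(Z >= z) for a standard normal."""
    return 0.5 * math.erfc(z / math.sqrt(2))

p_one = z_to_p_one_tailed(7.25)  # on the order of 2e-13
p_two = 2 * p_one                # on the order of 4e-13, about one in 2.5 trillion
```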
But statistics and interpretation are different things. A number can be real without meaning what its proponents think it means.
The Most Famous Event: September 11, 2001
On the morning of September 11, 2001, terrorists flew commercial airliners into the World Trade Center towers in New York City, the Pentagon in Arlington, Virginia, and a field in Shanksville, Pennsylvania. Nearly 3,000 people died. An estimated 2 billion people watched the events unfold on live television. It was the most widely witnessed event in human history up to that point.
The GCP had been running for three years. The September 11 attacks were not formally registered in advance, obviously, but the event was registered retrospectively with a specified time window from 8:35 AM to 12:45 PM Eastern Time, covering the period from shortly before the first impact through the collapse of the towers.
Roger Nelson’s analysis of the September 11 data showed a cumulative deviation from randomness that reached what appeared to be statistical significance. The deviation appeared to begin around the time of the first impact and continued through the morning. Nelson published the results in the Journal of Scientific Exploration in 2002, reporting that the formal analysis for the specified time window produced a Z-score of approximately 3.15, corresponding to odds of roughly one in 1,000 against chance (according to Nelson et al., 2002).
Additionally, Nelson reported that the combined data from all 70 devices over the following 24 to 48 hours showed further unusual patterns, including what he described as a “staircase” effect in the cumulative deviation trace, where the data appeared to shift to a new level of non-randomness and remain there for an extended period.
The Independent Critique
The September 11 data attracted independent scrutiny. Edwin May and James Spottiswoode, both physicists with backgrounds in parapsychology research, conducted their own analysis of the same data (according to May & Spottiswoode, 2003). Their conclusion was blunt: there was no statistically significant change in the randomness of the GCP data during the attacks when analyzed using alternative methods.
May and Spottiswoode noted several concerns with Nelson’s analysis. First, the time window for the formal analysis was specified retrospectively, not prospectively. In a rigorous prospective protocol, the window would have been defined before the event. Since it was defined after, the choice of window could have been influenced by knowledge of when the deviation appeared to be largest. Second, when May and Spottiswoode applied different statistical methods to the same data, the apparent significance disappeared. This suggested that the result was sensitive to the choice of analytical method, which is a red flag in any statistical analysis (according to Wikipedia).
Nelson responded that the time window was chosen based on reasonable criteria (the actual duration of the attacks) and that the formal registration system was designed to prevent exactly this kind of after-the-fact manipulation. He also noted that the significance of the September 11 result was not the main evidence for the GCP hypothesis. The main evidence was the cumulative result across all events, not any single event.
The debate over September 11 is important because it illustrates the fundamental tension in the GCP data: the cumulative result is statistically overwhelming, but any individual event can be questioned, and the cumulative result is only as strong as the individual events that compose it.
Other Notable Events
September 11 is the most famous GCP event, but it is far from the only one. The project has registered over 500 events since 1998. Some of the more notable include:
The death of Princess Diana (September 1997). This was one of the pilot events that preceded the GCP’s formal launch. The death of Diana, Princess of Wales, on August 31, 1997, produced a massive outpouring of collective grief worldwide. The twelve REG devices running during this period showed deviations that Nelson considered suggestive (according to Psi Encyclopedia). It was one of the events that convinced him a permanent network was worth building.
The 2004 Indian Ocean tsunami (December 26, 2004). A magnitude 9.1 earthquake triggered a tsunami that killed approximately 230,000 people across 14 countries. The GCP registered the event and reported a significant deviation. However, critics noted that the deviation did not begin until hours after the earthquake, when news coverage reached global audiences. This timing difference is important: if the effect is caused by collective attention or emotion, it should correlate with when people learn about the event, not when the event occurs in a location where few people are aware of it.
The 2008 Mumbai attacks (November 26, 2008). A series of coordinated terrorist attacks across Mumbai, India, lasting four days. The GCP registered the attacks and reported deviations during the period of heaviest media coverage.
Various New Year’s Eve celebrations. The GCP has registered each New Year’s Eve since 1998. The results have been mixed. Some years show significant deviations, others do not. New Year’s Eve is interesting because it is a coordinated global event, the countdown happens at midnight in each time zone, but the emotional content varies widely across cultures and individuals.
The funeral of Pope John Paul II (April 8, 2005). Watched by an estimated 2 billion people worldwide. The GCP reported a significant deviation during the funeral broadcast.
The inauguration of Barack Obama (January 20, 2009). Watched by an estimated 1.8 billion people worldwide. The GCP reported a deviation, though critics noted that inaugurations are anticipated events and the registration window could have been influenced by expectations.
The pattern across these events is mixed. Some emotionally charged events produce strong deviations. Others produce nothing. Some events that seem emotionally minor produce strong deviations. And the timing of the deviations relative to the events is not always clean.
Nelson acknowledged this inconsistency. He noted that the effect is small and probabilistic, not deterministic, and that any single event is unreliable. The evidence, he argued, comes from the cumulative pattern across hundreds of events, not from any individual event.
The Statistical Methods
The GCP’s primary statistical tool is the Stouffer method, which combines Z-scores from individual events into a single cumulative measure. The method works as follows:
For each registered event, the deviation from expected randomness is computed across all active EGG devices during the specified time window. This produces a Z-score for that event. The Z-scores from all events are then combined using the Stouffer method: they are summed and divided by the square root of the number of events. The result is a single cumulative Z-score that represents the overall deviation from chance across the entire history of the project.
The Stouffer method is standard in meta-analysis and is well-established in statistics. There is nothing inherently controversial about the method itself. The controversy is about what the result means.
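The square-root denominator also explains how the cumulative result can be astronomically significant while individual events look unremarkable: a cumulative Z of about 7.25 over roughly 500 events implies a mean per-event Z of only 7.25 / sqrt(500) ≈ 0.32, nowhere near significance for any single event. A minimal sketch with illustrative numbers, not GCP data:

```python
import math

def stouffer(z_scores):
    """Stouffer combination: sum of event z-scores over sqrt(N)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# 500 hypothetical events that each carry the same tiny z-score.
per_event_z = 7.25 / math.sqrt(500)    # ~0.32, invisible on its own
z_cum = stouffer([per_event_z] * 500)  # recovers ~7.25 in aggregate
```

This is why defenders lean on the aggregate and critics attack the ingredients: an effect this small per event is indistinguishable from noise at the level of any single registration.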
What the Numbers Mean
A cumulative Z-score of 7.25 means that the pattern of deviations across all registered events is extremely unlikely to have occurred by chance, assuming the individual events are independent and the statistical model is correct.
But the assumption of independence is where the debate lives. If the events are not truly independent, if, for example, the selection of events is influenced by knowledge of the data, or if the time windows are chosen to maximize the apparent effect, then the Stouffer method will overstate the significance of the result.
The GCP’s defenders argue that the formal registration protocol prevents this kind of manipulation. Events are registered in advance, and the calculation algorithms are pre-specified. The data is publicly available, and the analyses are transparent. If someone wanted to challenge the result, they could download the data and run their own analysis (according to GCP independent analyses).
The GCP’s critics argue that the registration protocol is not rigorous enough. Events are sometimes registered after they begin, and the selection of which events to register is itself a form of data selection. If you register 500 events and only 200 of them produce significant deviations, but you combine all 500 into a cumulative result, the cumulative result may be driven by a subset of events that were more carefully chosen than the rest.
The “What Counts as an Event” Problem
Perhaps the most fundamental criticism of the GCP is about event selection. The project registers events that are expected to engage the emotions or focused attention of large numbers of people. But this criterion is subjective. Who decides what qualifies? How is “large numbers” defined? Is a regional news event enough, or does it have to be global?
The GCP has registered events ranging from the September 11 attacks (billions of viewers) to local meditation groups (a few hundred participants). If the cumulative result is driven primarily by the large events, that is more interesting than if it is driven by the small ones. But the GCP’s analyses do not typically separate the two.
Peter Bancel, a physicist who has worked with the GCP data, published a 2017 analysis that found that the cumulative result was driven primarily by a subset of events, and that the pattern was more consistent with a goal-oriented effect (something is causing the deviations, but it may not be the events themselves) than with the specific global consciousness hypothesis (according to Wikipedia).
The Case Against
Intellectual honesty requires presenting the strongest objections. Here is what the skeptical literature says about the Global Consciousness Project, and where the critique is strongest.
1. Pattern matching, not pattern finding. The most common criticism is that the GCP is engaged in pattern matching, not pattern finding. The project collects massive amounts of random data and then looks for correlations with events. With enough data and enough events, some correlations will appear by chance. The cumulative Z-score may look impressive, but if you ran the same analysis on the same data without correlating it to any events, would you still find deviations? Critics say yes: random data has fluctuations, and the GCP is selectively reporting the ones that happen to coincide with events (according to The Skeptic’s Dictionary).
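The chance-fluctuation point can be made concrete with a simulation. The sketch below is an illustration of the general statistical argument, not an analysis of GCP data: it generates 500 event-sized windows of genuinely random trials and counts how many clear the conventional |Z| > 1.96 significance threshold. About 5 percent do, by construction, which is the baseline any selective-reporting critique starts from.

```python
import math
import random

rng = random.Random(7)

def window_z(seconds=600, bits_per_trial=200):
    """Z-score of one event-sized window of pure noise (null is true)."""
    mean = bits_per_trial * 0.5
    var = bits_per_trial * 0.25
    total = sum(bin(rng.getrandbits(bits_per_trial)).count("1")
                for _ in range(seconds))
    return (total - seconds * mean) / math.sqrt(seconds * var)

window_zs = [window_z() for _ in range(500)]
false_hits = sum(1 for z in window_zs if abs(z) > 1.96)
# Expect roughly 25 of 500 pure-noise windows to look "significant"
# at p < 0.05, even though nothing non-random is happening.
```

The GCP’s formal registry is designed to answer exactly this objection, since pre-registered windows cannot be cherry-picked after the fact; the critics’ counter is that the registry is not always strictly prospective.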
2. The event selection problem. As noted above, the selection of which events to register is subjective. If the GCP registers events after seeing preliminary data, or if the time windows are chosen to maximize the apparent effect, the formal registration protocol is compromised. The GCP says events are registered in advance, but the records show that some registrations are made after events have already begun.
3. No theoretical mechanism. The GCP has no proposed physical mechanism by which human consciousness could influence hardware random number generators. Quantum mechanics does not provide one. Classical physics does not provide one. The project’s defenders argue that the absence of a mechanism does not invalidate the observation, and that many scientific discoveries preceded their theoretical explanations. Critics respond that an extraordinary claim without a mechanism is just an anomaly, and anomalies are not the same as evidence.
4. The September 11 problem. As detailed above, the most famous GCP result was challenged by independent analysts who found no significant deviation when different statistical methods were applied (according to May & Spottiswoode, 2003). If the flagship event does not hold up under alternative analysis, what does that say about the rest of the database?
5. Robert Matthews’ critique. British science writer Robert Matthews, who has written extensively about statistical fallacies, described the GCP as “the most sophisticated attempt yet” to prove psychokinesis, but concluded that “the only conclusion to emerge from the Global Consciousness Project so far is that data without a theory is as meaningless as words without a narrative” (according to Wikipedia). Matthews’ point is that the GCP has data but no explanation, and data without explanation cannot be distinguished from noise.
6. The Stouffer Z may be inflated. If the individual events are not statistically independent, for example, if events that are close in time share characteristics that affect the data, the Stouffer method will produce an inflated Z-score. The GCP has attempted to address this concern by testing for independence, but the tests are themselves subject to debate.
7. Nelson’s own concession. In a 2007 interview with The Age, Roger Nelson himself conceded that “the data, so far, is not solid enough for global consciousness to be said to exist at all” (according to Wikipedia). He noted that it is not possible to look at the data and predict with any accuracy what the devices may be responding to. This is an important admission from the project’s founder: the data shows a statistical pattern, but the pattern does not clearly map onto the events it is supposed to reflect.
> “The only conclusion to emerge from the Global Consciousness Project so far is that data without a theory is as meaningless as words without a narrative.” (Robert Matthews, science writer)
GCP 2.0 and the Future
The original Global Consciousness Project continues to operate, though Roger Nelson stepped back from active management in his later years. In 2023, a successor project called GCP 2.0 was launched under the auspices of the HeartMath Institute, a research organization known for its work on heart rate variability and the physiological effects of emotions (according to HeartMath Institute).
GCP 2.0 expands the original concept in several ways. The new network uses purpose-built devices with four random number generators per unit, rather than the single RNG in the original EGG devices (according to GCP 2.0). The project aims to deploy 1,000 of these devices worldwide, a significant expansion from the original 70. The data is being collected with more granular temporal resolution, and the project is designed to be more transparent, with real-time data visualization available to the public.
The HeartMath Institute’s involvement is notable. HeartMath has a track record of published research on the physiological effects of emotions, and its institutional credibility is higher than the original GCP’s association with the now-closed PEAR lab. Whether this institutional change will lead to more convincing results remains to be seen.
GCP 2.0 also introduces what it calls “consciousness fields” and “coherent emotional energy” as explanatory concepts. This goes further than the original GCP, which avoided proposing specific mechanisms. Whether this theoretical ambition will strengthen the project or expose it to new criticism depends on whether the concepts can be tested and falsified.
Key Researchers
| Name | Affiliation | Role | Key Contribution |
|---|---|---|---|
| Roger D. Nelson | Princeton University (PEAR lab), GCP | Founder and Director, GCP | Created the GCP, developed the formal registration protocol, compiled the cumulative database |
| Robert Jahn | Princeton University | Founder, PEAR lab | Established the foundational REG research that led to the GCP |
| Dean Radin | Institute of Noetic Sciences | Contributing researcher | Conducted parallel REG experiments and contributed to GCP analysis |
| Peter Bancel | Independent physicist | Independent analyst | 2017 analysis found data supports goal-oriented effect but not specifically global consciousness hypothesis |
| Edwin May | Cognitive Sciences Laboratory | Independent critic | Independent analysis of September 11 data found no significant deviation under alternative methods |
| Robert Matthews | Aston University, UK | Science writer and critic | Criticized GCP as “data without a theory” |
Sources
- Nelson, R.D. et al. (2002). “Coherent Consciousness and Reduced Randomness: Correlations on September 11, 2001.” Journal of Scientific Exploration, 16(4), 549-570. Available on ResearchGate.
  Funding disclosure: GCP funded through Institute of Noetic Sciences. Nelson was GCP director.
  COI: Author is the project director reporting on his own project. Evidence tier: 🟠
- Nelson, R.D. et al. (2011). “Effects of Mass Consciousness: Changes in Random Data during Global Events.” Explore, 7(6), 373-383. PubMed | ScienceDirect.
  Funding disclosure: Institute of Noetic Sciences.
  COI: Authors are GCP participants. Publication venue: Explore (alternative medicine journal, not top-tier). Evidence tier: 🟡
- Bancel, P. (2017). “Search for Lost Global Consciousness.” Foundations of Physics, 47, 1497-1513. Available on ResearchGate.
  Funding disclosure: Independent analysis, no GCP funding.
  COI: Author was previously affiliated with the GCP but conducted this analysis independently. Evidence tier: 🟡
- May, E.C. and Spottiswoode, J.P. (2003). “Global Consciousness Project: An Independent Analysis of The 11 September 2001 Events.” Full report (PDF).
  Funding disclosure: Cognitive Sciences Laboratory.
  COI: Authors have backgrounds in parapsychology research but are independent of the GCP. Evidence tier: 🟡
- Jahn, R.G. and Dunne, B.J. (1987). Margins of Reality: The Role of Consciousness in the Physical World. Harcourt Brace Jovanovich.
  Funding disclosure: Princeton University, PEAR lab.
  COI: Authors are PEAR lab directors reporting on their own research. Evidence tier: 🟠
- Nelson, R.D. (2001). “The EGG Story: The Global Consciousness Project.” Available at noosphere.princeton.edu.
  Funding disclosure: Self-published project documentation.
  COI: Project director describing his own project. Evidence tier: 🟠
- Carroll, R.T. “Global Consciousness.” The Skeptic’s Dictionary.
  Funding disclosure: None (personal website).
  COI: Author is a committed skeptic. Analysis is editorial, not empirical. Evidence tier: 🟠
- Matthews, R. (2007). Various commentary on the GCP. Reported in Wikipedia.
  Funding disclosure: None.
  COI: Science writer, no direct involvement in psi research. Evidence tier: 🟡
- HeartMath Institute. “Global Consciousness Project 2.0.” gcp2.net | HeartMath page.
  Funding disclosure: HeartMath Institute.
  COI: HeartMath is operating the successor project. Evidence tier: 🟠
- Bosch, H., Steinkamp, F., and Boller, E. (2006). “Examining Psychokinesis: The Interaction of Human Intention with Random Number Generators, A Meta-Analysis.” Psychological Bulletin, 132(4), 497-523.
  Funding disclosure: University of Zurich, University of Edinburgh.
  COI: Independent meta-analysis. Publication venue: Psychological Bulletin (top-tier psychology journal). Evidence tier: 🟢
Frequently Asked Questions
What is the Global Consciousness Project?
The Global Consciousness Project is a parapsychology experiment launched in 1998 that monitors approximately 70 hardware random number generators at locations around the world. The project tests whether the output of these devices deviates from expected randomness during major global events that engage the emotions or focused attention of large numbers of people.
Who started the Global Consciousness Project?
The project was founded by Roger D. Nelson, an experimental psychologist who served as Coordinator of Research at the Princeton Engineering Anomalies Research (PEAR) lab from 1980 to 2002 (according to Psi Encyclopedia). Nelson developed the GCP as an extension of the PEAR lab’s individual-level REG experiments to a global scale.
Does the Global Consciousness Project prove that collective consciousness is real?
The project’s cumulative statistical result across over 500 registered events shows a deviation from chance with odds of approximately one in a trillion. However, the interpretation of this result is debated. Critics argue that event selection bias, the absence of a physical mechanism, and sensitivity to analytical methods undermine the claim that the deviation is caused by collective consciousness. The data is suggestive but not conclusive.
What happened with the September 11 data?
The GCP’s analysis of the September 11, 2001 data showed a deviation that appeared statistically significant. However, independent analysts Edwin May and James Spottiswoode reanalyzed the same data using different statistical methods and found no significant deviation (according to their 2003 report). The disagreement illustrates the broader debate about whether the GCP’s results are robust or artifacts of analytical choices.
Is the Global Consciousness Project still running?
The original GCP continues to operate. In 2023, a successor project called GCP 2.0 was launched under the HeartMath Institute, expanding the network with new hardware and aiming for 1,000 devices worldwide (according to GCP 2.0). The original project and GCP 2.0 operate in parallel.
Can I see the data?
Yes. The GCP’s data is publicly available at noosphere.princeton.edu. Individual event analyses, the formal event registry, and the raw data can all be downloaded and independently verified.
What do skeptics say about the Global Consciousness Project?
Skeptics raise several concerns: that the event selection process introduces bias, that the statistical methods are sensitive to analytical choices, that the absence of a physical mechanism undermines the interpretation, and that the deviations may reflect ordinary statistical fluctuations rather than any effect of consciousness (according to The Skeptic’s Dictionary). The most rigorous critique comes from Peter Bancel’s 2017 analysis, which found that the data supports a “goal-oriented effect” but does not specifically confirm the global consciousness hypothesis (according to Wikipedia).