Pages Navigation Menu

Parapsychology articles and news

Dean Radin on skepticism

I’ve almost finished reading Dean Radin’s book Entangled Minds: Extrasensory Experiences in a Quantum Reality. I would post half the book here if it were legal. Anyway, near the end of the book, Dr. Radin summarizes his meta-analysis of the psi research, which he presented in the main part of the book. By the way, his combined odds against chance for all 1019 studies that he analyzed is an astounding 1.3×10^103 to 1 (which also equals 1300 googols). That’s quite a number to disregard.

Then he writes on skepticism: “In spite of the evidence, many remain skeptical”. He then writes about three factors that contribute to “reasonable doubt” (page 278 in the book):

  1. There is no fool-proof recipe that guarantees 100% success in a psi experiment. Yet, he writes, after billions of dollars spent on cancer research there is also no guarantee of even a successful diagnosis of cancer, let alone a guarantee of being healed.
  2. Most scientists are not aware of the body of evidence regarding psi. Even though some articles on psi research do get published in mainstream scientific journals, they are vastly outnumbered by the regular scientific literature and are easy to overlook.
  3. The principal reason for persistent skepticism, in Radin’s opinion, is that “scientific truths do not arise solely through the accumulation and evaluation of new evidence. In particular, consensus opinion advances through authoritative persuasion. This is not how it’s supposed to work in an ideal world… Use of rhetorical tactics like ridicule are especially powerful persuaders in science, as few researchers are willing to risk their credibility and admit interest in ‘what everyone knows’ is merely superstitious nonsense”.

What do you think: is he close to the truth in this? Discuss in forums

  1. The work of Radin’s that I’ve studied is his set of on-line experiments. For results, the site points to his “Preliminary Analysis” dated January 2002, covering data from August 2000 through October 2001. The paper reports collecting a massive amount of data, and states two results that would be extremely unlikely by chance: one Radin found to be due to a bad random-number generator, but about the other he wrote:

    “Out of 170 users who contributed 20 or more trials, 36 had overall results significant at p < 0.05. Only 8.5 people are expected to be significant at this level by chance; this excess is associated with an exact binomial p~10^–13, suggesting that some individuals may have exhibited talent at this task.4”

Radin is mistaken. The result is statistically invalid because the subjects saw how they did on the first 19 trials before they decided to do the 20th trial that resulted in their inclusion. In the quote above, the “4” at the end is a footnote marker; unfortunately the paper has no footnote four, as the footnotes jump from number three to five. Mistakes are inevitable, and that was not the peer-reviewed version, so let’s look at the corrected paper, and at the reports of the data collected in the six and a half years since.

Hmmm... not as easy as it sounds. Can anyone help me with that? Radin’s experiments are still running and they’ve collected an ocean of data, so where is it? Don’t take my word: check it out. Are Radin’s experiments still running? Does the root page still show just that one report? Is the reporting as out of date as I said? Has he even bothered to correct the errors? What further reporting do you find?

Ah! Found something more: Richard Shoup, who runs the site, wants to argue that the September 11, 2001 attack affected random-number generators. He reported about the card-guessing test: “The daily hit rate appears to behave statistically as expected except for the striking peak and deep notch beginning in July and ending in early September of 2001. [...] Then almost immediately after September 11, the hit rate rose steeply again, and returned to ostensibly random behavior, which continues to this day.”

So taken over all those years, the hit rate was at chance, despite all the psi the subjects applied. The parapsychologists decided not to report the chance results, except in the context of the short-term special case they want to plead.
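Radin’s expected count and the quoted tail probability are easy to re-check with nothing but Python’s standard library (a sketch of the arithmetic only; the numbers come from the quoted passage, not from any re-analysis of the data):

```python
from math import comb

# Figures from the quoted passage: 170 users, each tested at p < 0.05,
# 36 of whom came out "significant". Expected by chance: 170 * 0.05 = 8.5.
n, p, observed = 170, 0.05, 36

# Exact binomial upper tail: P(X >= 36) for X ~ Binomial(170, 0.05).
tail = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(observed, n + 1))

print(f"expected significant by chance: {n * p}")
print(f"P(X >= {observed}) = {tail:.2e}")  # on the order of 10^-13
```

Of course, this checks only the arithmetic; whether the statistic is valid at all, given that subjects saw their running scores before deciding to continue, is the separate question raised above.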
The experiments collected a data set among the largest in parapsychology. Radin did the design and programming, and Shoup runs the site; the data is theirs to report or not, in whatever context they see fit. Nevertheless, from a scientific standpoint there is no excuse, no justification, for withholding the data for so long. Whatever they claim about skeptical criticism being unjustified, derision of their selective reporting is well earned.

I came to this site when I saw Jacob's announcement of the on-line psi tests. I've been critical of Jacob's experimental designs, but Jacob did at least one thing right -- one very important thing: he said what data he'd collect and report, then he collected that data and reported it. That puts him far above Dean Radin. --Bryan

    • Thanks for the compliment, Bryan. 😉

      It would be nice to get a reply from Dean Radin himself or from Richard Shoup, for that matter.

      • Believe it or not, just before I read your reply, I tried to write to Radin at the last e-mail address I had. I too thought it would be nice to get his reply, or maybe I just didn’t want to feel like I was taking a cheap shot behind his back.

        Well, the address now bounces: “host not found”. It worked before, but that was some time ago. I think Richard Shoup should still be reachable via

        But now I’m going to bed.


        • Re-reading my own post here, “” doesn’t make sense. And I find that the host responds to ping, so I’m re-trying the e-mail.

          I’m a skeptic, and I’m also a person who stays up way past bed time. I stand by my positions as the former, and deeply regret the latter.


    • Let’s see, you have two complaints. 1) The experiment has flaws in it (according to you — I would have to review it myself) making it invalid; 2) Radin hasn’t published the experiment so that you can attack it as being of such poor quality that it should not have been published.

      Have I got that right?

      Parapsychologists generally believe that a strong, supportive environment between experimenter and percipient/agent is necessary to elicit measurable psi effects. They are willing to experiment with alternatives (perhaps the game-like atmosphere, or the convenience or safety of the on-line environment, or just the sheer size of the sample will take the place of the personal interaction), and even to convince themselves that it *should* work. But this would be a new discovery in parapsychology; it is not what previous experience would lead us to expect. In fact, there is a long history of failures of this idea, going back to mass experiments using radio broadcasts in (I think) the ’40s.

      I’m sure, Bryan, that you will feel that I’m just making excuses, but I think that conducting massive tests (at least conventional tests) over the internet is unlikely to produce good, reliable results.

      • Yes, you would have to review the paper to see if I’m right about the mistakes. Is the link not working for you?

        I’m not sure to what you are responding. Do you think anything in what you wrote justifies not reporting five years of data?

        • “Is the link not working for you?”

          Sorry, Bryan, I didn’t realize that when you make a comment, it is required that I immediately drop everything else in my life to check what you say.

          I guess I still don’t.

        • What is in your head? I didn’t demand you drop anything to respond to my post. You chose to reply, and to do so without knowing what you were talking about, even though I made checking so easy.

        • That required less effort than I thought. What Bryan says is accurate except for “Radin is mistaken”. Note that Radin says, as Bryan even quotes, that the results are *suggestive*. He also says — conveniently ignored by Bryan — that “The main short-coming of these tests, at least for scientific hypothesis testing, is the extent to which optional stopping behavior permeates the database. This confounds any form of proof-oriented analysis on tasks where feedback is provided.”

          In other words, Radin understood the problem, recognized it, and felt that the data was therefore not useful for *proving* anything. In real science, experiments can be exploratory rather than strictly proof-oriented, and real scientists extract what value can be gained from a failed experiment.

          I don’t know whether Dean Radin still has anything to do with on-going data collection. Since the experiment is flawed, no conclusion, *positive or negative*, can be extracted from it (self-proclaimed Skeptics frequently seem to feel, apparently because *truth* is on their side, that the same standards of logic do not apply to them). I do not believe that a formal publication is called for. I do think that the existence of the data needs to be made public — which it has been. I do not think that a web-site left running after an experiment has terminated (if that is the case here — as it appears to be) has any requirements on it, except that any decision about what is to be done with it must be made independently of its results.

        • Obviously Radin is mistaken; it’s an invalid statistic. There’s no indication of “talent” at this task. Might be interesting to know what that missing footnote was supposed to say; without it, “talent” is the only explanation he suggests. I informed Radin and of the error a couple years ago.

          Optional stopping is only a problem if one chooses such invalid statistics. Use stats that count each trial before providing feedback.
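Bryan’s point about optional stopping is easy to demonstrate with a toy simulation. The parameters below (a 1-in-5 card-guessing chance rate, significance checked from trial 20 up to a cap of 100 trials) are my own assumptions for illustration, not the actual test’s:

```python
import random
from math import comb

random.seed(42)

P0 = 0.2              # chance hit rate for 1-in-5 card guessing (assumed)
MIN_N, MAX_N = 20, 100
ALPHA = 0.05

def upper_tail(n: int, k: int, p: float) -> float:
    """Exact one-sided binomial p-value: P(X >= k) for X ~ Bin(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p) ** (n - j) for j in range(k, n + 1))

# For each trial count n, pre-compute the smallest hit count that
# reaches p < ALPHA, so the simulation loop stays cheap.
crit = {}
for n in range(MIN_N, MAX_N + 1):
    k = 0
    while upper_tail(n, k, P0) >= ALPHA:
        k += 1
    crit[n] = k

def one_user() -> bool:
    """A pure-chance user who quits as soon as the running score
    looks 'significant', i.e. optional stopping."""
    hits = 0
    for n in range(1, MAX_N + 1):
        hits += random.random() < P0
        if n >= MIN_N and hits >= crit[n]:
            return True   # stops while ahead, counted as significant
    return False

users = 5000
flagged = sum(one_user() for _ in range(users))
print(f"nominal rate: {ALPHA:.0%}, with optional stopping: {flagged / users:.1%}")
```

Even though every simulated user is guessing at pure chance, well over 5% of them end up “significant”, simply because each one gets to stop the moment the running score looks good. Counting every trial a user commits to, before feedback, removes the bias.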

          Where did you get the idea that the experiment has terminated? When do you think it ended?

          If there’s one thing we can conclude from the experiment, it is that selective reporting is alive and well in parapsychology.

          — Bryan

        • It may be “obvious” (at least to you) but it is not the least bit “true”. If he had said that this “proves” or “demonstrates” then he would most certainly have been mistaken. But he said “suggests” which means something else entirely. Scientific language has precise meanings and conventions. You can’t simply pretend a different word was used so you can make your point.

          The statistics used are invalid for proof-oriented research and for strong inference (where words like “proves” and “demonstrates” would be found) but are completely appropriate for exploratory analysis (“suggests”).

          As you know, once a particular method of analysis is chosen as part of the design, one is bound to stick to it. Changing methods of analysis in mid-stream is invalid. If he had done so you would have been all over him for his invalid analysis (in that case, properly so). He made a poor choice of analysis, that is true. It makes the experiment invalid for proof-oriented analysis, that is true. *That* was a mistake on his part, that is true. The statement you cite, however, is not a mistake — it is a possibility that you disagree with, backed up with figures that do not prove that opinion but simply raise the possibility.

          Why do I think that the experiment was terminated? Because an elementary part of any experimental design is a stopping criteria — usually a certain number of trials or a time limit. Time limits in parapsychology are generally measured in months. No formal analysis is done — and no analysis of any kind is published — until the end of the experiment. Despite your apparent conviction that anyone who disagrees with you must be an idiot, I know that Dean knows and understands this — and has conformed to it in every experiment I know of. Analysis of further data would require a new formal design. Why would you assume that this experiment was *not* terminated when you have an analysis of the experiment in front of you?

          You seem to be confused about what “selective reporting” means. You are aware that these are technical terms that require some study beyond Stat 101 For Engineers and reading the Science Supplement in the local newspaper? Dean was instrumental in having the Parapsychological Association adopt the policy that all publications should commit to publishing all valid proof-oriented studies whether the results are positive or negative. It is one of (if not the) first scientific bodies (and still one of the few) to adopt such a measure. But a *failed* experiment is not the same as one with negative results — such an experiment proves nothing, positive or negative; it is irrelevant. In this case, however, the experiment has been made part of the record for anyone — even those who know less about experimental science than they appear to think — to see what it suggests.

        • Are you actually arguing that an invalid statistic marked with a reference to a footnote that does not appear in the paper is *not* a mistake? I wasn’t slamming Radin for a mistake; “mistakes are inevitable,” I noted. It’s the failure to correct it and to fully report what the data actually says that I find objectionable.

          You ask, “Why would you assume that this experiment was not terminated when you have an analysis of the experiment in front of you?” For one thing, because that very analysis says so. The abstract implies that data collection continued beyond the time he’s reporting on. The conclusion begins, “Preliminary analysis of the first year’s worth of data…” How do you get “terminated” from that?

          Now where was this “stopping criteria” that you say is an elementary part of experimental design? I could not find any indication that Radin had specified the data set on which he would report, nor what the statistics of record would be. Can you find it anywhere other than your imagination?

          I agree that Radin ought to know that a valid study requires pre-stating the metrics and data sets to be reported. (I think an on-going study can change these, as long as they do not apply retroactively.) Where was it? Kind of an unusual “Preliminary Analysis” that omits mention of any plan for the next analysis.

          And Topher, before you lecture me on requiring “some study beyond Stat 101 For Engineers,” you might remind yourself who keeps correcting your incompetent attempts to use statistics.


        • My last comment on this — unless Bryan can come up with something original worth discussing.

          First off, an apology. I said some things out of frustration that I shouldn’t have. Truly sorry.


          It’s easy to “prove” stupidity when you assume stupidity.

          You need to understand the way science actually works (in all fields) as opposed to the prettied up version that appears in textbooks and most science reporting.

          The statement you quote implies that further experimental series could be designed and done. It does not imply that any were. One publishes a “preliminary analysis” when it is important to get information “out there” for whatever reason, but the resources are not (and may never be, though one can hope) available to do a full analysis and publication. One reason, as here, is that the experiment was a failure and the results have no firm inferential use. Putting a lot of effort into saying “unfortunately, no conclusions could be reached” would be pointless.

          This report looks like a rushed attempt to get some information and suggestions down. It is unfortunate that the footnote is missing (or that the footnote was removed and its marker accidentally left in). As far as I know, Radin is no longer connected with the Boundary Institute.

          In exploratory analysis one frequently applies a statistical test whose assumptions are not strictly met — for example, assumptions of independence or normality. Nothing, therefore, can be strictly concluded from the test. What it provides is a rough idea of relationships. One makes a rough, frequently purely intuitive, estimate of the effects of the violation of assumptions — asking, e.g., “Is the magnitude of the difference seen here plausibly *not* due just to deviations from normality?” — and then uses that as a *suggestion* for something to be, perhaps, followed up on more rigorously later.

          So, lots of mistakes in the experiment. Some mistakes in presentation. No mistake in the statement you criticize.

          Wouldn’t it be more fruitful to try to understand an actual published, successful (whether positive or negative) experiment, rather than some brief notes on a failed experiment?

        • I’m feeling a bit pugnacious so I’ll comment upon your personal attack.

          You have indeed corrected me in some statistical errors. I certainly think that this gives you a reason to believe that I am careless at times with casual calculations done in an informal setting as part of a discussion. I do not think you have any basis to judge my overall competency.

          All your corrections are in line with knowledge that could be expected from a solid introductory course. You have demonstrated no particular knowledge of practical exploratory data analysis, experimental design, meta-analysis, or other subjects beyond the elementary. That you responded to my implication that your education is limited to such a level with a direct personal attack reinforces my suspicion that you have little statistical training beyond that level.

          That’s OK. None of us knows everything. (When I was in my 20s, of course, I did know everything. But despite all I’ve learned in the meantime, there are now huge areas that I don’t know much about. Odd, isn’t it?)

        • The problem wasn’t usually your calculation; you simply didn’t know what you were talking about. Check out your statistical analysis of Jacob’s first experiment. What was that “multinomial” thing? Shouldn’t someone who states a chi-squared statistic know what it measures?

          At the time I pointed out those mistakes, I had not yet read your claim:

          “I should give my qualifications — I’m a computer scientist who has been involved with parapsychology for somewhere around 35 years. My specific area of expertise is statistical analysis of psi experiments and statistical computing.”

          I wonder how many other parapsychologists are as deluded.

        • When someone starts in on personal attacks, as I believe I said elsewhere, it just means that they have exhausted logic and evidence. Ultimately, it’s the argument that counts not the source of the argument – even if it was typed at random by a monkey.

          I’ve made some errors. None of those was as egregious as either of the following two:

          1) Thinking that “DOT” is a top-level ICANN domain. There are what, 10 general-purpose “generic” top-level domains? Plus the country codes, of course, but those are all two letters. Not a whole lot to know.
          2) Publicly attacking someone’s professional competence on the Internet. That is something my son learned about the Internet in grade school. For the record, I hereby relinquish my right to sue and recover damages under national and international defamation law (specifically libel in the US) for this statement. I do not, of course, relinquish such rights for future statements (that’s not a threat, I can’t imagine enforcing them, but one has to protect oneself – I wouldn’t want to sign away my rights if you were to start up a systematic smear campaign or something). My consultees’ opinions about my competence are not likely to be affected by the statements of some random person on the Internet, so “no damage, no foul”.

          If I had a deep emotional need to prove my superiority, I could take either or both of these as reason to proclaim your lack of a modern technical education. All it really shows is that when we dash something out in a casual discussion without bothering to double-check, we sometimes make mistakes.

          As for the “multinomial” thing: Psychologists refer to it as “projection” when you attribute your own cognitive processes to others. That *you* didn’t understand what I was doing doesn’t mean that *I* didn’t understand what I was doing.

          When the results were posted I thought of a “clever” way to test the hypothesis — one that seemed more direct than the traditional chi-square test. What’s more, it was easily computed. I whipped out my calculator, made the computation, and posted it. Only later did I realize that it was exactly equivalent to a chi-square test. In fact, I had essentially recreated the justification for the chi-square test, which is a test of the internal variance of a multinomial distribution. The site is down as I write this, so I can’t check, but there may have been a computational error in there as well; my main fault, though, was using a sophisticated test where a simple one — a cookbook test taught without justification in most introductory stat courses — would have done.
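For the record, the equivalence Topher describes is easy to verify numerically: with equal expected counts, Pearson’s chi-square statistic is exactly a scaled version of the internal variance of the multinomial counts. The counts below are invented purely for illustration:

```python
from statistics import variance

# Invented observed counts in k equally likely categories.
counts = [27, 18, 22, 31, 16, 24, 19, 23]
k = len(counts)
E = sum(counts) / k   # expected count per category under uniformity (22.5)

# The cookbook Pearson chi-square statistic.
chi2 = sum((o - E) ** 2 / E for o in counts)

# The same number via the sample variance of the counts: the mean of the
# counts equals E, so (k - 1) * s^2 = sum((o - E)^2), and therefore
# chi2 = (k - 1) * s^2 / E.
chi2_from_variance = (k - 1) * variance(counts) / E

print(chi2, chi2_from_variance)  # both 7.555...
```

So the “multinomial variance” route and the cookbook formula are the same test in different clothes, at least in the equal-expectation case.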

          I’m not going to get into a name calling exchange, so why don’t you show some class and stop with the Ad Hominem attacks?

        • Topher, if you look at the discussion of Experiment 1 results, you’ll see my responses to your statistical blunders were purely factual and straightforward. At the time, you had not yet made it personal, at least not that I’d seen.

          If you did not want your statistical expertise to be at issue, why brag about it? If you didn’t want to hear about it in this thread, then lecturing on my need for study beyond Stat 101 was a mistake. That drew a single harsh sentence from me.

          Your next post began by announcing “My last comment on this — unless Bryan can come up with something original worth discussing,” followed by an apology for saying things out of frustration. You *so* could have left it there. It took you four hours to follow up your own last comment and re-open the issue. You got “pugnacious”. Then you got slammed.

          You made it personal. You wouldn’t drop it. You got the thread you wanted.

        • I shouldn’t reply but I will. I had originally decided to “turn the other cheek” and ignore the personal attack, so I said nothing about it in my first note. Being human, it continued to bother me, so I added to my original. What I meant was that I would try to avoid an exchange like this one. OK, I’m not succeeding, though I’ll work at it.

          I didn’t make it personal. I made a factual comment that there was more knowledge needed here than one got from a typical shallow education about statistics and the processes of science. It was intended as a comment for anyone who thought that technical language could be fully understood without education or experience. I had no idea whether or not it applied to you. I realized that though it was factual and relevant it was not necessary to have said it, and that it might be taken negatively by some. So I apologized.

          Apparently, I hit a nerve.

          I did not make it personal — you took it as personal and made it personal by degenerating the discussion to Ad Hominem. My comments did not, however, stoop to that level. There was no “slam” then or ever. I simply pointed out that everyone makes dumb mistakes including you — and that that is no big deal. I gave two examples one of which you had acknowledged. As for the other, do you really think that public libel is smart? Do you really believe that saying that you are not infallible is a “slam”?

          And I didn’t brag — at least there. I stated my qualifications.

          Most scientists and engineers do fine on cookbook stats from an introductory course or two, supplemented by material conveyed in articles and by mentors specific to their field. It’s not an insult — there is too much knowledge out there to try to know everything. Sometimes, though, the simplifications and exclusions in such a background are inadequate — and may even be deceptive — when dealing with a problem outside the routine questions of the specialized field one has trained in. It is reasonable to point out when that is the case, but not strictly necessary. I shouldn’t have bothered, given your demonstrated focus on the authority and qualifications of the speaker rather than the content of the argument. I apologize again.

        • Topher claimed:
          “It was intended as a comment for anyone who thought that technical language could be fully understood without education or experience. I had no idea whether or not it applied to you.”

          Yet in #, above, he called it, “my implication that your education is indeed limited to such a level.”

          Topher, have you considered the damage you could have done? What if people believed your claim of expertise, and there had been no one around with the background to correct your repeatedly incompetent analysis and false results?

        • I left out the adjective “unintended”.

          Well, most of them I would have caught, down the line a bit, I think. When I casually play with something I usually go back and look at it after a few days, and they were all just clumsy errors — nothing requiring much deep thought to spot.

          Furthermore few of my errors made much difference to the conclusions. We both agreed, for example, that no conclusions about psi could be reached from any of the experiments. You just wanted to be very emphatic about it and spent a lot of effort proving that they weren’t just beyond the threshold of evidence but far beyond the threshold of evidence.

          And have you considered the consequences of all the misinformation about the scientific process, logic and parapsychological history that you have stated with such great confidence? I realize that you haven’t acknowledged such flaws, but we both know that they are there.


  2. A trivial quibble: the very large number is spelled “googol”, not “google”. The search-engine company may be big, but it’s not *that* big.

    • Right. I remembered that its name was a play on the number but forgot that the number itself is spelled differently. Thanks for the correction.