Bell focuses on this latter problem, highlighting in particular the problem of multiple comparisons. Essentially, the more we look, the more likely we are to find something by chance (i.e., some segment of random noise that doesn't look random; e.g., when a thousand monkeys eventually string together a few words from Hamlet). This is an extremely well-known problem in neuroscience, and indeed in any other science fortunate enough to have at its disposal methods for collecting so much data. Various statistical methods have been introduced, and debated, to deal with it. Some have been criticised for not doing what it says on the tin (i.e., overestimating the true statistical significance; e.g., see here), but there is also an issue of appropriateness. Most neuroimagers know the slightly annoying feeling of applying the strictest correction to a data set and finding an empty brain. Surely there must be some brain area active in my task? Or have I discovered a new form of cognition that does not depend on the physical properties of the brain! So we lower the threshold a bit, and suddenly some sensible results emerge.
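To make the multiple-comparisons problem concrete, here is a small simulated illustration (the numbers and setup are hypothetical, chosen only to make the point): testing thousands of pure-noise 'voxels' at an uncorrected threshold yields hundreds of spurious 'activations', while a family-wise correction such as Bonferroni removes nearly all of them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate 10,000 "voxels" of pure noise: 20 subjects each, no real effect.
n_voxels, n_subjects = 10_000, 20
data = rng.normal(size=(n_voxels, n_subjects))

# One-sample t-test per voxel against a true mean of zero.
t, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

alpha = 0.05
uncorrected = np.sum(p < alpha)            # roughly 500 expected by chance alone
bonferroni = np.sum(p < alpha / n_voxels)  # family-wise correction: near zero expected

print(f"uncorrected 'active' voxels: {uncorrected}")
print(f"Bonferroni 'active' voxels:  {bonferroni}")
```

Around 5% of the null voxels pass the uncorrected threshold purely by chance, which is exactly why a blob-by-blob eyeball test on unthresholded maps is so dangerous.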
This is where we need to be extremely careful. In some sense, the eye can perform some pretty valid statistical operations. We can immediately see whether there is any structure in the image (e.g., symmetry), and we can tell whether there seems to be a lot of 'noise' (e.g., other random-looking blobs). But here we are strongly influenced by our hopes and expectations. We ran the experiment to test some hypothesis, and our eye is bound to be more sympathetic to seeing something interesting in noise (especially as we have spent a lot of hard-earned grant money to run the experiment, and are under a lot of pressure to show something for it!). While expectations can be useful (i.e., the expert eye), they can also perpetuate bad science: once falsehoods slip into the collective consciousness of the neuroscientific community, they can be hard to dispel. Finally, structure is a truly deceptive beast. We are often completely captivated by its beauty, even when it comes from something quite banal (e.g., a smoothing kernel, a respiratory artifact).
So we need to be conservative. But how conservative? To be completely sure we never say anything wrong, we should probably just stay at home and run no experiments: zero chance of false positives. But if we want to find something out about the brain, we need to take some risks. That said, we don't need to be complete cowboys about it either. Plenty of pioneers have already laid the groundwork for exploring data while controlling for many of the problems of multiple comparisons, so we can start to make sense of the beautiful and rich brain-imaging data now clogging up hard drives all around the world.
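This trade-off between missing nothing true and reporting nothing false is precisely what false discovery rate procedures try to manage. As an illustrative sketch (a generic Benjamini-Hochberg step-up procedure, not the specific method of any paper mentioned here), with simulated data mixing null voxels and voxels carrying a genuine effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# 9,500 null voxels plus 500 with a genuine effect (mean shifted to 1.0).
n_subjects = 20
null = rng.normal(size=(9500, n_subjects))
signal = rng.normal(loc=1.0, size=(500, n_subjects))
data = np.vstack([null, signal])

_, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

# Benjamini-Hochberg step-up procedure at q = 0.05:
# find the largest k with p_(k) <= q * k / m, then reject all p up to p_(k).
q = 0.05
m = len(p)
ranked = np.sort(p)
below = ranked <= q * np.arange(1, m + 1) / m
k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
threshold = ranked[k - 1] if k else 0.0

detected = p <= threshold
true_hits = detected[9500:].sum()   # signal voxels correctly detected
false_hits = detected[:9500].sum()  # null voxels falsely flagged

print(f"voxels surviving FDR: {detected.sum()}")
print(f"true effects among them: {true_hits}")
```

Unlike Bonferroni, which controls the chance of even one false positive and can leave the brain empty, FDR tolerates a small, controlled fraction of false discoveries in exchange for keeping most of the real signal: a principled way of taking "some risks".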
Update - For even more related discussion, see: