
Friday, April 17, 2020

Peer review problems

The Harmonic Oscillator
Peer-reviewed articles are the “gold standard” for readers seeking true facts, until they aren't.

My tale of woe has its origins in 1965-6, when I published an article that I revisited last year in preparation for adding material to Wikipedia. Much to my shame and chagrin, the article had a severe error in it. Since it appeared in a peer-reviewed, refereed journal, the reader had the right to expect that its contents were correct, which they weren't! What to do?

An erratum could be submitted to the journal, but who would ever find it after stumbling on the original article?

Since the article has been cited only once in 50+ years, it is safe to say it will die the natural death to which most unimportant published papers succumb. But there is a lesson to be learned from it, one which, in this anti-science, anti-fact age, might be worth learning: peer review is fundamentally flawed.

Peer review is the process by which editors ask “learned” members of the same research community to read a manuscript and assess the question: should it be published in our journal? For experimental work, the referee usually cannot take the time or make the expenditure required to actually reproduce the findings, so s/he attempts an educated guess as to the veracity of the work based on, well, I'm not exactly sure what. Reasonableness? Plausibility? The author's address and/or credentials?

For computational submissions, such as modelling, the same inability to reproduce the results in the submitted paper implies that an act of faith takes place when the referee accepts the submission. Even given the source code, reading such code is a Herculean task; debugging it and checking the assumptions hidden in it is asking too much of volunteer referees, and makes it impossible to certify that the code is “correct” and the results trustworthy. Perhaps the results seem plausible. Perhaps the computer “predictions” match the experimental data in some way. Perhaps the “predictions” match the referee's prejudices.

For traditional mathematics of the “paper and pencil” variety, refereeing means validating, i.e., re-proving that the assertions are true.

Which brings me back to my (erroneous) paper. The paper “looks” right. The error is not subtle, but it is hidden in an inverted plus-or-minus ($\mp$) sign. I think I proved the top-sign equation but assumed the minus-sign results, and the resultant equation pair seemed OK.
Here is the equation:
\begin{equation*}
[A_\pm , L^2 ] = \mp 2 \hbar^2 A_\pm \mp 2 \hbar A_\pm L_z \pm 2 \hbar A_z L_\pm
\label{eqn1}
\end{equation*}
Since all the notes for writing this paper are gone, there is no way to reconstruct how this error occurred, but it is apparent that reading the equation does not raise suspicions about its veracity. As you can see, our erroneous equation is the second half of equation 10.

However, further down in the paper, there is a notational error which is apparent, obvious, and inexcusable. I wrote ``It can be shown that $A_-$ lowers $|\ell^*,\ell^*>$ to $|\ell^*-1,\ell^*-1>$'', where $\ell^*$ is the maximum value of $\ell$. This is absolutely false! That the referee missed it (and that I actually typed it) is just beyond belief. Even a casual reader of this particular paper, with a shred of knowledge of the physics involved, will realize immediately that my notation is wrong.
Notice that a few lines above, I wrote $|\ell,-\ell> \rightarrow |\ell +1,-\ell - 1>$. So, how could it be that a referee didn't catch this error?

Back then, copious publication was the requirement for promotion, tenure, and (to a certain degree) salary increases. At that time, the need for external funding of research was building, but had not yet become the all-consuming driving force for academic success that it now is. Then, publications indicated a measure of progress; now, they reflect successful fulfillment of promises made in funded proposals. Since refereeing manuscripts was an unpaid, extracurricular activity, referees had as their sole reward the advance knowledge of what was going to be published. Some scholars would ask their graduate students to read the manuscripts for them.

Although not onerous, refereeing was neither rewarded nor appreciated, and was not usually included in annual activity reports. In theory, it was one's professional responsibility; in practice, it was just a burden. Whether this remains true today is unknown to me.

But the system failed then and, one suspects, is even less likely to succeed now. As in almost all human activities, trust trumps proof. There is no guarantee that something written down, even in a peer-reviewed journal, is correct or true.

So how do we get to “truth” in science? For experimental work, replication of methodology as a precursor to some other project generally leads to verification (or condemnation). For theoretical work, the same holds true, though the details differ.

Only in mathematics do we get certainty. Consider the childhood exercise of adding up the integers from 1 to, say, 10. Every educated adult knows (or can know) that the answer is (10)(11)/2. The proof is to write the sum out twice, once backwards, and add the two together, i.e.,

1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9 + 10

10 + 9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1

Adding column by column gives

11 + 11 + 11 + 11 + 11 + 11 + 11 + 11 + 11 + 11,

i.e., 10 terms, each of value 11. The double count has to be corrected by dividing by 2, and there we are with the incredible Gaussian (is the story about the young Gauss true?) derivation of
$$
\frac{n(n+1)}{2}
$$
true here, in China, on the Moon, on Alpha Centauri, at the creation of the universe, at the death of the universe, forever! Now that's TRUE. Certainly!
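The pairing argument above can even be checked mechanically. Here is a minimal Python sketch (the function name `gauss_sum` is my own choice, not from any paper) that computes the closed form and verifies it against a direct sum for the 1-to-10 example:

```python
def gauss_sum(n):
    """Sum of 1..n via Gauss's pairing: n pairs each summing to n + 1,
    which double-counts the total, hence the division by 2."""
    return n * (n + 1) // 2

# The childhood exercise: 1 + 2 + ... + 10 = (10)(11)/2 = 55.
assert gauss_sum(10) == sum(range(1, 11)) == 55
```

Of course, a finite check like this is exactly the kind of plausibility testing that falls short of proof; the proof is the pairing argument itself, which holds for every n.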

The dross falls away, buried and forgotten in the avalanche of papers. The real, the “true,” appears to emerge naturally over time. A system based on honesty and integrity stumbles on.