NRLgate -
Plagiarism by Peer Reviewers


Sections 3.7 through 3.12


This page is part of the NRLgate Web site presenting evidence of plagiarism among scientific peer reviewers, involving 9 different peer review documents of 4 different journal and conference papers in the fields of evolutionary computation and machine learning.

This page contains sections 3.7 through 3.12 of "Evidence of plagiarism in X and Y of a paper on optimal control strategies submitted to the Machine Learning Conference."

Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page

3.7. The unlikely coincidence that both reviewers X and Y employed the same quaint and infrequently used word in their reviews

Reviewer Y refers to,
previously used in machine learning studies.

(Emphasis added).
Reviewer X refers to
Previous studies, ...

(Emphasis added).
That is, both reviewers X and Y use the somewhat quaint and infrequently used word "studies" --- instead of a more ordinary word such as "work," "research," "literature," "paper," or "article."

How frequently do contemporary peer reviewers in this field of science use the word "studies"?

I was the general chair of the Genetic Programming 1996 Conference and have access to a computer file containing the 64,109 words of the 316 paper review forms from the 86 peer reviewers of the genetic programming papers at the GP-96 conference. This corpus of writing by peer reviewers in this field gives the frequency of occurrence of "studies" and of the five alternative words "work," "research," "literature," "paper," and "article."

Both reviewers X and Y of my MLC paper on optimal control strategies appear to belong to the small minority of reviewers who use the word "studies."
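The kind of tally involved is easy to reproduce. The sketch below is a minimal illustration, not the actual GP-96 analysis: it assumes the review forms have been concatenated into a single plain-text file (the file name gp96_reviews.txt is hypothetical) and simply counts how often each candidate word occurs.

    # Minimal sketch of a word-frequency tally over a corpus of review forms.
    # The file name is hypothetical; the word list comes from the discussion above.
    import re
    from collections import Counter

    CANDIDATE_WORDS = ["studies", "work", "research", "literature", "paper", "article"]

    with open("gp96_reviews.txt") as f:
        text = f.read().lower()

    # Split on anything that is not a letter so punctuation does not distort the counts.
    tokens = re.findall(r"[a-z]+", text)
    counts = Counter(tokens)
    total = len(tokens)

    for word in CANDIDATE_WORDS:
        share = counts[word] / total if total else 0.0
        print(f"{word:12s} {counts[word]:6d}   ({share:.4%} of {total} words)")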

Given the infrequency of usage of this particular word by contemporary peer reviewers in this field, isn't it improbable that both reviewers X and Y of the same paper would simultaneously make this particular choice of words (in the space of short reviews containing a few hundred words each)?
On the other hand, if one peer reviewer had created his review while looking at an already written review of the same paper, the probability of this particular choice of words is no longer small.
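The arithmetic behind this contrast can be made explicit. If some fraction p of reviewers in this field would use the word "studies" somewhere in a short review, then two independently written reviews both contain it with probability of roughly p squared, whereas if the second reviewer writes with the first review in front of him, the probability collapses back to roughly p. The value of p below is purely illustrative, since the actual corpus figure is not reproduced here.

    # Back-of-envelope comparison: independent reviewers versus a copied review.
    p = 0.10                      # illustrative fraction of reviewers who use "studies"
    p_independent = p * p         # both reviewers hit on the word by coincidence
    p_copying = p                 # the second review simply echoes whatever the first used
    print(f"two independent reviews: about 1 in {1 / p_independent:.0f}")
    print(f"one review echoing the other: about 1 in {1 / p_copying:.0f}")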

This is not the only occasion when we encounter two peer reviewers of the same paper both belonging to the small minority of peer reviewers who use this particular quaint and infrequently used word. The infrequently used word "studies" appears 2 times in review #2 of my paper submitted to the Evolutionary Computation journal and 1 time in the signed, non-anonymous review to the MIT Press of my first book. See section 7.6.

3.8. The unlikely coincidence that both reviewers X and Y would abbreviate the paper's title and that they would both abbreviate it to the same 6 words

The title of my submitted MLC paper on optimal control strategies had 14 words.

Both reviewers X and Y abbreviated the paper's title. Moreover, they converged onto the same 6 words:
Genetic breeding of optimal control strategies
How common is the practice of abbreviating the author's title on a paper review form?

We again make reference to the computer file containing the 64,109 words of the 316 paper review forms from the 86 peer reviewers of the genetic programming papers at the Genetic Programming 1996 Conference.

There were 27 reviews of papers whose titles contained between 11 and 17 words (i.e., 14 words plus or minus 3). In 63% of the cases, the reviewer entered the author's full exact title onto the paper review form without modification. Thus, the probability of randomly drawing 2 reviewers who abbreviate the author's title is about 1 in 9.
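The 1-in-9 figure follows from treating the 63% full-title rate as roughly two-thirds: if about one-third of reviewers abbreviate, two independently drawn reviewers both abbreviate with probability of about (1/3) x (1/3) = 1/9. A small check of the arithmetic, using both the rounded and the observed rates:

    # Chance that two independently drawn reviewers both abbreviate the author's title.
    p_abbrev_rounded = 1 / 3          # treating the 63% full-title rate as roughly 2/3
    p_abbrev_observed = 1 - 0.63      # the rate observed in the 27-review GP-96 sample
    print(f"rounded rate:  about 1 in {1 / p_abbrev_rounded ** 2:.0f}")    # about 1 in 9
    print(f"observed rate: about 1 in {1 / p_abbrev_observed ** 2:.0f}")   # about 1 in 7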

More importantly, the minority of peer reviewers who do modify an author's title abbreviate it in many different ways. Among the titles that were abbreviated by more than one reviewer, there was no case in which two reviewers abbreviated a title to identical words (or even to the same number of words). Indeed, no two reviewers even used the same approach to abbreviation.

Without the suggestive power of one already written review, isn't it improbable that reviewers X and Y would both abbreviate a title and that they would also converge on the identical shortened version?

We do not have sufficient data to compute the joint probability of convergent abbreviation; however, a rough guess is that it is around 1 in 100.

This is not the only occasion when I encountered two peer reviewers of the same paper both belonging to the small minority of peer reviewers who abbreviate the author's title and who coincidentally converge on the same abbreviation. As will be seen elsewhere, reviewers #1 and #2 abbreviated the 15-word title of my paper submitted to the Evolutionary Computation journal and they both converged on the same 8 words. See section 4.5.


3.9. The opening sentences of both reviews X and Y gratuitously provided the same unrequested information

The first section of the MLC paper review form asks the reviewer to evaluate the "significance" of the submitted paper.
Significance: How important is the work reported? Does it attack an important / difficult problem or a peripheral / simple one? Does the approach offer an advance.

(Emphasis added).
Reviewer X starts his review,
The papers presents one example of using a genetic algorithm to learn control strategies for a version of the cart-and-pole system.

(Grammatical error of "papers" in original).
(Emphasis added).
Reviewer Y starts his review,
This paper concerns the application of ideas from genetic algorithms to a broomstick balancing problem.

(Emphasis added).
Notice how the first sentence of both reviews ignored the specific question concerning "significance" that was actually asked by the paper review form. Instead, both reviews were unresponsive to the question being asked. They both began by gratuitously providing an unrequested summary of the subject matter of the paper.

The paper review form didn't ask reviewer X what the paper "presented" (or "concerned" to use reviewer Y's word). It asked about "significance."

How improbable is it for peer reviewers in this field of science to unresponsively begin their answer to a paper review form's question on significance with an unrequested summary of the paper?

We again make reference to the computer file containing the 64,109 words of the 316 paper review forms from the 86 peer reviewers of the genetic programming papers at the Genetic Programming 1996 Conference.

As it happens, the first substantive question on the paper review form used by the GP-96 conference concerned the "significance" of the paper.
Significance of the Problem: Is the subject of this paper important?

98.4% (311) of the 316 review documents began by addressing the question that was actually asked by the paper review form. Only 1.6% (5) of these 316 review documents began by unresponsively and gratuitously providing a summary of the subject matter of the paper.

Both reviewers X and Y of my MLC paper on optimal control strategies appear to belong to the small minority of reviewers with this unusual pattern of behavior.

Given the infrequency of this unresponsive behavior by contemporary peer reviewers in this field, isn't it improbable that both reviewers of the same paper would exhibit this particular unusual behavior, unless one peer reviewer were plagiarizing his review from an already written review of the same paper?
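As a rough quantification of that improbability, treat the GP-96 rate (5 of 316 reviews) as representative of the field and the two reviews as independently written; both are assumptions, but they give a sense of the order of magnitude:

    # Back-of-envelope: chance that two independently written reviews of the same paper
    # both open with an unrequested summary, taking the GP-96 rate as representative.
    p_summary_opening = 5 / 316            # about 1.6% of the GP-96 review documents
    p_both = p_summary_opening ** 2        # independence assumption
    print(f"one review:   about 1 in {1 / p_summary_opening:.0f}")   # about 1 in 63
    print(f"both reviews: about 1 in {1 / p_both:.0f}")              # on the order of 1 in 4,000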

This is not the only occasion when we encounter two peer reviewers of the same paper both belonging to the small minority of peer reviewers who have the habit of unresponsively providing a summary of the submitted paper instead of the requested evaluation of its significance. As will be seen elsewhere, both reviewers A and B of my MLC paper on empirical discovery belong to this same small minority. See section 2.10.


3.10. Both reviewers X and Y substituted "genetic algorithm" in lieu of the author's chosen term

Reviewer X begins his review of my paper on genetic programming (GP) as follows:
The papers presents one example of using a genetic algorithm to learn control strategies for a version of the cart-and-pole system.

(Grammatical error of "papers" in original).
(Emphasis added).
Reviewer Y begins his review of my paper on genetic programming (GP) as follows:
This paper concerns the application of ideas from genetic algorithms to a broomstick balancing problem.

(Emphasis added).
Notice that both reviewers X and Y imposed this substitution of "genetic algorithm" for the author's chosen term, "genetic programming" (a term perhaps offensive to both reviewers), that actually appears in the submitted paper.

This is not the only occasion when we encounter two peer reviewers of the same paper both belonging to the small minority of peer reviewers who impose their own term in this manner in lieu of the author's chosen term. As will be seen elsewhere, both reviewers A and B of my MLC paper on empirical discovery have this same propensity. See sections 2.11 and 5.4.


3.11. The opening sentences of both reviews X and Y were similar in structure

Notice the overall semantic symmetry of the opening sentences of reviews X and Y.
Review X begins,
The papers presents one example of using a genetic algorithm to learn control strategies for a version of the cart-and-pole system. The problem of learning non-linear control strategies is an important one, but the particular problem addressed here is a highly constrained case.

(Grammatical error of "papers" in original).
(Emphasis added).
Review Y begins,
This paper concerns the application of ideas from genetic algorithms to a broomstick balancing problem. The genetic search process found a control algorithm that solved the problem better than a suboptimal control strategy designed by the second author. The control algorithms found by the genetic search were represented as Lisp S-expressions from a limited set of functions and atoms.

(Emphasis added).

3.12. The time sequence of the plagiarism is suggested because reviewer Y correctly identified the 3 dimensions of the problem a mere 2 sentences before he complained that the paper failed to properly identify them

The previous sections showed how both reviewers X and Y proceeded in lock-step. The first 10 sentences of review Y are shown below. Sentences 6 and 7 (in bold) are the end of reviewer Y's gratuitous "summary" of the subject matter of the paper. Notice that reviewer Y correctly identifies the 3 dimensions in sentence 6. Sentence 8 is the almost identical 3-part explanatory sentence common to reviews X and Y that was discussed above. Sentence 9 (in bold) is the complaint that the submitted paper failed to properly identify the 3 dimensions of the problem.
This paper concerns the application of ideas from genetic algorithms to a broomstick balancing problem. The genetic search process found a control algorithm that solved the problem better than a suboptimal control strategy designed by the second author. The control algorithms found by the genetic search were represented as Lisp S-expressions from a limited set of functions and atoms. Whether they evaluated to a positive or to a negative quantity was used to determine the bang-bang force applied in each control situation. The standard broomstick balancing physics was used, but with a new control objective not previously used in machine learning studies. The goal was to drive the system in minimum time to a near-zero value for the cart velocity, pole angle, and pole angular velocity. Cart position was apparently ignored. The broomstick balancing problem was the standard two-dimensional one in that the cart moved only along a one dimensional track and the pole could swing only forward and back, not right to left. It is not clear what the authors mean by calling their problem "the three dimensional broom balancing problem." The authors do not discuss why they use a broom balancing problem so different from that used by previous machine learning researchers.

(Emphasis added).
As can be seen, reviewer Y correctly identified the 3 dimensions of the problem in his sentence 6, namely
the cart velocity, pole angle, and pole angular velocity.

(Emphasis added).
What is happening here?

Apparently, reviewer Y initially followed reviewer X's lead by beginning his review with a gratuitous (and larger) "summary" of the subject matter of the paper. Reviewer Y's 7-sentence summary is reasonably accurate. Then, starting with sentence 8, reviewer Y apparently shifted gears and reverted to reviewer X's already written review for guidance. In sentence 8, reviewer Y paraphrased the previously discussed 3-part explanatory sentence using almost identical words as reviewer X. Then, reviewer Y apparently continued under the spell of reviewer X's already written review in sentence 9. Even though reviewer Y had himself just accurately identified the 3 dimensions of the problem (sentences 6 and 7), reviewer Y parroted (in sentence 9) the complaint of reviewer X:
It is not clear what the authors mean by calling their problem "the three dimensional broom balancing problem."

(Emphasis added).
Not clear?

Apparently, this entire process of plagiarism was so thoughtlessly mechanical that reviewer Y never thought about the absurdity of what he was typing.

Finally, reviewer Y continued under the spell of reviewer X and reworded reviewer X's complaint about the alleged non-standardness of the problem (sentence 10).

The contradiction between reviewer Y's sentences 6 and 9 strongly suggests the time sequence of the plagiarism --- namely, that reviewer Y plagiarized from reviewer X. Even though reviewer Y fully and correctly grasped the 3 dimensions of the problem (as shown by his own sentences 6 and 7), reviewer Y was apparently so preoccupied with the mechanical task of parroting reviewer X's already written words that he missed the contradiction between what he himself typed in sentences 6 and 9. This inadvertent error by reviewer Y not only shows that his review was derived from reviewer X's already written review, but it also strongly suggests the time sequence of the plagiarism. If the plagiarism had occurred in the other order, this internal contradiction would probably have shown up in reviewer X's review.


Author: John R. Koza
E-Mail: NRLgate@cris.com
