NRLgate -
Plagiarism by Peer Reviewers


Sections 4 through 4.1


This page is part of the NRLgate Web site presenting evidence of plagiarism among scientific peer reviewers involving 9 different peer review documents of 4 different journal and conference papers in the fields of evolutionary computation and machine learning.

This page contains sections 4 and 4.1 of "Evidence of plagiarism in reviews #1, #2, and #3 of a paper on electrical circuit design submitted to the Evolutionary Computation journal."


Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page

4. Evidence of plagiarism in reviews #1, #2, and #3 of a paper on electrical circuit design submitted to the Evolutionary Computation journal

This section presents numerous pieces of evidence indicating the need for an impartial investigation and determination of whether there was plagiarism among the 3 scientific peer reviewers who reviewed the paper that I submitted to the Evolutionary Computation journal (ECJ) on the subject of applying automatically defined functions and genetic programming to electrical circuits (paper EC-JG-9209-0003).

There are numerous similarities between the reviews #1, #2, and #3 that I received for my ECJ paper.

4.1. Reviewer #2 referred, in his review, to a specific comment raised by reviewer #1, in his review --- thereby indicating that reviewer #2 had review #1 in front of him when he wrote his review

4.1.1. Reviewers #1 and #2 both use the word "hard" in a colloquial way in questioning the difficulty of the problem in the submitted paper


It is certainly legitimate for a peer reviewer to question the difficulty of the problem being discussed in a submitted paper. On the other hand, it is somewhat surprising that it would occur to anyone to question the difficulty of applying genetic programming to electronic circuit design. Nonetheless, both reviewers #1 and #2 raise this same question.

Reviewer #1 starts his response to question 3 of the journal's paper review form,
EVALUATION. Does the author carefully evaluate the approach? Does the
paper include systematic experiments, a careful theoretical analysis, or
give evidence of generality?

> This is an area of concern. There is no sense of the underlying
> difficulty of the problem. Is this a hard problem for current
> engineering methods? Given the goal of finding a Lisp expression
> for the system response, how complex is the space being explored
> by GP? How would random search over the same bounded space of
> Lisp expressions perform?

(Emphasis added)

Reviewer #1 here uses the word "hard" in a purely colloquial way (i.e., to mean "difficult"). Very few people in this field of science would use the word "hard" in this colloquial way, because "hard" has a specialized technical meaning in computer science (as in "NP-hard") that is entirely inappropriate here.

How few?

I was general chair of the Genetic Programming 1996 Conference and have access to a computer file containing the 64,109 words of the 316 paper review forms from the 86 peer reviewers of the genetic programming papers at the GP-96 conference. In this file of 316 contemporary peer reviews from this field of science, there was only 1 instance of the word "hard" being used in this colloquial way in challenging a paper's difficulty.

In contrast, many people would use a word such as "difficult" to raise this question. Specifically, in the computer file, some form of the word "difficult" appears 60 times in 316 reviews. Other reviewers, of course, used other words to express this same idea.
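This kind of frequency count can be checked mechanically. The following is a minimal sketch in Python (not part of the original analysis), assuming the 316 GP-96 review forms are concatenated into a single plain-text file; the filename gp96_reviews.txt is hypothetical. It tallies the word "hard" and all inflected forms of "difficult":

    # Minimal word-frequency sketch. The filename is hypothetical; it assumes
    # the 316 GP-96 review forms are concatenated in one plain-text file.
    import re
    from collections import Counter

    with open("gp96_reviews.txt") as f:
        words = re.findall(r"[a-z]+", f.read().lower())
    counts = Counter(words)

    print("hard:", counts["hard"])
    # "some form of the word 'difficult'": sum difficult, difficulty,
    # difficulties, and any other inflected forms.
    print("difficult*:", sum(n for w, n in counts.items() if w.startswith("difficult")))

Note that a raw count of "hard" would also catch technical uses (e.g., "NP-hard"); isolating the single colloquial use that challenges a paper's difficulty still requires reading each occurrence in context, as was done for the claim above.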

Yet reviewer #2 also uses this same word "hard" in this same colloquial way in challenging the paper's difficulty. Reviewer #2 responds to question 3,
EVALUATION. Does the author carefully evaluate the approach? Does the
paper include systematic experiments, a careful theoretical analysis, or
give evidence of generality?

> ...
> p. 19, fig. 10: The figure shows that finding the correct function H(t)
> is not necessary in order to give good input response. This again
> raises the question: How hard is it to find some response function that
> gives an adequate input response, i.e., how dense is the search space
> with good sol[utions]?

(Emphasis added)
Isn't it improbable that two independently-acting reviewers would converge to this same unconventional usage of this word?
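For a rough sense of just how improbable, assume (purely for illustration) that a reviewer who challenges a paper's difficulty picks the word "hard" rather than a word like "difficult" with probability about 1/61, the base rate suggested by the GP-96 corpus above (1 colloquial "hard" versus 60 forms of "difficult"). If the two reviewers chose their words independently, a back-of-envelope calculation in Python gives:

    # Back-of-envelope estimate; the 1/61 base rate is an assumption
    # extrapolated from the GP-96 corpus (1 colloquial "hard" vs. 60 "difficult").
    p_hard = 1 / 61        # chance a single reviewer picks "hard"
    p_both = p_hard ** 2   # two independent reviewers both pick "hard"
    print(f"P(both) = {p_both:.6f}")   # about 0.000269, roughly 1 in 3,700

And this figure ignores the further coincidences documented below (same section of the review form, same clarification, same switch to questions), each of which would make the joint probability smaller still.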

4.1.2. Both reviewers #1 and #2 converged on the same inappropriate section of the paper review form to locate their question about "hardness"


We have already seen several instances in reviews A, B, X, and Y where 2 reviewers of the same paper at the Machine Learning Conference coincidentally chose to locate a particular idea (using similar words and phrases) in the same section of the paper review form --- even though there were several other reasonable alternative places in the paper review form for the idea.

Question 3 (entitled "evaluation") is the location chosen by both ECJ reviewers #1 and #2 for their joint question about "hardness." Question 3 is a completely inappropriate place for this shared idea. This idea could reasonably have been placed in "discussion" or "general." However, the clearly correct place for saying that the paper is insignificant and makes no contribution to the field is in response to question 1 of the paper review form concerning "contributions."
GOALS AND CONTRIBUTIONS. Does the author clearly state the research
goals of the work? Does the paper clearly indicate what the
contributions are?

(Emphasis added).
Of course, as always, I am not criticizing the reviewers' competence in jointly placing a particular comment in the same inappropriate section of the paper review form (just as I do not criticize them for their unconventional colloquial use of the word "hard").

4.1.3. Reviewer #2 referred, in his review, to a specific comment raised by reviewer #1, in his review


However, the significance of this particular paragraph is not the fact that two reviewers raised the surprising issue of difficulty, that they both used the word "hard" in a colloquial way, or that they both placed this particular comment in the same inappropriate spot in the paper review form. The significant point is that reviewer #2 referred, in his review, to a specific comment raised by reviewer #1, in his review.

Reviewer #1 starts his response to question 3 of the journal's paper review form,
EVALUATION. Does the author carefully evaluate the approach? Does the
paper include systematic experiments, a careful theoretical analysis, or
give evidence of generality?
> This is an area of concern. There is no sense of the underlying
> difficulty of the problem. Is this a hard problem for current
> engineering methods? Given the goal of finding a Lisp expression
> for the system response, how complex is the space being explored
> by GP? How would random search over the same bounded space of
> Lisp expressions perform?

(Emphasis added)
Reviewer #2 responds to question 3,
EVALUATION. Does the author carefully evaluate the approach? Does the
paper include systematic experiments, a careful theoretical analysis, or
give evidence of generality?

> ...
> p. 19, fig. 10: The figure shows that finding the correct function H(t)
> is not necessary in order to give good input response. This again
> raises the question: How hard is it to find some response function that
> gives an adequate input response, i.e., how dense is the search space
> with good sol[utions]?

(Emphasis added)

Again?

It is reviewer #1, not reviewer #2, who raised the question of
Is this a hard problem for current engineering methods?

(Emphasis added)

The only other place where the issue of hardness is ever raised among these 3 peer reviews is in review #1. Reviewer #2 is raising the issue of whether the paper's problem is "hard" for the first time within the body of his own review document! Yet reviewer #2 says,
This again raises the question: How hard is it ...

(Emphasis added)
How can reviewer #2 possibly "again" raise a question that he hasn't yet raised within the body of his own review document?

The reason why reviewer #2 referred, in his review, to an issue raised by reviewer #1, in his review, is that reviewer #2 was looking right at the already written words of reviewer #1 when he wrote his review. As discussed in detail in section 4.2 below, review #1 was supplied by e-mail to peer reviewer #2.

Note that the time sequence for the plagiarism is that reviewer #2 plagiarized from reviewer #1.

4.1.4. Both reviewers #1 and #2 immediately followed the word "hard" with a clarification of exactly what they meant


Both reviewers #1 and #2 want to be very sure that the reader fully understands what they mean by "hard," so they both immediately clarify.

Reviewer #1 clarifies by saying,
EVALUATION. Does the author carefully evaluate the approach? Does the
paper include systematic experiments, a careful theoretical analysis, or
give evidence of generality?

> This is an area of concern. There is no sense of the underlying
> difficulty of the problem. Is this a hard problem for current
> engineering methods? Given the goal of finding a Lisp expression
> for the system response, how complex is the space being explored
> by GP? How would random search over the same bounded space of
> Lisp expressions perform?

(Emphasis added)

Reviewer #2 clarifies by saying,
EVALUATION. Does the author carefully evaluate the approach? Does the
paper include systematic experiments, a careful theoretical analysis, or
give evidence of generality?

> ...
> p. 19, fig. 10: The figure shows that finding the correct function H(t)
> is not necessary in order to give good input response. This again
> raises the question: How hard is it to find some response function that
> gives an adequate input response, i.e., how dense is the search space
> with good sol[utions]?

(Emphasis added)

Both reviewers raise the same issue, namely
how complex is the space being explored by GP?
which is the same issue as
how dense is the search space with good sol[utions]

Notice also that both reviewers #1 and #2 ask for a comparison with random search.

Of course, the one thing that reviewers #1 and #2 succeeded in clarifying by their lock-step progression of words and ideas is that reviewer #2 had the already written text of review #1 in front of him while he was writing his review.

4.1.5. Both reviewers #1 and #2 switched to the interrogatory mode at the same point


Notice also that reviewer #1 switched to the interrogatory mode (i.e., used question marks) as he clarified what "hard" meant,
how complex is the space being explored by GP?
and that reviewer #2 switched to the interrogatory mode as he clarified what "hard" meant,
how dense is the search space with good sol[utions]?
By the way, this is not the only occasion when we will encounter a peer reviewer who suddenly switches to the interrogatory mode to raise this exact same issue at this exact same point in his flow of thoughts.

Reviewer T3 of my TAI paper on pursuer-evader games said,
How does this compare with other [sear]ch techniques (e.g. random)? How full is search space with solutions for your applications?

(Emphasis added).
As will be seen later, reviewer T3 is almost certainly not the same person as ECJ reviewer #1 or #2 (he may well be ECJ reviewer #3). Yet we see this same switch to the interrogatory mode while this same point is being made (although with the new synonym "full"!). See Section 5.8.


Author: John R. Koza
E-Mail: NRLgate@cris.com
