NRLgate -
Plagiarism by Peer Reviewers
Sections 5.4 through 5.11
This page is part of the NRLgate Web site presenting evidence of
plagiarism among scientific peer reviewers involving 9 different peer review
documents of 4 different journal and conference papers in the fields of
evolutionary computation and machine learning.
This page contains sections 5.4 through 5.11 of "Evidence of plagiarism
in reviews T2 and T3 of a paper on pursuer-evader games submitted to the
Tools for Artificial Intelligence conference (TAI), evidence that TAI reviewers
T1, T2, and T3 are affiliated with the Naval Research Laboratory, and similarities
linking reviews T1, T2, and T3 with various MLC and ECJ reviews."
Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter
to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page
5.4. Review T2 substituted "GA" in lieu of the author's chosen
term ("Genetic Programming") in the same way as MLC reviewers
A, B, X, and Y
Reviewer T2 of my TAI paper on pursuer-evader games begins his review,
- The idea of evolving Lisp expressions with GAs is an interesting
one and the work described in this paper is interesting.
- However, I feel that the experimental methodology used in this research
makes it difficult to evaluate the success of the work.
(Emphasis added)
However, I used the term "genetic programming" in my submitted
TAI paper.
The first sentence of 4 other review documents --- A, B, X, and Y
--- similarly imposed this substitution of "genetic algorithm"
or "GA" for the author's chosen term (perhaps offensive to both)
that actually appears in the submitted paper. See
section 2.11. See section
3.10.
For example, reviewer A of my MLC paper on empirical discovery said,
- This paper reorts on a technique of learning concepts expressed
as LISP expression using genetic algorithms. This is a topic of general
interest. The methodology adopted prevents a clear assessment of how much
over advance this approach represents.
- (Spelling error of "reorts" and grammatical error of "over advance" in original.)
(Emphasis added)
Note that we are probably not talking about plagiarism in this particular
section. We are simply saying that reviewer T2 is apparently the same person
as reviewer A or B and that reviewer T2 is apparently the same person as
reviewer X or Y. If that were the case, the institutional affiliation of
reviewers A or B and of reviewers X or Y would be the same as the apparent
institutional affiliation of TAI reviewer T2 (i.e., the Naval Research Laboratory).
5.5. The opening sentences of TAI review T2 and MLC review A contain
5 similarities
Reviewer T2 of my TAI paper on pursuer-evader games begins his review,
- The idea of evolving Lisp expressions with GAs is
an interesting one and the work described in this paper is interesting.
- However, I feel that the experimental methodology used in this
research makes it difficult to evaluate the success of the work.
(Emphasis added)
Reviewer A of my MLC paper on empirical discovery begins his review,
- This paper reorts on a technique of learning concepts expressed
as LISP expression using genetic algorithms. This is a topic
of general interest. The methodology adopted prevents a
clear assessment of how much over advance this approach represents.
- (Spelling error of "reorts" and grammatical error of "over advance" in original.)
(Emphasis added)
Note, again, that we are probably not talking about plagiarism in this particular
section. We are simply saying that reviewer T2 is apparently the same person
as reviewer A. If that were the case, the likely institutional affiliation
of reviewer A would be the same as the apparent institutional affiliation
of TAI reviewer T2 (i.e., the Naval Research Laboratory).
5.6. There are 4 similarities in another sentence of TAI review T2 and
MLC review A and they both contain a particular infrequently used word that
MLC reviewer X frequently uses
Reviewer T2 of my TAI paper on pursuer-evader games said,
- Good solutions are achieved so rapidly that is hard to judge
the difficulty of the problems (a comparison with random search
or other techniques would be helpful).
(Emphasis added).
Reviewer A of my MLC paper on empirical discovery said,
- In order to judge, it would be necessary to see the results
compared against an alternative search technqiue, perhaps
even random search.
- (Spelling error in "technqiue" in original)
(Emphasis added).
In addition to the similar elements appearing in these 2 sentences, notice
the word "judge."
Reviewer X of my MLC paper on optimal control strategies used this word
twice in his 501-word review:
- The technical soudness of the paper is extremely hard to judge.
- ...
- The data provided is insufficient to judge the merits of this
approach.
- (Spelling of "soudness" in original).
- (Emphasis added throughout this section.)
Here's the frequency of occurrence of the word "judge" in various
review documents:
Reviewer B - 0 times
Reviewer Y - 0 times
Reviewer #1 - 0 times
Reviewer #2 - 0 times
Reviewer #3 - 0 times
Reviewer T1 - 0 times
Reviewer T2 - 0 times
Reviewer T3 - 0 times
Reviewer T4 - 0 times
316 reviews by 86 GP-96 peer reviewers - 3 times (1-in-21,379 odds)
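A tally of this kind can be reproduced mechanically. The following is an illustrative sketch only; the review texts in the dictionary are hypothetical stand-ins (the actual 64,109-word corpus is not reproduced here), and the function simply counts whole-word, case-insensitive occurrences of a given word in each document:

```python
import re

def word_count(text: str, word: str) -> int:
    """Count whole-word, case-insensitive occurrences of `word` in `text`."""
    return len(re.findall(r"\b" + re.escape(word) + r"\b", text,
                          flags=re.IGNORECASE))

# Hypothetical stand-ins for the review documents, for illustration only.
reviews = {
    "Reviewer T2": "Good solutions are achieved so rapidly that is hard to judge ...",
    "Reviewer A":  "In order to judge, it would be necessary to see the results ...",
    "Reviewer B":  "The experiments are described clearly.",
}

for name, text in reviews.items():
    print(name, "-", word_count(text, "judge"), "times")
```

The `\b` word boundaries ensure that, for example, "judgement" is not counted as an occurrence of "judge."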
5.7. Reviewers T2, A, and B over-diligently filled in every blank on
the paper review form
Reviewer T2 for my TAI paper on pursuer-evader games placed the following
hand-written words in the "Comments to the author(s)" section
of his paper review form:
- Please see attached
Refer again to the computer file containing the 64,109 words of the
316 paper review forms from the 86 peer reviewers of the genetic programming
papers at the Genetic Programming 1996 Conference. Section 8 (entitled "Suggestions
to Author") on the 16-part review form used by the GP-96 conference
was entirely blank on only 6% (19 of 316) of the review documents. 4% (14
of 316) of the reviews did not provide any advice to the author in section
8, but nonetheless over-diligently filled in the blank line of section 8
with a vacuous phrase.
Reviewers A and B of my MLC paper on empirical discovery did not give the
author any advice in the corresponding section of the MLC paper review form,
but nonetheless over-diligently filled in the blank space with a vacuous
phrase.
TAI reviews T1 and T3 are both hand-written, and their authors filled in the "Comments
to the author(s)" section of the TAI paper review form in longhand.
This is not the only occasion when we encounter such over-diligence in filling
in every blank line on a form. See
section 2.9. See section
7.12.
5.8. Review T3 contained 2 sentences that are almost identical in
thoughts, words, and phrases to paragraphs contained in MLC review X and
ECJ review #2
Review T3 was hand-written and the copy supplied to me was clipped (apparently
in xeroxing). Reviewer T3 said,
- How does this compare with other [sear]ch techniques
(e.g. random)? How full is search space with solutions for
your applications?
Recall that reviewer X of my MLC paper on optimal control strategies said,
- One suspects that the search space defined by the functions is dense
with solutions. It would help to see comparison with another
search method, even random search, on the same search space.
(Emphasis added).
Recall that reviewer #2 of my ECJ paper said,
- Since results are obtained so quickly (within 50 generations) it
is especially important to evaluate the density of acceptable solutions
in the search space. This usually means comparison with some
baseline approach, perhaps random search.
(Emphasis added).
Reviewer T3 says,
- How full is search space with solutions ... ?
(Emphasis added).
Full?
What a strange choice of words!
"Dense" is the ordinary way of saying the same thing. I have never
seen the use of such a strained synonym for "dense." Of course,
if one felt compelled to use a different word instead of "dense,"
one might compromise with "full." The conclusion is that reviewer
T3 is indeed "full."
5.9. Both reviewers T2 and T3 zeroed in on the obscure issue as to why
the submitted paper did not use the mutation operation
All forms of evolutionary computation employ the operation of Darwinian
selection and reproduction.
The mutation operation generally plays a very small role in the genetic
algorithm. In fact, the dominant role of the crossover operation (as compared
to the mutation operation) distinguished John Holland's pioneering work
(1975) in inventing the genetic algorithm from earlier (and later) work
in the field of evolutionary programming and evolution strategies. Thus,
genetic algorithms (and genetic programming) are distinguished from other
forms of evolutionary computation in that they rely heavily on the crossover
(sexual recombination) operation. Genetic programming is a variant of the
genetic algorithm.
The mutation operation receives relatively little attention in the genetic
algorithms literature. It is not at the "top of mind" for most
practitioners of genetic algorithms.
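The crossover-versus-mutation distinction can be made concrete with a toy sketch (my own illustration, not code from any of the papers or reviews at issue): a genetic algorithm over bit strings in which one-point crossover is the only variation operator and the mutation rate defaults to zero, exactly the configuration the reviewers questioned.

```python
import random

def evolve(target: str, pop_size: int = 40, generations: int = 200,
           mutation_rate: float = 0.0, seed: int = 1) -> str:
    """Toy GA: selection plus one-point crossover; mutation optional (off by default)."""
    rng = random.Random(seed)
    n = len(target)
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    # Random initial population of bit strings.
    pop = ["".join(rng.choice("01") for _ in range(n)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:
            break
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)           # one-point crossover
            child = a[:cut] + b[cut:]
            # Mutation plays no role unless explicitly enabled.
            child = "".join(c if rng.random() >= mutation_rate
                            else rng.choice("01") for c in child)
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve("1111100000111110000011111")
print(best)
```

With a modest population, every bit position almost certainly has its correct value somewhere in the initial gene pool, so crossover alone can recombine those alleles toward the target; this is the sense in which the crossover operation, rather than mutation, carries the search in Holland-style genetic algorithms.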
Yet, reviewer T2 said, in his 141-word review,
- Finally, it is stated that mutation is not used. It is not
clear, then how the numbers are evolved in the solution to the simple pursuit
game (crossover does not appear to be sufficient).
(Emphasis added)
Review T3 was hand-written and the copy supplied to me was clipped (apparently
in xeroxing). Filling in a couple of minor words (shown by brackets), reviewer
T3 said, in his approximately 99-word review,
- [You] did not use mutation?? Do you have comparisons using
& not using mutation? Why [didn't] you use mutation? In particular,
if you only X-over subtrees, what happens if [some individual] is almost
optimal, but only the root of the tree has the wrong function?
(Emphasis added)
As previously mentioned, between 1988 and 1995 I submitted about 100
papers on genetic programming to various peer-reviewed conferences, journals,
and edited collections of papers. Almost 70 have now been published (or
have been accepted for publication). Each of these 100 submissions was
reviewed, on average, by 3 peer reviewers (sometimes by as many as 14).
Thus, I have received approximately 300 peer reviews of my submitted papers
on genetic programming over the years. This accumulation of peer reviews
is a not insubstantial sampling of the way a broad range of anonymous scientific
peer reviewers react and comment on technical papers in this field.
Except for these 2 reviewers of this TAI paper and one other reviewer (discussed
below), no reviewer has ever raised the issue of my not using the mutation
operation in my work (which I did not use in papers until about 1995).
It seems improbable that two independently acting reviewers --- in reviews of only
99 and 141 words --- would zero in on such an obscure issue.
By the way, although the word "use" is a common word, it is nonetheless
curious to see the same choice of verb in the space of such short reviews.
5.10. Both reviewers T2 and T3 were argumentative about the ability
of genetic programming to solve problems without the mutation operation
Reviewer T2 said,
- ... crossover does not appear to be sufficient.
(Emphasis added)
Reviewer T3 had a very focused observation about the mutation operation:
- In particular, if you only X-over subtrees, what happens if
[some individual] is almost optimal, but only the root of the tree has the
wrong function?
(Emphasis added)
As previously mentioned, the mutation operation is not at the "top
of mind" for most practitioners of genetic algorithms. There has been
very little research activity directed to this particular operation. However,
it should be noted, in passing, that William M. Spears of the Naval Research
Laboratory (Spears 1993) wrote a paper entitled "Crossover or Mutation?"
within less than a year of this review for the FOGA-2 workshop in which
he studied the mutation operation. It should also be noted, in passing,
that a paper by Kenneth DeJong and William Spears of the Naval Research
Laboratory (DeJong and Spears 1993, page 621) takes specific note of the
fact that I sometimes do not use the mutation operation in genetic programming.
Finally, it should be noted, in passing, that documents released by the U. S.
government under the Freedom of Information Act (FOIA) named one particular
employee of the Naval Research Laboratory (William M. Spears) as having
reviewed 40 papers in one nearby year (1994) in the field of evolutionary
computation. This number of papers is far beyond the number reviewed by
the typical scientist in this field and especially extraordinary for a student
with a master's degree. None of the foregoing necessarily means that Spears
is reviewer T3. Both Spears and another employee of the Naval Research Laboratory
(Schultz) are among the 4 TAI reviewers who are involved with evolutionary
computation; both of them could reasonably have rated themselves
as a "10" in terms of familiarity with evolutionary computation;
and either could possibly have been reviewer T3. It is also entirely possible
that neither of these 2 NRL employees were reviewer T3. Also, while the
evidence of plagiarism between reviews T2 and T3 is persuasive, the TAI
reviews are shorter and do not contain as many different linkages as, say,
the MLC or ECJ reviews. No final judgment or opinion should be formed at
this time on any of the matters herein. Instead, the truth concerning all
of these matters herein should be definitively determined in a thorough
and impartial investigation and factual determination made under the proposed
arbitration procedure by a retired federal judge.
Note that neither Spears nor Schultz was a member of the program committee
of the Machine Learning Conference or a member of the editorial board of the
Evolutionary Computation journal. As suggested above (and supported
by numerous additional indications in the next section below), TAI reviewer
T2, reviewer A of the submitted MLC paper on empirical discovery, reviewer
X of the submitted MLC paper on optimal control strategies, and reviewer #2
of the submitted ECJ paper may very well be the same person. Also, as suggested
above (and supported by numerous additional indications in the next section
below), TAI reviewer T1, reviewer B of the submitted MLC paper on empirical
discovery, reviewer Y of the submitted MLC paper on optimal control strategies,
and reviewer #1 of the submitted ECJ paper may very well be the same person.
If so, neither Spears nor Schultz can be reviewer T2, A, X, or #2. Similarly,
neither Spears nor Schultz can be reviewer T1, B, Y, or #1.
5.11. Reviewers T2, T3, and #3 are the only reviewers who raised the
issue of the mutation operation --- thereby suggesting that Evolutionary
Computation reviewer #3 is either TAI reviewer T2 or T3
As mentioned above, with one exception, no reviewer has ever raised the
issue of my not using the mutation operation in my work except for TAI reviewers
T2 and T3.
What was that exception?
Reviewer #3 of my paper submitted to the Evolutionary Computation
journal said,
- The authors clearly state some research goals, but I think they have
some "hidden agendas" that should be made explicit, most notably (1) to
demonstrate that crossover is really helping in the described problem,
(2) a second goal is probably to demonstrate that genetic programming works
on a novel application (control problems), or possibly, works better than
the traditional GA on this application [if this is what I was supposed to
learn from the paper, then the authors should be more explicit about this],
(3) to advertise the genetic programming method (a full 1 1/2 pages of text
is devoted to describing other applications that the method has been used
for), and only slightly more subtly to advertising Koza's book and videotape.
I think this last agenda is inappropriate for a scientific journal.
Thus, reviewer #3 considers it to be a "hidden agenda" if research
work happens to show that
- ... crossover is really helping in the described problem,
Later, in this same review document, ECJ reviewer #3 says, under "extra
comments,"
- 10 pages - argues that crossover is effective for
the one run
Reviewer #3 apparently thinks that it is a "hidden agenda" "to demonstrate
that crossover is really helping" and that "crossover is effective."
Author: John R. Koza
E-Mail: NRLgate@cris.com