NRLgate -
Plagiarism by Peer Reviewers
Sections 5.0 thru 5.3
This page is part of the NRLgate Web site presenting evidence of
plagiarism among scientific peer reviewers involving 9 different peer review
documents of 4 different journal and conference papers in the fields of
evolutionary computation and machine learning.
This page contains sections 5 through 5.3 of "Evidence of plagiarism
in reviews T2 and T3 of a paper on pursuer-evader games submitted to the
Tools for Artificial Intelligence conference (TAI), evidence that TAI reviewers
T1, T2, and T3 are affiliated with the Naval Research Laboratory, and similarities
linking reviews T1, T2, and T3 with various MLC and ECJ reviews."
Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter
to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page
5. Evidence of plagiarism in reviews T2 and T3 of a paper on pursuer-evader
games submitted to the Tools for Artificial Intelligence conference (TAI);
evidence that TAI reviewers T1, T2, and T3 are affiliated with the Naval
Research Laboratory; and similarities linking reviews T1, T2, and T3 with
various MLC and ECJ reviews
This section presents evidence indicating the need for an impartial investigation
to determine whether there was plagiarism among 2 of the 4 scientific
peer reviewers who reviewed the paper that I submitted to the Second International
Conference on Tools for Artificial Intelligence (TAI) on the subject of
applying genetic programming to two different pursuer-evader game problems
and the artificial ant problem (No. 140).
My paper was under review at the Tools for Artificial Intelligence conference
during the same year as, though not during an overlapping time period with,
the review of my 2 papers (on empirical discovery and optimal control
strategies) by the Machine Learning Conference. As will be seen,
- Only 4 of the reviewers for the TAI conference were knowledgeable in
evolutionary computation, and all 4 were from the Naval Research Laboratory.
- There was plagiarism between TAI peer reviewers T2 and T3.
- TAI reviewer T1 is established to be the same person as MLC reviewer B because
both reviews contained direct quotations from both submitted papers, but
the quoted words do not appear in either paper. (These are the words that
MLC reviewer B incorrectly transcribed when he plagiarized MLC reviewer
A.)
- There are similarities between TAI reviews T1, T2, and T3 and MLC reviews
A, B, X, and Y.
- There are similarities between TAI reviewers T1, T2, and T3 and reviewers
#2 and #3 of my paper submitted to the Evolutionary Computation journal.
The evolutionary computation papers represented only a tiny fraction (8
out of 120) of the published papers of the TAI conference because the subject
matter of the TAI conference was considerably broader than machine learning
or evolutionary computation and included many subjects from the field of
artificial intelligence (AI).
There were only 4 reviewers on TAI's "List of Reviewers" who were
involved in any known way in the field of evolutionary computation. Except
for these 4 people, none of the other TAI reviewers have ever, to my knowledge,
written a paper about evolutionary computation. Based on personal observation,
I believe I am correct in saying that none of the other TAI reviewers even
attended one of the major conferences on genetic algorithms or evolutionary
computation. All 4 of the EC-knowledgeable reviewers at the TAI conference
had the same institutional affiliation, namely
- Kenneth DeJong, of Code 5510 of the Naval Research Laboratory
- John Grefenstette, of Code 5514 of the Naval Research Laboratory
- Alan C. Schultz, of Code 5514 of the Naval Research Laboratory
- William M. Spears, of Code 5514 of the Naval Research Laboratory
Because of the breadth of subject matter at the TAI conference, papers were
apparently reviewed by both specialists and non-specialists. Accordingly,
the first question on the TAI paper review form asked:
- Please answer the following questions by giving a proper number
from 1 (lowest) to 10 (highest):
How familiar are you with the topic of this paper?
Reviewers T2 and T3 each rated themselves as a "10." As will be seen
shortly, both reviewers T2 and T3 gave the paper a poor recommendation (a
"weak reject" rating). Reviewer T1 rated himself as a "9"
and gave the paper a recommendation of "weak accept." In other
words, reviewers T1, T2, and T3 rated themselves as being very familiar
with evolutionary computation. These self-ratings were apparently accurate
and honest since reviewers T1, T2, and T3 made various specific comments
reflecting knowledge of evolutionary computation, genetic algorithms, and
genetic programming.
Thus, the 3 EC-knowledgeable reviewers T1, T2, and T3 appear to
have been from the Naval Research Laboratory. As will be seen, this view
is further supported by the similarities between TAI reviews T1, T2, and
T3 and MLC reviews A, B, X, Y and reviewer #2 of the Evolutionary Computation
journal.
Reviewer T4 gave himself a lower rating (7) as to familiarity with evolutionary
computation and gave the paper a recommendation of "accept." Reviewer
T4's comments were very general and did not reflect any specific knowledge
of evolutionary computation. The final decision on
the paper was apparently based primarily on six numerical scores and one
check-off category on the TAI paper review form. In spite of the tepid
recommendations (two "weak rejects" and one "weak accept") from the 3
EC-knowledgeable reviewers, my TAI paper on pursuer-evader games was apparently
accepted on the strength of the review by T4 (the non-EC-knowledgeable reviewer).
Coincidentally, 4 of the 8 published papers on evolutionary computation
(out of the 120 papers published at the TAI conference) were authored at
the Naval Research Laboratory:
- Alan C. Schultz and John Grefenstette (a paper on applying SAMUEL to
pursuer-evader games)
- William M. Spears and Kenneth DeJong
- K. Dontas and Kenneth DeJong
- J. W. Bala and Kenneth DeJong
Coincidentally, all 4 EC-knowledgable TAI reviewers from the Naval Research
Laboratory had at least one of their own papers accepted and published by
the TAI conference.
Coincidentally, the subject of the first of the above 4 papers from NRL that
were published at the TAI conference was an application of the SAMUEL system
to pursuer-evader games (evasive maneuvers), and the subject of my submitted
paper to the TAI conference was pursuer-evader games. It will be recalled
that the SAMUEL system was invented in-house at the Naval Research Laboratory
in Washington by John Grefenstette.
- HELPFUL PREVIEW AND HINT: The reader may find it helpful to keep
in mind that subsequent
section 7 will provide considerable evidence that
- reviewers A, X, #2, and T2 are the same person
- reviewers B, Y, #1, and T1 are the same person
- reviewers #3 and T3 are the same person
- The reader may also find it helpful to keep in mind that subsequent
section 6.3 will provide considerable evidence that there are only 2
people in the overlap among the reviewers for the Machine Learning Conference
(MLC), the editors and editorial board of the Evolutionary Computation
journal (ECJ), and the reviewers for the Tools for Artificial Intelligence
conference (TAI), and that those 2 people are
- Ken DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
DISCLAIMER: The 2 sessions on evolutionary computation at this particular
meeting of the annual IEEE TAI conference were a one-off event. There is
no reason to believe that the ongoing operation of the annual TAI conference
suffers from the problems discussed herein (and every reason to believe
the contrary).
5.1. Reviewer T1 used quotation marks and ellipsis and the 3-word prepositional
phrase "On one run, ..." --- thereby linking TAI reviewer T1 to
either MLC reviewer A or B
Quotation marks are ordinarily used to highlight words that are so memorable
and important that only the author's exact words do justice to the idea
involved. It is very odd to use quotation marks to highlight a mere prepositional
phrase --- particularly one as unmemorable and insignificant as the one
quoted below.
The only section of the paper review form of the Tools for Artificial Intelligence
conference that requested a written response from the reviewer was entitled
"Comments to the author(s)." Reviewer T1's entire written response
to my 6,026-word TAI paper on pursuer-evader games consisted of 31 handwritten
words:
- 1. Main ideas published elsewhere
- 2. New applications are interesting and novel.
- 3. reader of Tools Conference needs to know more of details like: frequency/rate
of convergence
- 4. Results are shaky: "on one run ..."
(Quotation marks and ellipsis in the original).
(Emphasis added)
This is, of course, not the only occasion on which we have seen these same 3 words
placed inside quotation marks with the grammatically incorrect use of ellipsis.
The paper on pursuer-evader games submitted to the TAI conference was being
reviewed during the same limited time period as the submitted MLC paper on
empirical discovery (and the submitted MLC paper on optimal control strategies).
Reviewer A of my MLC paper on empirical discovery said,
- It is not sufficient to say "In one run, ...".
(Quotation marks and ellipsis in the original).
(Emphasis added)
Reviewer B of my MLC paper on empirical discovery said,
- The paper needs to be strengthened by presenting more formally than
"on one run ... "
(Quotation marks and ellipsis in the original).
(Emphasis added)
The joint appearance of this 3-word quotation within reviews written for
two different scientific conferences suggests that TAI reviewer T1 is the
same person as either MLC reviewer A or B.
But is TAI reviewer T1 the same person as MLC reviewer A or B?
5.2. Two reviews of different papers at different conferences contain
the same quotation error ("on" instead of "in") ---
thereby establishing that TAI reviewer T1 is the same person as MLC reviewer
B
The fundamental purpose of quotation marks is to capture words that are
so significant that only the author's precise words do justice to the important
idea involved. When writers quote such memorable words, they usually carefully
check to be sure that they have correctly transcribed the quoted words.
Notice that reviewer T1 of my TAI paper said,
- "on one run ..."
- (Quotation marks and ellipsis in the original).
(Emphasis added)
Where did reviewer T1 get these 3 words?
These 3 words do not appear anywhere in my submitted 6,026-word TAI paper
on pursuer-evader games!
In fact, the only similar phrase appearing anywhere in the submitted TAI
paper is "In one run" (Emphasis added).
Recall that reviewer B of the submitted MLC paper on empirical discovery made
this same inadvertent typographical error.
- "on one run ... "
- (Quotation marks and ellipsis in the original).
(Emphasis added)
These 3 words do not appear anywhere in my submitted 3,118-word MLC paper
on empirical discovery!
In fact, the only similar phrase appearing anywhere in the submitted MLC
paper is "In one run" (Emphasis added).
How could two reviews for different papers at different conferences contain
the same transcription error ("on" instead of "in")?
The straightforward explanation is that TAI reviewer T1 is the same person
as MLC reviewer B. Indeed, if reviewers T1 and B are not the same person,
then T1 must have had access to an already written review by B (or vice
versa), and there must then have been plagiarism among A, B, and T1 ---
3 different scientific peer reviewers at 2 different conferences.
Recall that the apparent institutional affiliation of reviewer T1 is the
Naval Research Laboratory (based on T1's self-rating of familiarity with
evolutionary computation). Thus, if reviewers T1 and B are the same person,
the institutional affiliation of reviewer B is the Naval Research Laboratory.
However, if reviewers T1 and B are different people, we are left with the
unlikely scenario in which one of them gained access to an already written
review document located at a military research laboratory.
HELPFUL PREVIEW AND HINT: Subsequent sections (6 and 7) will provide
considerable additional independent evidence that the institutional affiliation
of MLC reviewer B (and other MLC reviewers) is the Naval Research Laboratory
and that reviewers
- A, X, #2, and T2 are the same person
- B, Y, #1, and T1 are the same person
- #3 and T3 are the same person
5.3. Review T2 contained a paragraph that is almost identical in thoughts,
words, and phrases to paragraphs contained in reviews A, X, and #2 --- thereby
suggesting that TAI reviewer T2 may be the same person as reviewers A, X,
and #2
Normally, it is considered desirable for an automated machine learning technique
to produce results quickly and efficiently. In fact, it is common to criticize
techniques that consume too much computer time to produce results.
In this section, we will compare the thoughts, words, and phrases appearing
in corresponding paragraphs of the following 4 review documents:
- Reviewer A of my MLC paper on empirical discovery
- Reviewer X of my MLC paper on optimal control strategies
- Reviewer #2 of my ECJ paper on electrical circuit design
- Reviewer T2 of my TAI paper on pursuer-evader games
The only written comments made by reviewer T2 on the paper review form for
my TAI paper on pursuer-evader games were the following 3 handwritten words
in the "Comments to the author(s)" section:
- Please see attached
Reviewer T2 then attached the following 141 computer-printed words on a
separate piece of paper. Notice paragraph 2.
- The idea of evolving Lisp expressions with GAs is an interesting
one and the work described in this paper is interesting.
- However, I feel that the experimental methodology used in this research
makes it difficult to evaluate the success of the work. Good solutions
are achieved so rapidly that is hard to judge the difficulty
of the problems (a comparison with random search or other techniques
would be helpful). Also, only partial results are reported.
- Since a key issue is whether the GA is building up useful building blocks
(ie., Lisp subexpressions), it would be nice to see how your system scales
up to more difficult problems.
- Finally, it is stated that mutation is not used. It is not clear, then
how the numbers are evolved in the solution to the simple pursuit game (crossover
does not appear to be sufficient).
(Emphasis added)
Recall that reviewer A of my MLC paper on empirical discovery said,
- For one experiment, excellent results are claimed to appear within
the first nine generations. This is extremely suspicious, unless the
choice of functions to be used in the constructions of the concepts practically
guarantees success. In order to judge, it would be necessary to see
the results compared against an alternative search technqiue, perhaps
even random search.
- (Spelling error in "technqiue" in original)
(Emphasis added).
Recall that reviewer X of my MLC paper on optimal control strategies said,
- The papers claims that optimal control strategies were evolved within
46 generations - extremely quickly by genetic algorithm standards. One
suspects that the search space defined by the functions is dense with solutions.
It would help to see comparison with another search method, even random
search, on the same search space. The data provided is insufficient
to judge the merits of this approach.
(Emphasis added).
Recall that reviewer #2 of my ECJ paper said,
- Evaluation is the weak point of the paper. Since results are
obtained so quickly (within 50 generations) it is especially important
to evaluate the density of acceptable solutions in the search space. This
usually means comparison with some baseline approach, perhaps random
search. However, the comparison here doesn't do this issue justice.
(Emphasis added).
Note that we are probably not talking about plagiarism in this particular
section. We are simply saying that reviewers A, X, #2, and T2 may be the
same person. If that were the case, the likely institutional affiliation
of reviewers A, X, and #2 would be the same as the apparent institutional
affiliation of TAI reviewer T2 (i.e., the Naval Research Laboratory). However,
if reviewer T2 is a different person than A, X, and #2, we are left with
the unlikely scenario in which 3 people (A, X, and #2) gained access to
an already written review document located at a military research laboratory.
Author: John R. Koza
E-Mail: NRLgate@cris.com