NRLgate -
Plagiarism by Peer Reviewers
Sections 7.12 through 7.16
This page is part of the NRLgate Web site presenting evidence of
plagiarism among scientific peer reviewers involving 9 different peer review
documents of 4 different journal and conference papers in the fields of
evolutionary computation and machine learning.
This page contains sections 7.12 through 7.16 of "Indications that
there are only 2 or 3 (as opposed to 9) different plagiarizing reviewers
among the peer reviewers at the Machine Learning Conference (MLC), the editors
and members of editorial board of the Evolutionary Computation journal
(ECJ), and the Tools for Artificial Intelligence conference (TAI)."
Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter
to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page
7.12. Reviewers A, B, and T2 share with government bureaucrats the
tendency to over-diligently fill in every line of a form
Who, by training and habit, always fills in every line of a form?
Certainly not the typical academic!
Certainly not the typical business person at a busy commercial enterprise.
On the other hand, government bureaucrats are among the minority of people
who are accustomed and trained to fill in every line of a form meticulously
(even if only with something as uninformative as "non-applicable"
or "see above" or "please see attached"), because it
is standard practice within the government bureaucracy never to leave blanks
on a form.
Recall that reviewer A's entire response to section 5 (entitled "General
comments for the author(s)") for my MLC paper on empirical discovery
was as follows:
- See above.
Reviewer B's entire response to the same section of the paper review form
was as follows:
- Same comments as above.
Reviewer T2's hand-written response for my TAI paper on pursuer-evader games
in the "Comments to the author(s)" section of the paper review
form was as follows:
- Please see attached
As previously mentioned, the chance is only about 1 in 25 that a reviewer
will give the author no advice on his paper review form, yet will also
over-diligently fill in the blanks on the form with a vacuous
phrase. Moreover, the chance is only 2 in 316 that a reviewer will give no
"author advice" and will also use the word "above" in
"filling in the blank." This is not the only occasion on which we
encounter such over-diligence in filling in every blank line on a form.
See section 2.9. See
section 5.7.
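Restated numerically (using only the two frequencies already given above,
which come from the survey of peer reviews discussed in sections 2.9 and
5.7; no new data is introduced here):

```python
# Stated frequency: a reviewer gives no "author advice" yet
# over-diligently fills the blank with a vacuous phrase -- about 1 in 25.
chance_vacuous_fill = 1 / 25

# Stated frequency: a reviewer gives no "author advice" AND uses the
# word "above" in filling in the blank -- 2 in 316.
chance_uses_above = 2 / 316

print(f"about {chance_vacuous_fill:.1%}")  # about 4.0%
print(f"about {chance_uses_above:.2%}")    # about 0.63%
```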
Among the 24 Machine Learning Conference reviewers, the editors and 32
editorial board members of the Evolutionary Computation journal, and
the reviewers for the Tools for Artificial Intelligence conference with
involvement in the field of evolutionary computation, there are only 2
government bureaucrats. The 2 government employees are
- Kenneth DeJong, Chief Scientist of Code 5510 of the Naval Research Laboratory
- John Grefenstette, Section Head of Code 5514 of the Naval Research Laboratory
Of course, an overly diligent style of filling out forms, different from
the style employed by the vast majority of academic, commercial, and
other people in the fields of evolutionary computation and machine learning,
does not alone establish the identity of the plagiarizers involved here.
7.13. There are only 3 persons among the editors and 32 editorial board
members of the Evolutionary Computation journal who are known to
be familiar with a lightly attended, oral presentation
Reviewer A says, in section 2 (entitled "originality"),
- The approach has been reported on previously in MLW89. The
applications here are new.
- (Emphasis added).
Reviewer X says similarly in section 2.
- The authors have reported similar work at last year's ML Workshop
...
- (Emphasis added).
It is true that I orally presented a paper on genetic programming
at MLW89 in July 1989 covering some of the material that was about to appear
in my soon-to-be-published IJCAI-89 paper in August 1989. This oral presentation
(to a small break-out session representing about a third of the workshop's
attendance) was not published in the printed proceedings of the
MLW89 workshop. Knowledge of this unpublished, lightly attended, oral presentation
is, therefore, not available from reading the published proceedings of the
workshop. Accordingly, knowledge of this particular talk is highly limited.
The official "Participant List" of the 1989 Machine Learning Workshop
at Cornell lists only 3 people who are members of the editorial
board of the Evolutionary Computation journal. They are
- Kenneth DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
- David Schaffer, Philips Laboratory, New York
Schaffer co-authored papers with Grefenstette during the period in the
mid-1980s when both were at Vanderbilt University, and he now works in
upstate New York.
Of these 3 workshop attendees in 1989, only Grefenstette and DeJong were
reviewers for the subsequent Machine Learning Conference to which
my MLC paper on empirical discovery and my MLC paper on optimal control
strategies were submitted. David Schaffer was not on the program committee
for the subsequent Machine Learning Conference. If reviewers A and
X are different people, then Grefenstette and DeJong are the people who are
likely to know about my lightly attended oral presentation in 1989, and both
would be candidates to be reviewers A and X. If reviewers A and
X are the same person, then either Grefenstette or DeJong is a candidate
to be that one reviewer.
Of course, familiarity with one particular unpublished, lightly attended,
oral presentation at a particular earlier workshop does not alone establish
the identity of the plagiarizers involved here.
7.14. It would be difficult for 2 or more geographically dispersed members
of the editorial board of the Evolutionary Computation journal and
for 2 geographically dispersed reviewers for the Machine Learning Conference
to collude on the reviewing of a particular paper
The 32 individual members of the editorial board of the Evolutionary Computation
journal are geographically dispersed and institutionally diversified.
The 24 individual members of the program committee of the Machine Learning
Conference are likewise geographically dispersed and institutionally diversified.
The existence of a paper submitted to a scientific journal or conference
is ordinarily known only to the submitting authors and the journal's editors
and conference chairmen.
The individual members of the editorial board of the Evolutionary Computation
journal and the individual members of the program committee of the Machine
Learning Conference ordinarily do not know about the existence of a
submitted paper unless they happen to be appointed to review it.
Moreover, the editors of a journal and the chairmen of a conference do not
ordinarily tell reviewers who else has been appointed to review a particular
paper.
During the limited review period for a particular paper, two peer
reviewers might, conceivably, become aware that they were both reviewing
the same paper. If that were to happen, one such peer reviewer could, conceivably,
then provide his already-written review to the second peer reviewer so that
the second reviewer could use it as a template in writing his own review. However,
absent that unusual sequence of events, it would ordinarily be difficult
for 2 or more geographically dispersed members of the editorial board of
the Evolutionary Computation journal and for 2 geographically dispersed
reviewers for the Machine Learning Conference to collude on the reviewing
of a particular paper.
The additional difficulties presented by the telltale electronic >'s
contained in review document #1 of my ECJ paper are discussed next.
7.15. Even if 2 or 3 of the 32 members of the editorial board
of the Evolutionary Computation journal were to collude on a particular
paper, the telltale electronic >'s would not appear as they did on review
document #1
Even if 2 members of the editorial board of the Evolutionary Computation
journal were to collude on a particular paper, the editor of the journal
would still expect to receive each reviewer's review by e-mail separately
from each reviewer --- not to receive two reviews from one reviewer and
none from the other. Thus, for example, reviewer #2 could not send his review
to reviewer #1 as a "reply" (the act that caused the >'s to
be added to review #1) and expect reviewer #1 to send both review documents
to the editor. More significantly, reviewer #1 would have to send his review
to the editor --- either as an ordinary "reply" to the editor's
message transmitting the paper review form or as an ordinary direct message.
Consequently, each reviewer would have to send his reply directly to the
editor in either of the two usual ways.
First, a reviewer could write his review as a direct e-mail "reply"
to the paper review form sent to him by the editor. In that case, the standard
questions on the journal's paper review form would be flagged by >'s
as "received text" and his answers would appear without the >'s.
(Review #2 of my paper submitted to the Evolutionary Computation journal
is an example of this).
Second, a reviewer could copy the paper review form into his favorite word
processor, write the review using his word processor, and then send the
original questions and his answers as a direct e-mail message to the editor.
In that case, neither the questions nor answers would be flagged with >'s.
(Review #3 is an example of this).
Neither of these methods of transmission would cause the >'s to become
attached to the answers written by the two reviewers (as they appear in
review #1 of my paper submitted to the Evolutionary Computation journal).
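The mechanics of the telltale >'s can be illustrated with a short sketch.
The function `quote_reply` below is a hypothetical stand-in for the quoting
behavior of a typical e-mail client's "reply" command; it is not the code
of any particular mail program:

```python
def quote_reply(original: str, prefix: str = "> ") -> str:
    """Mimic an e-mail 'reply' command: every line of the message
    being replied to is prefixed, marking it as received text."""
    return "\n".join(prefix + line for line in original.splitlines())

# The editor's paper review form, as received by a reviewer:
form = "2. Originality:\n5. General comments for the author(s):"

# Method 1 above: the reviewer replies directly to the editor's message.
# The form's standard questions acquire >'s; the answers the reviewer
# types afterward do not (the pattern of review #2):
print(quote_reply(form))
# > 2. Originality:
# > 5. General comments for the author(s):

# By contrast, if one reviewer sent his finished review to a second
# reviewer as a "reply," the >'s would attach to the review's own
# answers -- the anomalous pattern seen on review document #1:
finished_review = "The approach has been reported on previously."
print(quote_reply(finished_review))
# > The approach has been reported on previously.
```

Under either of the two normal transmission methods described above, a
reviewer's own answers never pass through a "reply" step, so they never
acquire >'s.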
7.16. The transmittal letter from the North American Associate Editor
makes the same error as reviewers #1 and #2 in claiming that the paper does
not contain a "baseline" comparison even though the paper actually
devotes almost 2 pages to this subject
Recall that both reviewers #1 and #2 of my paper submitted to the Evolutionary
Computation journal made the gaffe of claiming that the paper lacked
the obligatory baseline comparison with random search (even though that
comparison occupied almost 2 pages of the paper).
In the case of my ECJ paper, North American Associate Editor Grefenstette
(in the signed introductory part of his e-mail message) requested six revisions
of the paper as a condition for publication in the Evolutionary Computation
journal. Condition number 3 stated,
- 3. The evaluation of the method is incomplete. How does GP compare
on this problem with, for example, random search, analytic methods,
other heuristic methods, other genetic algorithms? [Emphasis added]
In other words, North American Associate Editor Grefenstette made the same
egregious gaffe as reviewers #1 and #2 in falsely asserting the absence
of the "baseline" comparison with random search.
Author: John R. Koza
E-Mail: NRLgate@cris.com