NRLgate -
Plagiarism by Peer Reviewers
Sections 7.17 thru 7.20
This page is part of the NRLgate Web site presenting evidence of
plagiarism among scientific peer reviewers involving 9 different peer review
documents of 4 different journal and conference papers in the fields of
evolutionary computation and machine learning.
This page contains sections 7.17 through 7.20 of "Indications that
there are only 2 or 3 (as opposed to 9) different plagiarizing reviewers
among the peer reviewers at the Machine Learning Conference (MLC), the editors
and members of editorial board of the Evolutionary Computation journal
(ECJ), and the Tools for Artificial Intelligence conference (TAI)."
Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter
to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page
7.17. There are only 2 people among the 24 Machine Learning Conference
reviewers and among the editors and 32 editorial board members of the Evolutionary
Computation journal who are known to have a close working relationship,
at the same institutions, over many years
Because of the seriousness of collusion and plagiarism in the scientific
community, a very unusual kind of relationship based on strongly-motivated
common goals, shared experiences, and a great deal of mutual trust is a
likely precondition for multiple occurrences of plagiarism.
The lock-step similarity in thoughts, words, grammatical oddities, grammatical
errors, and order of occurrence within the same sections of the paper review
forms indicates that the collusion did not arise merely from a vague shared
viewpoint about technology in general or from vague verbally communicated
generalities over the telephone. The transformation of each already written
review into its slightly altered form required the expenditure of some effort
(albeit a somewhat mechanical one). The transformation of one already written
review into its paraphrased form required conscious effort expended by a
person who knew he was acting improperly at the time he did it. The effort
was expended with the intent to deceive. Of course, the supplying of an
already written review to a second peer reviewer was also a conscious activity
executed by a person who knew he was acting improperly at the time he did
it. These improper activities presuppose an enormous amount of mutual trust
and strongly-motivated common goals.
Strongly-motivated common goals that cause people to bend the rules tend
to arise far more frequently when people have a long-term face-to-face relationship
in a closed environment. In contrast, such concerted action is far less
likely to arise between individuals who rarely see each other face-to-face,
who live in different cities or continents, and who are driven by entirely
different kinds of goals and employed by entirely different types of institutions
(government, educational institutions, commercial companies, and non-profit
research institutions).
With 2 notable exceptions, the editors and members of the editorial board
of the Evolutionary Computation journal and the reviewers for the
Machine Learning Conference are an institutionally diverse and geographically
dispersed group of people who work in many different kinds of environments
(government, educational institutions, commercial companies, and non-profit
research institutions). The 2 exceptions are
- Kenneth DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
It should also be recalled that North American Associate Editor John Grefenstette
obtained his PhD from the University of Pittsburgh in 1980 under the chairmanship
of Dr. Kenneth DeJong, who was at the University of Pittsburgh at the time,
so this relationship spans almost 2 decades.
For almost a decade, they have both worked in the same place; they socialize
together; they work together as editor and associate editor for both
the Evolutionary Computation journal and the Machine Learning
journal; they both serve on the organizing committees of many of the same
scientific conferences; they both serve as reviewers for many of the same
scientific conferences in the field; and they frequently travel together
to the same scientific conferences around the world.
When two people have an exceedingly close working relationship, they also
often end up exchanging and possessing shared misinformation. Several examples
of such shared misinformation have appeared throughout this document. This kind
of common misinformation often arises when two people operate in a closed
environment and have an exceedingly close working relationship over a period
of many years.
Of course, the existence of an exceedingly close working relationship, shared
goals, a common work environment, and a great deal of mutual trust over
a long period of time between 2 particular individuals does not alone
establish the identity of the plagiarizers involved here.
7.18. An exceedingly close relationship with the editors can be inferred
from the unusual hostility of reviewers #1, #2, and #3 of my ECJ paper (thereby
raising the question of whether the submitted paper was reviewed entirely
inside NRL by non-members of the journal's editorial board)
The aforementioned hostility may indicate something about the kind of relationship
that a reviewer would necessarily need to have with the editors of a journal
in order to feel comfortable turning in a written review with such hostile
language.
When an arms-length scientific peer reviewer sends a review into the
Evolutionary Computation journal, he can reasonably expect that, at
the minimum, both the Editor-in-Chief and the associate editor involved
would see his review. In the case of this journal, the reviewer has probably
been dealing directly with the associate editor. Moreover, the reviewer
would reasonably assume that the Editor-in-Chief exercises (at the minimum)
some degree of overall supervision and oversight of the entire process.
If an arms-length reviewer didn't like a particular paper, he would simply
say so. He would give some valid scientific reasons for his negative opinion,
and leave it at that. But, he would be very unlikely to sign his name to
the kind of unprofessional and antagonistic language that appears in all
3 peer reviews for this paper.
To send in such an antagonistic review, an arms-length reviewer would need
to know --- for sure, and in advance --- that both the Editor-in-Chief
and the North American Associate Editor wouldn't be offended by receiving
such language and think less of the reviewer for having written it. That
is, the reviewer would have to have an exceedingly close relationship with
both the Editor-in-Chief and the North American Associate Editor.
Who would fit this description on the editorial board?
It is difficult to identify any of the 32 advertised and geographically
dispersed names on the editorial board of the journal who fit this description.
None are even located in the District of Columbia (although geographic proximity
is not an absolute precondition for an exceedingly close relationship).
Could any of these 3 "independent" peer reviewers possibly be
outside the 32-member editorial board?
There are 3 good reasons why they should not be.
First, the Evolutionary Computation journal was only 3 months old
when this paper was submitted. The paper number (EC-JG-9209-0003) suggests
that it was the third paper submitted in the month of September, 1992. The
Editor-in-Chief's January 3, 1993 report to the editorial board stated that
"We have received to date 19 submissions, 4 of which have been reviewed,
selected, revised, and sent to MIT Press for inclusion in the first issue."
Since the first "Call For Papers" was issued in late July 1992,
there cannot have been many more than about a dozen papers in the hopper
in September 1992 when my ECJ paper arrived. Ordinarily, 12 papers would receive
36 reviews (3 each), and there were 32 members on the editorial board. Thus, after
only 3 months of operations, this 32-member editorial board of this new
journal could hardly have been "badly overworked" or suffering
from lagging enthusiasm in any way. It would have been an easy matter
to farm this paper out to three of the 32 members of the editorial board
at the time that this paper was submitted.
Second, the Editor-in-Chief has stated that non-board reviewers only came
into vogue since mid-1994 at the Evolutionary Computation journal
(when a number of changes were introduced when North American Associate
Editor Grefenstette could no longer continue handling 80% of the papers).
Third, the normal legitimate reason to go outside the editorial board (i.e., to
obtain some particular highly specialized expertise) did not apply to this
particular paper. In September 1992, I was the only published author on
genetic programming --- on or off the editorial board. The 32 editorial
board members were no less able to handle this paper than
anyone outside the editorial board. Because I was the only published author
on genetic programming at the time, this paper would necessarily have to
be reviewed by three reviewers who were generally knowledgeable about evolutionary
computation, but not directly experienced with GP.
For these 3 reasons, there was certainly no discernible or legitimate reason
why the reviewing of my submitted paper on genetic programming would have
been done, in whole or in part, outside the editorial board of the journal.
However, for the sake of argument, suppose we ignore these 3 reasons and nonetheless
look outside the advertised names on the editorial board for 3 reviewers
who would fit the above description of a person who would be comfortable
in submitting such an antagonistic written review document that was likely
to be seen by both the Editor-in-Chief DeJong and the North American Associate
Editor Grefenstette.
One possibility is subordinate employees of North American Associate Editor
Grefenstette and Editor-in-Chief DeJong of Codes 5510 and 5514 at the Naval
Research Laboratory in Washington. Specifically, Code 5514 consists of the
following personnel:
- John Grefenstette, Ph.D., Section Head
- William L. Adams, M.S.
- David Aha, Ph.D.
- Helen Cobb, M.S.
- Robert Daley, Ph.D., Visiting Scientist, University of Pittsburgh
- Diana Gordon, Ph.D.
- Mitchell A. Potter, George Mason University
- Connie Loggia Ramsey, M.S.
- Alan C. Schultz, M.S.
- William M. Spears, M.S.
Subordinates at NRL, like employees everywhere, are usually in sync with
their supervisors and the "culture" of their organization.
Another possibility is the students at George Mason University who are
supervised by the Editor-in-Chief (where the Editor-in-Chief teaches part-time).
At least 3 such GMU students (William M. Spears, Alan C. Schultz, and Mitchell
A. Potter) are also in Code 5514.
Students are usually in sync with their academic supervisors.
Both subordinates and students would know exactly what their superiors ---
Grefenstette and DeJong --- want to hear. And, such subordinates or students
would know the drill inside NRL for the Evolutionary Computation
journal and they would know these things for sure, and in advance. Thus,
there would be no risk to an in-house subordinate or student in signing
his name on an antagonistically worded document that would surely offend
an ordinary arms-length editor of a scientific journal.
Documents released by the U. S. government under the Freedom of Information
Act (FOIA) named one particular GMU master's degree student, William M.
Spears, who is also employed in Code 5514 of the Naval Research Laboratory,
as having reviewed 40 papers in 1994 alone in the field of evolutionary computation
(not necessarily for the Evolutionary Computation journal). These
documents indicated that other students and subordinates at NRL each annually
reviewed a similar, but smaller volume of papers in this field (not necessarily
for the Evolutionary Computation journal). These numbers reflect
the existence of the well-known "reviewing factory" or "boiler
room" at the Naval Research Laboratory in Washington for reviewing
in the genetic algorithms and machine learning field.
To put this volume of peer reviewing in perspective, the most active master's
degree student in Code 5514 (William M. Spears) reviewed more papers in
one year than the cumulative 3-year total of peer reviews written by 50%
of the journal's 32-member editorial board taken together. Moreover, the
FOIA request mentioned above covered only Code 5514. It did not cover Code
5510 (where the Editor-in-Chief of the journal actually works). It is not
known how many papers for the Evolutionary Computation journal or
the Machine Learning Conference were reviewed in Codes 5510 and 5514. However,
there is no doubt that the peer reviewers at the reviewing "factory"
and "boiler room" at the Naval Research Laboratory in Washington
have no peers in the world of peer reviewing.
One begins to get the strong feeling that there is some connection between
reviewers #1, #2, and #3 and Codes 5510 and 5514 of the Naval Research Laboratory.
Could it be that my ECJ paper never got out of the Naval Research Laboratory
in Washington and was, therefore, reviewed entirely inside NRL by non-members
of the editorial board?
7.19. An exceedingly close relationship with the ECJ editors can be
inferred from reviewer #3's willingness to make a potentially embarrassing
unchecked accusation, in writing, that the authors of the submitted paper
were not "up front"
Reviewer #3 (the one who used the phrases "hidden agenda," "blatant,"
and "only slightly more subtly") is the only one of the three
peer reviewers to recommend outright rejection of the paper. He asserted,
- ... I'm sure that substantial parts of this have been published
elsewhere. ... I'd like the authors to be up front about whether
that has been published elsewhere.
(Emphasis added).
Just how "sure" is reviewer #3 about this accusation that the
authors are not "up front"?
Reviewer #3 then cites the 1992 article (duly cited in the submitted paper's
bibliography) entitled, "Hierarchical automatic function definition
in genetic programming" in the Proceedings of Workshop on the Foundations
of Genetic Algorithms and Classifier Systems (called FOGA-92). The FOGA-92
workshop had just been held in the fall of 1992. My FOGA-92 paper was the
only presentation of automatically defined functions to the genetic algorithms
community at the time of these peer reviews.
There were only 19 papers presented at FOGA-92. In 1992, there were only
about a half dozen different books in print concerning the field of evolutionary
computation. It would have been a simple matter for reviewer #3 to obtain
a copy of the FOGA-92 article and actually check it out (or to simply ask
someone who attended FOGA-92).
Instead reviewer #3 says "I'm sure" and accuses the authors of
not being "up front" in an accusatory, one-way, unanswerable conversation
with the authors without checking his facts.
If reviewer #3 had checked the facts before he made the accusation that
the authors of the submitted paper were not "up front," he would
have found that the FOGA-92 article was a discussion of the small "toy"
Boolean problems of 3, 4, and 5 arguments. The submitted paper to the Evolutionary
Computation journal dealt with using genetic programming to evolve the
impulse response function for electronic circuit design using a new convolution-based
fitness measure. The submitted paper to the Evolutionary Computation
journal had nothing whatever to do with "toy" Boolean functions,
but, instead, involved a distinctly non-trivial problem offering the possibility
for practical applications.
Reviewer #3 (whoever he is) knew that Grefenstette and DeJong are well informed
about the field in which they are journal editors. He certainly also knew
that these two editors are highly active in the field and usually attend
conferences (and often play a major role in reviewing papers for them).
For example, North American Associate Editor Grefenstette and Editor-in-Chief
DeJong were both on the program committee and reviewers for the FOGA-92
workshop.
Why did reviewer #3 feel comfortable in sending an unchecked accusation
and not worry about embarrassing himself in the eyes of two arms-length
(and well informed) editors?
What kind of working relationship would reviewer #3 necessarily have with
both the North American Associate Editor and Editor-in-Chief to take the
chance of embarrassing himself with an accusation based on a mere guess?
(Indeed, an incorrect guess, in this case).
Could it be that reviewer #3 had a close enough working relationship with
both editors to know that the two editors couldn't care less about what
the review said as long as it recommended rejection of the paper?
One begins to get the strong feeling that there is some connection between
reviewer #3 and Codes 5510 and 5514 of the Naval Research Laboratory.
Could it be that my ECJ paper never got out of the Naval Research Laboratory
in Washington and was, therefore, reviewed entirely inside NRL by non-members
of the editorial board?
7.20. An exceedingly close relationship with the ECJ editors can be
inferred from reviewer #1's willingness to admit, in writing, that he couldn't
grasp the main take-home message of the paper
Reviewer #1 complained,
- It is also not clear in what sense things improved by going to automatic
function definition.
Reviewer #1 could have answered his own question by reading the following
from the paper's "abstract" on its first page:
- ... [T]his paper demonstrates that less computational effort is
required to yield a solution to the problem with automatic function definition
than without it.
Alternately, reviewer #1 could have answered his own question by reading
the paper's "conclusion":
- We found that only about 69% as much computational effort is required
when automatic function definition is used.
This paper had a simple, easily-understood, clearly-stated take-home message:
the new technique described in the paper promised to get results faster
on a computer.
In addition, this main point appeared throughout the paper with additional
supporting details. For example,
- This number, 1,120,000, is a measure of the computational effort
necessary to yield a solution to this problem with 99% probability [without
ADFs].
along with the later statement,
- ... processing a total of 768,000 individuals ... is sufficient
to yield a solution to this problem with 99% probability [with ADFs].
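The paper's figure of "about 69%" follows directly from the two effort numbers quoted above; a minimal arithmetic check (using only the 99%-probability effort figures stated in the paper) confirms it:

```python
# Computational-effort figures quoted from the paper
# (individuals that must be processed to yield a solution
# with 99% probability).
effort_without_adfs = 1_120_000  # without automatic function definition
effort_with_adfs = 768_000       # with automatic function definition

# Ratio of effort with ADFs to effort without ADFs.
ratio = effort_with_adfs / effort_without_adfs
print(f"{ratio:.0%}")  # prints 69%, matching the paper's conclusion
```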
Why was reviewer #1 unable to grasp this main take-home message?
One possible answer is that if a reviewer were prejudiced against a paper's
author or a paper's technology, he would therefore pay little attention
to what is actually in the paper. I can appreciate that a prejudiced reviewer
would want to construct a deceptive paper trail of seeming flaws to conceal
the obviously illegitimate basis for his preordained decision. And, I can
appreciate that such a reviewer would not want to spend too much time thinking
about the paper, since he had already made up his mind.
But why admit to personal incompetence, in writing?
If a prejudiced peer reviewer were at arms length from the editorial leadership
of the journal, why would he risk embarrassment by signing a peer review
in which he said that he couldn't understand the paper's main point?
Would any of the 32 internationally known experts on the journal's editorial
board admit, in writing, to being unable to grasp a take-home message that
is clearly stated in its abstract, its conclusion, and its text?
What kind of close relationship with the editorial leadership of the journal
would one have to have in order to feel comfortable making such an admission
of personal incompetence in writing?
Could it be that reviewer #1 had some reason to think that the editors of
the journal couldn't care less about whether the review document made any
sense --- as long as it contained sufficient negative-sounding verbiage
to create a paper trail to seem to justify a preordained decision?
Perhaps a subordinate or student at NRL would feel comfortable making this
kind of admission of incompetence, in writing, to the North American Associate
Editor and the Editor-in-Chief.
Could it be that my ECJ paper never got out of the Naval Research Laboratory
in Washington and was, therefore, reviewed entirely inside NRL by non-members
of the editorial board?
Alternately, perhaps reviewer #1 never existed at all (hence, no one risked
embarrassing himself by admitting incompetence) and the
review documents for this submitted paper were simply fabricated as part
of a "simulated" peer reviewing process.
One begins to get the strong feeling that there is some connection between
reviewer #1 and Codes 5510 and 5514 of the Naval Research Laboratory.
Author: John R. Koza
E-Mail: NRLgate@cris.com