NRLgate -
Plagiarism by Peer Reviewers
Sections 6 through 6.3
This page is part of the NRLgate Web site presenting evidence of
plagiarism among scientific peer reviewers involving 9 different peer review
documents of 4 different journal and conference papers in the fields of
evolutionary computation and machine learning.
This page contains sections 6 through 6.3 of "There are only 2 people
in the overlap between the reviewers for the Machine Learning Conference
(MLC), the editors and editorial board of the Evolutionary Computation journal
(ECJ), and Tools for Artificial Intelligence conference (TAI)."
6. There are only 2 people in the overlap between the reviewers for
the Machine Learning Conference (MLC), the editors and editorial board of
the Evolutionary Computation journal (ECJ), and Tools for Artificial Intelligence
conference (TAI)
Documents A, B, X, Y, #1, #2, #3, T2, and T3 establish, on their face, that
serious scientific misconduct involving collusion and plagiarism has occurred.
Plagiarism among peer reviewers is an offense that goes to the heart of
the integrity of the scientific peer review process.
Someone created these plagiarized documents.
Someone committed these offenses.
Up to 9 different persons violated the trust reposed in them by the Evolutionary
Computation journal, the Machine Learning Conference, and the Tools
for Artificial Intelligence conference.
Once it is established a serious offense has been committed, the obvious
question is who created these plagiarized reviews. Who are the trusted persons
who created these plagiarized peer review documents?
It is conceivable that reviewers A, B, X, Y, #1, #2, #3, T2, and T3 of the
4 different conference and journal papers involved here are 9 different
people. The 4 disjoint groups of plagiarizing reviewers would be
- reviewers A and B for my Machine Learning Conference paper on empirical
discovery,
- reviewers X and Y for my Machine Learning Conference paper on optimal
control strategies,
- reviewers #1, #2, and #3 for my Evolutionary Computation journal
paper on electric circuit design, and
- reviewers T2 and T3 for my paper on pursuer-evader games submitted to
the Tools for Artificial Intelligence conference (TAI).
The existence of 4 disjoint groups of plagiarizing reviewers would not make
the offense of plagiarism any less serious; it would just mean that there
are a surprisingly large number of disjoint groups of plagiarizing reviewers
within the extraordinarily small pool of people in the fields of genetic
algorithms and machine learning.
Most people (including myself) believe that wrongdoing is relatively rare
even though we know that wrongdoing occurs in every aspect of human activity,
including the scientific community. Although it is conceivable that reviewers
A, B, X, Y, #1, #2, #3, T2, and T3 of the 4 different conference and journal
papers involved here are 9 different people, the principle of Occam's Razor
suggests that the simplest explanation for a situation warrants extra attention.
Not all 5,000,000,000 souls on this planet are candidates in the process
of determining the identity of the trusted persons who created these plagiarized
peer review documents. There were only 24 people on the advertised program
committee of the Machine Learning Conference (and 2 chairmen), 32 people
on the advertised editorial board of the Evolutionary Computation
journal (plus its Editor-In-Chief and 3 associate editors), and 186 reviewers
for the Tools for Artificial Intelligence conference.
I believe that a definitive identification of the wrongdoers involved in
creating these plagiarized reviews should be done by an impartial person
who is experienced and trained in reaching findings of fact and making judgments
based on the evidence. Specifically, I advocate a complaint resolution procedure
involving a retired federal judge acting under the auspices of the American
Arbitration Association for the task of making a definitive determination
of the truth. No final judgment or opinion should be formed at this time
on any of the matters herein. Instead, the truth concerning all of these
matters herein should be definitively determined in a thorough and impartial
investigation and factual determination made under the proposed arbitration
procedure by a retired federal judge.
However, the reader may wish to make some of his own preliminary conclusions
as to who created the 9 peer reviews for these 4 different conference and
journal papers. The factors below may enable the reader to reach his own
preliminary conclusion concerning this matter. The discussion below falls
into two main categories.
The information below may cause the reader to reach a preliminary conclusion
for himself that the identities of the plagiarizing reviewers are suggested
by the small overlap of the pools of reviewers for the Machine Learning
Conference, the Evolutionary Computation journal, and the Tools for
Artificial Intelligence conference (along with the familiarity with evolutionary
computation exhibited by reviewers A, B, X, Y, #1, #2, #3, T2, and T3).
The information provided in later sections may cause the reader to reach
a preliminary conclusion for himself that reviews A, X, #2, and T2 of the
4 papers are inter-linked by so many similarities that they were probably
written by the same person. Similarly, the reader may conclude that reviews
B, Y, #1, and T3 are inter-linked by so many similarities that they were
probably written by the same person. Of course, if the same person reviewed
different conference and journal papers, there is no plagiarism for those
papers. It is bad practice for a single reviewer to excessively review
a single author's work, but it would be no surprise if such reviews resembled
one another.
6.1. There are only 2 people in the overlap between the Machine Learning
Conference and the Evolutionary Computation journal
The 32 members of the editorial board of the Evolutionary Computation
journal (for its first 3 years of existence) were
- Emile Aarts, NV Philips, Eindhoven
- Thomas Baeck, University of Dortmund
- Richard K. Belew, Univ. California - San Diego
- Lashon Booker, The MITRE Corporation, McLean, Virginia
- Yuval Davidor, Schema Evolutionary Algorithms Ltd., Israel
- Lawrence Davis, TICA Associates
- Hugo de Garis, ATR, Kyoto
- Marco Dorigo, Université Libre de Bruxelles
- Larry J. Eshelman, Philips Laboratory, New York
- Terry Fogarty, University of the West of England
- David Fogel, Natural Selection, Inc.
- Lawrence Fogel, ORINCON Corporation
- Stephanie Forrest, University of New Mexico
- David E. Goldberg, University of Illinois - Urbana
- John Holland, University of Michigan
- John Koza, Stanford University
- Reinhard Manner, University of Mannheim
- Bernard Manderick, Erasmus University of Rotterdam
- Worthy Martin, University of Virginia
- Jean-Arcady Meyer, Ecole Normale Superieure, Paris
- Zbigniew Michalewicz, University of North Carolina
- David Montana, BBN Systems and Technologies
- Gregory Rawlins, Indiana University
- Rick Riolo, University of Michigan
- David Schaffer, Philips Laboratory, New York
- Hans-Paul Schwefel, University of Dortmund
- Robert Elliott Smith, University of Alabama
- Steven F. Smith, Carnegie Mellon University
- Gilbert Syswerda, Optimax Systems Corporation
- Michael Vose, University of Tennessee
- Darrell Whitley, Colorado State University
- Stewart Wilson, Rowland Institute for Science
In addition, the Editor-In-Chief of the journal is Kenneth DeJong of Code
5510 of the Naval Research Laboratory (and George Mason University).
The 3 associate editors of the journal handle different geographic areas;
they are
- John Grefenstette, Naval Research Laboratory, Washington, who is the
North American Associate Editor
- Heinz Muehlenbein, GMD, Germany
- Hiroaki Kitano, NEC Corporation, Tokyo
The 24 members of the program committee of the Seventh International Conference
on Machine Learning (to which my paper No. 151 on empirical discovery and
my paper No. 152 on optimal control strategies were submitted) were
- Ray Bareiss, Vanderbilt University
- Gerald DeJong, University of Illinois at Urbana-Champaign
- Kenneth DeJong, Naval Research Laboratory (and George Mason University)
- Brian Falkenhainer, Xerox Palo Alto Research Center
- Douglas Fisher, Vanderbilt University
- John Grefenstette, Naval Research Laboratory
- Russ Greiner, University of Toronto
- Kristian Hammond, University of Chicago
- Rob Holte, University of Ottawa
- Michael Kearns, Massachusetts Institute of Technology
- John Laird, University of Michigan
- Steven Minton, NASA Ames Research Center
- Tom Mitchell, Carnegie Mellon University
- Steven Muggleton, The Turing Institute
- Michael Pazzani, University of California at Irvine
- Lenny Pitt, University of Illinois at Urbana-Champaign
- J. Ross Quinlan, University of New South Wales
- Larry Rendell, University of Illinois at Urbana-Champaign
- Jeffrey Schlimmer, Carnegie Mellon University
- Alberto Segre, Cornell University
- Jude Shavlik, University of Wisconsin
- Devika Subramanian, Cornell University
- Richard Sutton, GTE Laboratories
- Chris Watkins, International Financial Markets Trading, Ltd.
In addition, the chairmen of the conference were Bruce Porter and Raymond
Mooney of the University of Texas at Austin.
There are only 2 people in the overlap between the Machine Learning Conference
and the Evolutionary Computation journal, namely
- Kenneth DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
Of course, this observation concerning these 2 overlapping individuals does
not alone establish the identity of the plagiarizers involved here.
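The overlap claimed above can be checked mechanically as a set intersection. The sketch below uses abbreviated versions of the two rosters printed in this section (only a handful of names from each list are included, purely for illustration); running it against the full rosters would work the same way.

```python
# Abbreviated rosters, taken from the membership lists printed above.
# (Subsets chosen for brevity; the full lists appear earlier in this section.)
ecj_editorial_board = {
    "Kenneth DeJong", "John Grefenstette", "John Koza",
    "David E. Goldberg", "John Holland", "Stephanie Forrest",
}
mlc_program_committee = {
    "Kenneth DeJong", "John Grefenstette", "Gerald DeJong",
    "Tom Mitchell", "Richard Sutton", "J. Ross Quinlan",
}

# The overlap between the two pools is simply their set intersection.
overlap = ecj_editorial_board & mlc_program_committee
print(sorted(overlap))  # ['John Grefenstette', 'Kenneth DeJong']
```

The same intersection applied to the full rosters yields exactly the 2 names identified in the text.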
6.2. There are only 2 of the 24 people who were reviewers for the Machine
Learning Conference who are involved with the field of evolutionary computation
There is only a small overlap in subject matter between the fields of machine
learning (ML) and evolutionary computation (EC).
The Machine Learning Conference encompasses many different machine learning
(ML) technologies including, among others, decision trees, reinforcement
learning (RL), inductive logic programming (ILP), and evolutionary computation
(to the extent that it is applied to machine learning problems).
The field of evolutionary computation (EC) encompasses many different technologies
including genetic algorithms (GA), genetic programming (GP), classifier
systems (CFS), evolution strategies (ES), and evolutionary programming (EP).
Everyone familiar with the field of evolutionary computation and genetic
algorithms will instantly recognize that only 2 of the 24 members of the
program committee (and 0 of the 2 chairmen) of the Machine Learning Conference
are even remotely associated with the field of evolutionary computation.
Those 2 people are
- Kenneth DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
None of the other 22 members of the program committee (or either
of the 2 chairmen) have ever, to my knowledge, written a single paper
about evolutionary computation and genetic algorithms. I believe I am correct
in saying that none even attended one of the major conferences on genetic
algorithms or evolutionary computation.
Indeed, the 2 people from the evolutionary computation community gained
appointment to the program committee of the Machine Learning Conference
primarily because they were actively involved in the specialized field of
evolutionary computation (just as most of the other people gained appointment
to the MLC program committee because they were actively involved in other specialties
within the overall field of machine learning).
The most common way of assigning papers to reviewers at a conference whose
program committee consists of people representing many specialized technologies
is to assign a paper to recognized specialists in the subject matter of
the paper. If that were done for my 2 papers at the Machine Learning Conference,
then reviews A, B, X, and Y were written by the 2 specialists in evolutionary
computation.
Sometimes a paper is assigned to one person who is a specialist in the subject
matter of the paper and one non-specialist. It would be unusual not to assign
a paper on genetic programming to at least 1 of the 2 specialists in evolutionary
computation. Indeed, failure to do so would negate the primary reason for
having these 2 active practitioners of evolutionary computation on the MLC
program committee in the first place.
Thus, the reader may come to the conclusion that it is appropriate to tentatively
focus extra attention on the only 2 people on this list of Machine Learning
Conference reviewers who are involved with the field of evolutionary computation.
Of course, this observation concerning these 2 overlapping individuals does
not alone establish the identity of the plagiarizers involved here.
6.2.1. Reviewers A and X of my MLC papers were knowledgeable about evolutionary
computation
The comments made by the reviewers A and X of my two MLC papers indicate
that they are very familiar with the field of genetic algorithms and evolutionary
computation.
Reviewer X of my MLC paper on optimal control strategies refers to
- genetic algorithm standards
(Emphasis added).
Reviewer A of my MLC paper on empirical discovery was "extremely suspicious":
- For one experiment, excellent results are claimed to appear within
the first nine generations. This is extremely suspicious, unless
the choice of functions to be used in the constructions of the concepts
practically guarantees success. In order to judge, it would be necessary
to see the results compared against an alternative search technqiue,
perhaps even random search.
- The paper completely lacks any discussion of limitations of the method.
This also reduces the quality of the paper.
(Spelling error in "technqiue" in original. Emphasis added.)
How likely is it that a generalist with no personal experience in using genetic
algorithms would refer to "genetic algorithm standards"?
How likely is it that a generalist with no personal experience in using
genetic algorithms would assert that 9 generations are "extremely suspicious"?
Also, reviewer X is suspicious about the achievement of a solution within
"46 generations."
- The papers claims that optimal control strategies were evolved within
46 generations - extremely quickly by genetic algorithm standards. One
suspects that the search space defined by the functions is dense
with solutions. It would help to see comparison with another search method,
even random search, on the same search space. The data provided is insufficient
to judge the merits of this approach.
- There is no discussion of the limitations of the method, or of
directions for further research.
(Emphasis added).
How likely is it that a generalist with no personal experience in using
genetic algorithms would assert that 46 generations are suspicious?
As previously mentioned, only 2 of the 24 members of the program committee
of the Machine Learning Conference (and neither of its chairmen) are even
remotely associated with the specialized field of evolutionary computation
and genetic algorithms. Only 2 of the 24 have ever, to my knowledge, written
a paper about genetic algorithms. Those 2 people are
- Kenneth DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
Of course, this observation concerning these 2 overlapping individuals and
their familiarity with evolutionary computation does not alone establish
the identity of the plagiarizers involved here.
6.2.2. Reviewer X of my MLC paper is conversant with the comparative number
of lines of computer code in various computer implementations of the genetic
algorithm
Reviewer X of my MLC paper on optimal control strategies seems to be very
conversant with the number of lines of computer code in various computer
implementations of the genetic algorithm. He says,
- The presentation suffers from the presence of irrelevancies such
as the number of lines of Common Lisp code in the program (although
the number seems enormous compared to other implementations of genetic
algorithms), and the kinds of boards in the author's Mac II).
(Emphasis added).
John Grefenstette is author of a widely used computer program (called "Genesis")
in the C programming language for the genetic algorithm. This computer program
has been distributed, along with a detailed manual, for a number of years
by Grefenstette. Grefenstette is also founding curator of the public FTP
site and Genetic Algorithm Archive (GA Archive) at the Naval Research Laboratory
in Washington where Genesis and various other computer implementations of
the genetic algorithm may be retrieved by e-mail by the public. Thus, Grefenstette
is conversant with the comparative number of lines of computer code in many
different computer implementations of genetic algorithms.
In contrast, although I use Grefenstette's Genesis program in my own university
course on genetic algorithms, I do not know the number of lines of computer
code in Genesis (much less the number of lines in other programs that are
available at the FTP site managed by Grefenstette). These numbers are not
secret and can be obtained by anyone who is interested; however, these numbers
are simply not "top-of-mind" facts for most people.
While most of the 22 other members of the program committee (and the 2 chairmen)
of the Machine Learning Conference have a passing exposure to genetic algorithms,
it is most unlikely that any of them are familiar with the comparative number
of lines of computer code in the many different computer implementations
of the genetic algorithm.
Of course, familiarity with the comparative number of lines of computer
code in various different computer implementations of genetic algorithms
does not alone establish the identity of the plagiarizers involved here.
6.3. There are only 2 people in the overlap between the reviewers for
the MLC and TAI conferences and there are only 2 people in the overlap between
the reviewers for the TAI conference and the editors and editorial board
of the Evolutionary Computation journal
As discussed extensively earlier (Section 5), the only 4 reviewers
for the Tools for Artificial Intelligence (TAI)
conference with any involvement in the field of evolutionary computation
were
- Kenneth DeJong, of Code 5510 of the Naval Research Laboratory
- John Grefenstette, of Code 5514 of the Naval Research Laboratory
- Alan C. Schultz, of Code 5514 of the Naval Research Laboratory
- William M. Spears, of Code 5514 of the Naval Research Laboratory
Moreover, TAI reviewers T1, T2, and T3 (but not T4) rated themselves on
the paper review form as being very familiar with evolutionary computation.
In addition, their detailed and specific comments in their TAI reviews reflect
knowledge of evolutionary computation and support their self-rating that
they are very familiar with this field.
The only 2 people who overlap the TAI conference and the Machine Learning
Conference are
- Kenneth DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
The only 2 people who overlap the TAI conference and the editors and editorial
board of the Evolutionary Computation journal are
- Kenneth DeJong of the Naval Research Laboratory
- John Grefenstette of the Naval Research Laboratory
Of course, this observation concerning these 2 overlapping individuals and
their familiarity with evolutionary computation does not alone establish
the identity of the plagiarizers involved here.
Author: John R. Koza
E-Mail: NRLgate@cris.com