NRLgate -
Plagiarism by Peer Reviewers


Sections 7.21 through 7.24


This page is part of the NRLgate Web site presenting evidence of plagiarism among scientific peer reviewers involving 9 different peer review documents of 4 different journal and conference papers in the fields of evolutionary computation and machine learning.

This page contains sections 7.21 through 7.24 of "Indications that there are only 2 or 3 (as opposed to 9) different plagiarizing reviewers among the peer reviewers at the Machine Learning Conference (MLC), the editors and members of editorial board of the Evolutionary Computation journal (ECJ), and the Tools for Artificial Intelligence conference (TAI)."

Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page

7.21. The absence of normally expected vigilance by the editors of the Evolutionary Computation journal upon receipt of the 3 reviews (similarly raising the question of whether the submitted paper was reviewed entirely inside NRL by non-members of the journal's editorial board)

Since both North American Associate Editor John Grefenstette and Editor-in-Chief Kenneth DeJong of the Evolutionary Computation journal have, from time to time, taught university courses, they would both necessarily have been trained to be vigilant to the remote possibility of collusion and plagiarism.


In addition, Grefenstette and DeJong are very experienced editors and knew that the vast majority of peer reviews are civil (especially when the reviewers feel they have to make a negative recommendation). If these three uncivil peer reviews came into the journal from three independent peer reviewers, the journal's editors must surely have been struck by the coincidence of a common uncivil tone in all three reviews on one particular paper.

Grefenstette and DeJong also knew that the characterization of the introductory paragraphs as "blatant" "advertising" was far from mainstream. They surely must have been struck by the coincidence of the same improbable non-mainstream criticism appearing in two peer reviews on one particular paper.

As experienced and vigilant editors and instructors, both North American Associate Editor Grefenstette and Editor-in-Chief DeJong surely also ought to have been struck by the repeated references to "Koza" (one of the paper's authors), by name, in a hostile and antagonistic way.

Why didn't these two experienced and vigilant editors and instructors take notice of these (and the many other) similarities in these "independent" peer review documents?

Certainly one of the duties of the editors of a scientific journal is to maintain the scientific integrity of the journal by being alert to the possibility (however rare) of improper conduct by peer reviewers.

What did Grefenstette and DeJong then do?

Did they exercise their duty of due care to the authors (and the journal, the readers of the journal, the publisher of the journal, and the scientific community) and make a minimal preliminary inquiry concerning these three reviews?

Did they do anything to try to find out how so many improbable events could have coincidentally occurred with respect to one particular paper?

Surely, mere curiosity alone ought to have aroused some flicker of interest.

Or did they neglect to make any inquiries about these astounding coincidences?

If they did not exercise their duty of due care, why not?

Was the reason that they had no reason to make such an inquiry because they already knew why these coincidences had occurred? Did the editors perhaps already know that these coincidences had occurred because the reviews had been done by one or more of their own subordinates or students within Codes 5510 or 5514 of the Naval Research Laboratory in Washington (and/or by current NRL contractors, former NRL employees, closely associated former students, the Editor-in-Chief's GMU non-NRL students, or persons already known to them to be like-minded as to genetic programming)?

Or, did the editors themselves participate in the one or more meetings or telephone calls that established the common thoughts, words, phrases, complaints, and the hostile tone that the "independent" peer reviewers later incorporated into their reviews?

Or, worse yet, did the editors "simulate" the peer review process by simultaneously acting as both writers and readers of peer reviews and then pass off, by e-mail to me, their own work as legitimate and independent scientific peer reviews?

7.22. An institutional affiliation can be inferred from the fact that reviewer #2 took umbrage over an imagined disparagement and slight of classifier systems, LS-1, and SAMUEL (thereby raising the question of whether the submitted paper was reviewed entirely inside NRL by non-members of the journal's editorial board)

Sometimes an accumulation of small pieces of evidence can cumulatively become very persuasive (particularly in a situation where only a tiny handful of people are involved).


SAMUEL is a machine learning technique developed by Grefenstette at the Naval Research Laboratory and first described in a 1989 paper (Grefenstette 1989).
SAMUEL is a genetic learning system designed for sequential decision problems. The design of SAMUEL builds on DeJong and Smith's LS-1 approach [17] as well as our own previous system called RUDI [7]. Some of the key features of SAMUEL are
- A flexible and natural language for expressing rules,
- Incremental rule-level credit assignment.
- Competition both at the rule level and at the strategy level.
- A genetic algorithm for search [of] the space of strategies.
- A set of heuristic rule learning operators that are integrated with the genetic operators.
Initial studies on an evasive maneuvers task have demonstrated that
- SAMUEL can learn general strategies for evasion that are effective against adversaries with a broad range of maneuverability characteristics, and under a variety of initial conditions (e.g., speed and range) [9],
- SAMUEL can learn high performance strategies even with noisy sensors [13],
- SAMUEL can effectively use existing knowledge to speed up learning [16].

(Emphasis added).
Reviewer #2 challenged the submitted paper by saying,
The claim that ... GP is different because it searches a program space, is inaccurate. Many previous GA-based systems, going back to Smith's LS-1 and classifier systems, search program spaces. The systems LS-1, LS-2 and SAMUEL all have variable length representations, and so they too search spaces for solutions of "unspecified size".

(Emphasis added).
Nowhere does my ECJ paper make the claim that genetic programming is "different" from anything else (much less LS-1, classifier systems, LS-2, or SAMUEL). What the paper does say is
Genetic programming provides a way to find a composition of primitive functions and terminals of unspecified size and shape that may solve a problem.

(Emphasis added).
The paper does not provide even the slightest cause for offense or umbrage to those whose favored technology may be LS-1, classifier systems, LS-2, or SAMUEL.

I would say that if you asked almost anyone in the field of evolutionary computation to mention a long-established system that "searches a program space," they would immediately mention the genetic classifier system and leave it at that.

Classifier systems are well-known and well-studied.

The SAMUEL and LS-1 systems are much less well known. The SAMUEL system was invented by North American Associate Editor Grefenstette at the Naval Research Laboratory and is the subject of numerous published papers by various combinations of authors from the Naval Research Laboratory.

LS-1 is the subject of Stephen Smith's PhD thesis under Kenneth DeJong when DeJong was at Pittsburgh. LS-1 has been extensively studied at the Naval Research Laboratory and is the subject of several published papers by NRL authors.

I doubt that the name "SAMUEL" would leap into the mind of most researchers if they were asked about a system that "searches a program space." I think it is even less likely that the name "LS-1" would come to mind. Instead, I believe that most researchers in the field would simply mention the better-known "classifier system" and leave it at that.

On the other hand, someone who is conversant with SAMUEL might take umbrage about the paper's imagined disparagement of SAMUEL (and its omission of a bibliographic citation to SAMUEL). And someone truly interested in SAMUEL would be sufficiently conversant with SAMUEL to know that it has roots in Stephen Smith's LS-1 system from Pittsburgh.

Not only did SAMUEL and LS-1 come to the mind of reviewer #2, but he took umbrage over an entirely imaginary disparagement and slight to SAMUEL and LS-1.

The only place in the world that I know of where there has been any extensive or continuing research activity on all three of these four technologies (namely, LS-1, classifier systems, and SAMUEL) is the Naval Research Laboratory in Washington.

But what about LS-2?

7.23. An institutional affiliation can be inferred from the fact that reviewer #2 took umbrage over an imagined disparagement and slight of LS-2 (thereby raising the question of whether the submitted paper was reviewed entirely inside NRL by non-members of the journal's editorial board)

Reviewer #2 also took umbrage over the imaginary slight to the LS-2 system.

What in the world is LS-2?

I had never heard of LS-2 when I received the 3 reviews of my submitted ECJ paper from the North American Associate Editor. LS-2 is not mentioned in David Goldberg's 1989 book (which contained a virtually complete bibliography of this field as of the date of its publication and was the only textbook in the genetic algorithms field until recently). At the time, I just guessed (incorrectly) that it was some extension by Stephen Smith of his LS-1 system.

I did not find out what LS-2 was until November 1995.

During the summer of 1995, I asked John Holland, David Goldberg, and Rick Riolo if they knew what LS-2 was. None of them knew. I asked Robert Smith of the University of Alabama (who maintains an up-to-date bibliography on classifier systems) and he did not know.

Who are the published experts in this subfield of evolutionary computation? Considering just the editors and the 32 members of the journal's editorial board, they include the following (and I hope I did not inadvertently omit someone):

Out of curiosity, additional inquiries were made in late December 1995 of several additional experts in classifier systems about whether they had ever heard of LS-2. Not surprisingly, based on these and earlier inquiries, none of the five published experts in the top half of the above list of 10 people had ever heard of LS-2.

Answers included this reply from Stewart Wilson:
I think it exists, but can't remember where. Why don't you ask Steve Smith? sfs@isl1.ri.cmu.edu I think he'll know.
Here is the reply from Rick Riolo:
I don't recall what LS-2 is, or who developed it, unless it was something Smith did later.
That is, neither Holland, Goldberg, Riolo, Robert Smith of the University of Alabama, nor Stewart Wilson knew what LS-2 was when asked. In other words, LS-2 is not widely known even in the subfield that is directly concerned with this type of system.

Quite by accident, in early December 1995, I found one reference to LS-2 in a 1988 article at the National Conference on Artificial Intelligence (AAAI-88) by John Grefenstette (1988a) reporting on work on LS-2 done at the Naval Research Laboratory by Grefenstette.
The clustering problem was partially addressed in LS-2, a system that performs classification of human gait data (Schaffer & Grefenstette 1985).
It would appear, from this self-citation, that LS-2 is the joint work of David Schaffer and John Grefenstette. The cited 1985 article was co-authored by David Schaffer and Grefenstette during the time when they both were at Vanderbilt University.

But, then, in late December 1995, I learned that LS-2 was actually the invention of David Schaffer alone. In fact, LS-2 was the subject of Schaffer's 1984 PhD thesis at Vanderbilt University.

So who knows about LS-2? Well, obviously David Schaffer. And, those who currently work at NRL or who recently worked at NRL would be familiar with LS-2.

For example, Lashon Booker, a former NRL employee now at a military contractor in suburban Virginia, replied to the e-mail survey about LS-2 by saying,
LS-2 is a rule-based system that uses vector-valued genetic search to solve multiclass pattern discrimination tasks. It was developed by Dave Schaffer and is described in his dissertation. I don't know if there are other citations for this work. Ask Dave, his email address is ds1@philabs.philips.com
Although Stephen Smith (inventor of LS-1) of Pittsburgh did not reply, one can assume he is familiar with succeeding work that was named in honor of his own PhD thesis work.

LS-2 is known, apparently, mainly to the small handful of people (such as the 5 people in the bottom half of the above list), all of whom are employees of NRL, former employees of NRL, co-authors of published papers with Grefenstette, or former PhD students of DeJong.

The bottom line is that LS-2 is a relatively unknown system.

To further narrow things, Lashon Booker volunteered the information to me in Pittsburgh in July 1995 that he had never reviewed any paper of mine at any time. I did not ask this question of Lashon; the information was simply volunteered in the course of a conversation about the 1995 Machine Learning Conference. Thus, one of the five people in the bottom half of the above list need not be considered further.

Reviewer #2 is not only familiar with LS-2, he unjustifiably takes offense and umbrage at an imagined disparagement of four specific systems. The only place in the world that I know of where there has been any extensive or any continuing research on all 4 of these systems (LS-1, classifier systems, LS-2, and SAMUEL) is Codes 5510 and 5514 of the Naval Research Laboratory in Washington.

Although anything is possible in the age of telephones and e-mail, one must initially be inclined to think that the two people on the list who are neither located in Washington nor actually employed by NRL (i.e., David Schaffer in upstate New York and Stephen Smith of Pittsburgh) would be relatively less likely to be emotionally involved in whether or not some errant paper slighted these particular four technologies that occupied so much of NRL's time, money, and "technology transfer" efforts.

So, if, for sake of discussion, the bottom half of the above list were narrowed from five to two, the people who would be familiar with, and emotionally involved with, LS-2 consisted of Grefenstette and DeJong and, of course, their various subordinates at Codes 5510 and 5514 of NRL.

One begins to get the strong feeling that there is some connection between reviewer #2 and Codes 5510 and 5514 of the Naval Research Laboratory.

But, as previously mentioned, none of the 32 advertised members of the editorial board of the journal are located at the Naval Research Laboratory. Only the Editor-in-Chief and North American Associate Editor Grefenstette (and their subordinates) are located at the Naval Research Laboratory.

7.24. An institutional affiliation can be inferred from the fact that reviewer #2 overreached for gratuitous bibliographic citations to work at the Naval Research Laboratory (thereby raising the question of whether the submitted paper was reviewed entirely inside NRL by non-members of the journal's editorial board)

Reviewer #2's references to LS-1, classifier systems, LS-2, and SAMUEL can be interpreted as a not-too-subtle suggestion that authors who desire to have their papers published should include bibliographic citations to these four technologies when revising their paper prior to publication.
Any doubt about the intent of this suggestion is eliminated by the list of six conditions (discussed in detail in section 3) for making this paper acceptable in the North American Associate Editor's e-mail message. Grefenstette said,
5. A more complete discussion of related work is needed.
Generally, when an author receives a suggestion from a colleague that he omitted a reference from the bibliography of his paper, he is immediately concerned that he may have committed the faux pas of omitting a bibliographic citation that he should have included.

Of course, when the suggestion comes from an anonymous peer reviewer, the question arises as to whether the suggestion is a legitimate and helpful suggestion coming from a scientific colleague who is genuinely concerned that the bibliographic omission might potentially embarrass the author if the author's paper got published without it. The other possibility, of course, is that the anonymous suggestion is coming from someone with a self-serving interest in receiving an undeserved citation.
In this instance, North American Associate Editor Grefenstette is the inventor of SAMUEL, co-author of a research paper on LS-2, and has worked with, and written extensively about, both classifier systems and LS-1.

The Naval Research Laboratory --- like academic institutions and most commercial research and development laboratories --- considers the number of papers published by its scientific researchers as one measure of their performance. However, most such institutions are less concerned about bibliographic citations to the papers of their own personnel. In contrast, NRL is known to place considerable emphasis on the number of citations made by outside authors to the work of its personnel. The argument in favor of this policy is that citations may be a better indication of actual usage (i.e., actual transfer of technology). Accordingly, NRL diligently catalogs and documents the number of bibliographic citations by outsiders to the work of NRL personnel. In fact, NRL employees directly contact researchers around the country to collect instances where NRL authors may have been cited.

For example, I have been telephoned by NRL to inquire about any citations to NRL authors that have appeared in my published papers. Needless to say, anyone receiving a telephone call from a government agency checking up on bibliographic citations is left with the clear impression that these bibliographic citations are very important to someone.

In addition, NRL even advertises to find every last citation to its own authors, as shown by the following entry in the GA Digest (the biweekly e-mail newsletter on genetic algorithms published by NRL).
-Genetic Algorithms Digest Monday, August 26 1991 Volume 5 : Issue 26
...

From: wiley@aic.nrl.navy.mil (Cathy Wiley)
Date: Thu, 15 Aug 91 8:18:45 EDT
Subject: Request for Grefenstette citations

I am Cathy Wiley, Librarian for the Navy Center for Applied Research in
AI, and I have been asked to perform a citation count for John
Grefenstette. If you have published a paper that included a reference
to one or more of Dr. Grefenstette's papers, please send me:

(1) a copy of your paper by FAX to (202) 767-3172,

OR

(2) the complete citation of your paper, and the list of the references
to Dr. Grefenstette's papers in your paper's bibliography, by email to:
WILEY@AIC.NRL.NAVY.MIL,

OR

(3) a copy of your paper by surface mail to:

Cathy Wiley
Librarian, NCARAI
Code 5510
NRL
Washington, DC 20375-5000

Thanks for your help.

- Cathy Wiley

(Emphasis added).
Of course, in the case of my submitted ECJ paper about automatically defined functions in genetic programming applied to electronic circuit design, there isn't the slightest legitimate reason to include a bibliographic reference to LS-1, classifier systems, SAMUEL, the little-known LS-2, or any other work or author of the Naval Research Laboratory. (By the way, I did mention classifier systems, LS-1, and SAMUEL in the historical section of my 1992 book describing previous research work concerning systems that search "a program space").

There is no sense in which genetic programming is derivative of, or dependent upon, any of these four technologies that NRL has spent so much time and taxpayer money studying over the years (and so much time and taxpayer money promoting with hundreds of talks to scientific and academic audiences around the country).

When an anonymous peer reviewer uses his privileged position to overreach and demand such gratuitous and unjustified bibliographic citations, he creates an inference that his place of employment is the one (and only) institution in the world that is associated with the work that is to be gratuitously cited.

One begins to get the strong feeling that there is some connection between reviewer #2 and Codes 5510 and 5514 of the Naval Research Laboratory.

But, as previously mentioned, none of the 32 advertised members of the editorial board of the journal are located at the Naval Research Laboratory. Only the Editor-in-Chief and North American Associate Editor Grefenstette (and their subordinates) are located at the Naval Research Laboratory.

Could it be that my ECJ paper never got out of the Naval Research Laboratory in Washington and was, therefore, reviewed entirely inside NRL by non-members of the editorial board?



Author: John R. Koza
E-Mail: NRLgate@cris.com

Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter to the Evolutionary Computation journal
Go to top of Previous Page
Go to top of Next Page