NRLgate -
Plagiarism by Peer Reviewers


Complaint Letter to the editorial board of the Evolutionary Computation journal


This page is part of the NRLgate Web site presenting evidence of plagiarism among scientific peer reviewers involving 9 different peer review documents of 4 different journal and conference papers in the fields of evolutionary computation and machine learning.

This page contains the Complaint Letter to the editorial board of the Evolutionary Computation journal.


Go to NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to top of Next Page

E-mail --- September 15, 1996
TO:
Editorial board of the Evolutionary Computation journal
CC: Editor-In-Chief and Associate Editors
FR: John R. Koza <NRLgate@cris.com>

This letter is a complaint concerning plagiarism by scientific peer reviewers in 9 peer review documents for 4 different journal and conference papers in the fields of genetic algorithms and machine learning.

This letter calls for an impartial investigation that will lead to a definitive determination of the facts by means of binding arbitration between the journal's editorial board and myself, conducted under the auspices of the American Arbitration Association by a mutually agreeable retired federal judge.

A first example of plagiarism occurs in the 2 peer reviews that I received from the International Machine Learning Conference (MLC) for my submitted paper applying genetic programming to empirical discovery. In the space of two very short reviews (only 138 and 218 words), there is lock-step agreement between the peer reviewers (whom I will call "A" and "B"). The numerous lock-step similarities indicate that reviewer B had the entire text of reviewer A's already-written review in front of him when he constructed his sycophantic review document. Even a few such similarities in a 138-word review and a 218-word review would alone be sufficient reason to warrant investigation; however, there are 16 lock-step similarities in these 2 reviews.

A second example of plagiarism occurs in the 2 reviews that I received from the Machine Learning Conference for a different paper applying genetic programming to optimal control strategies. There are numerous instances of lock-step agreement between the peer reviewers (whom I will call "X" and "Y").

A third example of plagiarism among reviewers appears in reviews #1, #2, and #3 of my paper submitted to the Evolutionary Computation journal (ECJ) on applying genetic programming to electrical circuit design.

The fact that one ECJ peer reviewer had the entire text of another already-written review in front of him when he wrote his review is textually established because REVIEWER #2 INADVERTENTLY REFERRED, IN HIS REVIEW, TO A SPECIFIC COMMENT MADE BY REVIEWER #1 IN HIS REVIEW. Also, the imported comment contained a distinctive scientific term that was used in a colloquial (and incorrect) way.

There is also evidence on the face of the documents that reviewer #1 for the Evolutionary Computation journal made his already-written review available to reviewer #2 by electronic means. This improper transfer is shown by the abnormal pattern of the > symbols in the left margin of the review documents that I eventually received from the journal. (In e-mail systems, these automatically produced >'s distinguish "received text" from the recipient's reply to the "received text"). The telltale electronic >'s are abnormal for reviewer #1's responses to the questions on the ECJ's paper review form. The >'s show that reviewer #1's responses to the questions on his paper review form had been dispatched earlier as outgoing e-mail to reviewer #2. Reviewer #2 paraphrased this "received text" in order to make his review look different and then "replied to" reviewer #1. Reviewer #2 sent his plagiarized review (along with reviewer #1's already-written review) as an e-mail "reply" (thereby causing the telltale >'s to appear in an abnormal way for review #1).
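
[For readers less familiar with e-mail quoting conventions, the following is a minimal illustrative sketch, in Python, of how lines prefixed with > can be separated mechanically from a recipient's own newly typed text. The sample lines and variable names are invented placeholders, not text from the actual review documents.]

    # Illustrative sketch only: the review text below is invented, not the actual ECJ review.
    # Most e-mail programs prefix each line of previously sent ("received") text with ">"
    # when it is quoted inside a reply; the recipient's own newly typed lines carry no prefix.

    sample_reply = """\
    > SECTION 1 (SUMMARY): The paper applies genetic programming to circuit design ...
    > SECTION 2 (RELEVANCE): The topic falls within the scope of the journal ...
    I agree with the assessment quoted above and have paraphrased it in my own form.
    """

    for line in sample_reply.splitlines():
        if line.lstrip().startswith(">"):
            print("QUOTED (sent earlier as outgoing e-mail):", line)
        else:
            print("RECIPIENT'S OWN TEXT:                    ", line)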

Further confirmation of the fact that one peer reviewer had the entire text of another already-written review in front of him when he wrote his review comes from a comparison of the opening few sentences of ALL 5 SECTIONS of the paper review form for ECJ reviews #1 and #2. Although I had been puzzled by ECJ reviews #1, #2, and #3 since I first received them from the journal, it was only recently that I considered the possibility of plagiarism (as opposed to bias) among the reviewers and analyzed these documents side-by-side on a section-by-section basis. As soon as I did this, the numerous similarities in the choice of words, phrases, sentence structure, and thoughts in ALL 5 SECTIONS of these ECJ reviews became apparent.
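
[As an illustration of the kind of side-by-side, section-by-section comparison described above, the following sketch scores the surface similarity of two short passages using Python's standard difflib module. The two sentences are invented placeholders, not text from the actual reviews.]

    # Illustrative sketch only: the two sentences below are hypothetical stand-ins.
    # difflib.SequenceMatcher gives a rough measure of how much wording two passages share.
    from difflib import SequenceMatcher

    review_1_section = "The author should compare the approach against standard benchmark problems."
    review_2_section = "The author ought to compare this approach against the standard benchmark problems."

    ratio = SequenceMatcher(None, review_1_section, review_2_section).ratio()
    print(f"surface similarity: {ratio:.2f}")   # values near 1.0 indicate nearly identical wording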

The time sequence of the plagiarism indicated by the telltale electronic >'s is consistent with the time sequence indicated by the aforementioned Freudian slip in which ECJ reviewer #2 made a specific reference in his review to the contents of review #1.

In addition, there are numerous other lock-step similarities among ECJ reviews #1, #2, and #3.

A fourth example of plagiarism occurs in 2 of the 4 reviews that I received from the Tools for Artificial Intelligence (TAI) conference for a paper that applied genetic programming to pursuer-evader games. There is plagiarism among TAI reviewers T2 and T3.

The newly created WWW site (URL address below) presents most of the detailed evidence and may be viewed by members of the editorial board of the Evolutionary Computation journal.

It is conceivable that ECJ reviewers #1, #2, and #3, MLC reviewers A, B, X, and Y, and TAI reviewers T2 and T3 of the 4 different conference and journal papers involved here are 9 different people. The existence of 4 separate groups totaling 9 plagiarizing reviewers would not make the offense of plagiarism any less serious; it would just mean that there is a surprisingly large number of separate groups of wrongdoers within the relatively small fields of evolutionary computation and machine learning.

However, reviews A, X, #2, and T2 of the 4 papers are inter-linked by numerous similarities (indicating that they were probably written by the same person). Moreover, reviews B, Y, #1, and T1 of the 4 papers are inter-linked by numerous similarities (again indicating a common authorship). In addition, there are linkages between reviews T3 and #3. These linkages suggest a single group of 3 different plagiarizing peer reviewers, rather than 9 different wrongdoers.

There were only 2 people in the world who were reviewers for MLC and ECJ, for MLC and TAI, and for ECJ and TAI. The small overlap of the pools of reviewers for the Machine Learning Conference, the Evolutionary Computation journal, and the Tools for Artificial Intelligence conference (supported by other evidence) points to the identities of the plagiarizing reviewers.

The diagram below presents plagiarism as horizontal dashed lines (with added arrows when the time sequence of the plagiarism can be established from the documents themselves) and presents likely common authorship of reviews as vertical lines. (It appears in clickable form on the home page of the WWW site).

TWO REVIEWERS IN COMMON      THIRD PEER
WITH MLC, ECJ, TAI           REVIEWER

   B   <---   A
   |          |
   Y   <---   X
   |          |
   #1  --->   #2  ---   #3
   |          |         |
   T1         T2  ---   T3

MLC reviewers B and Y and ECJ reviewer #1 misspelled a certain technical word 9 times out of 9 within reviews B, Y, and #1. It is worth mentioning, in passing, that one of the only two people in the world who reviewed for all three of MLC, ECJ, and TAI misspelled this same word in two different PUBLISHED conference papers (for conferences where the author himself directly prepares the CAMERA-READY PAGES for printing in the proceedings).

Common authorship between the 138-word MLC review B and the 31-word TAI review T1 is established by the fact that both B and T1 directly quote the same 3 words from both submitted papers. However, the 3 quoted words did not appear anywhere in either paper! Instead, the actual phrase in the papers (which differed by one word from the quoted phrase) was directly and correctly quoted in MLC review A. In reviewer B's hasty plagiarism of review A, he mistyped the correct direct quotation from review A into his review. Months later, he reused his own earlier misquotation in his review of my paper for the second conference.

MLC reviews A and X, ECJ review #2, and TAI review T2 are linked by several similar whole paragraphs as well as by the appearance of certain infrequently used words and features (whose rarity is established by means of a computerized database of contemporary peer reviews). Reviews A, X, #2, and T2 are further linked by both words and a similar paragraph to 2 different SIGNED documents. It is worth mentioning, in passing, that the 2 signed documents were therefore apparently written by the second of the only two people in the world who reviewed for all three of MLC, ECJ, and TAI.
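
[The rarity analysis mentioned above can be illustrated with the following sketch. The miniature corpus and the term being checked are hypothetical stand-ins for the actual database of contemporary peer reviews.]

    # Illustrative sketch only: a hypothetical miniature corpus standing in for a real
    # database of contemporary peer reviews. The document frequency of a word or phrase
    # (the fraction of reviews that contain it at all) is one simple measure of its rarity.

    corpus = {
        "review_001": "The paper is well written, but the experimental section is weak.",
        "review_002": "Results are interesting; a comparison with prior work is missing.",
        "review_003": "The contribution is incremental and the evaluation is limited.",
    }

    def document_frequency(term, documents):
        """Fraction of documents in which the term appears (case-insensitive)."""
        hits = sum(1 for text in documents.values() if term.lower() in text.lower())
        return hits / len(documents)

    print(document_frequency("incremental", corpus))   # 0.33 in this toy corpus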

[It is also worth mentioning that only 4 reviewers on TAI's "List of Reviewers" were involved in the field of evolutionary computation; that all 4 were from the Naval Research Laboratory; that 3 of the 4 reviewers of my TAI paper rated themselves on the paper review form as being very familiar with the topic of my submitted paper (and made EC-specific comments about the paper); that 4 of the 8 accepted papers on evolutionary computation at the TAI conference happened to have been authored by the 4 reviewers from the Naval Research Laboratory; and that one of the accepted NRL papers was on the same subject as my submitted TAI paper, namely pursuer-evader games (evasive maneuvers).]

During the process of inserting synonyms and changing words, the receiving reviewer(s) frequently imported various punctuation errors, scientific mistakes, and other features from the mother review that he was mechanically paraphrasing. The number of careless errors in these different plagiarism events suggests that this improper activity may not be a matter of "only 9 times in a lifetime." Thus, in addition to these 9 reviews involving 4 different conference and journal papers, I am currently analyzing the significant similarities between some other reviews of other papers submitted to conferences in other years (including MLC).

The plagiarism involved here did NOT involve "cutting and pasting" entire sentences of one review in order to construct a second review. Conscious effort was expended with the specific goal of disguising the wrongdoing. The second reviewer used the first already-written review as a template for thoughts, choice of words, punctuation, grammar, and the placement of items within his paper review form. However, in each case, the second reviewer paraphrased and freshly retyped the first review in order to try to make his review look different (by synonym insertion and other perturbations). The transformation of one already-written review into its paraphrased form required conscious effort (albeit somewhat mechanical) expended by a person who knew he was acting improperly at the time he did it. Of course, the supplying of one already-written peer review document to another peer reviewer also required conscious action executed by a person who knew he was acting improperly at the time he did it.

My specific complaint to the editorial board of the Evolutionary Computation journal, at this time, is that the journal passed off plagiarized peer review documents to the submitting author (me) as if these peer reviews were the work of legitimate and independent scientific peer reviewers. This passing off of fraudulent documents was conducted by electronic transmission means. My complaint is directed to the journal because it is the entity to which I submitted my paper and it is the entity that is responsible for the fraudulent documents that were issued in its name. I was entitled to honest and ethical treatment when I submitted a scientific paper to a scientific journal. I did not get that.

Plagiarism among peer reviewers is an offense that goes to the heart of the integrity of the scientific peer review process.

Documents A, B, X, Y, #1, #2, #3, T2, and T3 establish that serious scientific misconduct has occurred. Up to 9 different persons (but probably 3) violated the trust reposed in them by the Evolutionary Computation journal, the Machine Learning Conference, and the Tools for Artificial Intelligence conference.

SOMEONE created these plagiarized documents.

The obvious question is who.

The identity of the trusted persons who created these plagiarized peer review documents needs to be definitively established. These identities are, of course, known to the founding Editor-In-Chief and the North American Associate Editor of the Evolutionary Computation journal who handled my submitted paper and who also originated the electronic transmission containing the review documents that I received by e-mail from the journal. The information about the identities of the plagiarizing reviewers is part of the records of the journal and the property of the journal.

Plagiarism can sometimes be established judicially merely from the face of the documents (e.g., plagiarism of literary and musical works, software piracy, judicial determination of authorship and authenticity of questionable documents, etc.). Of course, additional evidence may be desirable or necessary. Since the founding Editor-In-Chief and the North American Associate Editor both happened to use e-mail addresses at Codes 5510 and 5514 of the Naval Research Laboratory in Washington in the electronic transmission of these fraudulent documents to me, much of the relevant additional documentary evidence that needs to be discovered for this case also happens to be necessarily preserved as government documents in accordance with statutory requirements and government record retention policies.

An unmistakably strong deterrent message must be sent to the scientific community in the fields of genetic algorithms and machine learning --- namely: THERE IS ZERO TOLERANCE FOR PLAGIARISM AND DISHONESTY IN THE PEER REVIEW PROCESS.

To accomplish this, I propose that the task of making a definitive determination that an offense has been committed, and of making a definitive identification of the wrongdoers, be put into the hands of a person who is impeccably impartial. Since my complaint is against the journal, this proposal can be implemented by a motion at the editorial board of the Evolutionary Computation journal that the journal submit these issues of scientific integrity between the journal and myself to binding arbitration by a mutually acceptable retired federal judge under the auspices of the American Arbitration Association. My proposal is to let this matter be settled by an impeccably impartial person based solely on facts and evidence. I am initiating this proposal for arbitration primarily for the plain reason that the journal has no established complaint resolution procedure for dealing with scientific misconduct.

The scientific community believes it has "high standards" in its peer review process. However, the reality is that there is no mechanism for ongoing supervision of those standards. More importantly, there is no established mechanism for deciding on the merits of a complaint that is based on facts and evidence. Misconduct can occur in any area of human activity. The people from governmental, commercial, and educational institutions who do peer reviewing in the fields of evolutionary computation and machine learning are not fundamentally different from other human beings in that regard. An unsupervised, unaccountable, secretive process involving perceived significant temptations and built-in conflicts of interest, operating in a lawless environment with a "circle the wagons" culture, is guaranteed to produce ethical violations. The question is not "if," but "when," "where," and "who." Wrongdoers do not usually voluntarily confess to their misconduct, particularly if it is serious. So why, then, is there no established mechanism for supervision or accountability? In particular, why should the victim of wrongdoing bear the burden of proposing a one-off complaint resolution procedure and winning adoption of it from a group that will typically include the wrongdoers? Certainly, the maintenance of high standards cannot depend on the generous willingness of wrongdoers to voluntarily accept accountability for their own actions.

The sole goal for a scientific journal in this matter (and all matters) should be determining the truth. The existence of plagiarism involving 9 reviews for 4 different conference and journal papers means that if the Evolutionary Computation journal has any aspirations to scientific credibility, it is going to be necessary to clear the air with the truth. If certain individuals have acted in a way that the journal believes was inconsistent with its standards, the journal should step forward and say so. The journal should say that if misconduct has occurred, the journal wants the particular individuals who committed the misconduct to be identified; that the journal considers these particular individuals responsible for their own conduct; that the journal considers their conduct to be improper; and that the journal disassociates itself from any such misconduct by these particular individuals. The journal has the choice, at this moment, of speaking up and saying that it is in favor of the truth. My proposal to let this matter be settled by an impeccably impartial person based solely on factual evidence gives an advantage only to whichever side's position is firmly based on hard documentary evidence and the truth.

I cannot imagine what argument can possibly be raised against an impartial determination of the truth concerning serious offenses of plagiarism at a scientific journal. If there has been no misconduct in the scientific peer review process in the fields of evolutionary computation and machine learning, I am sure the entire editorial board of the Evolutionary Computation journal will be eager to see an impartial judge establish the erroneousness of the charges and to clear the air concerning incorrect charges. If there has been misconduct, I am also sure that the journal will want to see that a strong deterrent message is sent, that the truth is brought out, that justice is done, and that this entire matter is settled with finality. If there has been misconduct in which the founding Editor-In-Chief and North American Associate Editor were merely the innocent and unknowing transmitters of the plagiarized review documents that were provided to them by an arms-length group of 3 colluding reviewers from amongst the 32 geographically dispersed members of the editorial board, I am sure that both of these editors will be eager to see the responsibility squarely placed on the wrongdoers who are dishonoring the journal with which we all have been associated.

I believe there is only one satisfactory remedy for the problem of peer reviewing in the fields of genetic algorithms and machine learning --- the truth.

The editorial board may be interested in knowing how this issue of plagiarism has arisen. As is well known, I have previously complained that the peer reviewing process in the field of genetic algorithms is overconcentrated in a small group of like-minded people who have acquired a disproportionately and inappropriately large voice in the process. Since I viewed the problem as one of bias and overconcentration, it had not occurred to me (until the last few months) that plagiarism between peer reviewers might be involved. In the last few months, events caused me to start reexamining various peer review documents in my files. This reexamination of past reviews for possible plagiarism was triggered specifically by the stream of e-mail messages that have been circulating amongst this journal's editorial board since December 1995 voicing the fierce opposition of the journal's editors to 2 ordinarily mundane requests.

The first request was that the editor provide an accounting (without identifying any particular paper reviewed by any particular reviewer) to the journal's governing editorial board showing the number of reviews that each reviewer wrote in each of the first 3 years of the journal's operation. This requested high-level gross accounting covers the 125 submitted papers handled during the first 3 years (i.e., approximately 375 reviews). My paper was but 1 of 125 papers. Everyone on the editorial board already knows that 50% of the 32 members of the advertised editorial board accounted for only about 5% of all the reviews made for this journal over this 3-year period. Anyone familiar with the remaining 50% of the 32 people on the editorial board can readily figure out that a hefty additional fraction of this remaining 50% will account for only relatively few of the remaining reviews. The final small sliver of the editorial board may, conceivably, have produced all of the remaining reviews; however, that seems unlikely. Thus, the issue raised by the requested tabulation is that there is uncertainty as to the origins of somewhere around 300 of the 375 reviews for the 125 submitted papers.
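
[The arithmetic behind the preceding paragraph can be laid out as follows. The figures for the total papers, the reviews per paper, and the 5% share come from the letter itself; the allowance for the "relatively few" additional reviews is an assumption made purely for illustration.]

    # Illustrative arithmetic only. Figures marked "from the letter" are taken from the
    # paragraph above; the 55-review allowance is an assumption for illustration.

    total_papers = 125                      # from the letter: papers handled in the first 3 years
    reviews_per_paper = 3                   # from the letter: approximately 3 reviews per paper
    total_reviews = total_papers * reviews_per_paper          # ~375 reviews

    reviews_by_half_the_board = round(0.05 * total_reviews)   # from the letter: 50% of the 32-member
                                                              # board accounted for only about 5%
    assumed_reviews_by_rest_of_board = 55                     # ASSUMPTION: "relatively few" more

    unaccounted = total_reviews - reviews_by_half_the_board - assumed_reviews_by_rest_of_board
    print(total_reviews, reviews_by_half_the_board, unaccounted)   # 375, 19, ~300 unaccounted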

In addition to the editorial board's lengthy internal controversy over this high-level gross numerical tabulation, there has also been a second multi-month electronic controversy over the even more mundane proposal to publicly thank the people who have reviewed for the journal over the years (by listing them as a group by name, without any numbers, as is commonly done by most other journals and conferences). It is astounding that there could be months of heated controversy over a proposal to thank people who have voluntarily given their time and effort to a scientific journal. After all, the names of the people that the reading public thinks are doing all of the journal's reviewing (i.e., 32 members of the journal's advertised editorial board) have always been publicly disclosed in precisely this widely practiced manner. Obviously, the only new names on such a thank-you list would be those reviewers who are not on the journal's editorial board. These other people (who have not received the gratification of being on the editorial board) are, at the moment, totally unthanked and unacknowledged.

A numerical tabulation, by name and by year, issued to the journal's governing board manifestly has nothing to do with privacy, so the asserted concern about privacy cannot really be a bona fide issue with anyone. A public thank-you of a group (without any numbers) has even less to do with privacy or anything else since the names of the people that the reading public thinks are doing all the journal's reviewing have always been routinely published (indeed, advertised).

So why have these 2 ordinarily mundane tabulations aroused such prolonged controversy and excited such disproportionate sensitivity by the journal's editors?

I learned one thing from some 14 years of direct personal experience in dealing with government employees accustomed to operating secretively in closed and unaccountable environments: When mundane information requests are fiercely resisted on manifestly preposterous grounds, the trail almost always leads to a slippery slope that needs to be followed.

Ordinarily, the worst that either of these 2 requested mundane tabulations could possibly show is the already well-known and undisputed fact that there has been overconcentration in the reviewing process. Overconcentration is bad, but it is not a crime. Consequently, the editors' "drawing the line in the sand" on these 2 mundane requested tabulations suggested the existence of a slippery slope leading somewhere other than mere confirmation of a well-known fact. So what could possibly lie at the bottom of the slippery slope about which the journal's editors are so disproportionately sensitive? The editors' unreasonable position created the inference that the editorial board would be shocked if it ever saw the list of the people who have been doing the reviewing for this journal. This shock might perhaps be triggered by the manifest lack of academic credentials of the people who have been doing the lion's share of the reviewing for this journal. Or, it might be triggered by the manifest lack of arms-length independence from employment supervisors of the people who have been doing most of the reviewing for this journal. Alternatively, this shock might be triggered by the totally inappropriate station of the people who have been doing most of the reviewing for this journal. And, most likely, the board's shock might be further increased by knowledge of the exceedingly small number of names disclosed by these tabulations. Indeed, an exceedingly small number of names creating these 300 unaccounted-for review documents would necessarily excite the board's interest concerning --- to put it delicately --- the circumstances of their creation.

Indeed, the existence of the aforementioned pattern of plagiarism by scientific peer reviewers of the Evolutionary Computation journal and the MLC and TAI conferences raises the question of whether the "circumstances of creation" of these 9 plagiarized peer reviews for my 4 submitted papers were something that happened "only 9 times in a lifetime." If this plagiarism had occurred "only" 9 times, surely these plagiarized review documents would have been constructed with far more care and deception.

I previously complained about "overconcentration" and the existence of a "reviewing factory" in the peer review process in the fields of evolutionary computation and machine learning. I have previously tried to pursue my personal complaints about the reviewing process both indirectly and directly, using every available avenue. I called upon the community to candidly face up to the problem of overconcentration (whose existence and undesirability have always been privately acknowledged by almost everyone). I now realize that a mere administrative overconcentration involving a "reviewing factory" may not have been the underlying problem at all. Instead, the question is whether there is a "plagiarism factory."

It should be recognized that a complaint about plagiarism is very different than a complaint about bias in that it cannot be easily obfuscated with difficult-to-understand issues of scientific opinion. In a case about plagiarism, the central issue is the circumstances of creation of the documents, not the opinions expressed in the documents. It is a matter of analyzing documents, time stamps on e-mail, and other physical evidence.

The subscribers and readers of this journal (few though they are), the submitting authors (who are apparently drying up), the members of the editorial board who have unqualifiedly lent their good names to this journal (but who have had precious little involvement with its operations), and the publisher who has soldiered on beyond the call of duty in supporting the journal's financial losses are all entitled to know the truth about the scientific misconduct at this journal. Certainly, potential future submitting authors to this journal (and to upcoming conferences in the fields of genetic algorithms and machine learning) are entitled to know whether their submitted papers are going to be reviewed by the same 9 (or 3) people who wrote reviews A, B, X, Y, #1, #2, #3, T2, and T3.

Finally, everyone involved should be very clear about the following: I do not think there is anything wrong about complaining about dishonesty. I am not going to be made to feel self-conscious about insisting on the truth. I do not think it is any shame to me to have been the victim of wrongdoing. And, I am not going to be apologetic about the fact that I am going to persist in seeking the truth in this matter until there is an honest resolution.


John R. Koza
WWW: http://www.cris.com/~nrlgate/
PLEASE REPLY TO E-MAIL: NRLgate@cris.com
CC: Distribution list


P. S. The 3 to 9 wrongdoers involved (whoever they may be) could salvage a small amount of honor and avoid further protraction of this matter for themselves, the community, and me by stepping forward now and by (1) acknowledging the fact that they colluded and plagiarized scientific peer reviews of papers concerning genetic programming; (2) resigning all their positions at all journals and conferences; and (3) stating publicly that they will not engage in peer reviewing activity for conferences, journals, books, or funding proposals for the next five years. A non-quibbling acknowledgment of the truth and an unconditional termination of all of their reviewing activity would enable them to concentrate fully on their future research activities.


Author: John R. Koza
E-Mail: NRLgate@cris.com
