NRLgate -
Plagiarism by Peer Reviewers


Sections 10 and 11


This page is part of the NRLgate Web site presenting evidence of plagiarism among scientific peer reviewers involving 9 different peer review documents of 4 different journal and conference papers in the fields of evolutionary computation and machine learning.

This page contains section 10 entitled "Public policy and scientific policy issues" and section 11 entitled "References."


Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter to the Evolutionary Computation journal
Go to top of Previous Page


10. Public policy and scientific policy issues involving the scientific peer review process and the Naval Research Laboratory

The events discussed herein raise several public policy issues as well as issues for scientific journals and conferences.

For background information, see chapter 1 and particularly section 1.6, section 1.7, and section 1.8 on the Naval Research Laboratory.

10.1. Fragility of the scientific peer review process

Governmental and educational research institutions expend large amounts of money on scientific research. Evaluation of scientific research usually requires technical expertise that is not possessed by high-level managers and policy makers. These managers and policy makers must therefore rely heavily on measurements of the merits of scientific research that are made by other scientists.

The peer review process dispenses rewards and punishments to individuals and research groups in the scientific community that are perceived (correctly) as being extremely important to their survival, prosperity, and standing in their fields. Every researcher knows that the outcome of the scientific peer review process translates into direct benefits in the form of retention and advancement in his employment; tenure in academic careers; funding for his own or his group's projects; recognition; and increased ability to hire assistants.

I assume that no one thinks that the people from the governmental, commercial, and educational institutions involved in peer reviewing in the fields of evolutionary computation and machine learning are so fundamentally different from human beings in all other fields of activity that wrongdoing among them is impossible and inconceivable.

An unsupervised, unaccountable, and secretive process involving significant perceived temptations and built-in conflicts of interest, operating in a lawless environment, is guaranteed to produce ethical violations. The question is not "if," but "when," "where," and "who."

The scientific peer review process is generally secretive. There is usually no meaningful supervision. There is generally no mechanism for settling complaints. Instead, complaints are often handled with a "circle the wagons" type of "code of silence" that makes police officers look talkative when it comes to exposing misconduct by fellow officers. While the police at least rely on the overriding "greater good" of public safety to rationalize their corruption, it is not clear what overriding "greater good" motivates scientific researchers to "circle the wagons" and go into a deep state of denial whenever the possibility of scientific misconduct is raised.

10.2. Need for established mechanisms for supervision, accountability, and complaint resolution in the scientific peer review process

Wrongdoers do not usually voluntarily confess to their misconduct, particularly if it is serious.

The scientific community believes it has high standards in its peer review process. However, there is no mechanism for ongoing supervision to verify that these standards are being maintained. More importantly, there is no mechanism for reaching a decision on a complaint that these standards are being violated.

Why then is there no established mechanism for supervision?

Why then is there no established mechanism for fact-finding and accountability?

Why should the victim of wrongdoing bear the additional burden of proposing a one-off dispute resolution procedure as part of the process of complaining?

Why should the wrongdoers directly involved in the complaint participate in a voting process that determines whether or not anything is to be done about their case?

What is needed is a compulsory complaint resolution procedure that does not depend on the wrongdoer's generous willingness to be held accountable for his own misconduct.

The procedure must be automatic. The parties in cases of scientific misconduct will usually be politically active in the scientific hierarchy and totally enmeshed in the circular backscratching system of rewards and punishments dispensed by the peer review process. Indeed, such involvement is the precondition for the ability to commit multiple instances of serious scientific misconduct.

This is simply a way of saying that the scientific community needs the "rule of law," rather than the "rule of men." What we have now is a lawless environment in which there is no compulsory jurisdiction over wrongdoers.

One approach is a federal law requiring some compulsory complaint resolution procedure for all federally funded research (both inside and outside the government). Federal requirements of this type usually become the model for handling such matters throughout the covered field, even in areas that are not within their direct reach. This procedure could be implemented as a requirement for arbitration (as exists, for example, in the securities industry) or by providing access to an administrative or judicial procedure that is both streamlined and respectful of the rights of both accuser and accused. Absent such a federal law, scientific organizations, such as journals and conferences, should adopt a compulsory complaint resolution procedure that does not depend on the wrongdoer's generous willingness to be held accountable for his own misconduct.

The compulsory complaint resolution procedure should not involve people directly involved in the scientific research process. The experience with scientists attempting to act as judges is consistently poor (as would no doubt be the experience of asking people trained in the law to do computer science). The most common outcome of attempts at self-regulation and self-judgment is that the well-known "circle the wagons" and "code of silence" culture of the scientific community immediately takes control of the fact-finding process. A less common, but equally bad, result is a lynch-mob approach that tramples over people's rights.

10.3. Reform of the scientific peer review process at journals and conferences

First, the scientific community needs to acknowledge the reality that an unsupervised, unaccountable, and secretive process involving significant perceived temptations and built-in conflicts of interest, operating in a lawless environment with a "circle the wagons" culture, is guaranteed to produce ethical violations. The question is not "if," but "when," "where," and "who."

Such violations are, of course, regrettable, but they are a fact of life whose existence should not be treated as impossible or inconceivable.

The recognition that a problem exists is the precondition to doing anything about the problem. Curiously, there are few scientific researchers who do not privately acknowledge that there is a problem. Yet, even though most scientific researchers would characterize their work as a search for the truth, they seem singularly incapable of uttering the truth concerning this forbidden subject.
Second, the scientific community should not reward overparticipation in the peer reviewing process. Everyone involved should recognize that the secretive peer reviewing process has a special, differential appeal for potential "control freaks." Often, these would-be Lysenkos and "sci pols" are gross under-producers in terms of scientific research who think that massive amounts of peer reviewing and other administrative activities are a substitute for productive results. Peer reviewing should be strictly an amateur, not professional, sport. The analogy should be to jury service (where most states forbid frequent service and volunteering). In peer reviewing, those who work hard should be generously thanked by the community; those who work too hard should be sent packing.

Third, there is one simple measure that the scientific community can easily implement that will greatly diminish the potential for many of the worst problems in the peer review system: rigorously and untiringly enforce diversity in the reviewing process. There is no reason for any group (whether a university laboratory, a commercial enterprise, or a government agency) to have multiple people involved in the peer review process in a circular back-scratching network of interlocking peer reviewing activity. The likelihood of problems in peer reviewing (and particularly the worst ones) can be greatly diminished by the simple expedient of preventing the concentration of peer reviewing activity in the first place. Rules against overconcentration must be institutionalized because when this principle of diversity is not preserved in an institutionalized way, it becomes extremely easy for people to make exception after exception.

Fourth, the scientific community needs to recognize that it is important to have "government of laws --- not of men." An important advantage of established and institutionalized rules is that they are remembered. Arrangements that are based on casual personal judgments rarely pass the test of time, and each of the lessons must then be painfully relearned. There is a widespread exchange of mutual pleasantries among so-called scientific "colleagues" at periodic get-togethers such as conferences. However, there really are very few scientific researchers who have any possible way of knowing anything about the true character of their geographically dispersed scientific "colleagues."

10.4. Reform at the Naval Research Laboratory

The high-level management of a scientific research institution cannot possibly be conversant with every different scientific field and should recognize the difficulty of the problem that it faces in evaluating internal performance of its own research activities.

The high-level management of the Naval Research Laboratory can do several simple things quickly to deal with the issues raised herein.

First, the management should recognize that its own departments do not necessarily act in accordance with the best interests of the overall mission of the Naval Research Laboratory. The management of the Naval Research Laboratory owes it to the Laboratory, the taxpayers, and the country to understand the fundamental dynamics by which individual departments in a large organization adapt their behavior to survive and prosper in their environment. It is all really very Darwinian. It is very much in the interest of an individual department within the Naval Research Laboratory to give the high-level management the impression (or misimpression) that the particular department is at the cutting edge of research in a particular field and, more importantly, that the commercial, educational, and private research arenas are all neglecting areas that are potentially important to the mission of the Naval Research Laboratory. The successful creation of this image (in the minds of NRL management) is the "fitness measure" that determines the survival and prosperity of a department and therefore is the driving force that dictates the department's behavior.

Second, the management should recognize that scientific peer reviewing is not a proper governmental function in the first place. It is not clear that this activity serves any public purpose whatsoever, that it is an appropriate activity for the government, or that it is an appropriate use of taxpayer money. There is certainly no justification for massive governmental participation in scientific peer reviewing in terms of information acquisition, since the scientific papers involved herein are unclassified research that is freely and readily available. There is certainly no scientific journal or conference that will live or die if it doesn't have governmental reviewers.

Third, if there is any scientific peer reviewing at all by government agencies, it should be strictly limited in volume. A government agency is an essentially limitless resource in comparison to other entities in the academic and scientific community. A good rule is that no scientist should be reviewing more than about 3 to 5 times the number of papers that he submits (on an annual basis). The principle is simply that this approximate rate of reviewing balances the effort expended by other reviewers in reviewing that researcher's own work. Once a person starts reviewing more than this number of papers, he moves out of the category of a scientist who is "paying his own way" in the peer review process and starts to become a "professional" reviewer. If there is any scientific peer reviewing at all by government agencies, it should be limited (as should all reviewing) strictly to the scientist chosen by the journal or conference involved. This appointment should always be non-delegable (especially to subordinate employees or students who may be working in an agency). If there is any scientific peer reviewing at all, there should never be more than one person from the same agency involved with the same conference or journal.
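
The arithmetic behind this rule of thumb can be made concrete with a minimal sketch (illustrative only; the function name and the assumption of roughly 3 to 5 reviewers per submitted paper are not taken from the original text):

    # Minimal sketch (Python). Illustrative assumption: each submitted paper
    # consumes roughly 3 to 5 reviews performed by other scientists.
    def reviewing_quota(papers_submitted_per_year, reviewers_per_paper=3):
        """Approximate number of reviews a researcher 'owes' per year.

        Reviewing about reviewers_per_paper papers for each paper he submits
        balances the reviewing effort that his own submissions consume.
        """
        return papers_submitted_per_year * reviewers_per_paper

    # Example: a researcher submitting 4 papers a year should review roughly
    # 12 to 20 papers; reviewing far beyond that range is what the text
    # describes as becoming a "professional" reviewer.
    for k in (3, 5):
        print(k, "reviewers per paper ->", reviewing_quota(4, reviewers_per_paper=k))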

Fourth, once a governmental department gets into the business of inventing technology in-house and promoting its own creations (which are, by themselves, reasonable activities), there is an additional strong argument that the department should not be involved in the peer review process at all.

Fifth, to the extent that governmental agencies are involved in funding research (which NRL generally does not do, while agencies such as ONR, NSF, and DARPA do), it is particularly inappropriate for their personnel to be taking the actions (i.e., accepting papers for publication) that provide a key measurement both in deciding the suitability of individual researchers or research groups to be awarded grants and in deciding the outcome of grant proposals that are heavily based on published papers. The potential for abuse becomes magnified when the implied threat of withholding favorable intra-government reviews of non-governmental scientists' funding proposals can be leveraged into favorable reviewing decisions by those applicants for the agency's own papers and unfavorable reviewing decisions by those applicants for papers on technologies that compete with the agency's in-house inventions.

Sixth, there is no legitimate governmental reason to support established ongoing technical conferences. This is particularly true concerning conferences that charge drastically below-market admission fees and conferences that routinely generate large profits. I believe that there is a good argument for governmental research agencies to occasionally support the initial start-up of a new scientific conference in a field that the government deems to be of some national importance. However, the fields of evolutionary computation and machine learning are well past this stage. The International Conference on Genetic Algorithms will be in its 12th year in 1997; it produced a profit of over $30,000 on revenues of about $80,000 at its last conference. Yet this conference continues to receive Department of the Navy support (nominally for student travel). When a "non-profit" corporation runs at a large profit, the money is simply moving from government coffers to the coffers of the private corporation that operates the conference. Student travel has nothing to do with the situation. In the case of ICGA, the corporation involved is a memberless and purposeless organization (ISGA) that appears to be accumulating money acquired in the name of student travel with the objective of deciding, one day, what to do with the money. If ISGA ever comes up with an idea for a project that merits government support, it should seek government funding for that clearly identified project based on the merits of that specific project. The Machine Learning Conference (MLC) is a successful ongoing conference that routinely draws over 200 attendees, but charges dramatically below-market fees to its attendees. MLC reportedly has recently broken even or shown a small profit at its two most recent conferences, in Tahoe and Italy. After 13 years, there is no possible governmental reason to subsidize this particular group of conference attendees --- particularly when the conference charges below-market attendance fees. Certainly, conference attendees in machine learning are no poorer than conference attendees in any other field. It is time to end the perceived dependence of these financially viable technical conferences on government support.

Seventh, it's time for a performance review of Code 5510's and Code 5514's back-scratching circularity of publication acceptances at conferences and journals on which people from Code 5514 are heavily represented (and which are sometimes also subsidized by small grants from NRL or by larger grants from ONR that are influenced by NRL). This performance audit should be performed by scientists who have no connection with the NRL-influenced funding stream.

Eighth, the potentially intimidating practice of using government employees to directly contact scientific researchers to collect citations to NRL authors should be stopped. If any agency wishes to compile citation counts on its own authors (a dubious use of public money to begin with), it should do so by inspecting the readily available public literature in the field.

Ninth, the scientific community needs to realize that it is in a continuing, involuntary, and unseen competition for government funds with in-house departments within government agencies, even though a particular scientist (e.g., myself) may never have directly participated in any explicit funding competition. Individual departments within the Naval Research Laboratory need to give the Laboratory's high-level management the impression (or misimpression) that they are at the cutting edge of research in a particular field (an impression that is created by acceptance of research papers by the outside scientific community). More importantly, individual departments need to create the impression that the outside commercial, educational, and private research arenas are neglecting areas that are potentially important to the mission of the Naval Research Laboratory, because this provides the fundamental reason that they should be internally funded. The absence of publications in the scientific literature by other researchers and about competing technologies helps create this impression. The operation of both of these imperatives can be seen by browsing through the tables of contents of the Machine Learning journal (where the vast majority of genetic algorithms and evolutionary computation papers are from authors associated with NRL), the Machine Learning Conferences (where genetic programming is noticeably absent, over a prolonged period of years, in spite of numerous submissions by numerous different authors), and the Evolutionary Computation journal (where all research that is remotely competitive with in-house NRL work and most research by the people most active in the field has been permanently frozen out).

11. References

DeJong, Kenneth A., and Spears, William. 1993. On the state of evolutionary computation. In Forrest, Stephanie (editor). 1993. Proceedings of the Fifth International Conference on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann Publishers Inc. Pages 618-623.

DeJong, Kenneth A. 1996. Evolutionary computation: Recent developments and open issues. Proceedings of the First International Conference on Evolutionary Computation and Its Applications (EvCA 96). Moscow: Russian Academy of Sciences. Pages 7-17.

Grefenstette, John J. 1988a. Credit assignment in genetic learning systems. In Proceedings of the Seventh National Conference on Artificial Intelligence. Morgan Kaufmann. Pages 596-600.

Grefenstette, John J. 1988b. Credit assignment in rule discovery systems based on genetic algorithms. Machine Learning. 3(2-3) 225-245.

Grefenstette, John J. 1989. A system for learning control strategies with genetic algorithms. In Schaffer, J. D. (editor). Proceedings of the Third International Conference on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann. Pages 183-190.

Grefenstette, John J. 1991. Lamarckian learning in multi-agent environments. In Belew, Richard and Booker, Lashon (editors). Proceedings of the Fourth International Conference on Genetic Algorithms. San Mateo, CA: Morgan Kaufmann 1991. Pages 303-310.

Grefenstette, John J. 1992. The evolution of strategies for multi-agent environments. Adaptive Behavior. 1(1) 65-90.

Grefenstette, John J., Ramsey, Connie L., and Schultz, Alan C. 1990. Learning sequential decision rules using simulation models and competition. Machine Learning. 5(4) 355-381.

Koza, John R. 1989. Hierarchical genetic algorithms operating on populations of computer programs. In Proceedings of the 11th International Joint Conference on Artificial Intelligence. San Mateo, CA: Morgan Kaufmann. Volume I. Pages 768-774.

Koza, John R. 1992. Genetic Programming: On the Programming of Computers by Means of Natural Selection. Cambridge, MA: The MIT Press.

Koza, John R. 1994a. Genetic Programming II: Automatic Discovery of Reusable Programs. Cambridge, MA: The MIT Press.

Koza, John R. 1994b. Genetic Programming II Videotape: The Next Generation. Cambridge, MA: The MIT Press.

Koza, John R., and Rice, James P. 1992. Genetic Programming: The Movie. Cambridge, MA: The MIT Press.

Koza, John R. 1995. "Codes 5510 / 5514." Orange-covered booklet distributed at the 1995 International Conference on Genetic Algorithms (ICGA-95) in Pittsburgh. Palo Alto, CA: Prodigy Press.

Langley, Pat. 1986. On machine learning. Machine Learning. 1(1): 5-10.

Langley, Pat, Simon, Herbert A., Bradshaw, Gary L., and Zytkow, Jan M. 1987. Scientific Discovery: Computational Explorations of the Creative Process. Cambridge, MA: The MIT Press.

Potter, Mitchell A., DeJong, Kenneth A., and Grefenstette, John J. 1995. A co-evolutionary approach to learning sequential decision rules. In Eshelman, Larry J. (editor). Proceedings of the Sixth International Conference on Genetic Algorithms. San Francisco, CA: Morgan Kaufmann Publishers.

Quinlan, J. R. 1986. Induction of decision trees. Machine Learning. 1 (1): 81-106.

Schaffer, J. David. 1985. Multi-objective learning via genetic algorithms. Proceedings of the Ninth International Joint Conference on Artificial Intelligence. Los Altos, CA: Morgan Kaufmann. Pages 593-595.

Schultz, Alan C. and Grefenstette, John J. 1990. Improving tactical plans with genetic algorithms. In Proceedings of IEEE Conference on Tools for Artificial Intelligence. Los Alamitos, CA: IEEE Computer Society Press. Pages 328-334.

Spears, William M. 1993. Crossover or mutation? In Whitley, Darrell (editor). 1993. Foundations of Genetic Algorithms 2. San Mateo, CA: Morgan Kaufmann Publishers Inc. Pages 221-237.

Author: John R. Koza
E-Mail: NRLgate@cris.com

Go to top of NRLgate Home Page
Go to Abbreviated Table of Contents
Go to Detailed Table of Contents
Go to Complaint Letter to the Evolutionary Computation journal
Go to top of Previous Page