Integrity Under Attack: The State of Scholarly Publishing
02/06/2011 | Douglas N. Arnold
http://ima.umn.edu/~arnold/siam-columns/integrity-under-attack.pdf
Scientific journals are surely important. They provide the most effective means for disseminating and archiving scientific results, and so are a key part of an enterprise on which our health, security, and prosperity ultimately depend. Publications are used by universities, funding agencies, and others as a primary measure of research productivity and impact. They play a decisive role in hiring, promotion, and salary decisions, and in the ranking of departments, institutions, even nations. With big rewards tied to publication, it is not surprising that some people engage in unethical behavior, abuse, and downright fraud. Still, when I started to look at the issues more closely, I was appalled by what I found. In this column, I give a few troubling examples of misconduct by authors and by journals in applied mathematics. One conclusion I draw is that common bibliometrics—such as the impact factor for journals and citation counts for authors—are easily manipulated not only in theory, but also in practice, and that their use in ranking and judging should be curtailed.
SIAM places great value on scholarly publishing, of course, and we are taking strong actions to ensure the integrity of our own publications and to protect our authors from theft of their work. But we are still struggling to decide just what actions we should take. So I invite the thoughts of members of the SIAM community. If you have witnessed troubling incidents in journal publication, let me know. Do you think such incidents are on the rise? Should SIAM be doing more? Should we look beyond our own publications and authors?
Author misconduct—most obviously verbatim plagiarism, but also more subtle appropriation of ideas and duplicate publication—has always been with us. At SIAM, however, our impression is that the problem is becoming far more common. Perhaps even more disturbing is journal misconduct, carried out by publishers and editors, often with an evident profit motive. One example is a sloppy or sham peer review process designed to produce the impression of a serious scholarly journal without the substance. Another is the deliberate manipulation of citation statistics in order to raise the impact factor or other journal bibliometrics. A recent case involving SIAM involves both author and journal misconduct. A paper published in a SIAM journal in 2008 was plagiarized essentially verbatim from a preprint version posted by the authors on the web. A copied version of the paper appeared in the International Journal of Statistics and Systems in the same year with a different title and different authors. SIAM's publisher, vice president for publications, executive director, and I undertook a full investigation, which required nearly six months.
The case got messier and more disturbing week by week. I decided that our final report on it should be made fully public; it is available on the web [1], where you can read the details.
Meanwhile, here are some of the sad conclusions. Based on the papers that we reviewed, we determined that the suspect authors had committed plagiarism in this and various other cases. At least four articles published under their names in four different journals are essentially verbatim copies of the articles of other authors, and we have reason to believe that there are other cases as well. The journal publisher, Research India Publications, publishes nearly 50 journals, many related to applied mathematics, but did not respond to our inquiries about the plagiarized article. We contacted the editor-in-chief listed on the journal web page, but he himself has been unable to contact the journal! After learning about this incident from us, he submitted his resignation to the journal but has received no response from the publisher; his name, along with those of numerous other distinguished mathematicians, remains on the journal website. Rumors of editor and journal misconduct have dominated the highly publicized case of the applied math journal Chaos, Solitons and Fractals (CSF), published by Elsevier. As reported in a 2008 article in Nature [2], “Five of the 36 papers in the December issue of Chaos, Solitons and Fractals alone were written by its editor-in-chief, Mohamed El Naschie. And the year to date has seen nearly 60 papers written by him appear in the journal.”
In fact, of the 400 papers by El Naschie indexed in Web of Science, 307 were published in CSF while he was editor-in-chief. This extremely high rate of self-publication by the editor-in-chief led to charges that normal standards of peer review were not upheld at CSF; it has also had a large effect on the journal’s impact factor. (Thomson Reuters calculates the impact factor of a journal in a given year as C/A, where A is the number of articles published in the journal in the preceding two years, and C is the number of citations to those articles from articles indexed in the Thomson Reuters database and published in the given year.) El Naschie’s papers in CSF contain 4992 citations, about 2000 of which are to papers published in CSF, largely his own. In 2007, of the 65 journals in the Thomson Reuters category “Mathematics, Interdisciplinary Applications,” CSF was ranked number 2.
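For readers who want the definition in executable form, here is a minimal sketch of the C/A computation just described. The numbers in the example are hypothetical; the real calculation runs over Thomson Reuters's own indexed database.

```python
# Minimal sketch of the two-year impact factor C/A defined above.
# All numbers here are hypothetical illustrations.

def impact_factor(citations_in_year: int, articles_prev_two_years: int) -> float:
    """Impact factor for a given year: citations received that year (C)
    to items the journal published in the two preceding years, divided
    by the number of such items (A)."""
    if articles_prev_two_years == 0:
        raise ValueError("journal published no articles in the window")
    return citations_in_year / articles_prev_two_years

# Example: a journal that published 250 articles in 2006-2007 and drew
# 500 citations to them in 2008 has a 2008 impact factor of 2.0.
print(impact_factor(500, 250))  # 2.0
```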
Another journal whose high impact factor raises eyebrows is the International Journal of Nonlinear Science and Numerical Simulation (IJNSNS), founded in 2000 and published by Freund Publishing House. For the past three years, IJNSNS has had the highest impact factor in the category “Mathematics, Applied.” There are a variety of connections between IJNSNS and CSF. For example, Ji-Huan He, the founder and editor-in-chief of IJNSNS, is an editor of CSF, and El Naschie is one of the two co-editors of IJNSNS; both publish copiously, not only in their own journals but also in each other's, and they cite each other frequently.
Let me describe another element that contributes to IJNSNS's high impact factor. The Institute of Physics (IOP) publishes Journal of Physics: Conference Series (JPCS). Conference organizers pay to have proceedings of their conferences published in JPCS, and, in the words of IOP, “JPCS asks Conference Organisers to handle the peer review of all papers.” Neither the brochure nor the website for JPCS lists an editorial board, nor does either describe any process for judging the quality of the conferences. Nonetheless, Thomson Reuters counts citations from JPCS in calculating impact factors. One of the 49 volumes of JPCS in 2008 was the proceedings of a conference organized by IJNSNS editor-in-chief He at his home campus, Shanghai Donghua University. This one volume contained 221 papers, with 366 references to papers in IJNSNS and 353 references to He. To give you an idea of the effect of this, had IJNSNS not received a single citation in 2008 beyond the ones in this conference proceedings, it would still have been assigned a larger impact factor than any SIAM journal except for SIAM Review.
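To make that closing claim concrete, here is the arithmetic under assumptions of my own: the column reports the 366 citations, but not IJNSNS's article count for 2006–2007 (the A in the formula above) or how many of those citations fall inside the two-year window, so both figures below are illustrative guesses.

```python
# Mechanics of the claim above, with hypothetical inputs. The 366 citations
# are reported in the column; the in-window share and the article count A
# are assumptions made here purely for illustration.

citations_from_jpcs_volume = 366   # reported in the column
assumed_in_window = 330            # assumption: most cite recent IJNSNS papers
assumed_articles_A = 200           # assumption: IJNSNS articles in 2006-2007

# Impact factor contributed by this single proceedings volume alone:
print(assumed_in_window / assumed_articles_A)  # 1.65
```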
Another example of journal misconduct was revealed with an element of comedy. In “‘CRAP’ paper accepted for publication,” published online in June in Science News, senior editor Janet Raloff [3] described an experiment in which Cornell graduate student Philip Davis and a friend used a computer program, SCIgen, to generate a random document; the grammar and vocabulary were those of a computer science research paper, but the document was completely free of meaningful content. (The paper opens, “Compact symmetries and compilers have garnered tremendous interest from both futurists and biologists in the last several years. The flaw of this type of solution, however, is that DHTs can be made empathic, large-scale, and extensible.” Four pages later, it concludes, “We expect to see many futurists move to studying TriflingThamyn in the very near future.” Indeed!) The paper was submitted to The Open Information Science Journal (TOISCIJ), published by Bentham Science, a publisher of more than 200 open-access scientific journals (many of which, according to the publisher’s website, have high impact factors). Although the paper was submitted under pseudonyms and with the giveaway affiliation Center for Research in Applied Phrenology, or CRAP, Davis was notified four months later that the “submitted article has been accepted for publication after peer-reviewing process in TOISCIJ.” Following the open-access model, the publisher told the authors that the paper would be published as soon as they sent a check for $800. (They declined to do so.)
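For the curious, SCIgen-style generation is a simple technique: repeatedly expand a hand-written context-free grammar stocked with field-specific jargon. The toy grammar below is my own miniature, not SCIgen's, but it shows the mechanism.

```python
import random

# Toy sketch of grammar-based nonsense generation in the spirit of SCIgen.
# This grammar is a made-up miniature; SCIgen uses a much larger hand-written
# context-free grammar of computer-science boilerplate.
GRAMMAR = {
    "SENTENCE": [["NP", "VP", "."]],
    "NP": [["compact symmetries"], ["lossless compilers"], ["DHTs"]],
    "VP": [["have garnered tremendous interest"],
           ["can be made", "ADJ", ", large-scale, and extensible"]],
    "ADJ": [["empathic"], ["probabilistic"], ["trainable"]],
}

def expand(symbol: str) -> str:
    """Recursively expand a nonterminal; strings not in GRAMMAR are terminals."""
    if symbol not in GRAMMAR:
        return symbol
    return " ".join(expand(s) for s in random.choice(GRAMMAR[symbol]))

sentence = expand("SENTENCE").replace(" ,", ",").replace(" .", ".")
print(sentence[0].upper() + sentence[1:])
# e.g. "DHTs can be made empathic, large-scale, and extensible."
```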
The cases I have recounted are appalling, but clear-cut. Perhaps even more dangerous are the less obvious cases: publishers who do not do away with peer review, but who adjust it according to nonscientific factors; journals that may not engage in wide-scale and systematic self-citation, but that apply subtle pressures on authors and editors to adjust citations in favor of the journal, rather than based on scholarly grounds; authors who may not steal text verbatim, but who lift ideas without giving proper credit. These are much harder to measure and adjudicate. What do you think? Are such practices significantly distorting the scientific literature or enterprise? Do you have a story of such dubious practices to tell?
One conclusion that I am ready to draw is that we need to back away from the use of bibliometrics like the impact factor in judging scientific quality. It has long been noted that what the impact factor measures is not well correlated with the quality of a journal, and even less with the scientific quality of the papers appearing in it or of the authors of those papers. In our field, the 2008 IMU-ICIAM-IMS report Citation Statistics [4] made that case eloquently. Less emphasized has been that these metrics are open to gaming, and are in fact being gamed; in some cases they are likely a better indicator of the unscrupulousness of the authors, editors, or publishers than of the quality of their work.
Frequently, I hear of technical solutions, proposed in the hope that an adjustment to the formula—for example, increasing the time frame for the impact factor from 2 to 5 years, or excluding self-citations—will solve the problem. Such remedies, in my opinion, are doomed to failure. The numbers of citations to mathematical articles are small integers, with excellent papers often drawing lifetime totals of only tens or hundreds of citations, and such numbers are easily manufactured. What one editor can do in one journal by self-citation, a pair of editors can do with two journals without self-citation. Counting can never replace expert opinion.
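A small sketch makes the evasion concrete. Suppose the metric is patched to discard self-citations; two cooperating journals that cite each other instead of themselves sail through the filter untouched. All counts below are hypothetical.

```python
# Sketch of the point above: excluding self-citations is easy to evade,
# because a pair of journals can inflate each other's numbers without a
# single self-citation. All counts are hypothetical.

def if_excluding_self(citations_by_source: dict, journal: str, articles: int) -> float:
    """Impact-factor-like ratio that ignores the journal's own citations."""
    c = sum(n for src, n in citations_by_source.items() if src != journal)
    return c / articles

# Journals A and B each cite the other 300 times; neither cites itself.
cites_to_A = {"B": 300, "others": 40}
cites_to_B = {"A": 300, "others": 35}
print(if_excluding_self(cites_to_A, "A", 100))  # 3.40 -- untouched by the filter
print(if_excluding_self(cites_to_B, "B", 100))  # 3.35
```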
What can we, as concerned scientists, do? Of course, the first step is to look to ourselves: As scientists, we should place great emphasis on scientific integrity, in what we write and what we review.
Ask yourself some questions before lending your name to a journal as an editor. Does that journal hew to high standards of peer review? Does it have clear policies and mechanisms for enforcing them? Is its output a useful addition to the sprawling scientific literature? We also need to educate others, not only our students, but also our colleagues and administrators and managers. The next time you are in a situation where a publication count, or a citation number, or an impact factor is brought in as a measure of quality, raise an objection. Let people know how easily these can be, and are being, manipulated.
We need to look at the papers themselves, the nature of the citations, and the quality of the journals. I look forward to learning from the experiences and thoughts of the SIAM community. You can reach me at president@siam.org.
1. www.siam.org/journals/plagiary
2. Nature, vol. 456, 27 November 2008, page 432.
Replies
2011.02.07 | Калькулятор
Re: Integrity Under Attack: The State of Scholarly Publishing
At least over there this Integrity actually exists! Here we have total fragmentation in the so-called humanities and social sciences. And who is going to gather information on our own homegrown El Naschies? Or will even an attempt to do so now be classified as a criminal offense under the new law on information?
Will the history of world science really record just the one certified and universally known double academician without whom Ukraine is supposedly finished? And what about the ringleader of the epigones of Makarenko, together with Sukhomlinsky? And the heirs of Vyshinsky? And... Hello, anyone?...
2011.02.07 | Трясця
Everyone will keep silent
Because our criminal academicians are not scientists at all, just so-so: supposedly the real thing, but, well, there it is.