I visited the website on "profs against plagiarism" after receiving a message from Dr. Ghodsi. While it is a worthwhile cause, it is regrettable that such a site is needed at all. Today, I saw that an article by David Parnas had been posted there.

In an ideal world, each person's research would be evaluated based on technical content, contribution, and impact. The problem is that no two researchers would agree on these metrics for any given article, even assuming that experts could be found to comment on the work. Such an assessment is difficult even in a country like the United States, where there are many experts in any given subfield of computer science and engineering. So, counting articles does serve a useful purpose, at least as an initial or rough indicator. If someone has published only a handful of articles, then s/he should be classified as insufficiently active in research. On the other hand, publishing scores of articles is probably an indication of an active research program. It is the intermediate cases that pose a problem.

This reminds me of teaching assessment through student surveys. Our university recognizes the shortcomings of such surveys and recommends that they be used only to flag ineffective faculty and to recognize very good teachers. Arguing over middle-of-the-road scores of 2.2 versus 2.5 is deemed inappropriate. In short, the surveys are not meant to place a total order on faculty with regard to teaching effectiveness. In the same way, counting articles should not be used as a total ordering mechanism.

One final thought: I think that faculty should stop worrying about who is getting ahead by publishing in "easy" journals and focus instead on their own work and contributions. In the long run, effective researchers will be recognized. The reward of good research should be the personal satisfaction that comes from discoveries and from training ethical students who will spread this style of work, rather than competition for monetary or other rewards.
Regards. ... B. Parhami