Vijay sir has posted a very interesting list of types of scientific fraud in writing in this forum. I'd like to pick up three of them (selective reporting, salami slicing of reports, and omission of others' original publications) and post my observations and feelings. I think each of the three is an example of academic dishonesty, but each also has a deeper societal, cultural, and systemic basis. I also think labelling them as "fraud" is too strong. The reason is the complexity around each of these practices, and the fact that people are sometimes compelled by circumstances.
Take subgroup analysis, for example (see Vijay sir's listing of "reporting only the findings that support the original hypothesis" as fraud).
I think there is a fine line between academic pragmatism, dishonesty, and fraud; let's leave it at that. On the surface, reporting only those data that support the hypotheses seems like fraud (although I'd still say that is too strong a word; I'd rather go for "dishonesty" or something similarly expressive). I think these issues merit discussion here, in particular when selective reporting is dishonesty and when it is not.
There are situations where the investigators have set out rival hypotheses, gone about data collection in as unbiased a manner as possible, and then analyzed their data. In doing so, they find that the data in general support their hypotheses, and while writing up the publication from the project they deliberately highlight those points that support their hypotheses. That is straightforward, and most people would not count it as dishonest practice.
However, problems arise when people cut their data some slack, or report supposedly justifiable claims about their hypotheses on the basis of subgroup analyses. For a good discussion of publication bias and the other problems that arise (which go beyond the moral obligations of the researcher or the author, and beyond plagiarism charges), see Rifai et al., "Reporting Bias in Diagnostic and Prognostic Studies: Time for Action"; full text here (http://www.clinchem.org/cgi/content/full/54/7/1101).
As you can see, these are systemic problems inherent in the culture of academics, problems people have come to accept and grow up with. They need to be addressed from a range of different perspectives. As educators, we need to remember that it is the study quality (rather than the precise results) that matters; that a p-value tells you nothing more than the probability of findings at least as extreme as yours under the null hypothesis, and that's that; and that there is no such thing as a "positive" or "negative" study. There is also a need for a registry of all kinds of studies across all countries, or a common database.
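To make the p-value point concrete, here is a minimal Python sketch of my own (the sample size of 30 and the "observed" difference of 0.5 are made-up numbers for illustration): a p-value is just the probability, under the null, of a result at least as extreme as the one observed, which we can estimate by simulating both groups from the same distribution.

```python
import random

random.seed(42)

def null_mean_diff(n=30):
    """Difference in sample means when both groups are drawn
    from the SAME distribution, i.e. when the null is true."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return sum(a) / n - sum(b) / n

observed = 0.5  # hypothetical observed mean difference (made-up number)
sims = [null_mean_diff() for _ in range(10_000)]
p_value = sum(abs(d) >= observed for d in sims) / len(sims)
print(f"Estimated two-sided p-value: {p_value:.3f}")
```

Nothing in that number tells you how probable the null itself is, or how good the study was; it is only a statement about the data under the null.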
Similar situations arise when you consider salami slicing of your reports. By salami slicing is meant repeated analysis of the same data: you develop different messages out of it, package them as separate publications, and get credit for each one, when in reality you could have written them all up in one paper and been done with it. Is this dishonesty? Well, yes and no, depending on whose perspective you consider. From an academic knowledge-management perspective, it borders on stacking up your plate where one paper would do; if you ask the investigators, they will justify that each message is vitally important on its own merits. Any number of messages can, in principle, be collapsed into one paper, but does that happen all the time? Add to it that the system wants you to be more productive and write more papers. Where are the data going to come from? One dataset, several different messages. The grantor organizations want you to be productive in the sense that, buck for buck, you need as much output as possible from one project; there is the culture of "publish or perish"; and indeed, in an academic sense, who would want to remove oneself from the academic gene pool of excellence and ladder climbing? Add to that the complexity of the peer review process, and it's no wonder that people cut their data too thin, send slices to as many journals as possible, and hope to get published. This is one face of the multiple-comparison problem in academic data analysis.
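The multiple-comparison side of this can be shown with a small simulation (again a sketch of my own; the 20 subgroups and n = 40 per arm are arbitrary choices, and the p-value uses a simple normal approximation rather than a t-test): when a study carves pure-noise data into 20 subgroups and tests each at the 0.05 level, most such studies will find at least one "significant" subgroup to write up.

```python
import math
import random

random.seed(1)

def two_sample_p(n=40):
    """Two-sided p-value (normal approximation) for the difference in
    means of two null samples -- both arms drawn from the same N(0, 1)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 2_000
k = 20  # subgroups examined per "study" (arbitrary choice)
hits = sum(any(two_sample_p() < 0.05 for _ in range(k)) for _ in range(trials))
print(f"Share of null studies with >= 1 'significant' subgroup: {hits / trials:.2f}")
```

The simulation recovers the familiar 1 - 0.95^20 ≈ 0.64 figure: each extra slice of the salami is another draw from this lottery.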
Is there a way out? Once again, I think it's a systemic issue. There are now multiple channels of academic publishing (of course, in the biomedical sciences we are a little slow to adapt, whereas the physics and maths people are way ahead with their preprint archives like arXiv and so on). There are channels such as blogs and wikis, and each is a good way to present your work. Can we not build an academic knowledge base around them? Must we have peer review processes? For a good discussion, search for Richard Smith [AU] and peer review. Here is a sharp criticism of some problems plaguing our culture of publication in science (perhaps from a biomedical perspective), by Jef Akst, "I Hate Your Paper": http://www.the-scientist.com/templates/trackable/display/article1.jsp?a_day=1&index=1&year=2010&page=36&month=8&o_url=2010/8/1/36/1
It talks mainly about the peer review system, but you get the idea.
The third thing is the lack of citation of previous research, perceived as academic fraud. And again, I do not know any more if it is fraud, or dishonesty, or just pragmatism, or jealousy, or unwillingness to give credit where it's due, or ego clashes, or competing claims, or what it is. At the least, it's irritating. But most discerning readers do find out about these things anyway. Isn't it systemic? You bet. In an environment where funds are tight and there is intense competition among rival groups working on the same project(s), it is not unusual to see people NOT citing one another and not willing to give credit. Dishonesty? Yes, it most certainly is. But I also think it's ingrained in our culture, where we are hesitant to applaud others (not all of us are like that, but for many of us, openly applauding others for the work they have done does not come naturally; as Amjad Ali Khan, the famous sarodiya, once lamented, he found his desi audience too miserly in clapping). Again, these issues are ingrained in our culture, in our system, in our psyche.
My point here is that whether it is selective reporting on the basis of subgroup analyses (or other strategies), salami slicing of data, or wilful refusal to cite other peer groups, each of these is a symptom of a deeper systemic issue ingrained in our culture, be it academic or a more general social issue. Of course, that is not to take any blame away from the reporter/student/researcher. My point is this: issuing a warning is not enough under the circumstances. There is also a case for strengthening the structure and influencing the mindset of students, early-career researchers, and funders, to alert them to the perils of these moral crises.
# :-), my two cents