Research Methods Exam 4 Study Guide


how is data sharing different from open access?

data sharing is a part of open access. open access is the entire process.

which impact factor attempts to take prestige of the journal into account?

eigenfactor

are two reviewers enough?

Fletcher & Fletcher (1999): at least 6 reviewers, all favoring rejection or all favoring acceptance, are needed to yield a statistically significant conclusion (p < 0.05)

what does open access aim for?

it aims for quality, reliability, and greater impact of science. Knowledge and training are foundations of success in promoting the availability of research information and open science. - it holds that publicly funded research should be freely available to everyone

what was the first impact factor metric developed by eugene garfield?

journal impact factor

what statistic is used to determine inter-rater reliability?

kappa
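
For reference, a minimal sketch of Cohen's kappa for two raters, kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is chance agreement. The function name and example ratings are illustrative, not from the course materials.

```python
# Minimal sketch: Cohen's kappa for two reviewers' accept (1) / reject (0) decisions.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed agreement corrected for
    the agreement expected by chance from each rater's marginal rates."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Example: two reviewers rating eight manuscripts
print(cohens_kappa([1, 0, 1, 1, 0, 0, 1, 0],
                   [1, 0, 0, 1, 0, 1, 1, 0]))  # 0.5
```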

what is an important reason that open-access journals have made headway?

libraries are maxed out on their budgets

double blind peer review

neither author nor reviewer is known to the other; only the editor knows their identities (less common)

eugene garfield

noted for his article "Citation Indexes for Science"

an editor is looking for something new to publish, this would best be described as which of the following motivations?

novelty

what kind of peer review involves having the authors and reviewers know each other's identities?

open

what is the name for the process that involves researchers sending their manuscripts in for potential publication?

peer review

what happens to results in open access?

results and outcomes of research remain as part of the open scientific process, so that they can be verified and utilized

what happens when your paper is desk rejected?

the editor rejects the manuscript without sending it out for external peer review and returns it, usually with brief critiques

blind peer review

the most common in medical journals, where the author and institution are known to the reviewer, but the reviewer's name remains anonymous to the authors

the unsettling finding that the rate of statistical findings that disappear when others look for them is referred to as what?

the replication crisis

which impact factor measures the author and not the article?

H index

grant review

Hodgson 1997 - two real panels reviewing the same grants, 73% agreement

open access skills

Includes principles and methods of open science, such as: - Basics of open science; benefits, principles of research ethics, repeatability, reliability, policies - Planning research using open methods and tools - Utilizing research outputs, managing research data - Opening the whole research process - Open publishing and dissemination

does peer review enforce orthodoxy? pt 2

Mitchell Feigenbaum, pioneer of chaos theory: "Both papers were rejected, the first after a half-year delay. By then, in 1977, over a thousand copies of the first preprint had been shipped. This has been my full experience. Papers on established subjects are immediately accepted. Every novel paper of mine, without exception, has been rejected by the refereeing process. The reader can easily gather that I regard this entire process as a false guardian and wastefully dishonest."

does peer review enforce orthodoxy?

Rosalyn Yalow, 1977 Nobel Prize in Physiology or Medicine: "In 1955 we submitted the paper to Science. ... The paper was held there for eight months before it was reviewed. It was finally rejected. We submitted it to the Journal of Clinical Investigations, which also rejected it."

Detecting fraud by examining published and raw data

Simonsohn (2013): To undetectably fabricate data is difficult. It requires: - A good understanding of the phenomenon being studied (What do the distributions of data tend to look like? Which variables correlate and by how much?) - A good understanding of how sampling error should affect the data (How much variation should we expect to see in the data?)
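
A minimal illustration of the second point above: how much variation should sampling error produce among sample standard deviations across conditions? This is only a toy simulation under assumed values (sample size, number of conditions, a hypothetical "observed" spread), not Simonsohn's published procedure.

```python
# Toy simulation: expected spread of condition SDs under honest sampling.
import numpy as np

rng = np.random.default_rng(0)
n_per_condition, n_conditions, n_sims = 15, 3, 10_000

# Null model: every condition drawn from the same normal population (sd = 1).
spreads = []
for _ in range(n_sims):
    sds = [rng.normal(0, 1, n_per_condition).std(ddof=1) for _ in range(n_conditions)]
    spreads.append(np.std(sds))          # spread of the condition SDs

observed_spread = 0.005                  # hypothetical reported value, suspiciously small
p = np.mean(np.array(spreads) <= observed_spread)
print(f"P(spread <= {observed_spread} under honest sampling) = {p:.4f}")
```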

journal self citation

Reported so that self-citation rates can be easily compared among journals, particularly because self-citation influences impact factor calculations.

retraction watch

Tracking retractions as a window into the scientific process

does peer review enforce orthodoxy? pt 3

Tuzo Wilson, developed the theory of Hawaiian island formation: "I ... sent [my paper] to the Journal of Geophysical Research. They turned it down. ... They said my paper had no mathematics in it, no new data, and that it didn't agree with the current views. Therefore, it must be no good. Apparently, whether one gets turned down or not depends largely on the reviewer. The editors, too, if they don't see it your way, or if they think it's something unusual, may turn it down."

testing the peer review process

a British Medical Journal (BMJ) study deliberately inserted eight errors into a 600-word report that was about to be published and then sent it to 300 reviewers

a journal that limits who can use data and how or for what purpose is called what?

a restricted journal

open peer review

authors and reviewers are known to each other

what are the benefits of open access science?

benefits include alleviating the replication crisis and driving costs down for subscriptions

How do the subscription journals justify their existence?

claim they are more selective and can do more

who peer reviews papers?

competitors! The process allows them to anonymously undermine your work (anonymity is protected; the process assumes that peer reviewers will act ethically). *Not collaborators, because collaborators have a conflict of interest

different types of contributions

- new theory - new synthesis - new application - tutorial

author level biases

- prestige (author/institution), bias towards successful researchers - sexism, bias against women - racism

is peer review reliable?

- rates of agreement only "moderately better than chance" (kappa = 0.26) - agreement greater for rejection than acceptance

Approximate numbers at each stage

- 1000 rejected by one editor within 48 hours - A further 3000 rejected by a second editor - Within one week of submission, 3000 read by a senior editor; a further 1500 rejected - 1500 sent to two reviewers; then 500 more rejected - 1000 screened by the clinical epidemiology editor and more rejected

journal level biases

- seeking positive "exciting" results (if the research question is important and interesting, the answer should be less important) - number of experts available

path to open access

- A total conversion will be slow in coming, because: - Scientists still have every economic incentive to submit their papers to high-prestige subscription journals - Subscriptions tend to be paid for by campus libraries, and few individual scientists see the costs directly - From their perspective, publication is effectively free

BMJ peer review process

- 7000 research papers submitted, 7% accepted - 400-500 go to a weekly manuscript meeting attended by the Editor, an external editorial adviser (a specialist or primary care doctor), a statistician, and the full team of BMJ research editors, plus the BMJ clinical epidemiology editor - 350 research articles accepted, usually after revision - Value added by commissioned editorials and commentaries

why should you be a peer reviewer?

- moral obligation - opportunity to see cutting-edge research - opportunity for networking

the largest open-access publishers

- BioMed Central and PLoS charge $1,350-2,250 to publish peer-reviewed articles in many of their journals - One small group published 22,000 articles at a cost of $290 per article

American Statistical Association (ASA):Statement on P-values

"The statistical community has been deeply concerned about issues of reproducibility and replicability of scientific conclusions. .... much confusion and even doubt about the validity of science is arising. Such doubt can lead to radical choices such as...to ban p-values" (ASA, Wasserstein & Lazar 2016, p. 129)

the value of rejection pt 2

- By rejecting papers at the peer-review stage on grounds other than scientific validity, and so guiding the papers into the most appropriate journals, publishers filter the literature and provide signals of prestige to guide readers' attention. - Such guidance is essential for researchers struggling to identify which of the millions of articles published each year are worth looking at, publishers argue — and the cost includes this service

journal impact factor formula

- Developed by Garfield and Sher (1963) - The two-year mean citations per paper: based on the number of citations in year t to papers published in the previous two years - From Thomson Reuters' Web of Science (different data sources give different values)
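
A compact restatement of the two-year calculation just described (the standard definition, stated here for clarity rather than quoted from the course materials):

```latex
\mathrm{JIF}_{t} \;=\;
\frac{\text{citations in year } t \text{ to items published in years } t-1 \text{ and } t-2}
     {\text{citable items published in years } t-1 \text{ and } t-2}
```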

Eigenfactor

- Developed by Jevin West and Carl Bergstrom at the University of Washington - Journals are rated according to the number of incoming citations, with citations from highly ranked journals weighted to make a larger contribution to the Eigenfactor than those from poorly ranked journals - The Eigenfactor score scales with the size of a journal - Excludes self-citations (the only one of these metrics to do so) - To allow per-article comparisons using the Eigenfactor approach, the Article Influence score scales the Eigenfactor score by the number of articles published by the journal and thus is directly comparable to the impact factor
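
A simplified sketch of the PageRank-style weighting idea behind the Eigenfactor: journals cited by highly weighted journals gain more weight. This is an illustration only, not the official Eigenfactor algorithm (which uses a five-year citation window and further normalizations); the citation matrix and damping factor below are invented for the example.

```python
# Power iteration on a journal cross-citation matrix (self-citations zeroed out).
import numpy as np

# cites[i, j] = citations from journal j to journal i (made-up numbers)
cites = np.array([[0, 3, 1],
                  [5, 0, 2],
                  [1, 4, 0]], dtype=float)

P = cites / cites.sum(axis=0)   # column-stochastic: each journal's outgoing citations sum to 1
alpha = 0.85                    # damping factor, as in PageRank
n = P.shape[0]

w = np.full(n, 1.0 / n)         # start from uniform weights
for _ in range(100):            # iterate toward the leading eigenvector
    w = alpha * P @ w + (1 - alpha) / n
    w /= w.sum()

print(w)  # relative journal weights: citations from high-weight journals count more
```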

cases of fraud

- Diederik Stapel - award-winning social psychologist. - 137 published papers. - 54 retracted to date. - Altered or simply made up data. - Brought down from within by his own students, to whom he had provided identical made-up datasets. - A number of his publications have data that are extremely odd.

reasons for the replicability crisis

- Errors - Fraud (Researchers p-hacking, Fake data) - Publication bias by researchers - Publication bias by journals

open access

- Free, immediate, online access to the results of research - Free to reuse e.g. to build tools to mine the content

two routes of open access

- Gold route: paying APCs (article processing charges) so that the publisher makes the copy open - Green route: self-archiving an open-access copy in a repository

other methods

- Immediacy index - Eigenfactor - H Index (or H factor)

journal impact factor

- In the early 1960s Irving H. Sher and Eugene Garfield created the Journal Impact Factor to help select journals for the Science Citation Index (SCI). - They knew that a core group of highly cited large journals needed to be covered in SCI, but they also wanted to include the small but important review journals which would not be included if they relied only on publication or citation counts.

H factor or H index

- Index that attempts to measure both the productivity and impact of the published work of a scientist or scholar. - A scholar with an index of h has published h papers each of which has been cited by others at least h times. - Serves as an alternative to more traditional journal impact factor metrics in the evaluation of the impact of the work of a particular researcher
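
A minimal sketch of the h-index definition above, computed from a list of per-paper citation counts; the function name and example numbers are illustrative, not from the course materials.

```python
# Largest h such that the author has h papers with at least h citations each.
def h_index(citations):
    cited_sorted = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited_sorted, start=1):
        if count >= rank:   # the rank-th most-cited paper still has >= rank citations
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3
```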

what is the contribution of the paper?

- Is the contribution clearly stated in the abstract, introduction, and conclusion? - Are the claims supported in the paper?

journal citation reports (JCR)

- JCR distills citation trend data for 10,000+ journals from more than 25 million cited references indexed by Thomson Reuters every year - Science Edition and Social Sciences Edition released annually - Science Edition covers 7,200+ journals in 171 subject categories - Social Sciences Edition covers 2,100+ journals in 55 subject categories

issues with JIF

- JIF depends heavily on the research field - The two-year window is a very short time period (the 5-year JIF may be better in that respect) - Different websites produce different results because the denominator is different

peer review

- Journal or conference editor receives a submitted paper - Editor performs initial check for quality - Editor sends paper to a few experts for review - Editor receives reviews and makes a decision (Accept, Reject, Modify)

Is the research novel?

- Literature review is needed - Sometimes similar results are published simultaneously

what are the benefits of open access?

- Make your material available on the Web (whatever format) under an open license - Make it available as structured data (e.g. Excel instead of a scan of a table) - Use non-proprietary formats (e.g. CSV instead of Excel) - Use URIs to denote things, so that people can point at your material - Link your data to other data to provide context

Plan for openness from the outset

- Many decisions taken early on in the project will affect whether the data can be made openly available - Think about where you want to publish and include APCs in grant applications if needed - Ensure consent agreements also include permission to archive and share data for reuse by others - Explore the potential for openness when drafting agreements with commercial partners

immediacy index

- Measure of topicality and urgency of a scientific journal - Effectively a one-year JIF: the number of times articles published in year x were cited in indexed journals during that same year, divided by the number of articles, reviews, proceedings, or notes published in year x
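
Written as a formula (the standard definition, restated here for clarity):

```latex
\text{Immediacy Index}_{x} \;=\;
\frac{\text{citations in year } x \text{ to items published in year } x}
     {\text{articles, reviews, proceedings, or notes published in year } x}
```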

what do editors want from papers?

- Originality (Novelty) - Clear and engaging writing - Importance (Significance) - Relevance to readers - Contribution - Usefulness to readers and, ultimately, to patients - Truth - Excitement/ "wow" factor

are there any impediments to data sharing?

- Protection of participants' anonymity: But there are ways to make data anonymous and many (most?) datasets already are anonymous - Researchers may want to conduct additional analyses: Perhaps allow such researchers a period of time before data are made available? Involve the original researchers in the re-analysis.

how to conduct a peer review

- Recommend for or against publication - What are the standards of the journal / conference? - Do you recommend a different publication venue? - Revision (major, minor?) - Resubmission? - Is another review needed? - Justify your review with comments - Constructive criticism - General/Specific comments - For the editor: How confident are you in your review? - Should the qualifications (good or bad) of the author be considered? (need to find a balance between overly permissive and overly restrictive) - Consider the standards of the target publication - Students tend to be overly permissive in reviews - Be diplomatic in your criticism

what is open access science?

- Science carried out and communicated in a manner which allows others to contribute, collaborate and add to the research effort - With all kinds of data, results and protocols made freely available at different stages of the research process

limitations of the IF

- Self-citations - Many times editors insist that authors cite works in that journal - Some disciplines tend to cite more than others - Journals change their names thus affecting impact factor for approximately two years - Does not take into account negative citations

data sharing

- Some journals are strongly encouraging (e.g. Journal of Research in Personality) and others are now requiring (e.g. Judgment and Decision Making) that data files are submitted for accepted articles. - Perhaps journals, universities and granting agencies ought to start mandatory centralized data storage.

the replication crisis

- Statistical 'findings' disappear when others look for them. - Beyond the social sciences to genomics, bioinformatics, and medicine (Big Data) - Methodological reforms (some welcome, others radical)

subscription science

- The publishers of expensive journals give two other explanations for their high costs, although both have come under heavy fire from advocates of cheaper business models: they do more, and they tend to be more selective. - The more effort a publisher invests in each paper, and the more articles a journal rejects after peer review, the more costly each accepted article is to publish.

profits for publishers

- The science-publishing industry generated $9.4 billion in revenue in 2011 and published around 1.8 million English-language articles - An average revenue per article of roughly $5,000. - Analysts estimate profit margins at 20-30% for the industry, so the average cost to the publisher of producing an article is likely to be around $3,500-4,000
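
As a rough check of the figures above (the 25% margin in the second step is an illustrative midpoint of the stated 20-30% range):

```latex
\frac{\$9.4\ \text{billion}}{1.8\ \text{million articles}} \approx \$5{,}200 \text{ revenue per article},
\qquad
\$5{,}200 \times (1 - 0.25) \approx \$3{,}900 \text{ cost per article}
```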

the value of rejection

- Tied into the varying costs of journals is the number of articles that they reject - PLoS ONE publishes 70% of submitted articles, whereas Physical Review Letters (a hybrid journal that has an optional open-access charge of $2,700) publishes fewer than 35% - Nature published just 8% in 2011

how do I find a journal impact factor?

- Use the LRC's Electronic Resources to go to Web of Science - Click on Additional Resources to find Journal Citation Reports

more benefits

- You can access relevant literature: not behind pay walls - Ensures research is transparent and reproducible - Increased visibility, usage and impact of your work - New collaborations and research partnerships - Ensure long-term access to your outputs - Help increase the efficiency of research

Using JCR Wisely

- You should not depend solely on citation data in your journal evaluations; citation data are not meant to replace informed peer review. - Careful attention should be paid to the many conditions that can influence citation rates, such as language, journal history and format, publication schedule, and subject specialty. - The article counts given for journals listed in JCR include primarily original research and review articles; editorials, letters, news items, and meeting abstracts are usually not included because they are not generally cited. - Journals published in non-English languages or using non-Roman alphabets may be less accessible to researchers worldwide, which can influence their citation patterns; this should be taken into account in any comparative journal citation analysis.

peer review critiques

- a reviewer was incompetent (61%) - a reviewer was biased (51%) - a reviewer required unnecessary references to his/her publication (23%) - comments from reviewer included personal attacks (18%) - a reviewer delayed the review so he could publish an article on the same topic (10%) - a reviewer breached confidentiality (7%) - a reviewer used your material without your permission (5%)

autism and vaccinations

- arguably the most famous retracted journal article in history: Andrew Wakefield reported a small study in The Lancet - he claimed that measles, mumps, and rubella (MMR) vaccinations might cause autism - Wakefield selected participants and changed and manipulated diagnoses and clinical histories to promote undisclosed financial interests - the paper resulted in a rise in measles, serious illness, and some deaths

bias

- author level biases - journal level biases

different types of peer review

- blind - double blind - open

first mention of impact factor

- Garfield recommended keeping track of who cited the paper - Here, impact factor refers to the impact of the article - "In effect, the system would provide a complete listing, for the publications covered, of all the original articles that had referred to the article in question. ... Such an 'impact factor' may be much more indicative than an absolute count of a scientist's publications."

what did garfield suggest?

- He suggested that each article be given a code, and all works that cited an article would be linked back to the original article - Based on legal indexing (Shepard's Citations, 1873) - Subject indexes to scientific literature already existed
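
A minimal sketch of that citation-index idea: give each article a code and link every citing work back to the article it cites. The article codes and data below are invented for illustration.

```python
# Citation index: map each cited article code to the works that cite it.
from collections import defaultdict

citation_index = defaultdict(list)

# references[citing article] = list of articles it cites (invented codes)
references = {
    "smith1961": ["garfield1955", "sher1959"],
    "jones1962": ["garfield1955"],
}
for citing, cited_list in references.items():
    for cited in cited_list:
        citation_index[cited].append(citing)

print(citation_index["garfield1955"])  # ['smith1961', 'jones1962'] - that article's "impact"
```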

is the research significant?

- is it more than just an engineering exercise? - is it important for researchers in the field? - is it interesting? - is it non-obvious?

is the paper clear and logical?

- is there enough detail? - is there too much detail? - is the research reproducible?

what does Alicia Wise from Elsevier think about subscription science?

- She doubts that post-publication filtering by the research community could replace the current system - "I don't think it's appropriate to say that filtering and selection should only be done by the research community after publication" - She argues that the brands, and accompanying filters, that publishers create by selective peer review add real value, and would be missed if removed entirely

problems with the peer review process

- slow - expensive - subjective - biased - open to abuse - difficult to detect fraud and errors

results

- the median number of errors spotted was 2 - 20% of reviewers saw no errors - major errors were overlooked (methodological weaknesses, inaccurate reporting of data, unjustified conclusions, omissions)

why can the term novelty be a gray area?

- what about dissertations? - what about foreign-language publications?

in that experiment, what was the median number of mistakes caught by the reviewers?

2

how many reviewers of the BMJ detected at least one error?

80%

what kind of peer review involves only hiding the reviewer's name from the author who submitted the article?

Blind

which country's medical journal deliberately inserted eight errors into a report sent to 300 reviewers?

Britain

who is evaluating Open science for the EU in 2018?

Elsevier

pragmatics of open access

- Elsevier, Springer, and roughly three other subscription-based publishers (a mix of commercial companies and academic institutions) dominate the market - Elsevier: 37% profit margin (billions of dollars) - Most scientists never see the direct cost; universities get the bill

