As a blanket statement without any context, I will say this: for many published 
instruments there are reliability data available. In my experience when an IRB 
asks for that kind of information, that is what they are asking for; if there 
aren't any, then there aren't any, and a brief statement of why that's not a 
problem should generally be sufficient. I don't see this as micromanagement, 
nor as politically motivated. I think a prudent IRB would request that kind of 
information if it will help them make a decision. 

We know next to nothing about the proposal which started all of this and the 
context is very important. We don't know if the person/committee reviewing this 
study felt uncomfortable about something within the overall proposal and felt 
that some tangible bit of information might settle his/her/their minds about 
the potential risks involved. We don't know if this is a committee request or 
an individual request.

We only know that this is a student with a self-constructed questionnaire about 
music preferences. We know nothing of how the student presented the study to 
the reviewers, or about the purpose of the study, or of the content of the 
items. Basically, we have minimal information here, yet we are trying to draw 
grand conclusions about how IRBs operate.

In addition, the letters IRB tend to raise people's hackles for whatever 
reason, and objectivity quickly drains out of the discussion. This is a shame, 
because in 20 years and several different institutions I have never been 
involved with a 'bad' IRB. So maybe the fault is mine: I lack that negative 
experience and therefore cannot understand why some people are so negative.

Annette

ps Patricia, where are you?

Quoting "Hetzel, Rod" <[EMAIL PROTECTED]>:

> IRBs should focus on assessing the potential risk for harm to
> participants and should not address psychometric issues of the study. I
> believe this for a few reasons.
> 
> First, an IRB cannot make an educated decision on psychometric issues if
> they do not have expertise in that particular content area. I may decide
> to use a measure that doesn't yield very reliable scores, but may be the
> best measure available. Plus, as the primary investigator for a research
> study it is my responsibility to choose the measures. If I choose
> measures that are poorly constructed or do not produce reliable/valid
> scores, then the editors reviewing my paper for publication will
> (hopefully) catch it.
> 
> Second, if the IRB is going to evaluate score reliability, then at what
> cut-off point are they going to decide that an instrument poses a risk?
> Are they going to go by the .70 criterion? Higher? Lower? This is a
> slippery slope that is best avoided.
> 
> Third, technically reliability is a property of scores and is not a
> property of tests themselves. When tests are developed, they do not have
> a reliability coefficient stamped upon them by the almighty publisher.
> Researchers should ALWAYS calculate score reliability and validity with
> their current samples and not rely on previous estimates from other
> samples. In fact, many journal editors are now requiring researchers to
> do this prior to submitting articles for publication.  
> 
> The argument that participants need to be protected from the potential
> risk of wasting their time completing surveys that do not provide
> reliable scores is a weak argument. Maybe we should not let students
> complete paper surveys because they will run the risk of getting paper
> cuts. But we couldn't have computer surveys because participants may be
> at risk of developing carpal tunnel syndrome. Maybe we should require
> researchers to write their surveys backwards so left-handed participants
> won't run the risk of smearing their answers and getting ink on their
> hands. Okay, so I know I'm being ridiculous here (it's the day before
> grades are due!). This whole situation has too much micro-management and
> I wonder what kind of political factors are playing into their decision.
> 
> Rod
> 
> ______________________________________________
> Roderick D. Hetzel, Ph.D.
> Department of Psychology
> LeTourneau University
> Post Office Box 7001
> 2100 South Mobberly Avenue
> Longview, Texas  75607-7001
>  
> Office:   Education Center 218
> Phone:    903-233-3893
> Fax:      903-233-3851
> Email:    [EMAIL PROTECTED]
> 
> 
> ---
> You are currently subscribed to tips as: [EMAIL PROTECTED]
> To unsubscribe send a blank email to [EMAIL PROTECTED]
> 
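Rod's third point, that reliability is a property of scores and should be 
computed on one's own sample rather than quoted from a manual, is easy to act 
on in practice. As an illustration (not part of the original exchange), here is 
a minimal sketch of Cronbach's alpha, the most commonly reported 
internal-consistency estimate, in plain Python with NumPy; the function name 
and data layout are my own assumptions.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an items matrix.

    items: 2-D array-like, rows = respondents, columns = questionnaire items.
    Returns alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

As a sanity check, a matrix whose columns are perfectly correlated (e.g., the 
same scores repeated) yields alpha of exactly 1.0, which is one reason editors 
ask for the estimate from the current sample: the value depends entirely on how 
these respondents answered, not on the instrument in the abstract.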


Annette Kujawski Taylor, Ph. D.
Department of Psychology
University of San Diego 
5998 Alcala Park
San Diego, CA 92110
[EMAIL PROTECTED]

