
View Full Version : Language testing: reliability



سناء احمد
01-10-2010, 10:10 PM
Reliability

Definition: Reliability is the consistency of your measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. In short, it is the repeatability of your measurement. A measure is considered reliable if a person's score on the same test given twice is similar. It is important to remember that reliability is not measured; it is estimated.
There are two ways that reliability is usually estimated:
test/retest and
internal consistency.
There are several general classes of reliability estimates:
• Inter-rater reliability is the variation in measurements when taken by different persons but with the same method or instrument.
• Test-retest reliability is the variation in measurements taken by a single person or instrument on the same item and under the same conditions. This includes intra-rater reliability.
• Inter-method reliability is the variation in measurements of the same target when taken by different methods or instruments, but by the same person, or when inter-rater reliability can be ruled out. When dealing with forms, it may be termed parallel-forms reliability.[1]
• Internal consistency reliability assesses the consistency of results across items within a test.[1]
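To make the first of these classes concrete, here is a minimal Python sketch of inter-rater reliability as simple percent agreement between two raters scoring the same set of essays. The ratings are made-up illustrative data, not from the post, and percent agreement is only one of several possible inter-rater statistics:

```python
# Inter-rater reliability sketch: percent agreement between two raters
# who scored the same eight essays on a 1-5 scale (invented data).

def percent_agreement(rater_a, rater_b):
    """Fraction of items on which the two raters gave the same score."""
    matches = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return matches / len(rater_a)

rater_a = [3, 4, 2, 5, 3, 4, 1, 2]
rater_b = [3, 4, 3, 5, 3, 4, 2, 2]

# The two raters agree on 6 of the 8 essays.
print(percent_agreement(rater_a, rater_b))  # → 0.75
```

A high agreement value suggests the scoring rubric is being applied consistently; a low one points to rater variation of the kind this reliability class is meant to detect.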

Test/Retest
Test/retest is the more conservative method of estimating reliability. Simply put, the idea behind test/retest is that you should get the same score on test 1 as you do on test 2. The three main components of this method are as follows:
1) administer your measurement instrument at two separate times to each subject;
2) compute the correlation between the two separate measurements; and
3) assume there is no change in the underlying condition (or trait you are trying to measure) between test 1 and test 2.
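The three steps above can be sketched in a few lines of Python. The scores below are invented for illustration; step 3 (no change in the underlying trait) is an assumption you make, not something the code checks:

```python
# Test/retest reliability sketch: correlate two administrations of the
# same test to the same subjects (steps 1 and 2 of the method above).

def pearson_r(x, y):
    """Pearson correlation coefficient between two lists of scores."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    dx = [v - mean_x for v in x]
    dy = [v - mean_y for v in y]
    cov = sum(a * b for a, b in zip(dx, dy))
    var_x = sum(a * a for a in dx)
    var_y = sum(b * b for b in dy)
    return cov / (var_x * var_y) ** 0.5

# Step 1: the same seven subjects take the test twice (invented scores).
test1 = [78, 85, 62, 90, 71, 66, 88]
test2 = [80, 83, 65, 92, 70, 68, 85]

# Step 2: correlate the two sets of scores. A value near 1 suggests a
# reliable instrument, provided step 3 (a stable trait) holds.
print(round(pearson_r(test1, test2), 2))
```

The resulting coefficient for these invented scores is close to 1, which is what a reliable instrument should produce under the no-change assumption.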
Internal Consistency
Internal consistency estimates reliability by grouping questions in a questionnaire that measure the same concept. For example, you could write two sets of three questions that measure the same concept (say, class participation) and, after collecting the responses, run a correlation between those two groups of three questions to determine whether your instrument is reliably measuring that concept.
One common way of computing correlation values among the questions on your instrument is Cronbach's alpha. In short, Cronbach's alpha splits all the questions on your instrument every possible way and computes correlation values for all of those splits (statistical software handles this part). In the end, the output is a single number for Cronbach's alpha, and just like a correlation coefficient, the closer it is to one, the higher the estimated reliability of your instrument. Cronbach's alpha is a less conservative estimate of reliability than test/retest.
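A small sketch of the standard Cronbach's alpha formula, alpha = k/(k-1) × (1 − Σvar(item)/var(total)), on a made-up response matrix (rows are respondents, columns are items intended to measure one concept). In practice you would use statistical software, as the post says; this is only to show what the single output number comes from:

```python
# Cronbach's alpha sketch on invented questionnaire responses.

def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of item scores."""
    k = len(scores[0])      # number of items
    n = len(scores)         # number of respondents

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / (len(values) - 1)

    # Variance of each item (column) across respondents.
    item_vars = [variance([row[i] for row in scores]) for i in range(k)]
    # Variance of each respondent's total score.
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [
    [4, 5, 4],   # respondent 1's answers to three related items
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 2],
]

# Like a correlation coefficient, values closer to 1 indicate higher
# estimated reliability.
print(round(cronbach_alpha(responses), 2))  # → 0.96
```

Here the three items move together across respondents, so alpha comes out high; items that did not measure the same concept would drag it down.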

البـارع
01-10-2010, 10:18 PM
Of course, reliability is a major characteristic in testing.
All teachers should be aware of this point.

well done
thank you

M.o_o.N
01-10-2010, 10:28 PM
Thank you, sister :)
May Allah bless you

Lolita 1
04-10-2010, 12:55 AM
Thank you for the information
and thank you for sharing it with us
Best Regards

maan002
31-10-2010, 03:51 PM
Thank you

Lolita 1
31-10-2010, 08:31 PM
It is wonderful.

May Allah bless and protect you.
Go on.

Thanks as vast as the sky.

May Allah reward you with all good.
May Allah grant you success and fulfill all your wishes.