Psychological Assessment Test Validity and Types Explained

Learn about psychological assessment test validity, different types of tests, the concept of validity, and the importance of validation in test development. Understand how validity is crucial for accurate measurement of behaviors and traits in various contexts.

  • Psychological Assessment
  • Test Validity
  • Types of Tests
  • Validity Concept
  • Test Development


Presentation Transcript


  1. M.A. Semester IV, Paper II: Psychological Assessment, Test Validity. Topics covered: What is a Test?; Types of Tests; The Concept of Validity; Types of Validity (Content Validity; Criterion-Related Validity: Concurrent Validity and Predictive Validity; Construct Validity); Consulted Books. Prof. Dhananjay Kumar, Department of Psychology, DDU Gorakhpur University.

  2. What is a Test? A psychological test is a standardized instrument designed to measure objectively one or more aspects of a total personality by means of samples of verbal or non-verbal responses, or by means of other behaviours (Freeman, 1962). A test is a measurement device or technique used to quantify behaviour or to aid in the understanding and prediction of behaviour. Anastasi (1988) defined a test as an objective and standardized measure of a sample of behaviour. This definition focuses our attention on three elements: 1) objectivity, 2) standardization, and 3) a sample of behaviour.

  3. Types of Tests
  • Ability tests: measure skills in terms of speed, accuracy, or both.
    • Achievement: measures previous learning.
    • Aptitude: measures potential for acquiring a specific skill.
    • Intelligence: measures potential to solve problems, adapt to changing circumstances, and profit from experience.
  • Personality tests: measure typical behaviour traits, temperaments, and dispositions.
    • Structured (objective): provides a self-report statement to which the person responds True or False, Yes or No.
    • Projective: provides an ambiguous test stimulus; response requirements are unclear.

  4. The Concept of Validity Validity is an estimate of how well a test measures what it aims to measure in a particular context. In other words, test validity is the extent to which a test accurately measures what it is supposed to measure. It is a judgment based on evidence about the appropriateness of inferences drawn from test scores. For example, an anxiety test is valid when it measures anxiety; if it measures something other than anxiety, the test is not valid. More recently, test validity has been defined in a somewhat different way. According to the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999), validity refers to the degree to which evidence and theory support the interpretations of test scores entailed by the proposed uses of tests.

  5. Validity should not be thought of as a characteristic of the test itself; rather, the question is whether there is evidence that supports the proposed use of the obtained test scores. The current view of validity focuses more on the interpretations that can reasonably be drawn from test scores. An important part of the development of any test is investigating whether the concepts or characteristics that the test developer is interested in measuring are actually being measured. Validation is the process of gathering and evaluating evidence about validity. It is the test developer's responsibility to supply validity evidence in the test manual.

  6. Types of Validity
  A. Content Validity
  B. Criterion-Related Validity
     a) Concurrent Validity
     b) Predictive Validity
  C. Construct Validity

  7. Content Validity Content validity is concerned with the systematic examination of the test content to ascertain whether or not it is a representative sample of the behaviour domain to be measured. It is the only type of evidence, besides face validity, that is logical rather than statistical. When a test is developed, content validity is often secured to make sure that the sample of behaviour, that is, the test, is truly representative of the domain being assessed. This requires a thorough knowledge of the domain: a researcher developing a test of depression must be very familiar with depression and know whether it includes affect, sleep disturbances, loss of appetite, restricted interest in various activities, lowered self-esteem, and so on.

  8. Test developers include information in the manual on the content areas and the skills or objectives covered by the test. Determination of content validity evidence is often made by expert judgment: typically, various experts judge each item in terms of its match with, or relevance to, the content. Statistical methods have also been used to determine whether items fit into conceptual domains. Two new concepts relevant to content validity evidence were emphasized in the latest version of the Standards for Educational and Psychological Testing (AERA, APA, & NCME, 1999): construct underrepresentation and construct-irrelevant variance. Construct underrepresentation describes the failure to capture important components of a construct. Construct-irrelevant variance occurs when scores are influenced by factors irrelevant to the construct.
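The slide does not name a specific statistical method, but one widely used index for summarizing expert judgments of item relevance is Lawshe's (1975) content validity ratio (CVR). The sketch below is only an illustration; the panel size and item ratings are invented.

```python
# A minimal, hypothetical sketch of Lawshe's content validity ratio (CVR):
# CVR = (n_e - N/2) / (N/2), where n_e is the number of experts who rate an
# item "essential" and N is the total number of experts on the panel.

def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """Ranges from -1 (no expert rates the item essential) to +1 (all do)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

# Invented ratings: how many of 10 experts called each depression-test item essential
ratings = {"sleep disturbance": 9, "loss of appetite": 8, "shoe size": 1}
for item, n_essential in ratings.items():
    print(f"{item}: CVR = {content_validity_ratio(n_essential, 10):+.2f}")
```

Items with a CVR near +1 are judged essential by nearly every expert; items with low or negative values are candidates for revision or removal.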

  9. Criterion-Related Validity Criterion-related validity gives an idea of the effectiveness of a test in predicting an individual's behaviour in specified situations. To fulfil this purpose, performance on the test is checked against an external criterion. Criterion-related validity evidence indicates just how well a test corresponds with a particular criterion; such evidence is provided by correlations between the test and a well-defined criterion measure. A criterion can be any standard against which the test is compared. For a mechanical aptitude test, the criterion might be subsequent job performance, whereas for a scholastic aptitude test it might be college grades. Criterion-related validity is generally divided into concurrent and predictive validity based on the timing of measurement for the "predictor" and the outcome.
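As a minimal sketch of how such a coefficient is obtained, the numbers below (invented for illustration) correlate mechanical aptitude test scores with later supervisor ratings of job performance; the Pearson correlation between the two is the criterion-related validity coefficient.

```python
# Hypothetical criterion-related validity coefficient: Pearson correlation
# between test scores and an external criterion measure.
import numpy as np

aptitude_scores = np.array([52, 61, 45, 70, 66, 58, 73, 49, 64, 55])      # test
job_performance = np.array([3.1, 3.8, 2.9, 4.5, 4.0, 3.4, 4.6, 3.0, 3.9, 3.3])  # criterion

validity_coefficient = np.corrcoef(aptitude_scores, job_performance)[0, 1]
print(f"criterion-related validity coefficient r = {validity_coefficient:.2f}")
```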

  10. Predictive Validity The major goal of predictive validity evidence is to determine the relationship between test scores, which are obtained before decisions are made, and criterion scores, which are obtained after decisions are made. Predictive validity is an index of the degree to which a test score predicts some criterion measure. The predictive function of tests is a type of criterion-related validity evidence known as predictive validity evidence. For example, a reasoning test can serve as predictive validity evidence for a recruitment process if it accurately forecasts how well candidates will perform on the job in the future.
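The sketch below illustrates, with invented numbers, how a predictively valid test can be used for forecasting: a simple regression of later job performance on pre-hire reasoning scores yields predicted performance for new applicants. The data and variable names are hypothetical.

```python
# Hypothetical forecast from a predictively valid test: regress later job
# performance on reasoning scores obtained at selection, then predict the
# criterion for new applicants.
import numpy as np

reasoning_scores = np.array([40, 45, 50, 55, 60, 65, 70, 75])              # at selection
performance_later = np.array([2.8, 3.0, 3.3, 3.4, 3.9, 4.0, 4.3, 4.6])     # one year later

# Least-squares line: predicted performance = slope * score + intercept
slope, intercept = np.polyfit(reasoning_scores, performance_later, deg=1)

new_applicants = np.array([48, 68])
forecast = slope * new_applicants + intercept
print(dict(zip(new_applicants.tolist(), np.round(forecast, 2).tolist())))
```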

  11. The term prediction in the broader sense refers to prediction from the test to any criterion situation, whereas in the specific sense it refers to prediction over a time interval. It is in the latter sense that it is used in the concept of predictive validity. Predictive validity is most relevant to tests used in the selection and classification of personnel. The information provided by the predictive validity of tests has proved useful in hiring job applicants, selecting students for admission to college or professional schools, and assigning military personnel to occupational training programs. Other examples include the use of tests to screen out applicants likely to develop emotional disorders in stressful environments, or the use of tests to identify psychiatric patients most likely to benefit from a particular therapy.

  12. Concurrent Validity Concurrent validity is an index of the degree to which a test score is related to some criterion measure obtained at the same time (concurrently). Concurrent evidence for validity comes from assessing the test and the criterion simultaneously, for example a disability test and current school performance. Here the test and the criterion are administered at the same time because the test is designed to explain why the person is now having difficulty in school. Concurrent evidence for validity applies when the test and the criterion can be measured at the same time.

  13. Concurrent validity is a practical alternative to predictive validity: the test developer obtains test and criterion scores from the same predetermined sample and then computes a correlation between the two sets of scores. For certain uses, concurrent validity is an appropriate type of evidence and can be justified in its own right. The logical difference between predictive and concurrent validity is based on the objectives of testing. Further, a predictive validity coefficient is obtained from a random sample of the population, whereas a concurrent validity coefficient is obtained from a preselected sample. Concurrent validity applies to tests that are mainly employed for the diagnosis of existing status.

  14. Construct Validity The construct validity of a test is the degree to which the test may be claimed to measure a theoretical construct or trait. A construct is an idea defined operationally and constructed to describe a specific behaviour. Intelligence, mechanical aptitude, verbal fluency, depression, and anxiety are some examples of such constructs. Constructs are unobservable, presupposed (underlying) characteristics or concepts that a test developer may invoke to describe test behaviour or criterion performance. Construct validation requires the collection of information from different sources.

  15. The main goal of construct validation is to ascertain whether test scores provide a good measure of a specific construct. The process of outlining a detailed description of the relationship between specific behaviours and abstract constructs is referred to as construct explication. Construct explication consists of three steps:
  1. identification of the behaviours that are related to the construct to be measured;
  2. identification of other constructs and deciding whether they are related or unrelated to the construct to be measured;
  3. identification of behaviours that are related to each of these other constructs and, on the basis of the relationships among the constructs, determining whether each behaviour is related to the construct to be measured.

  16. Methods of Construct Validation The most basic method is to correlate scores on the test in question with scores on a number of other tests. Another common method involves the multivariate technique called factor analysis, which provides an analytical way of estimating the correlation between a specific variable (a score on a test) and a latent factor. A latent factor is an underlying construct that accounts for a group of manifest variables (test scores) and is identified through the analysis. Multitrait-Multimethod Approach: Campbell and Fiske (1959) developed this approach to assess construct validity. In it, several traits are each measured by several different methods, and the correlations among all of these measures are arranged in a multitrait-multimethod matrix.
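As a minimal sketch of the first two methods, the hypothetical data below contain six test scores generated from two latent factors; the correlation matrix (method 1) and an exploratory factor analysis (method 2) both recover the two-construct structure. The construct names, sample size, and numbers are invented for illustration only.

```python
# Hypothetical construct-validation sketch: correlate tests with each other,
# then run an exploratory factor analysis to estimate loadings on latent factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
verbal = rng.normal(size=n)        # unobserved "verbal fluency" factor
mechanical = rng.normal(size=n)    # unobserved "mechanical aptitude" factor

# Six observed test scores, each driven by one latent factor plus noise
scores = np.column_stack([
    verbal + rng.normal(scale=0.5, size=n),
    verbal + rng.normal(scale=0.5, size=n),
    verbal + rng.normal(scale=0.5, size=n),
    mechanical + rng.normal(scale=0.5, size=n),
    mechanical + rng.normal(scale=0.5, size=n),
    mechanical + rng.normal(scale=0.5, size=n),
])

# Method 1: correlate the test in question with scores on other tests
print(np.round(np.corrcoef(scores, rowvar=False), 2))

# Method 2: exploratory factor analysis; the loadings show how strongly each
# observed test relates to each latent factor
fa = FactorAnalysis(n_components=2, random_state=0).fit(scores)
print(np.round(fa.components_.T, 2))   # rows = tests, columns = latent factors
```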

  17. Consulted Books
  • American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
  • Anastasi, A. (1976). Psychological testing (4th ed.). New York: Macmillan Publishing Co.
  • Cohen, R. J., & Swerdlik, M. E. (2009). Psychological testing and assessment: An introduction to tests and measurement (7th ed.). TMH.
  • Domino, G., & Domino, M. L. (2006). Psychological testing: An introduction. Cambridge University Press.
  • Kaplan, R. M., & Saccuzzo, D. P. (2009). Psychological testing: Principles, applications, and issues (7th ed.). Wadsworth, Cengage Learning.
