
Discipline-Specific Tests (Standardized and Locally Developed)

Discipline-specific tests are among the most direct ways to assess instructional outcomes. Academic departments can develop their own tests to evaluate students' mastery of the curriculum, or they can purchase standardized tests from reputable testing companies. Information on how to develop your own test can be found in the following module.

There are both advantages and disadvantages to using standardized tests. One major advantage is that testing companies will report departmental results in relation to national norms. Consequently, departments will be able to learn how their students perform compared to students in the same major at other colleges and universities. Testing companies generally spend a considerable amount of time and resources ensuring that their tests demonstrate reliability (consistency of scores over time) and predictive validity (ability to forecast students' performance on a criterion, such as first-year GPA in graduate school).
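To make these two properties concrete, the following is a minimal sketch that treats reliability as a test-retest correlation and predictive validity as the correlation between test scores and a later criterion (here, first-year graduate GPA). All scores and variable names are hypothetical, and testing companies use far more elaborate psychometric models; this only illustrates the underlying idea. It uses Python's statistics.correlation, available in Python 3.10 and later.

```python
from statistics import correlation  # Pearson's r (Python 3.10+)

# Hypothetical scores for the same five students (illustrative only).
first_administration  = [480, 515, 550, 590, 630]  # test scores, time 1
second_administration = [470, 520, 545, 600, 625]  # same test, time 2
first_year_grad_gpa   = [2.9, 3.1, 3.3, 3.6, 3.8]  # later criterion measure

# Reliability: do scores stay consistent across administrations?
reliability = correlation(first_administration, second_administration)

# Predictive validity: do scores forecast the criterion (graduate GPA)?
validity = correlation(first_administration, first_year_grad_gpa)

print(f"test-retest reliability: r = {reliability:.2f}")
print(f"predictive validity:     r = {validity:.2f}")
```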

The major disadvantage of standardized tests is that there is no guarantee the instrument will cover a program's learning objectives. Sometimes standardized test items do not assess the key goals of the curriculum, and the resulting lack of content validity should discourage use of the test. Moreover, standardized tests can be expensive to administer on a yearly basis. Despite these disadvantages, departmental faculty should periodically review the market for testing instruments that dovetail with their curriculum. Testing companies, such as the Educational Testing Service (ETS), will often send sample tests to departments for review. Departments are also encouraged to develop their own tests, but faculty should not underestimate the costs involved in constructing reliable and valid testing instruments.

ETS has developed a series of widely used "off-the-shelf" exams, the Major Field Tests. Briefly, these multiple-choice tests take two hours to administer and are designed to evaluate students' understanding of basic concepts, principles, and knowledge covered in the core undergraduate curriculum. ETS contends that the Major Field Tests "go beyond measurement of factual knowledge, however, because they also evaluate students' ability to analyze and solve problems, understand relationships, and interpret material. Departments are encouraged to include locally written questions (up to 50), giving faculty direct input into test content." Departments may administer the tests when it is convenient to do so, and ETS will score and return the exams within about three weeks. National norms are provided to departments to assist them in interpreting their students' test results.

Major Field Tests have been developed to assess student learning in the following disciplines: Biology, Business, Chemistry, Computer Science, Criminal Justice, Economics, Education, English Literature, History, Mathematics, Music, Physics, Political Science, Psychology, and Sociology. To learn more about the Major Field Tests, please visit the ETS website.

Embedded Testing

An economical way of testing students is to embed questions related to key learning goals into exams or other assignments. The performance of students on these selected items is evaluated as part of the department's ongoing assessment process. Typically, the instructor grades target questions or assignments as part of course requirements. In addition, one or two other faculty members from the department grade the questions or assignments in terms of the degree to which specific student learning objectives have been met. Embedded testing motivates students to perform their best without requiring them to engage in further assessment-related activities. Listed below are two examples of embedded testing:

  • Faculty members in the Spanish Department offer freshmen and seniors selected passages to translate on their final exams. None of the students has previously encountered the text to be translated. Three faculty members, including the instructor, rate the students' proficiency.

  • Faculty members in the Political Science Department embed questions in the final exams of freshman- and senior-level courses. Two or more faculty members develop the questions from the department's learning objectives and score the answers according to rubrics developed by an ad hoc assessment committee. Responses to the key questions are compared by student level (freshmen vs. seniors) and major (political science majors vs. other majors); a minimal sketch of this comparison follows the list.
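As a concrete illustration of the comparison in the Political Science example, the sketch below averages rubric scores on an embedded question by student level and major. The records, field names, and 4-point rubric scale are all hypothetical; the sketch shows only the grouping logic, not any department's actual procedure.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical rubric scores (0-4 scale) on one embedded question.
# Each record: (level, major, score assigned by faculty raters).
records = [
    ("freshman", "political science", 1.5),
    ("freshman", "other",             1.0),
    ("freshman", "political science", 2.0),
    ("senior",   "political science", 3.5),
    ("senior",   "other",             2.5),
    ("senior",   "political science", 3.0),
]

# Group scores by (level, major) and report each group's mean.
groups = defaultdict(list)
for level, major, score in records:
    groups[(level, major)].append(score)

for (level, major), scores in sorted(groups.items()):
    print(f"{level:8s} / {major:17s}: mean rubric score = {mean(scores):.2f}")
```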

Graduate School Admissions Tests

In certain disciplines, test results from graduate school admissions tests may provide another source of assessment data on graduates. The Educational Testing Service can furnish departments and graduate schools with special reports on students who took the Graduate Record Examination in a given discipline. Similar reports can be obtained for the GMAT, LSAT, and MCAT.

There are limitations to the use of graduate school admissions tests for assessment purposes. Most importantly, the graduates who take these tests are not a representative sample of the students in a program: they tend to be more motivated to succeed and are likely to be among the strongest students completing their degrees. In addition, the number of students who take an admissions test varies greatly by both department and college. In many instances, the small number of students tested may call into question the reliability of test statistics. Consequently, results cannot be generalized to the average student who completes a degree program. Despite these drawbacks, departments can use the results to shed some light on how effectively they prepare students for graduate-level study, and graduate programs can use them to evaluate the preparedness of their admitted students.
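One way to see why small numbers of test takers undermine the reliability of summary statistics: the standard error of a mean score shrinks only with the square root of the sample size, so an average based on a handful of examinees can swing widely from year to year. The sketch below, using hypothetical scores and a simple normal approximation, compares a 95% confidence interval for the mean at two sample sizes.

```python
from statistics import mean, stdev

def approx_ci(scores, z=1.96):
    """Approximate 95% confidence interval for a mean (normal approximation)."""
    m = mean(scores)
    se = stdev(scores) / len(scores) ** 0.5  # standard error of the mean
    return m - z * se, m + z * se

# Hypothetical scores: five examinees vs. twenty-five examinees.
small_cohort = [150, 155, 160, 148, 162]
large_cohort = small_cohort * 5  # similar spread, five times as many examinees

for label, scores in [("n = 5 ", small_cohort), ("n = 25", large_cohort)]:
    low, high = approx_ci(scores)
    print(f"{label}: mean = {mean(scores):.1f}, 95% CI = ({low:.1f}, {high:.1f})")
```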

Professional Licensure Exams

Some colleges and departments can evaluate their students' professional training by tracking their performance on professional licensure exams. Standards for passing licensure exams vary greatly from one profession to another, so a relatively high or low pass rate does not by itself indicate that a program's training is effective or ineffective. Programs are best served by monitoring the pass rates of their students over time.

It is important to note that simply tracking the percent of students that pass a licensure exam will not necessarily result in program improvement, the primary goal of all assessment efforts. Programs can use licensure exams to improve by obtaining detailed information about students' performance on various portions of the exam and by taking corrective actions to address identified deficiencies.
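A minimal sketch of that monitoring idea, using invented records: it computes each year's pass rate and the mean score on each exam subsection, flagging subsections that fall below an assumed target. The field names, threshold, and data are hypothetical and stand in for whatever detail a licensing board's score reports actually provide.

```python
from statistics import mean

# Hypothetical licensure-exam results: one record per examinee.
results = [
    {"year": 2022, "passed": True,  "sections": {"ethics": 82, "practice": 74}},
    {"year": 2022, "passed": False, "sections": {"ethics": 70, "practice": 58}},
    {"year": 2023, "passed": True,  "sections": {"ethics": 88, "practice": 79}},
    {"year": 2023, "passed": True,  "sections": {"ethics": 90, "practice": 66}},
]

TARGET = 70  # assumed minimum acceptable subsection mean (illustrative)

for year in sorted({r["year"] for r in results}):
    cohort = [r for r in results if r["year"] == year]
    pass_rate = sum(r["passed"] for r in cohort) / len(cohort)
    print(f"{year}: pass rate = {pass_rate:.0%}")

    # Break performance down by subsection to locate specific weaknesses.
    for section in cohort[0]["sections"]:
        avg = mean(r["sections"][section] for r in cohort)
        flag = "  <- below target" if avg < TARGET else ""
        print(f"  {section:9s}: mean = {avg:.1f}{flag}")
```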

A partial list of professional licensure exams includes those in the following disciplines: accountancy, architecture, dentistry, education, engineering, law, medicine, nursing, physical therapy, psychology, and social work.