UCCS Assessment Terminology



 


 

A

 

Accountability

The demand by public officials, employers, and taxpayers for school officials to prove that time and money invested in education has led to measurable learning. Accountability is often viewed as an important factor in education reform. (Horizons, 1995).

 

Achievement Target

An achievement target should be aspirational: set the skill/knowledge achievement target where you would like each student to be. There is no penalty for not meeting an achievement target; it is a goal. (Calhoun & Moon, 2010).

 

Achievement Test (see also: norm-referenced test and criterion-referenced test)

A standardized test designed to efficiently measure the amount of knowledge and/or skill a person has acquired, usually as a result of classroom instruction. Such testing produces a statistical profile used as a measurement to evaluate student learning in comparison with a standard or norm. (Horizons, 1995).

 

Action Plan

At UCCS the goal is that programs will gather viable data with which to make data-driven changes to improve student learning. An action plan documents what inspired the change and what steps will be taken to implement the plan, and follows up with the results of the changes in a timely manner.

 

Action Research

School and classroom-based studies initiated and conducted by teachers and other school staff. Action research involves teachers, aides, principals, and other school staff as researchers who systematically reflect on their teaching or other work and collect data that will answer their questions. It offers staff an opportunity to explore issues of interest to them in an effort to improve classroom instruction and educational effectiveness. (Bennett, 1994).

 

Action Verb

Link to a list of typical action verbs.

 

Affective

Outcomes of education involving feelings more than understanding: likes, pleasures, ideals, dislikes, annoyances, values. (SAAC, 1996)

 

Affective Learning: Attitudinal Development

The affective domain of learning is concerned with the attitudes and feelings of the learner in regard to knowledge and behaviors acquired in the other two domains. In most learning environments, affective learning is incidental to both cognitive and behavioral learning. This domain encompasses attitudes toward what has been learned cognitively and motivation to perform learned behaviors. (Horizons, 1995)

 

Alternative Assessment

Many educators may prefer alternatives to traditional standardized norm- or criterion-referenced tests. An alternative assessment might require students to answer open-ended questions, work out a solution to a problem, demonstrate a skill, or in some way produce work rather than select an answer from multiple choices. Portfolios and instructor observation are also acceptable alternative forms of assessment. (Horizons, 1995)

 

Analytic Scoring

A type of rubric scoring that separates the whole into categories of criteria that are examined one at a time. Student writing, for example, might be scored on the basis of grammar, organization, and clarity of ideas. An analytic scale is useful when there are several dimensions on which the piece of work will be evaluated. (See Rubric) (Horizons, 1995)
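
As a concrete illustration, the short Python sketch below scores a single hypothetical essay on three separately rated criteria; the criterion names and the 1-4 scale are assumptions for illustration, not part of any particular rubric.

    # Analytic scoring sketch: each criterion is rated on its own scale,
    # then reported separately (and optionally totaled).
    essay_scores = {
        "grammar": 3,             # hypothetical rating on a 1-4 scale
        "organization": 4,
        "clarity_of_ideas": 2,
    }

    for criterion, score in essay_scores.items():
        print(f"{criterion}: {score}/4")

    print("total:", sum(essay_scores.values()), "of", 4 * len(essay_scores))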

 

Assurance of Student Learning

At UCCS, assurance of student learning plans and activities are focused on gathering information about the levels of student achievement related to the student learning outcome(s) in question before students complete a degree in your program. (Calhoun, 2014)

 

Assurance of Student Learning Coordinator

The faculty member(s) in each major and stand-alone minor at UCCS who have accepted or been assigned responsibility for coordinating the departmental assurance of student learning plans and activities. The coordinators work with departmental faculty members to identify and coordinate instruments or activities to gather data that will address the degree of competency with which students are acquiring the skills or knowledge identified in the departmental student learning outcomes. The coordinators also lead the department through activities designed to address any gaps in the curriculum or shortcomings in student achievement that are discerned from the information gathered in the assessment activities. The coordinators are responsible for keeping the departmental activities on the departmental schedule and for maintaining the data gathered in a central location. (Calhoun & Moon, 2010)

 

Assurance of Student Learning Plan

A collaborative, group effort within a department to develop an Assurance of Student Learning Plan, including identifying appropriate student learning outcomes, developing or selecting measures/artifacts, and setting achievement targets and sampling schedules. (Calhoun, 2014)


 

B

 

Behavioral Learning: Skills Acquisition

The behavioral domain of learning is concerned with psychomotor skills.  Skills are viewed as the ability of an individual to perform certain behaviors.  Once skills are learned and possessed by the learner, they can be demonstrated through performance as observable behaviors.  This domain of learning encompasses the content of a field.

 

Benchmark/Baseline (see also: Achievement Target/Cutpoint)

Where your students are currently performing or where they were performing when you began the data collection for your assessment project.  You may reset your benchmark at reasonable intervals or when you change your assessment instruments or assessment project.  You use the benchmark/baseline to compare progress over time.  (Calhoun & Moon, 2010)

 

 

C

 

 

Cognitive Learning – General and Specific Knowledge

 

The cognitive domain of learning is concerned with knowledge, understanding and synthesis.  At the lowest level, this domain focuses on specific facts.  At the middle level, the cognitive domain focuses on principles and generalizations.  At the highest level of cognitive learning, the focus is on synthesis and evaluation based on learning that has already taken place at the lower levels. This domain of learning encompasses the content of a field and the general education core curriculum. (SAAC, 1996)

 

 

Competency Test

A test intended to establish that a student has met established minimum standards of skills and knowledge and is thus eligible for promotion, graduation, certification, or other official acknowledgment of achievement.

 

 

Competency Standards

In education, competencies are typically discussed in relation to benchmarks (where students are currently performing, or have historically performed), the minimal acceptable competency level to be considered eligible for graduation or for moving on to the next level, and aspirational competencies, which are where the faculty/department want to take the students. (Calhoun, 2013)

 

Criterion-Referenced

Composed of items based on specific objectives, or competency statements.  The criterion-referenced test defines the performance of each test-taker without regard to the performance of others.  The CRT interpretation defines success as being able to perform a specific task or set of competencies.  There is no limit to the number of people who can succeed on a CRT and the instruction that the test-takers receive in anticipation of the test is usually addressed specifically to these competencies.  Criterion-referenced tests should be used whenever you are concerned with assessing a person’s ability to demonstrate a specific skill.  There are five basic purposes for criterion-referenced tests:

·         Prerequisite tests are used to ensure that the learners have the background knowledge required for success in the course.  If there are minimum skills required for the course, the prerequisite test is designed to assess mastery of these skills.

·         Entry tests are used to identify the skills to be taught in the course that the entering student may already possess.  The entry test can be used to allow students to bypass a module of instruction, if the students demonstrate in advance the skills to be covered in the module.  This test can also be used to identify the range of skills the students have, over and above the prerequisite skills.

·         Diagnostic tests are used to assess mastery of a group of related objectives in an instructional unit.  Whereas entry and prerequisite tests are used before instruction, the diagnostic test will typically be used during instruction (when it is often called an “embedded test”) or as part of the post-test process to determine exactly where a learner is having difficulty.

·         Post-tests are used after instruction to assess mastery of the course’s terminal objectives.

·         Equivalency tests are used to determine whether a learner has already mastered the course’s terminal objectives without going through instruction.  The tests are used to determine if a test-taker can bypass – “test out of” – an entire course.

You will find that often the same questions can and will appear on different types of tests.  The type of test is determined by the purpose the test serves – not by the text of the items it contains.  (Shrock & Coscarelli, 1989)

 

Culminating Project (see also: Senior Project)

 

 

Cutpoint (see also: Benchmark/Achievement Target)

The minimal acceptable performance level that a student must achieve to be considered competent in a specific skill, knowledge set, or program. The performance/skill/knowledge you are assessing is related to your student learning outcomes. (Calhoun & Moon, 2010)

 

D

 

 

Data Collection

The process of gathering data.

 

Direct Measure (see also: Indirect Measure)

A measure that requires students to demonstrate their skills or knowledge.

 

Discipline Specific

Domains specific to a discipline in which students are expected to gain focused competencies (not general education competencies, although they may overlap).

 

Domain

A focused area of knowledge (e.g., science or history), or a specific group of skills, such as biology lab skills versus electronics lab skills.

 

 

                                          

 

E

 

 

Educational Testing Service – ETS

A provider of academic assessments and services that enable administrators and faculty to make the most of their resources through the use of standardized and customized exams.

 

 

Essay Test

A test that requires students to answer questions in writing. Responses can be brief or extensive. Essay tests assess recall and the ability to apply knowledge of a subject, rather than the ability to choose the least incorrect answer from a menu of options.

 

 

Evaluation

The process of making judgments regarding the appropriateness of some person, program, process, or product for a specific purpose.  Evaluation may or may not involve testing, measurement, or assessment. Most informed judgments of worth, however, would likely require one or more of these data-gathering processes.  Evaluation decisions may be based on either quantitative or qualitative data; the type of data that is most useful depends entirely on the nature of the evaluation question. (Shrock & Coscarelli, 1989)

 

F

 

 

Formative Assessment

Observations which allow one to determine the degree of student competency/proficiency in targeted skills and knowledge at designated stages in the course or program. Outcomes may suggest interventions for teaching and learning prior to the completion of the program. It is during the courses leading to completion of the program requirements that students acquire component skills and knowledge. The essential component skills and knowledge should provide the foundation of your student learning outcomes.

 

“The component skills are mediators of the final goal, i.e. mastery of a lower level skill is necessary to achieve the next level of performance; non-mastery of a subordinate skill significantly reduces the probability that the next level task will be mastered.”  (Shrock & Coscarelli, 1989)

 

G

 

Grade Equivalent

A score that describes student performance in terms of the statistical performance of an average student at a given grade level. A grade equivalent score of 5.5, for example, might indicate that the student's score is what could be expected of an average student doing average work in the fifth month of the fifth grade. The scale ranges from September of the kindergarten year (K.0) to June of the senior year in high school (12.9). Useful as a ranking score, a grade equivalent provides only a theoretical or approximate comparison across grades; it may not indicate what the student would actually score on a test given to a midyear fifth-grade class.

H

High Stakes Testing
Any testing program whose results have important consequences for students, teachers, schools, and/or districts. Such stakes may include promotion, certification, graduation, or denial/approval of services and opportunity.

Higher Learning Commission (HLC)

The Higher Learning Commission (HLC) is an independent corporation and one of two commission members of the North Central Association of Colleges and Schools (NCA), which is one of six regional institutional accreditors in the United States. The Higher Learning Commission accredits degree-granting post-secondary educational institutions in the North Central region. http://ncahlc.org

I

Indirect Measure (see also: Direct Measure)

Indirect measures are based on student or alumni opinions or perceptions of their skills and knowledge.  (Calhoun & Moon, 2010)

 

I. Q. Tests

The first of the standardized norm-referenced tests, developed during the nineteenth century. Traditional psychologists believe that neurological and genetic factors underlie "intelligence" and that scoring the performance of certain intellectual tasks can provide assessors with a measurement of general intelligence. There is a substantial body of research that suggests that I.Q. tests measure only certain analytical skills, missing many areas of human endeavor considered to be intelligent behavior. I.Q. is considered by some to be fixed or static, whereas an increasing number of researchers are finding that intelligence is an ongoing process that continues to change throughout life.

Item Analysis

Analyzing each item on a test to determine the proportions of students selecting each answer. Can be used to evaluate student strengths and weaknesses; may point to problems with the test's validity and to possible bias.
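
The proportion calculation described above can be illustrated with a short Python sketch; the items and student responses below are hypothetical.

    from collections import Counter

    # Hypothetical multiple-choice responses: for each item, the answer
    # choice selected by each of eight students.
    responses = {
        "item_1": ["A", "A", "C", "B", "A", "D", "A", "C"],
        "item_2": ["B", "B", "B", "A", "B", "C", "B", "B"],
    }

    # Proportion of students selecting each answer choice, per item.
    for item, answers in responses.items():
        n = len(answers)
        proportions = {choice: count / n for choice, count in Counter(answers).items()}
        print(item, proportions)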

J

Journals

Students' personal records and reactions to various aspects of learning and developing ideas. A reflective process often found to consolidate and enhance learning.

L

Learning Goal (please see: Teaching Goal)

  

M

  

Mapping

      
A process of using an existing template to indicate the relationship between a department's student learning outcomes and the core curriculum, or against selected assessment measures.  Additionally, mapping can be used at the course level to explore course-level and program-level outcomes relative to the course assessment methods.
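
As a concrete illustration of a curriculum map, the Python sketch below relates hypothetical program-level outcomes to the courses where each is introduced, reinforced, or mastered; the outcome names, course numbers, and I/R/M labels are invented for illustration, not a UCCS template.

    # Hypothetical curriculum map: program-level student learning outcomes
    # mapped to the courses where each is Introduced, Reinforced, or Mastered.
    curriculum_map = {
        "SLO 1: disciplinary writing":   {"COURSE 101": "I", "COURSE 210": "R", "COURSE 495": "M"},
        "SLO 2: quantitative reasoning": {"COURSE 120": "I", "COURSE 310": "R"},
    }

    for outcome, courses in curriculum_map.items():
        print(outcome)
        for course, level in courses.items():
            print(f"  {course}: {level}")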

  

Mean

One of several methods of representing a group with a single, typical score. It is figured by adding up all the individual scores in a group and dividing the total by the number of people in the group. The mean can be affected by extremely low or high scores and does not provide enough information to evaluate student competencies for strengths or weaknesses.

  

Median

The point on a scale that divides a group into two equal subgroups. Another way to represent a group's scores with a single, typical score. The median is not affected by extremely low or high scores as the mean is. (See Norm.)
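
A short Python sketch contrasting the Mean and Median entries: adding one extreme score shifts the mean noticeably, while the median of the same hypothetical data barely moves.

    from statistics import mean, median

    scores = [72, 75, 78, 80, 83]          # hypothetical class scores
    with_outlier = scores + [20]           # one extremely low score added

    print(mean(scores), median(scores))              # 77.6 78
    print(mean(with_outlier), median(with_outlier))  # 68.0 76.5 -> mean shifts, median barely moves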

Measurement

The collection of quantitative data to determine the degree of competency for whatever skill or knowledge set is being measured.  There may or may not be right and wrong answers. A measurement inventory such as the Decision Making Inventory might be used to determine a preference for using a systematic style versus a spontaneous one in making a sale.  One style is not "right" and the other "wrong"; the two styles are simply different.  (Shrock & Coscarelli, 1989)

 

Minimal Acceptable Performance Level (aka cutpoint)

The minimal acceptable performance level for a student to be considered competent in specific skills or knowledge set.  The performance/skill/knowledge you are assessing is related to your student learning outcomes. (Calhoun, 2014)

                  

Multidimensional Assessment

Assessment that gathers information about a broad spectrum of abilities and skills (as in Howard Gardner's theory of Multiple Intelligences).

  

Multiple Choice Tests

A test in which students are presented with a question or an incomplete sentence or idea. The students are expected to choose the correct or best answer/completion from a menu of alternatives.



N


North Central Association

The NCA CASI Office of Postsecondary Education is responsible for the accountability of schools with postsecondary certificate-granting designation in accordance with federal regulations. The office works closely with postsecondary institutions to ensure the quality of the education provided to students. The process of institutional accreditation provides an avenue to utilize a variety of criteria to assess the effectiveness of postsecondary programs. The objectives of the postsecondary accreditation process are to:

§  Provide a process for institutional evaluation.

§  Assure accountability in the use of federal funds allocated to the institutions.

§  Promote, strengthen, and assure the operation of quality educational programs for all students.

http://ncacasi.org

  

Norm

A distribution of scores obtained from a norm group. The norm is the midpoint (or median) of scores or performance of the students in that group. Fifty percent will score above and fifty percent below the norm.

  

Norm Group

A random group of students selected by a test developer to take a test to provide a range of scores and establish the percentiles of performance for use in establishing scoring standards.

  

Norm Referenced Tests


A test in which a student or a group's performance is compared to that of a norm group. The student or group scores will not fall evenly on either side of the median established by the original test takers. The results are relative to the performance of an external group and are designed to be compared with the norm group providing a performance standard. Often used to measure and compare students, schools, districts, and states on the basis of norm-established scales of achievement.

  

Composed of items that separate the scores of test-takers from one another. (Shrock & Coscarelli, 1989)


 

O

  

On-Demand Assessment

An assessment process that takes place as a scheduled event outside the normal routine. An attempt to summarize what students have learned that is not embedded in classroom activity.

  

P

  

Pedagogical Goals

Specific expected student learning outcomes.

  

Percentile

A ranking scale ranging from a low of 1 to a high of 99 with 50 as the median score. A percentile rank indicates the percentage of a reference or norm group obtaining scores equal to or less than the test-taker's score. A percentile score does not refer to the percentage of questions answered correctly; it indicates the test-taker's standing relative to the norm group standard.
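
A minimal Python sketch of the percentile-rank idea: the rank reflects the share of norm-group scores at or below the test-taker's score, not the percentage of items answered correctly. The norm-group scores are hypothetical, and operational scoring formulas typically cap the reported rank between 1 and 99.

    def percentile_rank(score, norm_group):
        # Percentage of norm-group scores equal to or less than the given score.
        at_or_below = sum(1 for s in norm_group if s <= score)
        return 100 * at_or_below / len(norm_group)

    norm_group = [55, 60, 62, 67, 70, 73, 75, 80, 85, 92]   # hypothetical norm group
    print(percentile_rank(73, norm_group))   # 60.0 -> at or above 60% of the norm group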

  

Performance Criteria

The standards by which student performance is evaluated. Performance criteria help assessors maintain objectivity and provide students with important information about expectations, giving them a target or goal to strive for.

  

Portfolio

A systematic and organized collection of a student's work that exhibits to others the direct evidence of a student's efforts, achievements, and progress over a period of time. The collection should involve the student in selection of its contents, and should include information about the performance criteria, the rubric or criteria for judging merit, and evidence of student self-reflection or evaluation. It should include representative work, providing a documentation of the learner's performance and a basis for evaluation of the student's progress. Portfolios may include a variety of demonstrations of learning and have been gathered in the form of a physical collection of materials, videos, CD-ROMs, reflective journals, etc.

  

Portfolio Assessment

Portfolios may be assessed in a variety of ways. Each piece may be individually scored, or the portfolio might be assessed merely for the presence of required pieces, or a holistic scoring process might be used and an evaluation made on the basis of an overall impression of the student's collected work. It is common that assessors work together to establish consensus of standards or to ensure greater reliability in evaluation of student work. Established criteria are often used by reviewers and students involved in the process of evaluating progress and achievement of objectives.

  

Primary Trait Method

A type of rubric scoring constructed to assess a specific trait, skill, behavior, or format, or the evaluation of the primary impact of a learning process on a designated audience.

  

Process

A generalizable method of doing something, generally involving steps or operations that are usually ordered and/or interdependent. Process can be evaluated as part of an assessment, as in the example of evaluating a student's performance during pre-writing exercises leading up to the final production of an essay or paper.

  

Profile

A graphic compilation of the performance of an individual on a series of assessments.

  

Project

An assignment involving more than one type of activity, from inception to completion. Projects can take a variety of forms.  Some examples are a research project, mural construction, a shared service project, or other collaborative or individual effort.

  

Psychomotor Skills

Skills that you have performed so often that you do not have to think about how to do them while you are doing them.

  

Q


Quartile

The breakdown of an aggregate of percentile rankings into four categories: the 0-25th percentile, 26-50th percentile, etc.

  

Quintile

The breakdown of an aggregate of percentile rankings into five categories: the 0-20th percentile, 21-40th percentile, etc.
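
The Quartile and Quintile groupings above can be illustrated with a small Python sketch that assigns hypothetical percentile ranks to four or five bands.

    import math

    def band(percentile_rank, n_bands):
        # Assign an integer percentile rank to one of n_bands bands,
        # following the 0-25th / 26-50th (etc.) groupings described above.
        width = 100 / n_bands
        return max(1, math.ceil(percentile_rank / width))

    ranks = [12, 25, 26, 50, 51, 75, 76, 99]   # hypothetical percentile ranks
    print([band(r, 4) for r in ranks])   # quartiles: [1, 1, 2, 2, 3, 3, 4, 4]
    print([band(r, 5) for r in ranks])   # quintiles: [1, 2, 2, 3, 3, 4, 4, 5]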

 

R

  

Reliability

The measure of consistency for an assessment instrument. The instrument should yield similar results over time with similar populations in similar circumstances.

  

Rubric

Some of the definitions of rubric are contradictory. In general a rubric is a scoring guide used in subjective assessments. A rubric implies that a rule defining the criteria of an assessment system is followed in evaluation. A rubric can be an explicit description of performance characteristics corresponding to a point on a rating scale. A scoring rubric makes explicit expected qualities of performance on a rating scale or the definition of a single scoring point on a scale.


S

  

Sampling

A method used to obtain information about a large group by examining a smaller, typically randomly chosen, selection of group members. If the sampling is conducted correctly, the results will be representative of the group as a whole. Sampling may also refer to the choice of smaller tasks or processes that will be valid for making inferences about the student's performance in a larger domain. "Matrix sampling" asks different groups to take small segments of a test; the results will reflect the ability of the larger group on a complete range of tasks.
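
A minimal Python sketch of simple random sampling as described above, using the standard library; the population and sample sizes are arbitrary examples.

    import random

    # Hypothetical population of 200 student IDs; draw a simple random sample of 30.
    population = [f"student_{i:03d}" for i in range(1, 201)]

    random.seed(42)                            # fixed seed so the sketch is reproducible
    sample = random.sample(population, k=30)   # simple random sample, without replacement
    print(len(sample), sample[:5])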

  

  

Sampling Plan/Schedule

An annual academic-year schedule that indicates when you will be engaging in data collection activities relative to your program's assurance of student learning plan.

  

Scale

A classification tool or rating system designed to indicate the degree to which an event or behavior has occurred.

  

Scale Scores

Scores based on a scale ranging from 001 to 999. Scale scores are useful in comparing performance in one subject area across classes, schools, districts, and other large populations, especially in monitoring change over time.

  

Scoring Criteria

Rules for assigning a score or the dimensions of proficiency in performance used to describe a student's response to a task. May include rating scales, checklists, answer keys, and other scoring tools. In a subjective assessment situation, a rubric.

  

Scoring Guide

A package of guidelines intended for people scoring performance assessments. May include instructions for raters, notes on training raters, rating scales, and samples of student work exemplifying various levels of performance.

  

Self-Assessment

A process in which a student engages in a systematic review of a performance, usually for the purpose of improving future performance. May involve comparison with a standard or established criteria. May involve critiquing one's own work or may be a simple description of the performance. Reflection, self-evaluation, and metacognition are related terms.

 

Stakeholders

Faculty, Administrators, Staff, Students, Employers, Professional Bodies, Communities.

An individual or group with an interest in the success of an organization in delivering intended results and maintaining the viability of the organization's products and services.  Stakeholders influence programs, products, and services.

http://www.gao.gov/special.pubs/bprag/bprgloss.htm#sectS


  

Senior Project/Culminating Project

An opportunity for a student to demonstrate competency in required knowledge or skills through research, performance, writing, speaking, construction of a physical project - any means selected by the department, which would allow the student to demonstrate their level of achievement of the student learning outcomes. The assumption is that the student would draw upon the knowledge and skills gained in the program.  An assessment instrument designed to rate levels of competency/proficiency for a culminating project would be summative.

 

Standardized Test

An objective test that is given and scored in a uniform manner. Standardized tests are carefully constructed and items are selected after trials for appropriateness and difficulty. Tests are issued with a manual giving complete guidelines for administration and scoring. The guidelines attempt to eliminate extraneous interference that might influence test results. Scores are often norm-referenced.

  

Student Learning Outcome/Objective

At UCCS, student learning outcomes are statements crafted by departmental faculty pinpointing measurable skills and knowledge domains that a student acquires or improves upon during their pursuit of a degree within a specific discipline at UCCS.  The purpose of the student learning outcome is to state what skill(s) or knowledge domains a student will acquire competency in prior to completing a degree in your discipline.  These are program-level outcomes, not course-level.  This statement is not about how or what you will teach students or how students learn.  A student learning outcome is crafted simply to identify for your department and other interested parties what students are expected to know or be able to do by completion of a degree in your program.

  

Subjective Test

A test in which the impression or opinion of the assessor determines the score or evaluation of performance. A test in which the answers cannot be known or prescribed in advance.

  

Summative Assessment

Evaluation at the conclusion of an activity, course or program to assess student competency or proficiency for targeted skills and knowledge.

  

T

  

Teaching Goal

Teaching Goals differ from Student Learning Outcomes.  A teaching or program goal is a broad, often long-term aspiration of what the department wants to teach, do with students, or see students achieve pre- and post-graduation, whereas an SLO is a specific, measurable expectation about the knowledge, skills, and competencies that students will be able to demonstrate prior to completing the program.

  

Testing

The collection of quantitative (numerical) information about the degree to which a competence or ability is present in the test taker.  There are right and wrong answers to the items on the test, whether it is a test composed of written questions or a performance test requiring the demonstration of a skill.  (Shrock & Coscarelli, 1989)

 

 

V


Validity

Validity has to do with whether or not a test measures what it is supposed to measure.  A test can be consistent (reliable) but measure the wrong thing. (Shrock & Coscarelli, 1989)

  

Voluntary System of Accountability (VSA)

The Voluntary System of Accountability (VSA) is an initiative by public 4-year universities to supply basic, comparable information on the undergraduate student experience to important constituencies through a common web report - the College Portrait.

 

The VSA was developed in 2007 by a committed group of university leaders and is sponsored by two higher education associations - the Association of Public and Land-grant Universities (APLU) and the American Association of State Colleges and Universities (AASCU).

 

Development and start-up funding was provided by the Lumina Foundation. Beginning in 2010, the VSA has been supported by the participating institutions through annual dues.  (VSA website)