Standard 4: Program Impact

This document is written to include the original self-study (text in black) and, for each standard, the relevant CAEP offsite Formative Feedback Review (FFR) narrative (text in blue). COE's response to the FFR is incorporated throughout and at the end of each standard (text in red).


The following link takes the reader to the FFR comments along with the EPP's response.

Click here to see the EPP's response to the FFR


Standard 4: Program Impact

The provider demonstrates the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation.

The focus of evidence presented to address Standard 4 (Program Impact) includes a summary of Teacher Work Sample results for the Teacher Education and Alternative Licensure Programs (4A); abridged details on the new, externally validated assessments of candidate impact that COE is piloting; and survey results regarding program satisfaction (4C.1 and 4C.2).

Introduction

As with most EPPs, impact on P-12 learning has been an ongoing focus of assessment development. Certain programs have been able to assess candidates' impact on classroom instruction through observable measures such as student engagement and productivity. The initial preparation programs house assignments and candidate assessments within portfolio requirements, while measuring program impact within other programs, such as principal preparation, is more technically challenging. University faculty are also incorporating more assessment-related content into coursework and expecting candidates to demonstrate that student learning has occurred through methods such as pre- and post-testing, case studies, and projects.

Candidate Impact

The Teacher Work Sample (TWS) has been used for five years in the TELP and ALP programs, and its use provides tangible evidence of P-12 learning. E-portfolios all contain multiple means of demonstrating impact; the examples candidates use most frequently are lesson plans, observations, dispositions, assessments, the teacher work sample, a parent communication log, and graded student work. TWS addresses multiple facets of classroom instruction (alignment with state and local standards; variety of methods; adaptation for individual needs) as well as measures of student growth and development (pre- and post-assessments; interpretation and disaggregation of data; interpretation of student learning, etc.).

The summary (4.A) provides three years of data for TELP (Elementary and Secondary levels) and ALP candidates. Quality standards are determined by faculty judgment and professional practice. Some TWS components have multiple scorers, and all combine to address a wide range of knowledge and skills that each candidate is responsible for demonstrating. Candidates are rated on a four-point scale, with '4' being high. Candidates' mean scores were generally 3.5 and higher, and data from three years of cohorts reveal that the high-score means are very similar for the Alternative, Elementary, and Secondary groups, while there is somewhat more variance among the low-score means. It is heartening to see that scores for 'Alignment with National, State or Local Standards' are high for all groups. Faculty study the results to determine which program areas need attention and adjust curriculum accordingly. It is important to remember that TWS is just one measure of candidates' competence; other requirements demonstrating their success are included in their portfolios. TELP and ALP are piloting edTPA this semester, but some faculty believe the transition will not be overly onerous because there are several parallels between the requirements for TWS and edTPA.

While the assessments generally used by the College are acceptable and frequently exceptional, there is a shift away from internally and independently developed and monitored assessments (lesson plans, observations, surveys, portfolios) as the primary means of determining candidate success, and toward assessments developed and implemented with wider cross-program or cross-department collaboration. Further, newer practices tend to reflect a trend of implementing assessments similar to those currently used while expanding their validity through nationally normed administration and scoring.

For many years the state of Colorado has required that candidates pass the Praxis or PLACE examinations, so the programs have generally had a single cut score as one means of assessing content knowledge. The College is piloting new assessments that better measure the complexity of candidate performance and provide far more nuanced detail on how candidates demonstrate the knowledge, skills, and dispositions necessary to be effective educators. The three instruments, CLASS, edTPA, and the newly developed Student Perception Survey, reflect the shift from internal to external validation. There is also an interest in using instruments that may serve as predictive indicators of how candidates will perform in their own classrooms. The Classroom Assessment Scoring System (CLASS) is an observational tool (see Standard 2) being piloted by Special Education and UCCSTeach this year (2013-14). CLASS focuses on the language and interactions that occur in the classroom. According to Teachstone, the CLASS tool:

  • focuses on effective teaching
  • helps teachers recognize and understand the power of their interactions with students
  • aligns with professional development tools
  • works across age levels and subjects

The College chose the instrument in part because of the strong validity and reliability work already completed by the University of Virginia's Curry School of Education. The instrument supports multiple administrations, so it can be used to monitor student growth. The instrument's domains target the craft of teaching, and the results will be used both for candidate feedback and for program improvement. Fourteen faculty members participated in CLASS training during the fall 2013 semester, and another attended a 'train the trainer' workshop in spring 2014 so that she can prepare classroom and site supervisors to use CLASS in observing and assessing teacher preparation candidates during their internships.

The second assessment is edTPA. Developed at Stanford to predict the classroom readiness of candidates, it is a performance-based assessment of teaching with requirements very similar to those for National Board Certification. While edTPA shares some features with TWS, the greatest difference is that it is nationally scored and normed. COE educators will 'predict' how their candidates will do before the submission date and will also score candidates' work to learn how their interpretations compare to results at the national level. Teams of faculty will score candidate artifacts independently of national scorers; that work will reveal what COE candidates know compared to their peers nationally, and the comparisons will inform programs of where their strengths lie and where, if anywhere, changes need to occur. UCCSTeach, TELP, and ALP are piloting edTPA this year. Artifacts for edTPA must be submitted electronically, and faculty and candidates have been receiving training on what will be expected with this new initiative and how it will be assessed. All programs maintain electronic portfolios, which require candidates to submit artifacts in a variety of applications (Word, Excel, PowerPoint, etc.) and which further develop technological expertise as candidates learn to create and submit videos for collection and review. Candidates also need to use technology in addressing student learning, which will require data collection and analysis in both formal and informal assessments. edTPA also documents candidates' ability to effectively teach diverse populations, including different types of learners and their modalities.

The College is also participating in the Student Perception Survey (SPS), jointly developed by the Gates Foundation, the Tripod Project, and the Colorado Legacy Foundation. The instrument is directly aligned to the Colorado Teacher Quality Standards and measures multiple domains of student perceptions, including learning environment and classroom management. A welcome addition is the instrument's validation with English Language Learners.

Another external perspective on program impact will come from the Colorado Department of Education. As a result of SB 191 (mentioned earlier), the state has begun tracking EPP graduates, their employment locations, and their annual contributions to student growth (typically expressed through a value-added model). These data will be provided to EPPs to supply multi-year information on employment retention, mobility, added endorsements, and, to some degree, impact on P-12 learning. They will be incorporated with initial employment surveys, including the Colorado Teacher's Perception Survey (TPS) and candidate exit surveys. The TPS is aligned to the Colorado Principal Standards in order to capture information about teachers' perceptions of their principals. Like the SPS, the TPS was developed in conjunction with the Gates Foundation and has both published psychometrics and validation with the Colorado population. As all of these initiatives are new, there are no data yet, but abridged details are provided in 2b.

Program Satisfaction

TELP and ALP have surveyed candidates on a number of critical subjects, including satisfaction with site professors, site coordinators, and clinical teachers. Two surveys are related to program satisfaction and are included as artifacts 4c.1 and 4c.2. Both surveys contain summary data for Elementary and Secondary candidates for the years 2011, 2012, and 2013. The survey of how well candidates perceive they were prepared (4c.1 TELP CANDIDATE SURVEY OF PREPARATION ADEQUACY) has 18 questions that apply to both Elementary and Secondary candidates, followed by separate, level-specific items, and uses a five-point scale (1. Inadequate, problem area; 2. Needs improvement; 3. Adequate/average; 4. Good/Professional quality; 5. Outstanding/Excellent). The second survey (4c.2 TELP PROGRAM SATISFACTION SURVEY) consists of six questions with rating options of satisfied, somewhat satisfied, somewhat unsatisfied, and unsatisfied. The survey instruments were created by TELP faculty and reflect faculty expertise and professional practice in determining what aspects of the program and its delivery need to be assessed.

The results provide a wealth of detail, particularly on how adequately candidates felt prepared. Secondary candidates rated 'Professional interactions with teachers and staff' highest, while Elementary candidates rated 'Lesson planning' highest; both groups rated 'Parent-teacher relationships' lowest. In content area preparation, Elementary candidates rated their Mathematics preparation highest and Social Studies lowest, while Secondary candidates rated 'Other' highest (followed by 'Small Group Activities') and 'Laboratory Teaching' lowest. On the program satisfaction survey, 'Rate your satisfaction with your interactions with the program assistants, student employees, and other college personnel' drew the highest proportion of 'Satisfied' ratings, and (not surprisingly) the application process drew the highest proportion of 'Unsatisfied' ratings.

The data derived from the surveys inform not only the programs and departments but also the College and the campus. It is apparent that customer service must be a focus for all of us and that candidate satisfaction depends as much on candidates' interactions with individuals within the college and across campus as on the content and instructional knowledge they receive. The creation of the Student Resource Office, the hiring of an Assessment and Operations Specialist, and the establishment of the Assessment and Accreditation Office are all outgrowths of lessons learned from surveys, data analysis, interviews, and other assessment measures.

Standard 4: Summary

A criticism long leveled against teacher education programs is that we resisted external validation and argued for the quality of our own self-assessments. That era is gone, and we must now allow our candidates to be held to standards beyond our own conclusions. As with so many other assessment reforms, the central challenge is often the cultural shift rather than the technical requirements. The College is engaged in remarkable discussions about where we need to go to develop assessments and processes that document the impact our candidates have on P-12 learning. It has hired an Assessment and Operations Specialist and created an Office of Assessment and Accreditation, both of which will ensure the systematic administration of an assessment agenda. Although the College is shifting toward externally validated assessments, this does not mean COE is relinquishing its role in determining candidate quality. The College has over a dozen faculty trained in the use of CLASS and is participating with Teachstone in a train-the-trainer model that will allow further preparation of COE faculty and site supervisors. Programs piloting the instrument have engaged in planning and implementation strategies, including projecting results and determining score interpretation.

For edTPA, faculty have been involved in discussions about the instrument, have completed the scoring training, and are participating in the local scoring process. Since edTPA is a relatively new instrument, local scoring will be a valuable exercise for faculty in determining how candidate quality is measured, both qualitatively and quantitatively. Colorado, like many other states, has been involved in discussions about adopting edTPA as a requirement for licensure. Nationally, initial scores are relatively low. Please see here for our first year of scores. Knowing that, and by engaging in the process early, programs can help shape results by adjusting course content, assignments, assessments, and programmatic practices to help subsequent cohorts be increasingly successful.

In summary, the College seeks to emphasize its evolution from the use of data for internal consumption under localized standards of quality to a broader-based and more complex system of assessment that provides programs and stakeholders with nationally normed, externally evaluated feedback on the quality of program completers and their programs. This transformation is multifaceted and incorporates technical, procedural, and cultural changes. Notable among them: expanded technical capabilities on the part of faculty and staff in the use of assessment tools and data reporting; the replacement of siloed program data with multiple, integrated software packages; the development of routine data reporting cycles for internal and external users; and increased use of multiple measures of candidate outcomes within programs to form conclusions about candidate quality and program health.

Together these changes are designed to ensure excellent preparation of educators and continuous improvement of program elements well into the future. The increased scrutiny of EPPs at both the state and national levels, in addition to dynamic changes in the teacher labor market, will require the College to remain vigilant in assessing the quality of its programs in order to achieve its goal of recognized completer quality.

UCCSTeach CLASS data (PDF)

SELP CLASS data (Excel)

TWS Handbook (PDF)

TWS Overview (shows crosswalk with the old PB Standards and CTQS) (PDF)

TELP secondary data 2011-2013 (Excel)

TELP elementary data 2011-2013 (Excel)

ALP data 2011-2013 (Excel)

edTPA local vs. national scores (PDF)