The Department of Education and Teacher Preparation
Quality Assurance System for Continuous Improvement Initiatives

The College of Coastal Georgia (CCGA) Department of Education and Teacher Preparation employs an evidence-based quality assurance system to evaluate program effectiveness, with emphasis on the impact of candidates on P-12 student learning. Through systematic data collection and ongoing program review mechanisms, the Department of Education and Teacher Preparation (DETP) provides the context for continuous improvement to occur. The quality assurance system is rooted in a growth mindset philosophy: while faculty recognize that candidates bring assets to the teacher education program, candidates are expected to demonstrate growth across their respective programs. Candidates are evaluated using multiple measures with consideration of departmental teacher education goals and the Interstate Teacher Assessment and Support Consortium (InTASC) standards. The revised quality assurance system was developed through a collaborative initiative involving input from CCGA Arts and Sciences faculty and other relevant P-12 partners and stakeholders.

Beginning in the 2017-2018 academic year, DETP faculty began incorporating revised standards from both InTASC and the Council for the Accreditation of Educator Preparation (CAEP) across all initial teacher preparation programs. The newly revised departmental vision emphasizes the fundamental notion that meaningful learning for teacher candidates occurs only in the context of authentic classroom settings with diverse P-12 learners. DETP faculty support candidate development related to A) the learner and learning, B) content, C) instructional practice, and D) professional responsibility. As an initial step in this revision process, DETP faculty conducted a collaborative thematic analysis exercise using accrediting agency expectations and standards, resulting in the development of key departmental goals that will serve as the driving force for future continuous improvement initiatives. The revised departmental goals are included in Table 1.1 below.

Table 1.1. Revised Departmental Goals

  1. Teacher candidates will demonstrate acceptable levels of content knowledge expertise.
  2. Teacher candidates will respond to diversity as it relates to student learning and the decision-making process for effective teaching.
  3. Teacher candidates will apply relevant research/theory and developmentally appropriate practice for planning, instruction, and assessment of P-12 learners.
  4. Teacher candidates will employ evidence-based practices with increasing levels of proficiency.
  5. Teacher candidates will invest in opportunities to develop individual beliefs and values that will positively impact P-12 student learning.
  6. Teacher candidates will analyze their own teaching practices to monitor their professional growth.

The DETP evidence-based quality assurance system is grounded in the fundamental concepts of the learner and learning, content, instructional practice, and professional responsibility and is responsive to appropriate state and national standards. As a part of the continuous improvement initiatives of the department, the overall vision will be regularly reviewed and revised (as needed) to ensure that it is current, relevant, and clearly aligned to present-day standards driving the field of teacher education. This review and revision process will occur within the context of the data analysis meetings described below. With an emphasis on the four fundamental concepts previously described, the DETP employs a comprehensive and integrated assessment system. The purpose of this system is twofold: (a) to document and monitor candidate proficiency development at key transition points in their respective programs, and (b) to conduct systematic and purposeful continuous improvement initiatives that are sustained and evidence-based.

Continuous Review Schedule for Faculty and Relevant Stakeholders

A systematic review of relevant data, across multiple measures, is conducted annually with both program faculty and community and P-12 stakeholders. For DETP faculty, this review occurs in October during the monthly formal faculty meeting. For relevant stakeholders, this review of data, with opportunities for stakeholder input, occurs in the context of the annual Teacher Education Advisory Board (TEAB) meeting, held during the spring semester. Membership on the TEAB is determined based on eligibility as defined by local school system administrators; representation on the board may include, but is not limited to, administrators, teachers, instructional coaches, other support staff, and community stakeholders who express an interest in the DETP. Ongoing informal feedback opportunities are provided throughout the year through scheduled meetings with partnering administrators and mentor teachers; these meetings are documented on the shared DETP Partnership Meetings document. For DETP faculty review meetings, program coordinators provide a summary of data collected across key assessments within the program. For TEAB meetings, the department chair is responsible for presenting relevant program data (both aggregated and disaggregated, as needed) in a way that provides ready access to external reviewers within the community and creates optimal opportunities for input. As the revised quality assurance system is piloted during the 2017-2018 academic year, with the goal of full implementation in 2018-2019, participants will dialogue, using the framework below as a guide, to address the major questions that must be asked annually as indicated in the CAEP standards.

Quality Assurance and Continuous Improvement Framework for Analysis

  1. How are we recruiting and supporting completion of high-quality candidates from a broad range of backgrounds and diverse populations? (CAEP Standard 3)
  2. How are we addressing community, state, national, regional, or local needs for hard-to-staff schools and shortage fields? (CAEP Standard 3)
  3. How are we effectively monitoring candidate progress from admissions through completion across programs? (CAEP Standards 3 and 5)
  4. How do we ensure that our programs are effectively preparing candidates with the knowledge, skills, and dispositions necessary to address the learner and learning, content, instructional practice, and professional responsibility? (CAEP Standards 1 and 5)
  5. How confident are we that our key assessments across programs are valid and reliable measures? (CAEP Standards 4 and 5)
  6. How satisfied are employers of program completers that graduates are prepared for their assigned responsibilities in working with P-12 students? (CAEP Standard 4)
  7. How well are we preparing candidates to have a positive impact on P-12 student learning? (CAEP Standards 2 and 4)
  8. How are we ensuring that clinical partnerships, field experiences, and candidate expectations are co-constructed with and mutually beneficial to our P-12 school and community stakeholders? (CAEP Standard 2)
  9. How well are we preparing, evaluating, supporting, and retaining high-quality clinical educators in the P-12 school setting and within the institution? (CAEP Standard 2)
  10. What program and/or unit revisions are needed or being implemented to ensure that evidence-based continuous improvement is ongoing? (CAEP Standards 4 and 5)

As a mechanism of the quality assurance system, DETP faculty integrate the disaggregated data collected, using the previously described data analysis framework, within the context of the CCGA annual academic program review report for institutional effectiveness (see Appendix J for template). This mechanism provides the opportunity for faculty to systematically reflect on how data inform the decision-making process through responses addressing the following: A) mission statement, B) program-specific course outcomes, C) methods for measurement, D) success criteria, E) discussion of findings, F) analysis/evaluation of findings, and G) implications and use of data for program improvement. As a mechanism for EPP-level review, DETP faculty examine aggregate data related to candidates, programs, and other unit operations via an annual unit report required by CCGA through the institutional effectiveness office. Within this process, DETP faculty identify expected outcomes consistent with the mission of the unit and aligned with CCGA’s strategic plan. Looking at the goals from the previous year and the strategies used for their attainment, DETP faculty focus on particular assessment(s) to determine whether particular goals were achieved and/or led to improvement, the impact of the improvement activity on the intended outcome, and future implications based on the results of the improvement activity. Using these mechanisms, data are collected and monitored annually at the program/department and EPP levels, using VIA Livetext, Microsoft Excel, and Microsoft Word to store, manage, and generate reports for the purpose of analysis. DETP faculty, in conjunction with community and P-12 stakeholders, conduct annual program and unit assessment reviews each October and November, using data reports run in VIA Livetext and completed academic program and annual unit report templates. CCGA administrators, faculty, and stakeholders review the finalized reports.

The DETP faculty use candidate and program data to measure the progress of individual candidates throughout the program and then use aggregated candidate data to determine the effectiveness of the program and the EPP. Within the unit assessment system, six key assessments are embedded across every program: 1) the Candidate Assessment on Performance Standards (CAPS) (see Appendix A), 2) the Educator Disposition Assessment (see Appendix B), 3) the Impact on Student Learning Rubric (see Appendix C), 4) the Georgia Assessments for the Certification of Educators (GACE), 5) edTPA, and 6) Intern KEYS (see Appendix D). Assessments conducted using a rubric are entered into the VIA Livetext platform and reviewed both by candidates at an individual level and by program faculty at a collective level. Reports generated from VIA Livetext focus on specific rubric elements tied to InTASC standards, overall rubric scores, and program-specific data for candidates. Using reporting features that allow for dynamic and highly specific representations of data, DETP faculty are able to analyze data from multiple perspectives. As previously described, once these data are collected and shared with faculty, a systematic program review is conducted during the annual review meeting to highlight areas of excellence and to identify areas for adjustment in support of continuous improvement.

Table 2.1 documents the internal and external EPP-level and program-level assessment mechanisms currently used by the DETP. Assessments indicated as internal measures are completed by CCGA faculty, supervisors, and candidates. Assessments indicated as external measures include assessments conducted at the state and national level as well as assessments completed by P-12 stakeholders, CCGA graduates, and employers.

Table 2.1

Internal Measures (assessments completed by CCGA faculty, supervisors, and candidates)

Program-Level Assessments:
- Program-specific requirements at the program admission and program completion transition points
- Impact on Student Learning Rubric: evaluation of content-specific edTPA portfolios, following official submission to Pearson, to document impact on P-12 student learning
- Educator Disposition Assessment: disaggregated data within programs

EPP-Level Assessments:
- Common requirements at the transition points of program admission, in progress during the program, program completion, and follow-up to graduation
- CAPS Observation Instrument: completed by the supervisor and by the candidate for self-assessment purposes
- DETP End of Program Survey: completed by candidates as a part of program completion
- Educator Disposition Assessment: aggregate data across programs
- Intern KEYS: used during the summative conference for evaluation of evidence presented by the candidate in the conference and via the developed professional growth plan

External Measures (assessments conducted at the state and national level; assessments completed by P-12 stakeholders, CCGA graduates, and employers)

Program-Level Assessments:
- GACE Basic Skills
- GACE Content Assessments
- edTPA
- CAPS Observation Instrument: completed by P-12 clinical partners
- Educator Disposition Assessment: completed by P-12 clinical partners

EPP-Level Assessments:
- State-Provided Impact Data: collected from a sampling of graduates at the close of each academic year via TAPS and TKES
- DETP Graduate Follow-Up Survey: collected via a survey sent to graduates one year post-graduation to document the extent to which candidates felt prepared to confront the responsibilities of the teaching profession
- Employer Satisfaction Survey: collected via a survey sent to employers of graduates one year post-graduation to document the extent to which employers felt that graduates were prepared to confront the responsibilities of the teaching profession

Decision-Making at Program Transition Points

Key assessments are integrated within and across programs to monitor candidate progress. DETP faculty identified four transition points that serve as milestones for candidate progress, remediation, or exit. Candidates admitted to the early childhood and special education program, the middle grades education program, or the secondary education program will be assessed at four transition points: (1) program admission, (2) in progress during the program, (3) program completion, and (4) as a follow-up to graduation.

For each initial program, candidates will be vetted using specific admission and exit criteria to document competencies prior to program entry as well as prior to recommendation for graduation and certification. In addition, candidates must demonstrate growth throughout their respective programs to continue each semester. Candidates receive feedback related to five domains developed to align with the Teacher Performance Standards used by the Georgia Department of Education within the Teacher Keys Effectiveness System. Faculty have identified power domains specific to transition points in the program; these power domains require candidates to demonstrate a minimum level of proficiency in order to advance. Through the integration of identified power domains, the increasing expectations and levels of complexity across the program are made more visible to candidates. Candidates receive formative feedback through multiple assessments including, but not limited to, disposition evaluations, work samples, and the Candidate Assessment on Performance Standards (CAPS). Drawing on input from all of these formative assessments, candidates receive a summative evaluation within the context of a candidate-led conference at the close of each semester using the Intern KEYS.

Program Admission: For admission to any teacher education program, candidates will be required to provide evidence of: (1) a cumulative GPA of 2.5 or higher on all course work, (2) successful completion of all three basic tests of the GACE Basic Skills assessment or the exemption equivalent on the SAT/ACT, (3) successful completion of the Georgia Educator Ethics Assessment, (4) a grade of C or better in all Area F courses, (5) completion of Areas A-F in the core curriculum, (6) proof of tort liability insurance, (7) a successful criminal background check (conducted by the GaPSC), and (8) a GaPSC pre-service certificate application and lawful presence form.

In Progress During the Program: Upon entry into the program, candidates are expected to maintain a cumulative GPA of 2.5 across all academic work to meet requirements for graduation. In addition, all course work must be completed with a grade of C or higher. As candidates progress in the program, they are expected to demonstrate growth across the three fundamental areas of knowledge, skills, and dispositions within course work as well as within the context of increasingly complex field experiences. Candidates engage in a cyclical feedback process that occurs within the context of a candidate-led summative conference framework (see Appendix E). During this conference, candidates provide documentation that they have met the performance standards incorporated within the CAPS instrument, using evidence from the field, course work, and disposition evaluations through the lens of three categories: A) the Learner and Learning, B) Instructional Practice, and C) Professional Responsibility. Following this presentation, faculty provide oral and written feedback that the candidate is expected to incorporate within a written professional growth plan (see Appendix F). Following submission of the professional growth plan, and with consideration of the summative conference presentation, faculty provide a summative evaluation to the candidate via the Intern KEYS, citing evidence from the candidate-led summative conference and the professional growth plan to indicate whether the candidate falls at the emerging, developing, practicing, or leading level across three elements: A) the Learner and Learning, B) Instructional Practice, and C) Professional Responsibility. Candidates are expected to revisit their professional growth plan as they prepare for their summative conference the subsequent semester, both to address feedback provided in the Intern KEYS summative evaluation and to self-monitor progress toward the professional development goal set within the results section of the plan.

The CAPS instrument is aligned to both the InTASC standards and the knowledge, skills, and dispositions specified in Standard 1 of the 2013 CAEP standards. To document growth across field experiences, candidates are evaluated with the CAPS instrument by college supervisors and are held accountable to increasingly complex and spiraling expectations across those experiences. Faculty have identified power domains for each semester to indicate the minimum expectations candidates must meet to proceed in their teacher education program; these power domain criteria are included on the CAPS instrument as aligned to specific programs. For candidates who score below the minimum requirement on any power domain, a mechanism is in place to provide remediation and support via the Professional Improvement Plan (PIP) process (see Appendix G). The PIP process specifies how the candidate's progress will be monitored so that, in the event a candidate is unable to meet minimum expectations at a given point in a program, a decision can be made for the candidate to continue in or exit the program. Additionally, candidates are expected to receive a grade of “C” or higher in professional coursework to advance in the program.

Previously, the Teacher Candidate Disposition Evaluation (TCDE) instrument was used to document dispositional evidence and growth across the program. Beginning in the fall of 2018, the DETP faculty voted to adopt the Educator Disposition Assessment (EDA) to ensure that the instrument used to evaluate dispositions is both valid and reliable. As recommended in the technical guide accompanying the instrument, the assessment will be administered at strategic points throughout the program to document growth and determine eligibility for continuation. Specifically, the EDA will be administered once each semester at the close of field-based experiences (practicum and clinical practice) by the college supervisor, the mentor teacher, and the practicum instructor; additionally, the candidate will use the assessment to self-evaluate. The purpose of the EDA is to determine the extent to which candidates hold beliefs and values that influence them to behave in ways that are supportive of student learning in the P-12 classroom setting. Candidates are expected to score at a minimum of Level 1 – Developing across all dispositions at all points throughout the program in order to advance. In the event that a candidate is at risk of receiving a Level 0 – Needs Improvement, the departmental faculty member responsible for assigning this rating must initiate a departmental alert to notify the candidate of concerns related to the disposition(s) in question. Mentor teachers are responsible for notifying the college supervisor when a candidate is at risk of receiving a Level 0 – Needs Improvement so that the supervisor may initiate a departmental alert on the mentor teacher's behalf. If a candidate receives a Level 0 – Needs Improvement on any disposition in the program, he or she will be placed on a Professional Improvement Plan (PIP) in which specific goals tied to the disposition in question are established. Following the departmental PIP protocol, candidates will engage in progress monitoring after the initiation of the PIP at a date determined by the faculty member initiating the plan. If goals are unmet at the progress-monitoring meeting, the departmental faculty member has the option of granting an extension on the PIP for completion or recommending dismissal from the program; any recommendation for dismissal is voted on by all departmental faculty members. In order to graduate, candidates must score at Level 2 – Meets Expectations across all dispositions.

The Impact on Student Learning rubric is administered once per semester during the senior year of study: first in a designated course taken by all candidates just prior to the start of clinical practice, and again at the close of clinical practice, when it is applied to the submitted edTPA portfolio. The assessment is administered by the instructor of the program-specific course at the appropriate point of progression in each program. An integral component of the teacher education program at CCGA is the expectation that candidates develop and effectively implement a learning segment to support P-12 student learning of a clearly defined central focus (i.e., learning goal). A critical part of preparation as an educator is the ability to demonstrate impact on student learning; accordingly, candidates are expected to provide evidence of their ability to plan, implement, and evaluate the implementation of a learning segment, and the Impact on Student Learning rubric provides the mechanism to evaluate these proficiencies. In order to advance to clinical practice and be eligible to take the associated edTPA, candidates must score at a minimum of Meets Expectation across the three criteria included on the rubric: 1) Design of Instruction and Assessment, 2) Analysis of Student Learning, and 3) Reflective Practice. Instructors assign a rating of Does Not Meet Expectation, Meets Expectation, or Exceeds Expectation for each criterion based upon where the majority of indicators listed under the rating are assigned. If a candidate fails to meet the required criterion prior to the close of the fall semester of the senior year, the candidate will be placed on a Professional Improvement Plan (PIP) at the discretion of the course instructor, and the candidate must complete the goals indicated in the PIP in order to be eligible for completion of edTPA. Because submission of the edTPA portfolio is a program completion requirement, this assessment mechanism serves as a gatekeeper to ensure candidates' readiness for that summative assessment.