
QRIS Resource Guide

Data Collection and Evaluation


Data collection and evaluation are often an afterthought in the development of a new quality rating and improvement system (QRIS). Because the driving force is the urgent need for change, the emphasis is usually on the design of standards and the implementation of the rating process and provider supports. Typically, by the time thought is given to data collection and evaluation, resources are scarce because most have been committed to implementation. Often it is only when implementation issues arise, or when there is a need to document the success of the new system, that data collection and evaluation become critical to the QRIS. By that point, much of the opportunity to collect baseline data and to make the most of existing data collection systems has been lost, and data collection and evaluation become much more expensive. This section poses questions to consider early in the design process that can help in planning for data collection and lead to a stronger evaluation, as well as a more successful design and implementation. Discussions of the use of data in planning and implementation are included in the “Initial Design Process” and “Approaches to Implementation” sections of this Guide.


Collecting Data

All States have data systems that contain information on early and school-age care and education programs. Some of the sources of data that may be helpful in a QRIS include licensing; registries of license-exempt providers; subsidy administration; practitioner and training/trainer registries; child care resource and referral (CCR&R) databases; technical assistance tracking systems; program profiles; classroom assessments; economic impact research studies; and Head Start, prekindergarten, and other education systems. An initial step in planning for a QRIS is to compile a list and description of existing state/territory data systems, including where they are located, how to access them, who has access to them, what information is collected in them, and how they interface with other data systems.
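To make the inventory concrete, here is a minimal sketch, in Python, of how a planning team might record each system so the inventory can be sorted and queried later. Every field name and sample entry is hypothetical, not drawn from any actual state system.

```python
# A hypothetical record structure for a data-system inventory.
from dataclasses import dataclass, field

@dataclass
class DataSystem:
    name: str                  # e.g., the licensing database
    owner: str                 # agency or organization that administers it
    location: str              # where the system is hosted
    access: str                # who may access it, and how
    contents: list[str] = field(default_factory=list)    # data elements collected
    interfaces: list[str] = field(default_factory=list)  # systems it exchanges data with

inventory = [
    DataSystem(
        name="Child care licensing database",
        owner="State licensing agency",
        location="State data center",
        access="Licensing staff; read-only extracts for QRIS planners",
        contents=["facility type", "licensed capacity", "violation history"],
        interfaces=["subsidy administration system"],
    ),
]
```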

Data Resources Analysis for Decision Making

Completing an inventory of the available data at the beginning of the planning and design stages is a helpful first step. The information gathered during this process can then be used to guide decisions during the implementation phase. For example, data from the licensing system or Head Start Program Information Reports may help the QRIS design team determine, at least initially, which types of programs (centers, homes, prekindergarten, Head Start) to include in the QRIS and which, and how many, programs may be able to achieve the standards. Data from workforce studies or professional development registries can provide a needs assessment for scholarships and the accessibility of educational offerings. This information will help estimate participation rates and predict the resources necessary to support projected participation. Looking at these data elements may reveal existing information that can help document compliance with proposed standards. Reviewing an inventory of existing data can also help determine whether it is best to begin with a pilot and, if so, which programs to include.

Child care subsidy data can also be helpful. For example, examining these data may lead to the conclusion that tiered subsidy reimbursement alone will not be sufficient to support higher program quality, for a number of reasons. If only 20% of a typical program’s enrollment consists of children who receive a child care subsidy, the added reimbursement may not cover the cost of higher quality for the program as a whole, and the balance of the cost must be passed on in tuition fees to other families. Or enrollment may fluctuate enough that programs cannot rely on tiered subsidy reimbursement to maintain quality. Examining subsidy data is therefore a good way to gauge the potential impact of tiered subsidy reimbursement and to identify the need for additional provider incentives.
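To make the arithmetic concrete, here is a small, hypothetical calculation of the kind described above. Every figure (enrollment, subsidy rate, bonus percentage, and cost of quality) is an assumption chosen for illustration, not a benchmark from any state.

```python
# Hypothetical check of whether a tiered-reimbursement bonus covers the
# added cost of higher quality for a program as a whole.
enrollment = 60                # total children enrolled
subsidy_share = 0.20           # 20% of enrollment receives a child care subsidy
base_rate = 700.0              # assumed monthly subsidy payment per child
tier_bonus = 0.10              # assumed add-on for a higher rating tier
quality_cost_per_child = 50.0  # assumed added monthly cost of higher quality

bonus_revenue = enrollment * subsidy_share * base_rate * tier_bonus  # 12 children: $840
total_quality_cost = enrollment * quality_cost_per_child             # 60 children: $3,000
shortfall = total_quality_cost - bonus_revenue                       # $2,160 to recover in tuition

print(f"bonus revenue ${bonus_revenue:,.0f} vs. quality cost ${total_quality_cost:,.0f} "
      f"leaves ${shortfall:,.0f} to pass on to other families")
```

Under these assumptions the bonus covers less than a third of the added cost, which is the point made above: tiered reimbursement alone may not sustain higher quality.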

Data for QRIS Management

Using existing data systems can help make QRIS implementation more cost efficient and ensure consistency in data across systems. Adding reporting capacity or data elements to an existing data system, such as licensing or a professional development registry, or aligning data elements across systems, can be much less expensive than creating a new data collection and processing system specifically for a QRIS. This may or may not be possible, depending on who administers the QRIS and which data systems can be tapped for the information. For example, if the existing data system is in a State agency and the QRIS will be operated outside of the State government structure, it may not be possible to use the State data system. Even when data exist in several separate systems, it may be cost-effective and ensure consistency if data can be transferred from one system to another, rather than entering all data anew for each child care program that wants to participate. For example, one QRIS requirement for participation might be a license in good standing or a license with no serious violations. It would be critical to have continuing, current information on the status of a license to produce reliable ratings. Similarly, if programs that participate in the QRIS are also rated or assessed by other entities, such as national accrediting organizations or the Head Start monitoring system, using data from those systems can make participation easier, more cost-effective, and more reliable. Linking to data in professional development registries or credentialing and certification systems is another cost-effective way to verify staff qualifications, ensure consistency, and eliminate duplicative work in the rating process.
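As a sketch of the license-status check described above, the snippet below reads a refreshed extract from a licensing system instead of re-entering the data. The file layout and field names ("license_number", "status", "good_standing") are assumptions for illustration, not any state system’s actual schema.

```python
import csv

def load_license_status(extract_path: str) -> dict[str, str]:
    """Map license number to current status from a licensing-system extract."""
    with open(extract_path, newline="") as f:
        return {row["license_number"]: row["status"] for row in csv.DictReader(f)}

def eligible_to_participate(license_number: str, statuses: dict[str, str]) -> bool:
    # Participation rule from the text: a license in good standing
    # (encoded here as the hypothetical status value "good_standing").
    return statuses.get(license_number) == "good_standing"
```

Because the extract is refreshed from the system of record, the QRIS rates against current license status rather than a stale copy.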

In summary, an accurate inventory of existing data systems, including their accessibility, accuracy, and reliability, is helpful in determining QRIS system design. A good introduction to data elements, collection, management, and governance is found in the slides and videos of a series of data webinars presented by INQUIRE (Quality Initiatives Research and Evaluation Consortium) in spring 2013; a link to those materials can be found in the list of references at the end of this section (Child Trends, 2013). An overview of the use of data to monitor and evaluate QRIS in five states may be helpful in thinking about the broad perspective of using data (Caronongan et al., 2011).

States are increasingly relying on comprehensive data systems that they either purchase or develop to help with the administration of their QRIS. This section of the QRIS Resource Guide focuses on identifying the data that are needed and whether they can be collected from existing systems or whether new data collection mechanisms need to be developed.

 

Indiana QRIS Data Systems are Interactive

The Indiana Paths to QUALITY program uses a live, interactive database that draws facility and practitioner information from the State regulatory system. Mentors from the CCR&R agencies and the Indiana Association for the Education of Young Children help develop facility quality improvement plans, which are submitted, along with contact notes, into this Web-based system. Paths to QUALITY raters may also enter their data directly. The database was developed through a contract with TCC Software Solutions and includes the State’s subsidy and licensing data. Additional information is available at http://www.in.gov/fssa/2554.htm.

Tennessee QRIS Data Collection System Provides Monthly Geographic Data

Tennessee uses the State Regulated Adult & Child Care System (RACCS) to maintain QRIS data. The system includes the provider’s Star-Quality Child Care Program rating and Child Care Report Card System component scores by program year. Users can request provider QRIS information for the entire State or by specific geographic region. The data system automatically generates monthly reports on ratings by provider type and county. The RACCS system also includes various provider-specific program data, updated annually, that can be queried by accreditation, curriculum, enrollment, environment, fees, meals, program, rates, rate policy, schedule, staff, and transportation. Additional information is available at http://www.tennessee.gov/humanserv/adfam/ccrcsq.html.

The Virgin Islands Use Data to Inform the Development of Standards in Their QRIS

The Virgin Islands launched the pilot of their QRIS, Virgin Islands Steps to Quality (VIS2Q), in the summer of 2013. In developing the QRIS, they drew on local studies and data to shape the standards and to grade some of the indicators to better fit the VI context and support continuous quality improvement by helping programs view movement through the QRIS as attainable. The literacy standards were influenced by the kindergarten entry data collected by the VI Department of Education, as this was the area where children were scoring most poorly. They also included a standard for dual language learners, given the increasing numbers of children for whom English is not their first language. In the area of professional development, they graded the steps with full knowledge of where most of their teachers would be when their programs entered the QRIS, based on data from a workforce study conducted for the Department of Human Services. Data from a 2009 pilot study of quality in VI early care and education settings also informed the indicators in the Learning Environments standard, including the cut-off scores for both the ERS and CLASS assessment tools.

Looking closely at each QRIS standard and determining how compliance will be verified, what documentation will be needed, who will review the data, and where the data will be stored are essential steps in QRIS planning. New data may be needed to assign a rating or to guide follow-up activities, such as development of an improvement plan. For example, QRIS standards may require that all teaching staff receive training in a State’s early learning guidelines for a certain rating level. If completion of the training is recorded in the professional development registry, it may be possible to import information from that system for the rating process. If the information is not currently collected, it may be necessary to develop a process for collecting it, such as requiring program staff to document their training by submitting a certificate of successful completion, requiring rating assessors to enter information into a new QRIS database, or asking early learning guidelines trainers to input their class lists into the professional development registry. A thorough review of the rating assessment and monitoring process is needed to identify the data required to document compliance with QRIS standards.
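The snippet below sketches how the early learning guidelines example might be verified once training completions can be imported from the professional development registry. The staff identifiers and the "ELG" course code are hypothetical.

```python
def meets_elg_training_standard(staff_ids: set[str],
                                completions: dict[str, set[str]]) -> bool:
    """True only if every teaching staff member has completed ELG training."""
    return all("ELG" in completions.get(staff, set()) for staff in staff_ids)

# Imported from the registry (hypothetical records): T-102 has not yet
# completed the training, so the program does not meet the standard.
registry = {"T-101": {"ELG", "CPR"}, "T-102": {"CPR"}}
print(meets_elg_training_standard({"T-101", "T-102"}, registry))  # False
```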

 

Maine QRIS System Links Professional Development and Technical Assistance

Quality for ME, the QRIS in Maine, is a partnership with the State’s professional development project, Maine Roads to Quality. The Quality for ME automated system includes shared data linkages that populate forms with data from the professional development registry, the State licensing database, and National Association of Child Care Resource & Referral Agencies software. These automated data links minimize the amount of data entry required of an applicant; because an applicant must confirm the information, the process results in more accurate data across these State systems. Maine is developing an automated technical assistance tracking system that will be linked to the professional development registry and will enable individual providers to note on their transcript that they are receiving technical assistance on particular topics. Additional information is available at http://www.maine.gov/dhhs/ocfs/ec/occhs/qualityforme.htm.

Tennessee's Assessment Data System Also Supports Technical Assistance

The University of Tennessee Social Work Office of Research and Public Service (SWORPS) created an automated system to maintain statewide data on early childhood program assessments. When SWORPS receives the completed observation score sheets from Department of Human Services assessors, the assessment data are entered into the Star-Quality Child Care Program database along with supplemental data (teacher and classroom/family child care home characteristics). The system generates a provider profile sheet that contains assessment information (including item, subscale, and observation scores) and an overall program assessment score. The system also generates a “Strengths Page” for the provider that details the indicators the assessor scored positively. The provider receives a copy of the profile sheet, the Strengths Page, and the assessor’s notes. Copies of these documents are also mailed to the relevant licensing unit for completion of Report Card scoring and entry into RACCS. A duplicate copy of the assessment results is mailed to the relevant CCR&R site. The Stars database generates monthly, quarterly, yearly, and ad hoc reports and can analyze the data in a multitude of ways. Additional information is available at http://www.tennessee.gov/humanserv/adfam/ccrcsq.html.

Data systems are a valuable resource for staff who manage the QRIS provider support system. Two types of data may be useful to them: (1) data on supports for individuals working in the early and school-age care and education programs, and (2) data on supports for the programs that seek a QRIS rating.

Data on supports for individuals working in the programs are helpful in projecting and managing the cost of scholarships for staff education and of any retention incentives, such as wage supplements. These data can also help determine the effectiveness of various supports. Is the education level of staff across the State going up? Are there geographic areas where scholarships are not being used and, if so, why? Answering these questions requires data that are specific to QRIS participation. If, for example, a State has a scholarship program that is available to all early and school-age care and education providers, knowing which of these staff work in programs that participate in the QRIS is crucial. These data, coupled with broader data on staff qualifications, can help identify trends and inform decisions regarding the capacity of practitioners to meet QRIS standards and how best to support continuous improvement.
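A minimal sketch of the geographic question above: counting scholarship use by county among QRIS participants so that counties with little or no use stand out. The record layout is hypothetical.

```python
from collections import Counter

def scholarships_by_county(awards: list[dict]) -> Counter:
    """Count scholarship awards per county, limited to QRIS participants."""
    return Counter(a["county"] for a in awards if a["in_qris"])

awards = [
    {"county": "Adams", "in_qris": True},
    {"county": "Adams", "in_qris": True},
    {"county": "Brown", "in_qris": False},  # excluded: not in the QRIS
]
print(scholarships_by_county(awards))  # Counter({'Adams': 2})
```

Counties that appear with low or zero counts are the places where the follow-up question, why scholarships are not being used, is warranted.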

Collecting data on technical assistance and other supports for programs is usually more complex than collecting data on the individuals working in them. Programs that participate in a QRIS often have access to various technical assistance, consultation, and coaching supports, and these supports might be available to a broader group of programs, including those that do not participate in the QRIS. Thus, it is important to create data systems that identify which supports, and how much of each, are received by each program participating in the QRIS. It is also important to think carefully about what data on program supports need to be collected, including data on new supports that may be created and accessible only to programs participating in the QRIS.

The QRIS planning team should think carefully about how program support information will be used. Will the data identify participating programs that access supports and how often? Will it be used to determine the correlation between supports accessed and improvements in program ratings? Will it be used to manage the cost of such supports or to monitor the effectiveness of support service providers? Being clear about the projected use of data will help to define what is collected and how.
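As one illustration of the correlation question above, the sketch below relates technical assistance hours to rating change using synthetic numbers; a real analysis would use far more programs and control for baseline rating and program characteristics.

```python
from statistics import correlation  # available in Python 3.10+

ta_hours = [5, 12, 20, 8, 30, 15]  # TA hours received per participating program
star_change = [0, 1, 1, 0, 2, 1]   # change in star rating over the same period

r = correlation(ta_hours, star_change)
print(f"Pearson r between TA hours and rating change: {r:.2f}")
```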

Collecting data on financial supports for programs that participate in QRIS, such as grants, bonus payments, tiered reimbursement, loans, or tax benefits, can help project and manage budgets. Again, it may be very useful to correlate these data with the maintenance or improvement of a rating; this will help identify which supports are most critical.

In many States, the QRIS becomes an organizing framework for a wide range of program and practitioner supports designed to promote quality improvement. States have moved from providing technical assistance and financial supports that are believed to improve child care quality to using the QRIS to track whether these supports are actually associated with changes in quality.

 

Automation Helps Miami Manage the QRIS

Early childhood leaders in Miami, Florida, report that their Web-Based Early Learning System (WELS) makes the Quality Counts QRIS stronger because the system offers real-time feedback on participation, classroom profiles and assessments, professional development, technical assistance, coaching and mentoring, and other essential data for a wide range of purposes. Data are available in the aggregate, as well as for a particular program. This information makes it possible for planners to have the data they need and, at the same time, for program managers to receive alerts when one of the centers they are working with is not making timely progress. Additional information on the Quality Counts Portal is available at https://miamiqualitycounts.org/.   Additional information about WELS is available at http://www.welsfoundation.org/about_wels.html.

Nevada's Integrated Data System

Nevada’s Silver State Stars has adopted an integrated Q-Star data system, which allows the following QRIS teams to communicate and track progress: 1) the administration and quality rating team, which manages the project and assigns star ratings; 2) the assessment team; 3) the coaching team, which uses the system to track its activities and build quality improvement plans based on the assessments conducted; and 4) the research and evaluation team, which will use the data gathered by the system to analyze the efficacy of services delivered and quality improvement over time. The Q-Star system links to an ERS Data System for mobile assessment, used to conduct Environment Rating Scales assessments, and to EasyFolio, which serves as a portal for program applicants to manage their application process. The Q-Star system was developed for Nevada by the Branagh Information Group (BIG).

North Carolina QRIS Data Collection Guides Evidence-Based Adjustments

The North Carolina Division of Child Development and Early Education has for many years collected data to monitor the Star Rated License system process and used these data to guide revisions in the system. Early on, results from environment rating scale (ERS) assessments showed significantly lower scores on the Infant Toddler ERS than on other classroom assessments. To address this concern, the State developed a short-term technical assistance project focused on providing child care health consultants to programs and a long-term technical assistance project that involved adding infant and toddler specialists to the CCR&R agencies. School-age specialists and behavioral specialists were also added to the CCR&R agencies to help with program improvements. Orientation of providers to the ERS was added to the system as well. Similarly, when data indicated that the licensing compliance standard in the QRIS was not linked to statistically significant differences in quality, this rating standard was eliminated from the QRIS. Additional information is available at http://ncchildcare.nc.gov/parents/pr_sn2_ov_sr.asp

 

Georgia Built Their Own Data System to Manage QRIS

Georgia has a comprehensive online data system to manage the entire QRIS process, from a child care program’s application to Quality Rated through data collection and analysis. The system handles program information; training and technical assistance, from registration to tracking; portfolio submission, including a continuous quality improvement (CQI) plan; incentives management; resources for families, programs, and technical assistance and training professionals; reports and data; and communication. The system captures all information on a child care program and allows the program to track its progress from application to rating. The development of the system was guided by Georgia’s work with its research partner, the Frank Porter Graham Child Development Institute, which helped develop a logic model and a validation and evaluation plan for the QRIS, create a data dictionary, and create reports. The system is used by Quality Rated staff, technical assistance and training staff, CCR&R agencies, programs enrolled in Quality Rated, incentive partners, the research team, and, most importantly, families seeking child care and resources.

Michigan's Online Platform

STARS is the online platform for Great Start to Quality, Michigan’s QRIS, developed by Mosaic, Inc. Licensed and registered programs and providers interact with the platform to complete their Self-Assessment Survey, upload evidence documents, develop a Quality Improvement Plan, and access resources. Administrative users validate Self-Assessment Surveys, complete observations, log technical assistance efforts, and perform monitoring functions. STARS also offers administrative users reporting capabilities such as “at-a-glance” summary information, click-button reports, and export reports. Housing all functions on one platform helps Great Start to Quality streamline each component of the QRIS. Additional information is available at www.greatstarttoquality.org.

Pennsylvania Uses Integrated Data Systems in Support of a QRIS

Pennsylvania Enterprise to Link Information for Children Across Networks (PELICAN) is an integrated child and early learning information management system. In addition to automating and centralizing many of the functions required to administer the subsidized child care program (Child Care Works), it expanded to automate the inspection and certification (licensing) process of child care providers, the administrative processes and data collection efforts for the PA Pre-K Counts and Pennsylvania Early Learning Keys to Quality (Keystone STARS) initiatives, and the data collection and analytics to support the Early Learning Network (ELN), which is a longitudinal database and tracking system for children in Pennsylvania early learning programs. PELICAN users include Child Care Information Service agencies, County Assistance Offices, Regional Keys (the administrators of the Keystone STARS program), PA Pre-K Counts grantees, as well as teachers and administrators for Head Start State supplemental programs, school districts that provide prekindergarten, providers of child care, and others. Families are also able to screen themselves for potential eligibility for child care subsidies, search for providers, and apply for services online. This initiative allows the Office of Child Development and Early Learning and the Regional Keys to track providers, manage STARS, identify resources that were deployed at a program, and manage STARS grant information in the Keystone STARS rating system.  A next phase of development will focus on how to integrate the existing trainer and training registry, the Pennsylvania Quality Assurance System, into PELICAN and make it accessible from the PA Key Web site. An overview of the ELN is available at http://www.ncsl.org/portals/1/documents/Educ/PAEarlyChild-Stedron.pdf.

The exploration of what data might be needed is best done early in the process and with a broad view to future needs. In the planning and design phase, considering how to validate the standards has become increasingly important to states. Assessing the impact of key interventions that assist programs in improving quality is critical to project management. Within the rating process, it is becoming critical to coordinate assessment across sectors (i.e., child care, prekindergarten, Head Start) in a way that reduces the duplication of multiple assessment processes. In preparation for evaluation, consider the benchmarks that are being set and how to document their achievement, including coordinating standards with data from other assessment processes, such as accreditation, Head Start performance standards, and prekindergarten standards assessment.

Evaluating Outcomes

Evaluation can be much more effective if it is considered as part of the QRIS planning, design, and implementation processes. Experienced evaluators can help review research on effective program quality, which in turn can guide the development of QRIS standards. Evaluators can also assist with creating a logic model for the QRIS, which can determine what to evaluate based on desired outcomes and long-term impact. They can advise on the design of standards, pointing out those that will be possible to assess and evaluate and those that will be difficult or impossible to measure. Based on the standards chosen, they can advise on the baseline data to be collected before implementation of a QRIS and on how to collect valid data that will track changes from that baseline over time. They can also help plan the evaluation around its prospective use: validating standards, assessing the implementation process, measuring changes in program quality, and measuring child outcomes. Evaluators can also advise on the needed frequency of evaluation. All of these decisions connect directly to data collection. The greatest cost savings and the best chance for a well-executed evaluation are realized when an evaluation plan is created as part of the QRIS design process. Although there may not be adequate resources to invest in a comprehensive, academically rigorous evaluation, it is important to at least capture and use existing data as much as possible from the outset; if this is not done, it may not be possible to capture the data retrospectively.

 

Rhode Island QRIS Evaluation: A Unique Partnership Focused on Informed Revision

A broadly representative community-based group developed the draft standards and quality criteria for BrightStars over several years. Researchers from the Frank Porter Graham (FPG) Child Development Institute at the University of North Carolina, who were selected for their depth and breadth of expertise and experience in evaluating program quality, conducted a pilot and random sample evaluation. The evaluation was conducted as a partnership between FPG and the Rhode Island community agency, Rhode Island Association for the Education of Young Children, that manages BrightStars. This partnership facilitated training for BrightStars staff to collect data in a valid and reliable manner. The draft center framework included 62 criteria across 28 standards. The evaluation in the pilot revealed that using all 62 criteria resulted in small quality distinctions and many programs had no stars or only one star. A review of the standards ensured that each criterion (1) was not already in State licensing, (2) had an actual outcome, and (3) adequately measured the differences in quality. This review pared the number of criteria down to 22, which were then grouped into nine standards. The final frameworks are an effective scaffold for quality improvement; differences between the levels are meaningful but achievable. The evaluation not only improved the BrightStars standards and measurement tool, it also provided a baseline measure of program quality in a random sample of centers, homes, and afterschool programs in Rhode Island, which will be useful for tracking progress in the future. It has also been helpful to have expert evaluators give the Steering Committee specific advice and recommendations to improve the framework. Additional information is available at http://www.brightstars.org/.

Massachusetts Revises Standards

Massachusetts Department of Early Education and Care has been working throughout 2013 to evaluate the MA QRIS. This work has been done in collaboration with the UMass Donahue Institute, the EEC Board, the EEC Program Quality Unit, and a QRIS Working Group comprised of representatives from the field. Several policy changes and minor revisions to the standards will be implemented in 2014.

To date, most State QRIS evaluations have focused on validating the standards and tracking the progress of programs in improving their rating level. The Race to the Top – Early Learning Challenge grants direct states to validate their QRIS. In addition to guidance provided in the grant application, a number of resources are available to help with planning for validation: the QRIS Evaluation Toolkit (Lugo-Gil et al., 2011); Validation of Quality Rating and Improvement Systems for Early Care and Education and School-Age Care (Zellman & Fiene, 2012); and Key Elements of a QRIS Validation Plan: Guidance and Planning Template (Tout & Starr, 2013).

More recently, States have begun to use evaluation to help assess the effectiveness of technical assistance, financial supports, and other incentives designed to help participating programs meet QRIS standards, as well as the effectiveness of the rating process. Researchers are providing additional tools and direction to support this effort. Implementation science and its application to early childhood program interventions is one of those promising tools. Any State that is considering assessing the effectiveness of the supports it provides to programs should also assess the effectiveness of its implementation of those supports. A series of research briefs on applying implementation science in early care and education is available to help with this work (Downer & Yazejian, 2013; Paulsell, Austin, & Lokteff, 2013; Wasik, Mattera, Lloyd, & Boller, 2013). In another resource, Paulsell, Tout, and Maxwell (2013) offer guidance on applying implementation science specifically to quality rating and improvement systems, with a list of research questions to ask at each stage of implementation for the components of a QRIS. They offer specific applications of the core implementation components to QRIS development and implementation and provide a description of an ideal QRIS supported by the concepts of implementation science: “…QRIS is not a static system…. Rather, an ideal QRIS assumes that knowledge will continue to be gathered…to make system changes that promote continuous improvement” (p. 288). They encourage the creation of a QRIS implementation team and offer a step-by-step guide to the work of such a team to improve QRIS.

 


Arkansas Validates Standards

Arkansas developed its QRIS standards over several years, which afforded the opportunity to have a group of researchers compare the standards with the research on these measures (i.e., validate the standards) and to make adjustments. The researchers’ recommendations were as follows.

  1. Reduce Redundancy—Exclude content areas in the Strengthening Families component that are already measured with the PAS and ERS assessments.
  2. Use Measures as Written and Tested—Do not exclude items from the PAS assessment, since this reduces its validity and reliability.
  3. Designate teacher-child ratios above the minimum in licensing. This standard is present in other state rating systems and other systems such as accreditation and Head Start.
  4. Incorporate process measures because they are stronger predictors of child outcomes and collect evidence of them through independent observation.
  5. Address lower levels of quality in the QRIS—Level 1 should be designated “getting ready” and the minimum ERS score to designate quality should be revisited.
  6. Address higher levels of quality—Develop levels of quality beyond the current highest level to encourage programs to improve to a level that promotes optimal child development.
  7. Include child screening as a measure—It will lead to better child outcomes and early intervention is more effective.

Additional information is available at http://www.arbetterbeginnings.com/downloads/FULL%20REPORT.pdf.

Indiana Reviews Standards for Validity

Purdue University has been evaluating Paths to QUALITY (PTQ) to determine whether the program is effective: does it increase the quality of participating programs, and are children in higher-level programs learning more or developing optimally? In the first phase of the study, the researchers examined the validity of the quality indicators and concluded that the 10 main indicators are significantly associated with established measures of child care quality, have significant support in the child development and early education literature, and are recognized and endorsed as best practices by national early care and education organizations in their policies and position statements. There is significant evidence that the indicators support children’s development, learning, and well-being. The validation study report is available at http://www.in.gov/fssa/files/PurdueValidityReport2007.pdf.

Having established the validity of the standards, the evaluation then assessed PTQ’s impact on child care following implementation. The overall goals of the evaluation research were to validate the quality rating system and describe the experiences of child care providers, parents, and children with the program as it was implemented. The evaluation questions addressed by the Purdue research team were:

  • When providers attain higher PTQ levels, does this result in higher quality care for children?
  • Are child care providers entering the PTQ system?
  • What are the incentives and the challenges for providers?
  • Are providers using available training/technical assistance (T/TA) resources?
  • Are providers advancing to higher PTQ levels?
  • Are parents aware of PTQ?
  • Will PTQ affect parents’ child care decisions?
  • Are children and families at all education and income levels gaining access to child care at the highest PTQ levels?
  • Are children in higher PTQ levels developing more optimally than children in lower PTQ levels?

The researchers found that the PTQ rating system measures meaningful differences in child care quality and that children from all income levels were gaining access to higher quality care within PTQ. Provider participation levels were high—more than 2,100 providers were enrolled at the time of the study, including 82% of licensed child care centers and 52% of licensed family child care homes. However, public surveys and interviews with PTQ parents revealed that parent awareness of PTQ was low; those parents who were aware of the system had received the information from their child care providers. Additional findings and details about the study are available at http://www.in.gov/fssa/files/PurdueValidityReport2007.pdf, http://www.cfs.purdue.edu/cff/documents/project_reports/PTQ_TechReport2_Meas.pdf, and http://www.cfs.purdue.edu/cff/documents/project_reports/PTQFinalReportRev11012.pdf.

In their research study, Tout, Zaslow, Halle, and Forry (2009) discuss the issues that some States are addressing as QRIS evaluation moves to new research questions:

  • Who is participating?
  • Who is improving and what resources are used for improvement?
  • Do parents know and use QRIS to choose care?

These evaluations are also examining child outcomes, which require a carefully designed strategy to address child attrition and focus on change in a program’s development over time. Other issues to address in the evaluations are the challenges of assessing quality with children in a wide range of ages, cultures, languages, and abilities in various types of program settings. This group of researchers has also suggested the usefulness of creating a logic model of the QRIS, which can help in creating a plan of evaluation.

QRIS research and evaluation have evolved from early studies designed to determine whether the quality ratings differentiated scores on quality measures, and to track improvement and implementation, to state-specific studies that include child outcomes and assess parents’ understanding of the ratings (Tout, 2013). Tout asserts that the “next generation of QRIS will rely more heavily on data and evidence to shape design, implementation, ongoing management, and integration of QRIS with other systemic efforts to support young children and families” (p. 72), and that collaborative research going beyond individual states to cross-state and national efforts will be even more helpful in identifying “effective QRIS strategies and linkages” (p. 76). An example of the collaborative approach between state QRIS administrators and university researchers is described in Elicker et al.’s study of Indiana’s QRIS (2013).

Section 10 of the Compendium of Quality Rating Systems and Evaluations (2010) provides information on QRIS evaluations. Eighteen of the 26 QRIS reported that some type of evaluation had been or was being conducted: nine reported an ongoing evaluation and nine reported periodic evaluations. Seven reported that their evaluations examined issues regarding the implementation of the system, seven included validation of the quality ratings, and four included links between the QRIS and child outcomes in their research. http://www.acf.hhs.gov/programs/opre/resource/compendium-of-quality-rating-systems-and-evaluations

 

Maine Evaluation Looks at QRIS Standards and Supports

Quality for ME, the QRIS in Maine, is engaged in an evaluation effort built into the system and based on random site visits. The focus is on ensuring that the Quality for ME standards, levels, and implementation strategies accurately measure significant differences in quality. The study uses ERS (Early Childhood Environment Rating Scale, Infant/Toddler Environment Rating Scale, School-Age Environment Rating Scale, and Family Child Care Environment Rating Scale) to evaluate the quality of early learning in the classroom or home. Additional information is available at http://www.maine.gov/dhhs/ocfs/ec/occhs/qualityforme.htm

Minnesota Parent Aware Validation and Evaluation Strategies

Child Trends, a nonpartisan, nonprofit research organization, is conducting the Parent Aware Evaluation from 2012 to 2016 with funding from Parent Aware for School Readiness (PASR) and the Greater Twin Cities United Way.

To address the research question about effectiveness of the quality indicators and structure of the Parent Aware Rating Tool in differentiating quality, a validation study is being conducted. The study will:

  • Collect data from participating early care and education programs to test whether the interactions between children and their teachers/caregivers and the learning environments of programs are distinct at the four quality levels in Parent Aware.

To address the research question about linkages between children’s development and the Parent Aware quality levels, the validation study will:

  • Collect and analyze data from children and families in rated programs. Programs that represent the range of center-based and family child care settings rated through both the Parent Aware full rating process and the Accelerated Pathways to Rating (APR) will be selected to participate. Children will complete direct assessments of their school readiness skills in the fall and spring of the year before kindergarten. Teachers and parents will also complete assessments of children’s skills and provide information about their background and family characteristics. Rigorous analytic models will be used to identify whether and how the rating levels, the rating process (the full rating compared with the APR process), and select quality indicators relate to children’s gains.

Results from both components of the Parent Aware validation study will be available in 2015–2016, after a sufficient number of programs have enrolled in Parent Aware and implementation of Parent Aware has stabilized statewide. Additional evaluation questions focus on understanding how implementation of Parent Aware is proceeding, how quality is improving over time, and how Parent Aware is contributing to Minnesota’s early care and education system.

North Carolina Evaluation Validates Rating System

When North Carolina first developed the Star Rated License in 1997, it wanted to ensure that the State’s evaluations were actually using measures that would differentiate between levels of quality. A team from the Frank Porter Graham Child Development Institute at the University of North Carolina at Chapel Hill collected extensive data on randomly selected child care centers. “We concluded from these results that the 5-star licensing system does accurately reflect the overall quality of a child care center. Parents can be assured that there are meaningful program differences among centers that have a 3-star, 4-star, or 5-star rating….” Additional information is available at http://www.fpg.unc.edu/resources/validating-north-carolinas-5-star-child-care-licensing-system.

At two different times since the ratings began, the North Carolina Rated License Assessment Project at the University of North Carolina–Greensboro has completed an analysis of ERS scores and teacher education levels. The results show that classrooms with a teacher holding an associate’s degree or higher scored significantly higher on the rating scale than classrooms with a non-degreed teacher. This finding supports the staff qualification standards of the QRIS (Cassidy, Hestenes, Hegde, Mims, & Hestenes, 2005).
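For readers unfamiliar with the form of this analysis, the sketch below runs the same kind of two-group comparison on synthetic ERS scores. It assumes the widely used SciPy library and stands in for, rather than reproduces, the published study.

```python
from scipy.stats import ttest_ind

ers_degreed = [5.4, 5.8, 5.1, 6.0, 5.6]     # synthetic classroom ERS scores
ers_nondegreed = [4.6, 4.9, 4.3, 5.0, 4.7]  # synthetic comparison group

t_stat, p_value = ttest_ind(ers_degreed, ers_nondegreed)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests a real difference
```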

The QRIS in most States is administered by a State agency; however, classroom assessments (such as ERS or CLASS) for the rating system and evaluations of the QRIS are often conducted by external early childhood experts such as those at universities. In some cases, the same group of researchers has done multiple studies in an ongoing series of evaluation research. Choosing an evaluator is an issue that States must address within the restrictions of their resources and the State bidding and contractual requirements. Other considerations that also influence the choice of evaluator should be incorporated in the request for proposal, including:

  • Qualifications and experience—States look for evaluation teams with qualifications that match the task, i.e., early childhood and research qualifications and experience with delivering this particular type of research. They also look for evaluators who have experience completing the research within contract requirements.
  • Credibility—Potential evaluators should be highly credible to the primary target audience. This is one of the reasons that many States use their own State universities, even though those universities may bring in national or out-of-State experts to partner on selected portions of the evaluation.
  • Stability—If plans call for conducting a series of evaluations, an organization’s longevity in the field and probability of continuing in the work will be important traits in an evaluator.

Many evaluation studies serve dual purposes: (1) to provide evidence-based insights into the design or implementation process, and (2) to inform funders and policymakers of the impact of the QRIS on child care programs and child outcomes.

Researchers at a meeting of the Child Care Policy Research Consortium (2008) identified some additional issues and questions to consider.

  • Evaluation design and measurement options:
    • How do we best convey to stakeholders the implications of selecting a given design and the limitations on the kinds of causal statements that can be made about QRIS and its relationship to quality and outcomes?
    • When and what types of evaluation should occur as part of the pilot and as part of statewide scale up and rollout?
    • What are the benefits and costs of focusing on process and outputs early on and delaying evaluation of quality and child outcomes? What is the right design for ongoing data collection and evaluation of mature and longstanding QRIS?
    • Researchers and experts can play important and helpful roles in development of a State's QRIS. How is third-party objectivity at the evaluation stage maintained? What are the potential tradeoffs of not keeping the evaluation team separate from QRIS development?
  • Measuring child outcomes in QRIS evaluations:
    • What child outcomes are expected as a result of QRIS? How realistic are our expectations?
    • What design options should be considered for examining child outcomes?
    • Timing – At what point in system development should child outcomes be assessed?
    • Strategy – Should child outcomes be assessed in cohorts, or should children be tracked over time?
    • Outcomes – What can be learned from a point-in-time assessment compared with an examination of change over time (for example, fall to spring changes)?
    • Type of outcome measure – What measures are feasible, reliable, and valid, such as direct child assessments, teacher-caregiver ratings, and authentic assessment tools?
    • Age of child – Should child assessments include children of all ages or focus on 4- and 5-year-old children?
    • What are the options for assessing children in culturally and linguistically diverse communities and children with special needs?
  • Evaluating QRIS as a market- or system-level intervention compared with a program evaluation:
    • Sampling issues – For how many providers must data be gathered to understand market impacts?
    • How can impacts on wages and prices in an entire market area be analyzed?
    • What aspects of parent knowledge, attitudes, and behavior must be understood? How should parents be sampled, in the general community or through providers?
    • How can we differentiate the impact of QRIS from other factors affecting the early and school-age care market? (Child Care Policy Research Consortium, April 23–24, 2008).
  • In addition to the above questions suggested by researchers, other aspects need to be considered:
    • When considering measurement of child outcomes, further attention should be paid to dosage—how long should a child be in a participating program before being included in a sample? What if the program’s rating had changed over the time the child was in the program?
    • How do evaluations of the QRIS as a whole relate to evaluations of specific types of programs that participate in the QRIS, such as Head Start and Educare? This aspect addresses the issue of evaluating QRIS as a system compared with evaluating specific program models. Many types of program models participate in a QRIS.
  • Acknowledging QRIS as a system is a reminder of the need to evaluate QRIS as a system alignment strategy, including the following measures:
    • Increasing participation in QRIS across various subsystems, such as child care, Head Start, prekindergarten, and early intervention
    • Increasing participation in shared planning across various subsystems by using QRIS data and benchmarks
    • Developing common data definitions and shared data
    • Increasing percentage of funding from multiple sources linked to QRIS participation
    • Reducing paperwork and simplifying fiscal management by sharing and coordinating fiscal and program monitoring

In summary, evaluation is usually prompted by the following questions:

  • Is the QRIS model valid, and does it differentiate quality?
  • Does the rating and monitoring process of QRIS work well?
  • Do the program and practitioner supports result in improved quality?
  • Are the various parts and subsystems of the early and school-age care world working to support the QRIS and benefiting from the QRIS? Are the parts of the system in alignment?
  • Is the QRIS increasing the quality of care available to all parents?

QRIS State Research

The following list cites some of the studies of individual quality rating and improvement systems:

 

Arkansas

 

Colorado

 

Indiana

 

Kentucky

  • Tout, K., Starr, R., Isner, T., Daily, S., Moodie, S., Rothenberg, L., & Soli, M. (2012). Executive summary of the Kentucky STARS for KIDS NOW process evaluation (Evaluation Brief #1). Washington, D.C.: Child Trends.  http://www.kentuckypartnership.org/starsevaluation.aspx

 

Massachusetts

 

Miami-Dade

 

Minnesota

  • Tout, K., Starr, R., & Cleveland, J. (2008). Evaluation of Parent Aware: Minnesota’s quality rating system pilot: Year 1 evaluation report. Minneapolis, MN: Child Trends.
  • Tout, K., Starr, R., Isner, T., Cleveland, J., Soil, M., & Quinn, K. (March 2010). Evaluation of Parent Aware: Minnesota’s quality rating and improvement system pilot: Year 2 evaluation. Minneapolis, MN: Child Trends.
  • Tout, K., Starr, R., Isner, T., Cleveland, J., Soil, M., & Quinn, K. (November 2010). Evaluation of Parent Aware: Minnesota’s quality rating and improvement system pilot: Year 3 evaluation. Minneapolis, MN: Child Trends.
  • Tout, K., Starr, R., Isner, T., Cleveland, J., Albertson-Junkans, L., Soli, M., & Quinn, K. (December 2011). Evaluation of Parent Aware: Minnesota’s quality rating and improvement system pilot:  Final Evaluation.  Minneapolis, MN: Child Trends.
  • All Minnesota reports are available at http://www.melf.us/index.asp?Type=B_BASIC&SEC={8A0DB572-32D1-4306-BE29-55A93C907F2C}.

 

Missouri

  • Thornburg, K., Mayfield, W., Hawks, J., & Fuger, K. (2009).  The Missouri quality rating system school readiness study.  Columbia, University of Missouri, Center for Family Policy & Research.
    http://www.marc.org/mcel/assets/qrsfindings.pdf

 

New York

              

North Carolina

 

Oklahoma

 

Pennsylvania

  • Office of Child Development and Early Learning. (2013). Keystone STARS technical assistance (Research Brief, Vol. 2, Issue 2). Harrisburg, PA. http://www.ocdelresearch.org/default.aspx

 

Rhode Island

 

Tennessee

  • Pope, B.G., Denny, H.H., Homer, K., & Ricci, K. (2006). What is working? What is not working?: Report on the qualitative study of the Tennessee Report Card and Star-Quality Program and Support System. Knoxville, TN: University of Tennessee College of Social Work, Office of Research and Public Service.
    http://www.state.tn.us/humanserv/adfam/rept_insides.pdf
  • Cheatam, J., Pope, B., & Myers, G. (2005). Evaluating quality in state child care licensing: The Tennessee report care and star-quality child care program.  Report prepared for the Tennessee Dept. of Human Services by the University of Tennessee Social Work, Office of Research and Public Service.
    https://www.sworps.tennessee.edu/ann_rep_2005/docs/germanypaper.pdf
  • University of Tennessee College of Social Work, Office of Research and Public Service. (2004). Who cares for Tennessee’s children? Report prepared for the Tennessee Department of Human Services.
    https://www.sworps.utk.edu/PDFs/3-2-04STARSsimplex.pdf

 

Vermont

 

Washington

 

References

  • Caronongan, P., Kirby, G., Malone, L., & Boller, K. (2011). Defining and measuring quality: An in-depth study of five child care quality rating and improvement systems (OPRE Report #2011-29). Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services.
    http://www.acf.hhs.gov/programs/opre/cc/childcare_quality/five_childcare/five_childcare.pdf
  • Child Trends. (2013). Early childhood data:  Building a strong foundation webinar series.  http://www.researchconnections.org/content/childcare/federal/inquire.html
    • Child Trends. (2013, March). Overview and application of the INQUIRE data tools to support high quality early care and education data. Early Childhood Data:  Building a Strong Foundation Webinar Series. This webinar begins with an introduction to the data tools developed by INQUIRE.  The tools include a comprehensive data matrix, a data dictionary, and examples of linking data with key policy questions.  Each tool is described briefly with information about how it is useful to states. Next the webinar provides an in-depth description of each tool as well as state participants who can provide examples of the "real world" application of the tools. http://www.researchconnections.org/content/childcare/federal/inquire.html        
    • Child Trends. (2013, May 16). Data management webinar: Best practices for producing high quality data. Early Childhood Data: Building a Strong Foundation Webinar Series. This webinar provides an overview of best practices in data management. These practices promote data integrity and ensure that high quality data are available for reporting, monitoring, and evaluation. http://www.researchconnections.org/content/childcare/federal/inquire.html
    • Child Trends. (2013, May 6). Data management webinar:  Developing a data governance structure. Early Childhood Data: Building a Strong Foundation Webinar Series. This webinar reviews essential data governance structures that can be put in place to support the development of data sharing and linking. http://www.researchconnections.org/content/childcare/federal/inquire.html  
  • Child Trends & Mathematica Policy Research.  (2010).  The child care quality rating system (QRS) Assessment: Compendium of quality rating systems and evaluations.  Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/resource/compendium-of-quality-rating-systems-and-evaluations
  • Tout, K., Zaslow, M., Halle, T., & Forry, N. (2009). Issues for the next decade of quality rating and improvement systems (Publication No. 2009-14, OPRE Issue Brief No. 3). Washington, DC: Prepared by Child Trends for the Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services. http://www.acf.hhs.gov/programs/opre/resource/issues-for-the-next-decade-of-quality-rating-and-improvement-systems