When considering a revision or redesign of a quality rating and improvement system (QRIS), a pilot can be a prudent approach to test QRIS elements before moving to full implementation. This section includes issues to consider when planning for and conducting a pilot QRIS. It also describes how some states used a phased-in approach as an alternative to full implementation when launching a QRIS.
Conducting Pilot Programs
States may implement a pilot to examine the efficacy, sustainability, and applicability of new QRIS features, a QRIS redesign, or an entirely new system (in a state without a QRIS) before launching statewide. Some possible reasons to engage in a small-scale pilot or field test include the ability to do the following:
- Target available funding in order to build support. Stakeholders may feel it more appropriate to start slowly and produce some positive results on a smaller scale as a way to garner support for statewide implementation.
- Allow time for implementation approaches to be tested and refined before large numbers of programs are involved in the process. By investing the time and effort to conduct a pilot, a state can enjoy the benefits of customer and community feedback to better inform and revise the QRIS process.
- Evaluate aspects of the system, such as rating scales or professional development supports. For example, a state may be considering different rating scales and may want to compare them in a controlled way rather than launch something on a larger scale that needs later revision.
- Assess potential program participation and capacity for implementing a new QRIS statewide. A pilot can allow for better budget estimates and planning processes.
According to the QRIS Compendium Fact Sheet: History of QRIS Growth Over Time (2017), by the National Center on Early Childhood Quality Assurance, in addition to the 41 fully operational systems in 2016, 4 QRIS (10 percent) were in a pilot phase.
Many factors influence how and where to conduct a QRIS pilot, including the availability of funding and whether the features to be tested in the pilot are best examined in a specific area of the state or with one type of program. When piloting a new QRIS before going statewide, some states started with a limited number of program participants, a selected geographic area, or particular program types. When making decisions about how to target the pilot, it is important to consider the context and questions of interest. A state assessing the climate and overall response to a QRIS may pilot with a limited number of programs but recruit participants across program types and geographic regions. In contrast, a state interested in understanding the resources needed to implement the rating process (including observational assessments) may pilot with one program type in one or two geographic regions.
For example, if a new coaching model is being tested for a QRIS, a state may choose to pilot the model in a selected geographic area where coaches are already trained as a way to minimize start-up costs. The focus of the pilot would be on providers’ responses to the coaching and on determining its effectiveness. If, however, the state is more interested in understanding the feasibility of implementing a coaching model (learning whether coaches can be trained to deliver the model with fidelity), it may instead conduct the pilot in multiple regions statewide and focus on the process of recruiting and training coaches.
The length of time a state will maintain its QRIS pilot phase is often determined by the amount of financial resources; stakeholder, participant, and community support; and whether the goals for the pilot have been met. Pilots of QRIS features or a redesign can grow slowly by adding new communities or additional provider types. Pilots can last from a few months (Pennsylvania) to 1 or 2 years (Delaware, Kentucky, Missouri, and Ohio) to multiple years (Indiana and Virginia).
The goals the state and its partners set for the pilot will influence what data will be collected and by whom, how it will be recorded, and how it will be analyzed and used for adjustments and refinements. QRIS standards are generally informed by and aligned with existing standards such as licensing, national accreditation, Head Start, prekindergarten, or state early learning guidelines. The pilot is often used as a way to test a major change or a redesign. The following can be tested in the pilot: procedures for program application, rating processes, documentation methods, level assignments, the provision of quality improvement supports, and ways to communicate outcomes. Efforts to advance equity in the QRIS, among participating programs and the children and families they serve, can also be examined in a pilot.
The following are the types of data that can be collected in a pilot:
- Participation rates (overall rates, as well as rates by facility type, size, level, and geographic location);
- Characteristics of children served (race, income, subsidy status, home language, special needs) in the QRIS programs;
- Percentage of providers that are able to meet various quality criteria (such as degree requirements);
- Usage rates for incentives and support services, such as professional development or training opportunities, technical assistance supports, or financial incentives;
- Number and percentage of children receiving subsidies served by participating providers;
- Program participation rates at varying levels of quality;
- Baseline data from assessment tools;
- Parent/consumer awareness of QRIS; and
- Feedback from providers on clarity and ease of process and forms/documents.
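To show how the data elements above might be tabulated, the sketch below defines a simple record structure for participating programs and counts participation by facility type. All field names, program IDs, and values are hypothetical illustrations, not drawn from any state's actual QRIS data system:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class PilotProgramRecord:
    """One participating program in a hypothetical QRIS pilot."""
    program_id: str
    facility_type: str      # e.g., "center" or "family child care home"
    rating_level: int       # assigned quality level (1 = lowest)
    subsidy_children: int   # number of children receiving subsidies
    used_coaching: bool     # whether the program used coaching supports

def participation_by_type(records):
    """Count participating programs by facility type."""
    return Counter(r.facility_type for r in records)

# Hypothetical pilot data, for illustration only.
records = [
    PilotProgramRecord("P001", "center", 3, 12, True),
    PilotProgramRecord("P002", "family child care home", 2, 4, False),
    PilotProgramRecord("P003", "center", 4, 20, True),
]
print(participation_by_type(records))
# Counter({'center': 2, 'family child care home': 1})
```

The same records could feed the other tabulations listed above, such as subsidy counts by rating level or uptake rates for coaching supports.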
Data can be collected in a variety of ways and from a variety of sources. The centers and homes involved in the pilot can provide critical feedback through self-assessments, self-reporting, and documentation. The staff involved in managing the pilot can collect feedback through interviews, observations, and document reviews. Staff can collect information about the following: the clarity of explanatory documents, standards, and the application process; sources of evidence or documents to include or accept; the amount and complexity of paperwork; time required to complete various requirements; and availability/accessibility of appropriate training opportunities.
It is important to consider a state’s capacity to gather appropriate and sufficient data to assign accurate ratings, redesign standards, implement procedures, or develop or change providers’ supports. Gathering data that seems interesting is only a worthwhile exercise if it is used at some point to inform the system. Otherwise, the process can become costly and frustrating, and can be perceived as unresponsive. Many states have asked researchers to evaluate their QRIS pilots. Researchers can be helpful in selecting the most appropriate data elements for monitoring and implementation as well as for process and formative evaluations.
Once a state and its partners determine they are ready to move from a pilot to statewide implementation, it is important to develop a detailed plan and timeline for implementation. An analysis of available funding, along with each partner agency’s capacity to implement and manage the system, will also be critical factors in this process.
Most states subcontract the management of some QRIS components, like technical assistance and coaching or onsite data collection. States may have existing systems in place, like professional development systems, that can be leveraged to support the QRIS and the new features being added as a result of the pilot. States may add to the scope of work in existing contracts they have with child care resource and referral networks and postsecondary institutions to support QRIS activities. States may also issue a request for proposals to select and engage organizations in implementation.
As a state makes changes to its QRIS based on a pilot, it is critical to consider the implications for consumer education and a QRIS website. It may be necessary to communicate changes to the system and the possibility that program ratings may change as a result of the redesign or new QRIS feature. Additional information on communicating with families is available in the Consumer Education section of the QRIS Resource Guide.
A pilot or field test is not always feasible. If a state chooses to move forward with changes to the QRIS or implementation of a new system without piloting, it is critical to engage providers and other partners and stakeholders in a strategic implementation process. Although much information can be gleaned from research and lessons learned in other states, it is important to remember that each state is unique. A state must consider its landscape, history, infrastructure, and overall early and school-age care and education environment, and adapt the information to its particular set of circumstances. Data collection and monitoring during implementation are vital activities. States can engage an evaluation partner or use internal resources to administer web surveys or to conduct focus groups with parents and programs to supplement QRIS administrative data.
QRIS Compendium Fact Sheet: History of QRIS Growth Over Time (2017), by the National Center on Early Childhood Quality Assurance, notes that, as of 2016, 12 QRIS (29 percent) were rolled out statewide without first going through a pilot phase.
Phasing In Programs
A phased-in approach to a redesign or a new QRIS may be necessary due to limited funding and staff resources or a lack of broad support. However, policymakers should be aware that anticipated changes in program quality may not occur with incremental implementation, as each element of a QRIS is dependent on the others. States will need to consider what resources and supports are needed to increase participant quality while also addressing gaps in existing capacity or infrastructure. A phased-in strategy requires careful consideration of which approaches to administration, monitoring, provider supports, and incentives are most likely to be cost-effective in terms of improving quality, ensuring accountability, and increasing participation.
It is also important to realize that a limited implementation strategy is only the first step toward a comprehensive, statewide QRIS. The value of expansion to a statewide QRIS is that it allows all parents and providers to benefit, provides a consistent standard of measurement, and improves opportunities for resource realignment. Planning for full, statewide implementation and the projection of total costs should be part of the process, even when a phased-in approach is necessary.
Making decisions about how and when to phase in implementation of a QRIS can be guided by the cost projection process. The Provider Cost of Quality Calculator (PCQC) described in the Cost Projections and Financing section of the QRIS Resource Guide can help with projecting costs at scale. It can also help guide decisions regarding where and when to reduce costs, if necessary. It is possible to develop multiple cost projections for a statewide program using the PCQC. Projections can be made for strategies, such as the following:
- A comprehensive plan that anticipates full funding for the next 5 years for each component of a fully implemented QRIS;
- A midrange or scaled-back plan to get started and build support for future expansion (e.g., limited participation, reduced provider incentives); and
- A basic program with fewer provider supports and incentives and fewer accountability measures.
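To illustrate how the three strategies above might compare, the sketch below computes 5-year cost totals under invented assumptions (all program counts, per-program costs, and growth rates are hypothetical; real projections would come from the PCQC and a state's own data):

```python
def five_year_total(programs, annual_cost_per_program, annual_growth=0.0):
    """Sum projected costs over 5 years, optionally growing participation each year."""
    total = 0.0
    n = programs
    for _ in range(5):
        total += n * annual_cost_per_program
        n *= (1 + annual_growth)
    return total

# Hypothetical scenarios mirroring the three strategies above.
comprehensive = five_year_total(programs=500, annual_cost_per_program=10_000)
midrange = five_year_total(programs=200, annual_cost_per_program=7_500,
                           annual_growth=0.10)  # starts small, grows 10%/year
basic = five_year_total(programs=500, annual_cost_per_program=4_000)

print(f"Comprehensive: ${comprehensive:,.0f}")
print(f"Midrange:      ${midrange:,.0f}")
print(f"Basic:         ${basic:,.0f}")
```

Running several such projections side by side makes the trade-off explicit: the midrange plan defers costs by starting with fewer programs, while the basic plan holds per-program costs down by limiting supports and incentives.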
In addition to projecting the cost of various implementation strategies, several other factors may influence decisionmaking about when to fully implement a QRIS. These include the following:
- Rate at which changes are made to QRIS standards or criteria. Changing them too quickly after implementation may be difficult for providers and could potentially erode their trust in the system and their feelings of success and confidence. Generally, states revise their QRIS approximately every 3 to 5 years. Small changes can be made annually, especially changes that are responsive to participant feedback.
- Financial incentives and supports. Making a range of financial incentives and provider supports available early on is likely to increase provider participation. Limiting or targeting incentives and supports is likely to slow participation growth.
- Level of participation. Early and high levels of participation will affect how people view the success and value of the program and are likely to help build support for increased funding.