Summative evaluation informs judgments about whether the program worked, that is, whether it achieved its intended goals. Outcome evaluation focuses on the observable conditions of a specific population, organizational attribute, or social condition that a program is expected to have changed. Whereas outcome evaluation tends to focus on conditions or behaviors that the program was expected to affect most directly and immediately, impact evaluation addresses the broader, longer-term effects of the program.
For example, assessing the strategies used to implement a smoking cessation program and determining the degree to which it reached the target population are process evaluations. Reduction in the morbidity and mortality associated with cardiovascular disease may represent an impact goal for a smoking cessation program (Rossi et al.). Several institutions have identified guidelines for an effective evaluation. For example, CDC published a framework to guide public health professionals in developing and implementing a program evaluation (CDC). Although the components are interdependent and might be implemented in a nonlinear order, the earlier domains provide a foundation for subsequent areas.
They include engaging stakeholders, describing the program, focusing the evaluation design, gathering credible evidence, justifying conclusions, and ensuring use and sharing lessons learned. Five years before CDC issued its framework, the Joint Committee on Standards for Educational Evaluation created an important and practical resource for improving program evaluation. The Joint Committee, a nonprofit coalition of major professional organizations concerned with the quality of program evaluations, identified four major categories of standards (propriety, utility, feasibility, and accuracy) to consider when conducting a program evaluation.
Propriety standards focus on ensuring that an evaluation will be conducted legally, ethically, and with regard for promoting the welfare of those involved in or affected by the program evaluation. In addition to the rights of human subjects that are the concern of institutional review boards, propriety standards promote a service orientation, that is, designing the evaluation to address and serve the needs of the full range of program participants. Utility standards are intended to ensure that the evaluation will meet the information needs of intended users. Involving stakeholders, using credible evaluation methods, asking pertinent questions, including stakeholder perspectives, and providing clear and timely evaluation reports represent attention to utility standards.
The purpose of an academic program assessment plan is to facilitate continuous program-level improvement. Program assessment plans provide faculty and instructors with a clear understanding of how the program is assessed. Plans also show the alignment of course curricula to the stated program outcomes, describe how the outcomes will be assessed, and outline how results will be used. At UW-Madison, every academic program (undergraduate, graduate, certificate, and general education) must have an active assessment plan.
Quality program assessment plans should meet several criteria. The focus is on quality improvement: collecting information about the program and its components on a continuous basis to identify what is working well and decide what changes may be needed. For ACEN accreditation, distance education nursing programs must demonstrate compliance with 10 critical elements in addition to the appropriate accreditation standards for the level of the program.
The CCNE accreditation process incorporates specific elements related to effective distance education in three of the four accreditation standards, as the following examples illustrate. In some of the standards, specific statements are included to guide faculty and administrators in how the standards apply to online programs. For example, one statement specifies that governance structures in the school should facilitate the inclusion of distance education students. There are nine guidelines, each with specific areas to address.
These are summarized below:

- Online programs should be appropriate for the mission and goals of the institution.
- There are plans for developing, sustaining, and, if appropriate, expanding online offerings, which are part of the regular evaluation processes.
- Online learning is incorporated into the governance and academic oversight of the institution.
- There is an evaluation of online learning offerings, with its results used for improvement.
- The faculty is qualified and supported.
- The institution provides effective student and academic support services.
- There are sufficient resources to support and, if indicated, expand online course offerings.
- The institution assures the integrity of online offerings (C-RAC).

The standards, which are applicable to assessing online programs in nursing, address:

- Institutional mission, effectiveness, and strategic planning
- Program outcomes, curriculum, and supplemental materials
- Assessment plan to track student achievement and satisfaction
- Academic leadership and faculty qualifications
- Advertising and promotion of the institution and programs, and recruitment personnel
- Admission criteria and practices, and enrollment agreements
- Facilities and supplies, including record keeping (DEAC)

Saewert described eight models for program evaluation: objective-based, goal-free, expert-oriented, naturalistic, participative-oriented, improvement-focused, success case, and theory-driven.
Each of these models has positive and negative considerations for its use. Nurse educators should seek a model that will help organize the program evaluation and produce the most useful information for various stakeholders. Some nursing education programs use an eclectic approach in which they design their own model by selecting features from more than one (Saewert). Another type of model is decision oriented.
With these models, the goal of the evaluation is to provide information to decision makers for program improvement purposes. However, the existence of assessment data is no guarantee that the program will use the data to improve (Stufflebeam). Decision models focus more on using assessment data as a tool to improve programs than on accountability.
Decision-oriented models usually focus on internal standards of quality, value, and efficacy. Other models are systems oriented. These examine inputs into the program such as characteristics of students, teachers, administrators, and other participants in the program, as well as program resources. These models also assess the operations and processes of the program as well as the context or environment within which the program is implemented.
Finally, systems models examine the outcomes of the program: Are the intended outcomes being achieved? Are students, graduates, their employers, faculty, staff, and others satisfied with the program and how it is implemented?
Is the program of high quality and cost-effective? Regardless of the specific model used, the process of program evaluation assists various audiences or stakeholders of an educational program in judging and improving its worth or value. Audiences or stakeholders are those individuals and groups who are affected directly or indirectly by the decisions made. Key stakeholders of nursing education programs include students, faculty and staff members, partners (healthcare and community agencies), employers of graduates, and consumers.
The purpose of the program evaluation determines which audiences should be involved in generating questions or concerns to be answered or addressed. When the focus is formative, that is, to improve the program during its implementation, the primary audiences are students, teachers, and administrators.
Summative evaluation leads to decisions about whether a program should be continued, revised, funded, or terminated. Audiences for summative evaluation include program participants, graduates, their employers, prospective students, healthcare and community agencies, consumers, legislative bodies, funding agencies, and others who might be affected by changes in the program.
When planning program evaluation, an important decision is whether to use external or internal evaluators. External evaluators are thought to provide objectivity, but they may not know the program or its context well enough to be effective.
External evaluators also add expense to the program assessment. In contrast, an internal evaluator has a better understanding of the operations and environment of the program and can provide continuous feedback to the individuals and groups responsible for the evaluation.
However, an internal evaluator may be biased, reducing the credibility of the evaluation.