Assignment #5 – Program Evaluation Survey

Assignment 5 was an exercise in survey development.  I have created many surveys before, but never with this level of detail or scrutiny.  The original survey was created and shown to 8 individuals ranging in age from their early 20’s to mid-60’s.  Here is the link to my original survey.


Overall Findings

  • Developing a survey for sensitive programs requires time, patience, and careful wording. Poorly worded questions may offend or identify individuals in a program.
  • Surveys that are sensitive in nature must be administered to a large enough population to alleviate the fear of being identified by particular (demographic) questions.


Specific Findings/Modifications made

Page 1 (Introduction to Demographic Questions)

  • Changed text from “These few questions will allow me to organize the data appropriately.” to “The following questions are needed to organize the data”. Reason: better grammar and clearer language.

Question 1

  • Added ‘Consultant’ and ‘Counselor’ to the list of choices. Reason: the survey may be used beyond the confines of the school, or at other schools.  Also, Consultants play a key role in the Diverse Learning Teacher (DLT) program.

Question 2

  • Added periods to choices. Reason: grammar and consistency.
  • In the last response, I changed “attend” to “attend, work at….”.  Reason: so the option applies to people involved with either the school or the general community.

Question 3

  • No changes made.
  • One person responded that this question, along with question 2, may identify an individual.  I am considering adding an option “I prefer not to say” to the survey, but not at this point in time.  If the survey is given to a large enough sample, the issue of identification becomes moot.

Page 2 (Position Profile)

  • Changed text from “This section will gather data about your position within the school.” to “The following questions provide information about your position within the school”. Reason: clearer language.

Question 4

  • “work/meet” changed to “work or meet”. Reason: clearer language.
  • In the selections, “couple” is changed to “2 to 3”. Reason: to be consistent within the survey.

Question 5

  • “How connected do you feel to…….?” is changed to “How connected are you to the following people?” Reason: the word ‘feel’ weights the question towards emotional attachment.  Although emotional attachment is a factor in this question, the revised wording clarifies the question.
  • “Somewhat connected.” is changed to “Somewhat connected – we seldom work together.” Reason: the other options have descriptors.
  • Consultant and Counselor are added as choices. Reason: survey consistency.

Question 6

  • “SAP” changed to “Student Assessment Portfolio (SAP) – English Language Learners (ELL)”.  Reason: identify the acronym.
  • “SRT” changed to “Student Resource Team (SRT)”. Reason: identify the acronym.
  • Added choice “Report Cards”.

Question 7

  • “Identify the level of support you need to complete the documents properly” changed to “Identify the level of support required to properly complete the following documents”. Reason: clear, concise language.
  • Added the word “full” in front of “support” in the first column. Reason: clarification – it will help to narrow down responses.
  • Added “Student Resource Team (SRT)” and “Report Cards” to the list of documents. Reason: I missed these documents in the initial survey.

Question 8

  • Changed the end of the sentence from “……agencies have you attended meetings with?” to “………agencies have you met with this school year?” Reason: clarification of language and to specify a particular time period – to focus responses.
  • Changed any terms ending in ‘apy’ to ‘ist’. For example, physiotherapy to physiotherapist. Reason: consistency in grammar with other terms in the list.
  • Changed “The Welcome Centre” to “English Language Learner (ELL) Reception Centre”. Reason: updated and correct terminology.
  • Added “High School Transition Meetings” Reason: this was identified by a respondent as an important addition.

Question 9

  • Added periods to “Somewhat familiar” and “Very familiar”. Reason: consistent punctuation.
  • Changed the word “ideas” to “concepts”. Reason: more formal language and better terminology.
  • Added the word “not” to the header in the second column. Reason: unclear header.

Question 10/11  (Similar questions with alternate endings) – These changes are applied to both questions.

  • Changed the opening statement from “In regards to….” to “With regard to….”. Reason: improved grammar and clarity.
  • Capitalized the ‘n’ in “not-well supported”. Reason: improved grammar.
  • Added “Consultant” and “Counselor” to choices. Reason: consistency in the survey.
  • Added a “N/A” column. Reason: one of my respondents pointed out that not everyone will have interactions with the people identified in these questions.

Question 12

  • Changed the opening statement from “What is your responsibility in the following?” to “Identify your level of responsibility for the following documents.” Reason: clarity and grammar.
  • Changed the choices from acronyms to full terminology. For example, “IPP” to “Individual Program Plans (IPP)”.

Question 13

  • Changed the question ending from “…….rank the support you need in your job” to “…..rank the support you need to improve your practice.” Reason: ‘job’ was too vague.  I wanted to narrow down how teaching/learning can improve.

Question 14

  • Changed the question ending from “……following could better help you do your job with students.” to “….. following can help you do your job better”. Reason: unclear language.
  • Changed capitalization in the selections. Reason: grammar (proper nouns, etc.).

Question 15

  • Added “in the past year” to the end of the question. Reason: to narrow down the response time frame and get a concise answer.

Question 16

  • Changed the question from “How does Universal Design for Learning impact your instruction?” to “What does Universal Design for Learning mean to you?” Reason: this is a more general question to appeal to more respondents (other than just classroom teachers).


After taking the responses into consideration and making the appropriate changes, here is the new survey.



Assignment 4 – Logic Model

Logic Model: Diverse Learner Teacher (DLT) Program


Click the above image for a larger version.

Assumptions: The first assumption is that all stakeholders are committed to the DLT program and its success.  Success is measured by meeting the needs of the diverse learners in our school population.  What does this look like?  Students are meeting their full potential, and through adaptations and modifications students can meet curriculum outcomes.  If, for any reason, students are unable to meet the curriculum outcomes, the DLT program will still allow them to reach their full potential.  The second assumption is that all stakeholders are committed to the principles of Universal Design for Learning and to what that looks like and means for our students.

External factors: Factors affecting the success of the DLT program include time, place, and the level of involvement of classroom teachers.  The involvement of other students, language, length of time in Canada, and attitude will also be contributing factors for success.

Inputs: There are several inputs to the DLT program.  Most important are the DLT’s themselves.  They play a key role in the success of the program and, as you can see from the logic model above, influence every other aspect of the model.  They are guided by the principles of Universal Design for Learning.  Supporting them in their school-based roles are administration, DLT consultants at the district level, and fellow DLT’s at various locations in the district.  Other inputs to the DLT program include classroom teachers, students, outside agencies, as well as guiding documents.

Outputs –> Activities: The activities can be summarized into 3 key areas.

1) Working with students – direct interactions between DLTs and students.  This can include one-to-one support, small group support, or large group support.

2) Working FOR students – this is an administrative role covering assessment, observation, and intervention.  This can include recommendations for special placements and modified/adapted programming.

3) Working with teachers – This role supports teachers in their day-to-day practice and to help them design best-practices related to UDL.

Outputs –> Participation:  From the activities, participation includes:

-DLT team meetings.

-Individualized Program Plan (IPP) Workshops.

-Completed forms for Alberta Education and for District records.


The primary goal of the DLT program is to improve student academic achievement, with a secondary goal of improved overall student well-being.  All short- and medium-term outcomes lead to these end-goals.

Medium-term outcomes involve outside agencies (such as Inter-cultural Wellness Workers, Occupational Therapists, and Physiotherapists), formalized learning plans (IPP’s/SAP’s), and improvements in teacher best practice (technology integration and professional development in UDL).

Short-term impacts are closely related to the participation of stakeholders.  These include classroom visits, filling out the appropriate paperwork, developing student action plans, and working with teachers on exemplars and examples of best practice in the classroom.

Assignment 3 – Program Planning Worksheet

Program to be Evaluated: The Diverse Learning Teacher Program

Engage Stakeholders

Who should be involved?

Primary Stakeholders

  • School Administration team (Principal, Vice Principal, and Assistant Principal)
  • The Diverse Learning Teachers (DLT)
  • Classroom teachers
  • Teacher Assistants
  • Students directly affected by the DLT program (those on Individualized Program Plans, S.A.P.’s, etc)
  • Parents directly affected by the DLT program (those who have day-to-day interactions with DLT’s)

Secondary Stakeholders

  • Non-classroom teaching staff (option teachers, counselor, Career and Technology Studies teachers)
  • Students not directly affected by the DLT program
  • Parents not directly affected by the DLT program
  • DLT Consultants (Central Office)
  • DLT’s at other schools

How might they be engaged?

The primary and secondary stakeholders will be engaged through a series of surveys, interviews, and consultations.  The data from these assessment tools will be aggregated into a final recommendation report.  Some of the primary stakeholders will also participate in the creation of the surveys and in data collection.  Teaching staff will be asked for anecdotal feedback on the DLT program, and students will provide feedback on their experiences in the DLT program.  DLT Consultants may be asked to provide information on the District’s vision of how DLT’s are to operate at their schools and work-sites.  DLT’s operating at other work-sites will provide information on how the DLT program works at their school.

Focus the Evaluation

What are you going to evaluate?  Describe the program (logic model).

The program being evaluated is the Diverse Learning Teacher program that was introduced to our school in September 2011.  The Calgary Catholic School District recently decided to adopt a path that includes Universal Design for Learning (UDL) and the Alberta Education document “Making a Difference: Meeting Diverse Learning Needs with Differentiated Instruction”.  The DLT program replaced the former model of the Resource Teacher, the English as a Second Language teacher, and any other ‘resource’ teacher with a single, unified position – the Diverse Learning Teacher.  Replacing the former model with the new model required individual schools to restructure the way support services were delivered to teachers and students.  Each school chose to model the DLT program differently – keeping their unique situations in mind.  In our school, the changes were far-reaching.  There was a change in physical space (where the DLT’s were housed), time-table changes, and most importantly, a change in “the way we do business”.  The DLT’s were given extensive professional development at the beginning of the school year.  One DLT is the DLT Coordinating Teacher (DLT-CT) and is required to attend a matrix meeting once in a 6-day cycle.  Information gathered at these meetings is shared amongst the other DLT’s at the school.

One far-reaching change from the previous year is that the barriers between Resource Teachers and English as a Second Language (ESL) teachers have been removed.  The DLT’s are now in a position to support ALL students, regardless of their needs.  These staff members also support the classroom teacher to deliver quality programs that adopt the philosophy of Universal Design for Learning (UDL).  The end-goal is to meet the unique requirements of every student – not just those who have been identified as ‘ESL’ or coded.

What is the purpose of the evaluation?

The purpose of this evaluation is to gather feedback on current practice – the ‘what is going on right now’ at the school.  The evaluation will gather qualitative and quantitative data and make recommendations on what is working best, what is not working, and what is being done that can be done better.

Who will use the evaluation?  How will they use it?

The evaluation will be used by the Administration Team and the DLT team at the school to make revisions and changes to the program for the 2012-2013 school year.  It will be used to improve DLT service provided to students, classroom teachers, and parents.

1. Administration – Administration will use the report to make staffing and timetabling decisions.  They will also use the information to make decisions regarding office space and student/teacher workspace, and will have input as to how the overall program is delivered at our local school.

2. DLT – DLT’s will use the recommendations to keep or change current practices.  This information will help them in their day-to-day interactions with teachers, in setting up Teacher Assistant schedules, in student and classroom timetabling, and in streamlining their work to make it more efficient.

3. The DLT Consultants and the District – This group will use the information in the report to see how things are working at our school – to gather a snapshot of information.  The results in the report may be shared with other schools via DLT consultants to make improvements and enhancements both at the local and district level.

What questions will the evaluation seek to answer?

1. Is the current set-up and DLT program working?

2. Are the DLT’s properly scheduled?

3. Do the DLT’s feel they have the proper resources to do their jobs as expected?

4. What improvements, if any, can be made to the existing model of DLT?

5. Do students feel supported in their learning?

6. Do parents feel their children (especially those with special needs or ESL students) are supported by the school?

7.  What recommendations can be made for next year?

What information do you need to answer the questions?

  • District
    • Initiatives mandated from the District
    • Directives coming down from central office
    • Best-practices and current research in UDL
    • What other DLT models are operating at schools throughout the district
    • Staffing (the number of DLT’s assigned to our school)
  • School Administration
    • Scheduling/Time-tabling
    • Space allocations
    • Support staffing allocations
    • Their vision and understanding of the DLT program
    • Supports in place for the classroom teacher and professional development in UDL
  • DLT
    • Current time-tables
    • A list of current duties
    • Minutes from meetings
    • Schedules
    • Anecdotal notes
    • Student records (SRT’s, IPP’s, SAP’s)
  • Teachers
    • Support schedules
    • Anecdotal notes
    • Teacher plans
    • An assessment of technology skills
  • Parents
    • Informal discussions
    • Feedback forms (parent forms from student program plans)
  • Students
    • Informal discussion
    • Student records
    • Anecdotal notes from teachers and DLT’s

When is the evaluation needed?

The evaluation needs to be completed (including a recommendation report) by the end of April.  Administration and DLT’s will need the information so that proper planning, staffing, and accommodations can be made for the 2012-2013 school year.

What evaluation design will you use?

The evaluation will be formative.  This is the first year of the DLT program and it is scheduled to continue for the foreseeable future.  As this is the rookie year for the program, a proper evaluation will help guide the DLT program for the 2012-2013 school year.

The model that I will use is Stufflebeam’s CIPP model.  Because the program is ongoing and continues into the next school year, it needs to be evaluated and recommendations need to be made.  The following questions will be used to guide the evaluation.

“What needs to be done?”

“How should it be done?”

“Is it being done?”

“Did it succeed?”

Collect the Information

What sources of information will you use?

  • Existing Information
    • Previous Resource/ESL models
    • Current DLT models at other schools
    • Resources put out by Alberta Education on UDL and Differentiated Instruction (DI).
  • People
    • DLT’s
    • Administration
    • Teachers
    • Students
    • Parents
  • Pictorial records and Observations
    • Observations of DLT and student interactions
    • Sit in on meetings between DLTs and administration
    • Possibly attend a Day 6 Matrix meeting of DLT Coordinating Teachers (DLT-CTs)
    • Observations of DLT / Teacher planning.

What data collection methods will you use?

  • Surveys
  • Interviews (both informal and formal, video-recorded)
  • Observations
  • Photos
  • Document Review
  • Journal/Reflection

Instrumentation: What is needed to record the information?


  1. Staff surveys
    1. Teachers
    2. Support staff
  2. Student survey (general student population)
  3. Parent survey (parents of students directly impacted by the DLT program).

Interviews with:

  1. Administration
  2. DLT’s
  3. Students directly impacted by the DLT program

Observations/Photos of:

  1. Student performance
  2. Classroom operations
  3. Teachers teaching
  4. Teacher/DLT interactions
  5. Student/DLT interactions

Document Review:

  1. Individualized Program Plans (IPPs)
  2. English as a Second Language folders (SAP’s)
  3. Student Resource Team meeting notes (SRT’s) and SRT forms
  4. District job descriptions for DLTs and DLT-CT’s.


Journal entries:

  1. DLT entries
  2. Student entries
  3. Teacher entries
  4. Administration entries

When will you collect data for each method you’ve chosen?

  • Surveys
    • Mid-March to end of March
  • Interviews
    • Mid-March
  • Observations
    • Ongoing – beginning of March to end of March
  • Document Review
    • Beginning of March to mid-March
  • Journal Entries/Reflections
    • End of March to beginning of April

Will a sample be used?

A sample will not be used.  The program is systemic and district-wide, so it would be impossible to find a control group.  The assessment tools will be constructed to evaluate the entire population they are suited for (e.g., the student survey will be made available to all students in the school).

Pilot testing: when, where, how?

There will be no pilot testing.  All assessments will take place in the general population.  The assessment tools will be designed specifically for our DLT program; however, certain items (such as survey questions or interview questions) can be dropped so that the tools can be used by other schools to evaluate their DLT programs.

Analyze and Interpret

How will the data be analyzed?

  • Surveys  – Results will be tabulated.  Data will be synthesized into graphs and examined for trends.
  • Interviews – Interview responses will be collated into a report.  Trends will be monitored.
  • Observations – Notes will be taken during observations.  This data will be collated into a report and cross-referenced with interviews.  Trends will be monitored.
  • Document review – Documents will be organized, read, and summarized.  These documents will be compared and contrasted with the interview and observation results to find commonalities and discrepancies.
  • Journal Entries – the evaluator will review entries and reflections. The evaluator will look for commonalities and discrepancies between the journal entries, observations, and survey data.  Data retrieved from the Journal entries will help guide the evaluator in making recommendations for future planning.
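To make the survey-tabulation step above concrete, here is a minimal sketch in Python. The response data and option labels are hypothetical, invented for illustration; they are not drawn from the actual survey:

```python
from collections import Counter

# Hypothetical Likert-style responses to a single survey question.
responses = [
    "Very connected", "Somewhat connected", "Somewhat connected",
    "Not connected", "Very connected", "Somewhat connected",
]

def tabulate(responses):
    """Count each response option and compute its percentage of the total."""
    counts = Counter(responses)
    total = len(responses)
    return {option: (n, round(100 * n / total, 1))
            for option, n in counts.items()}

for option, (n, pct) in sorted(tabulate(responses).items()):
    print(f"{option}: {n} ({pct}%)")
```

A simple count-and-percentage table like this is the raw material for the graphs and trend comparisons described above, and the same tabulation can be repeated per question to compare responses across the survey.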

Who is responsible to analyze the data? 

Because of the sensitive and far-reaching nature of this program, only the evaluator will analyze the data and make recommendations.

How will the information be interpreted?

The information will be interpreted by the evaluator.  Other program evaluators may be asked to help review the data and the final report.


Use the Information

How will the evaluation be communicated and shared?

The information will be presented in two phases.  Phase 1 will present the information to the DLTs and Administration.  The report will be presented in a private room to these two groups in late April (when the report is complete).  The presentation will display data from the surveys.  Results from individual responses will be presented as quotations.

Phase 2 will involve presenting the results (data and anecdotal notes) to the general teaching staff.  This will be done at a staff meeting (end of April to mid-May).


Manage the Evaluation

Human Subject Protection

All data is being collected by district staff, and the evaluation is being conducted for district staff.  Since all information is being kept internal, no release forms will be necessary.  Should there be a need to release the information beyond the confines of the Calgary Catholic School District, release forms will be completed.


Timeline

  • February – Complete the Planning Program Evaluation Worksheet
  • Late-February – Develop Surveys and Interview questions
  • Early-March – Administer surveys, conduct interviews
  • Late-March – Conclude and tabulate surveys, synthesize interviews
  • Late-March – Early-April – Complete report
  • Early-April – Share evaluation.


Responsibilities

Data collection, analysis, and the final report are the responsibility of the evaluator.  Student surveys may be administered by homeroom teachers, and staff and parent surveys can be administered by school administration.


Budget

As this is a school-based evaluation being completed by school personnel, no budget is being provided for the evaluation.




This program evaluation can have a far-reaching impact.  The DLT program is new in our district this year, and seeing “What we’re doing right” and “What we can do better” can provide strong guidance to the staff at my school and have a meaningful impact on our students’ academics and well-being.


This is a large undertaking and may require more than one evaluator.  As further materials are developed (surveys and interview questions), it may become evident that the evaluation needs to take a narrower focus.


All materials developed, the evaluation, and the results will remain the property of the Calgary Catholic School District.


Assessment materials will be looked over not only by the evaluator but also by a fellow program evaluator for a “sober second thought”.  These reviews will be completed before the assessment materials are administered to staff.  This will help minimize any evaluator bias.

Stufflebeam’s CIPP Model

The model that I would choose to evaluate the Prenatal Exercise Program for Aboriginal Women is Stufflebeam’s CIPP (Context, Input, Process, Product) model.  There are several reasons for choosing this method.

Context – “What needs to be done?”

The context of this study is the pre- and post-natal health of Aboriginal women in Saskatoon.  The study was funded by the National Health Research and Development Program and is not-for-profit.  The goal of the program is two-fold: to educate Aboriginal women about Gestational Diabetes Mellitus (GDM) and to improve the physical health of these women.  The authors considered this a worthwhile program, as the Aboriginal population is growing across Canada and many Canadian cities would benefit from similar programming.

Stufflebeam’s CIPP model of evaluation lends itself well to non-profit organizations, community development, community-based youth programs, and community foundations (Stufflebeam, 2007).  This program fits the above criteria quite well.

The main question behind context is “What needs to be done?”.  The answer: the education and well-being of Aboriginal women are the goals of this program.  Having a clearly defined outcome is one reason for choosing Stufflebeam’s model.

Input – “How should it be done?”

“How should it be done?”  This is the question to be asked when evaluating the ‘Input’ of the project.  This program has clearly defined steps and stages.

  • The program was conducted in Saskatchewan’s largest urban centre.
  • The program was extended beyond the existing Aboriginal GDM population to all pregnant Aboriginal women.
  • The program reached 7% of the eligible population.
  • The program was offered free of charge.

Process – “Is it being done?”

The Klomp, Dyck, & Sheppard article provides a potential evaluator with strong, clear evidence of the path of the project.  They provide data on:

  • Cost for the user – The program is free of charge.
  • Date and time of the program – The program is on Wednesday afternoons.
  • Evidence that their program ideas are rooted in research – The program is based on guidelines from the American College of Obstetricians and Gynecologists.
  • Specific descriptions of and data on a typical session (warm-up, aerobic activity, cool-down) are provided.
  • Data and information are provided on the personalization of the program – self-monitoring and self-pacing.
  • The look and cadence of a typical class is explained – types and variety of exercises available (walking, water aerobics, select machines, etc).
  • Extra services provided to the client are described – Pre-natal consultations, pamphlets, access to other healthcare professionals.

Product – “Did it succeed?”

Although the article does not offer a concise summary, there is evidence throughout the article which would permit the evaluator to answer this question and evaluate the “end-product” of this program.

Here is some of the information I found that helps to answer the question “Did it succeed?”

  • The authors cite that, due to popular demand, water aerobics classes were moved to every other week.  This is evidence of strong demand, one criterion for evaluating the program’s success.
  • Participants in the program started to bring a friend (for drop in and moral support).
  • The program allowed for a flexible drop-in format to meet the needs of its participants.
  • Women were encouraged to attend after child-birth.
  • Participants were telephoned a day before each session as a reminder.

These four questions, along with Stufflebeam’s CIPP Evaluation Model Checklist, are intended to evaluate the longevity of a program.  This is a program that is intended to have long-lasting effects on its participants and the broader community.  The GDM health initiatives go beyond pre- and post-natal health to the general health and well-being of Aboriginal women.  As stated earlier in this piece, the Aboriginal population is growing.  Since this was a two-year project (1995-1997), the data produced from an evaluation would be summative.  This end-data would be useful for future programs in Saskatoon and for other Canadian cities wishing to develop similar programs.

Although I found Stufflebeam’s CIPP model the best fit for a program evaluation, there are some aspects of this program which may not be best suited for this model.  Information regarding the cost and sustainability of funding was not addressed in this report, which is one of the areas where a CIPP model focuses its evaluation.  Also, the CIPP model questions the needs of the end-user and whether those needs have been met.  The report includes some facts and observations, but no anecdotal evidence (interviews or personal accounts from clients).  Stufflebeam’s evaluation method also looks into the “lessons learned” from the program.  Although strong data was provided on the process of the program, there was very little concluding data or evidence.  This would make a proper CIPP evaluation difficult to complete.

The main goal of a CIPP evaluation model is “not to prove, but to improve” (Stufflebeam, 2007).  An evaluation needs to consider a program’s merit, its worth, its probity, and lessons learned.  Compared to other evaluation methods that I investigated, such as Stake’s Countenance model and Rippey’s Transactional model, Stufflebeam’s CIPP model is the most appropriate based on the data provided in this report.  The report showed the merit of the program (by providing data and a vision of how the project wanted to impact the lives of Aboriginal women), the worth of the program (by providing evidence that the women involved were able to make small improvements and better choices in their lives, and that these women gained access to health care professionals and resource supports), the probity of the program (by showing that what the program was doing was rooted in best practice and research), and the lessons learned (based on evidence provided in the body of the report, even though a complete conclusive summary was not provided).

Klomp, J., Dyck, R., & Sheppard, S. (2003). Description and evaluation of a prenatal exercise program for urban Aboriginal women. Canadian Journal of Diabetes, 27, 231-238.

Stufflebeam, D. (2007). CIPP Evaluation Model Checklist (Second Edition): A tool for applying the CIPP Model to assess long-term enterprises. Retrieved January 17, 2012 from:

Program Evaluation Review

Here we are in 2012 and I have a new batch of courses that I’m taking.  I just finished Instructional Design and Designing for Distance Education.  My next two courses are Program Evaluation and Advanced Instructional Design.  Completion of these two courses will put me over the halfway mark in my Master’s program.  That’s hard to believe!  This first blog entry (in a LONG time) is a program evaluation review.  Over the coming months, I’ll be posting assignments here on my blog.  I encourage comments!


Review of “The Brandon Middle Years Pilot Project Program Evaluation” (BMYPPPE)

The BMYPPPE was completed by David Patton, Ph.D. and Jackie Lemaire, B.Sc. in August of 2002.  It is an evaluation of a pilot program taking place in two Brandon junior high schools whose goal was to increase grade 7 and 8 students’ knowledge of alcohol and drug use, and to see if increased knowledge would change students’ decision-making skills.  I found the report after a Google search of “middle years” and “program evaluation”.   Since middle years are my area of interest, I wanted to find a report that dealt with that target group.

What I liked about the Report:

The report was well organized and properly formatted.  It was official and professional.  It followed the general program evaluation format of the title page, table of contents, introduction, research results, summary, and appendices.  There were no grammatical errors and the findings were reported at an understandable language level.

Findings were presented in graphs.  The findings for each evaluated criterion were presented as text first, with a graph to unify the data.  The graphs were clear of any superfluous data.  They were well sized (they did not take up an entire page) and communicated the findings clearly.

The data was analyzed and compartmentalized according to specific criteria.  The program evaluation took into account the differences in:

  • The two middle schools observed
  • Gender differences
  • Differences between Grade 7 and Grade 8 students
  • Differences (in use and opinion) between alcohol and drug use


I liked how the program was evaluated using different questioning techniques.

  • Questions on access to drugs and alcohol were asked using a Likert scale.
  • Questions on frequency of drug and alcohol use were asked using a Likert scale.
  • Students were also asked short-answer and open-ended questions when it came to personal feelings.  I like this technique as it allows the respondent to speak as an individual and there are no ‘guiding’ questions or prompts.
  • Questions regarding specific knowledge from the Pilot Project (such as the addictiveness of cigarettes, or the legality of certain drugs) were asked using True/False.  I liked how they also had a third option of “I don’t know” to allow students to be honest about their knowledge.  This is something that I will take with me when I conduct my own program evaluation.

Here is what I did not like about the report:

I found that the report did not address ethnic diversity – the cultural or socio-economic backgrounds of the students.  This could have been addressed in the introduction, where a ‘snapshot’ of the two schools involved in the survey could have been given.  I was left asking questions such as, “Have more schools taken part in this pilot project?”, “Why were these two schools chosen?”, “Is there an existing drug and alcohol problem which this pilot project was designed to address?”, and “Is there a senior high follow-up to this pilot project?”.

Although I like the use of the Likert scale, I was left wondering if questions regarding attitudes (and changing attitudes) would have been better addressed in a short-answer format.  This led me to wonder if there was room in the evaluation for some pre- and post-interviews with students.  This may not have been possible because of the sensitive nature of the subject matter and the possibility of inauthentic responses in an in-person interview at this age.


Here is a link to the Program Evaluation