
Stufflebeam’s CIPP Model

The model that I would choose to evaluate the Prenatal Exercise Program for Aboriginal Women is Stufflebeam’s CIPP (Context, Input, Process, Product) model.  There are several reasons for choosing this model.

Context – “What needs to be done?”

The context of this study is the pre- and postnatal health of Aboriginal women in Saskatoon.  The study was funded by the National Health Research and Development Program and is not-for-profit.  The goal of the program is two-fold: to educate Aboriginal women about Gestational Diabetes Mellitus (GDM) and to improve the physical health of these women.  The authors considered this a worthwhile program because the Aboriginal population is growing across Canada, and many Canadian cities would benefit from similar programming.

Stufflebeam’s CIPP model of evaluation lends itself well to non-profit organizations, community development, community-based youth programs and community foundations (Stufflebeam, 2007).  This program fits those criteria quite well.

The main question behind context is “What needs to be done?”.  The answer here is clear: the education and well-being of Aboriginal women.  Having a clearly defined outcome is one reason for choosing Stufflebeam’s model.

Input – “How should it be done?”

“How should it be done?”  This is the question to be asked when evaluating the ‘Input’ of the project.  This program has clearly defined steps and stages.

  • The program was conducted in Saskatchewan’s largest urban centre.
  • The program was extended beyond the existing Aboriginal GDM population to all pregnant Aboriginal women.
  • The program reached 7% of the eligible population.
  • The program was offered free of charge.

Process – “Is it being done?”

The Klomp, Dyck, & Sheppard article provides a potential evaluator with strong, clear evidence of the path of the project.  They provide data on:

  • Cost for the user – The program is free of charge.
  • Date and time of the program – The program is on Wednesday afternoons.
  • Evidence that their program ideas are rooted in research – The program is based on guidelines from the American College of Obstetricians and Gynecologists.
  • Descriptions and data on a typical session – warm-up, aerobic activity, cool down.
  • The personalization of the program – self-monitoring and self-pacing.
  • The look and cadence of a typical class – the types and variety of exercises available (walking, water aerobics, select machines, etc.).
  • Extra services provided to the client are described – Pre-natal consultations, pamphlets, access to other healthcare professionals.

Product – “Did it succeed?”

Although the article does not offer a concise summary, there is evidence throughout the article which would permit the evaluator to answer this question and evaluate the “end-product” of this program.

Here is some of the information I found that helps to answer the question “Did it succeed?”

  • The authors cite that, due to popular demand, water aerobics classes were moved to every other week.  This demand is itself evidence of the program’s appeal.
  • Participants in the program started to bring a friend (for drop in and moral support).
  • The program allowed for a flexible drop-in format to meet the needs of its participants.
  • Women were encouraged to attend after child-birth.
  • Participants were telephoned a day before each session as a reminder.

These four questions, along with Stufflebeam’s CIPP Evaluation Model Checklist, are intended to evaluate the longevity of a program.  This is a program intended to have long-lasting effects on its participants and the broader community.  The GDM health initiatives go beyond pre- and postnatal health to the general health and well-being of Aboriginal women.  As stated earlier in this piece, the Aboriginal population is growing.  Since this was a two-year project (1995-1997), the data produced from an evaluation would be summative.  This end-data would be useful for future programs in Saskatoon, and for other Canadian cities wishing to develop similar programs.

Although I found Stufflebeam’s CIPP model the best fit for a program evaluation, some aspects of this program may not be well suited to the model.  Information regarding the cost and sustainability of funding was not addressed in this report, and that is one of the areas where a CIPP evaluation focuses.  The CIPP model also questions the needs of the end-user and whether those needs have been met.  The report includes some facts and observations, but no anecdotal evidence (such as interviews with, or personal accounts from, clients).  Stufflebeam’s evaluation method also looks into the “lessons learned” from the program.  Although strong data was provided on the process of the program, there was very little concluding data or evidence.  This would make a proper CIPP evaluation difficult to complete.

The main goal of a CIPP evaluation is “not to prove, but to improve” (Stufflebeam, 2007).  An evaluation needs to consider a program’s merit, its worth, its probity, and lessons learned.  Compared to other evaluation methods I investigated, such as Stake’s Countenance Model and Rippey’s Transactional Model, Stufflebeam’s CIPP model is the most appropriate based on the data provided in this report.  The report showed the merit of the program (by providing data and a vision of how the project wanted to impact the lives of Aboriginal women), the worth of the program (by providing evidence that the women involved were able to make small improvements and better choices in their lives, and that they gained access to health care professionals and resource supports), the probity of the program (by showing that what the program was doing was rooted in best practice and research), and that there were lessons learned (based on evidence provided in the body of the report, even though a conclusive summary was not provided).

Klomp, J., Dyck, R., & Sheppard, S. (2003). Description and evaluation of a prenatal exercise program for urban Aboriginal women. Canadian Journal of Diabetes, 27, 231-238.

Stufflebeam, D. (2007). CIPP Evaluation Model Checklist (2nd ed.): A tool for applying the CIPP Model to assess long-term enterprise. Retrieved January 17, 2012 from:

The Generation Gap


One of the assignments for ETAD 874 (Advanced Instructional Design) is to find and review articles that relate to the overall design project for the course.  Let me give you some background on the design project.  The project that the 12 of us are working on is a redesign and re-organization of the Saskatchewan History and Folklore Society.  The class is divided into two teams.  One team is working on the redesign of their website, whilst the other group is working on the organization and distribution of the Baker Slides (a historical image collection).  I am in the second group.

During our first project team meeting, we discussed the initial needs assessment of the client and who their target audience is.  From the information provided to us, we discovered that it is mostly seniors who access the website and image database.  This discovery put us at a crossroads.  Do we revamp the website for the ‘next generation’, do we refresh what is already in place, or is there a way to accomplish both – the win-win situation?

This discussion led me to an article on instructional design and the generation gap. Although the article is directed towards higher education, there are strong ties to the website redesign and the impact it has on the age of its users.

The article was taken from the Canadian Journal of Learning and Technology and is titled “Digital Learners in Higher Education: Generation is Not the Issue”.  It was written by Mark Bullen – British Columbia Institute of Technology, Tannis Morgan – Justice Institute of British Columbia, and Adnan Qayyum – University of Ottawa.

The article is a reaction to the popular belief amongst many designers that instructional design and technology use is generation-dependent.  The authors make a bold statement in the abstract: “A comprehensive review of the research and popular literature on the topic and an empirical study at one postsecondary institution in Canada suggest there are no meaningful generational differences in how learners say they use ICTs (information and communication technologies) or their perceived behavioural characteristics.” (Bullen, Morgan, & Qayyum, 2011)

The authors spend the first few pages of their article arguing against the common belief that the net generation (which they define as people born in 1982 or later) is better suited to ICTs in their instruction.  The authors argue that ICT use is not generation dependent; it is USE dependent.  I will discuss this issue more later in this post.  Bullen, Morgan and Qayyum found many studies that support this widely held belief; however, they had some issues with the research.  In many cases, research was conducted on students (of multiple ages) who were already enrolled in technology programs.  They also found a study (Oblinger and Oblinger, 2005) which legitimized the unique learning patterns of different generations, but the report was based mostly on speculation and anecdotal research (p. 5).

Bullen, Morgan & Qayyum decided to author their own study, which looked at two groups: the net generation (born 1982 and later) and the non-net generation (born prior to 1982).  The study was conducted at two British Columbia postsecondary institutions and took place in two parts: interviews and an empirical data survey.  The study aimed to answer these three questions:

“How accurate are some of the more prevalent claims about net generation learners? Do the students at this Canadian postsecondary institution fit the typical profile of the net generation learner? How are the learners at this institution using various information and communication technologies (ICT)?” (p. 6)

Here is a summary of their findings:

1) Contrary to the popular belief that e-mail is for the ‘older generation’, the study found that both groups use e-mail because of its formality and to maintain a certain distance from their professor.  E-mail was also a useful tool when communicating with a group, sending long messages, and sharing files.

2) When asked what factors could improve their learning, both sets of students deferred answers on ICTs and cited physical factors such as better lighting, better lab and library hours, more windows, and better internet access.

3) There were no significant differences between the generations in computer use, their desire to explore learning, their preference for clear instructions before trying something new, or goal setting.

4) There were no significant pattern differences in personal or institutional e-mail use.  For both groups, the most preferred method of communication with peers was in person.  In fact, both groups had similar rates of in-person versus ICT communication with peers.

5) The study found that the net generation was more inclined to use web tools (instant messaging, Facebook, etc.) to communicate with peers, but this was not the case when communicating with the instructor of a course.

The authors argue that the use of ICTs is not driven by generation, but by the context of the course materials.  My interpretation of their argument is that the use of web tools for learning is driven by context and the need for the technology, not by perceived age needs (which brings me back to a previous blog post I did on Technology for Technology’s Sake).  They say, “…we need to avoid the temptation to base our decisions on generational stereotypes and instead seek a deeper understanding of how students are using technology and what role it plays in learning and teaching in higher education.” (p. 17)

Bullen, Morgan & Qayyum feel that the report has two main findings: instructional design decisions should not be based on generation and age alone, and when institutions fund future ICT investments, they should avoid blanket programming (such as system-wide web tool licences), look at the specific needs of individual programs, and fund those needs appropriately.  They say that institutions should avoid making campus-wide ICT decisions, as they may not be appropriate for all programs (p. 18).

Although this article dealt with higher education, the idea of generation has been discussed in our round-table group meetings more than once and has become a “hot button” issue for us.  It is something that, as a group, we will need to discuss further before proceeding with our design.  I hope that this article review sheds some light on how different generations interact with technology.  One of my guiding design principles is that good design will transcend the target audience.  I plan on bringing this idea forward in our design team meetings.



Bullen, M., Morgan, T., & Qayyum, A. (2011). Digital Learners in Higher Education: Generation is Not the Issue. Canadian Journal of Learning and Technology, Spring 2011. Retrieved from:



Program Evaluation Review

Here we are in 2012 and I have a new batch of courses that I’m taking.  I just finished Instructional Design and Designing for Distance Education.  My next two courses are Program Evaluation and Advanced Instructional Design.  Completion of these two courses will put me over the halfway mark in my Master’s program.  That’s hard to believe!  This first blog entry (in a LONG time) is a program evaluation review.  Over the coming months, I’ll be posting assignments here on my blog.  I encourage comments!


Review of “The Brandon Middle Years Pilot Project Program Evaluation” (BMYPPPE)

The BMYPPPE was completed by David Patton, Ph.D., and Jackie Lemaire, B.Sc., in August of 2002.  It is an evaluation of a pilot program taking place in two Brandon junior high schools whose goal was to increase grade 7 and 8 students’ knowledge of alcohol and drug use, and to see if increased knowledge would change students’ decision-making skills.  I found the report after a Google search of “middle years” and “program evaluation”.  Since middle years is my area of interest, I wanted to find a report that dealt with that target group.

What I liked about the Report:

The report was well organized and properly formatted.  It was official and professional.  It followed the general program evaluation format of the title page, table of contents, introduction, research results, summary, and appendices.  There were no grammatical errors and the findings were reported at an understandable language level.

Findings were presented in graphs.  The findings for each evaluated criterion were presented as text first, with a graph to unify the data.  The graphs were clear of any superfluous data.  They were well sized (they did not take up an entire page) and communicated the findings clearly.

The data was analyzed and compartmentalized according to specific criteria.  The program evaluation took into account the differences in:

  • The two middle schools observed
  • Gender differences
  • Differences between Grade 7 and Grade 8 students
  • Differences (in use and opinion) between alcohol and drug use


I liked how the program was evaluated using different question techniques.

  • Questions on access to drugs and alcohol were asked using a Likert scale.
  • Questions on frequency of drug and alcohol use were asked using a Likert scale.
  • Students were also asked short-answer and open-ended questions about personal feelings.  I like this technique as it allows the respondent to speak as an individual, and there are no ‘guiding’ questions or prompts.
  • Questions about specific knowledge from the Pilot Project (such as the addictiveness of cigarettes, or the legality of certain drugs) were asked using True/False.  I liked how they also had a third option of “I don’t know” to allow students to be honest about their knowledge.  This is something that I will take with me when I conduct my own program evaluation.
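As a small illustration of why that third option matters, here is a toy tally of True/False/“I don’t know” responses.  The responses below are invented for the example; they are not taken from the report.

```python
from collections import Counter

# Hypothetical responses to a single True/False knowledge item (the real
# report's items and data are not reproduced here).
responses = ["True", "True", "I don't know", "False", "True", "I don't know"]

tally = Counter(responses)
known = tally["True"] + tally["False"]   # respondents willing to commit
dk = tally["I don't know"]               # honest uncertainty

# Without the third option, forced guesses would inflate the knowledge
# figures; with it, uncertainty can be reported separately.
print(f"Answered True: {tally['True']}/{len(responses)}")
print(f"Admitted uncertainty: {dk}/{len(responses)}")
```

With only two choices, the two “I don’t know” respondents would have been forced into guesses that look like knowledge (or its absence); tallying them separately keeps the measure honest.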

Here is what I did not like about the report:

I found that the report did not address ethnic diversity, cultural backgrounds, or the socio-economic backgrounds of the students.  This could have been addressed in the introduction, where a ‘snapshot’ of the two schools involved in the survey could have been given.  I was left asking questions such as, “Have more schools taken part in this pilot project?”, “Why were these two schools chosen?”, “Is there an existing drug and alcohol problem which this pilot project was designed to address?”, and “Is there a senior high follow-up to this pilot project?”.

Although I like the use of the Likert scale, I was left wondering if questions about attitudes (and changing attitudes) would have been better addressed in a short-answer format.  This led me to wonder whether there was room in the evaluation for some pre- and post-interviews with students.  This may not have been possible because of the sensitive nature of the subject matter and the possibility of inauthentic responses in an in-person interview at this age.


Here is a link to the Program Evaluation

A whole new way of Social Networking

(My lineage)

So I’ve done something that has taken social networking to the next level.

How would you feel about being ‘friended’ by someone you have lots in common with, but is a complete stranger? You’ve never met this person, but you are connected to them.

I’m talking about social networking based on your genes and DNA.

If you follow me on Twitter, you can probably recall that I sent off a sample of my saliva to have my DNA analyzed.  I sent it to a company called 23andMe.  They were first featured on Oprah a few years back and more recently on Anderson (Anderson Cooper’s new daytime talk show).  Their product is rooted in science.  Here’s how it works.

You sign up for an account and pay them money.

They send you a kit.

You spit into the kit, shake it, seal it, and send it off to their lab in California.

4-6 weeks later you get an analysis of your genome.

The results are completely online.  You receive an e-mail saying that your results are ready and to log in to find out.  The findings are grouped into two distinct areas, each with its own sub-groups.

1. Health (Disease Risks, Drug Interactions, Health Labs, to name a few).

2. Ancestry (maternal and paternal lineage, and global ancestry).

I knew that I would find out this information (as I had done my due diligence and research before signing up), but what happened next was a surprise to me.  When I logged in (one day after I had received the e-mail) I already had a request in my account to make a new connection… with a 5th cousin!  (It turns out that there are 247 other people on 23andMe to whom I am related, mostly 3rd to 6th cousins.)  This brings social networking to a whole new level.

I’ve never met this person, nor have I ever heard of this person.  Is there more of an obligation to accept this friend request and start sharing my DNA information with him?  They always say, “You can’t pick your family!”  It turns out that we’re 5th cousins, which means we share great-great-great-great-grandparents.  Whoa!  I’ve probably passed by people in the mall to whom I am more closely related.  What about privacy?  Just because we’re ‘family’, do I open up and share everything?

How long will it be before either 23andMe expands their social networking to be more like Facebook, or Facebook decides that 23andMe would make a good purchase?  Social networking has already made it easy for us to share the details of our lives.  It has also made us comfortable with it.  I know that I’ve personally become more ‘comfortable’ and feel ‘safer’ on the internet.  How long will it be until even more of us is shared with friends… family… strangers?


Technology for Technology’s Sake

I’ve been back at school for about two weeks now.  It’s been great to get back in the building.  It’s where I belong and it’s what I love to do.  Writing blog entries and starting my master’s degree in Educational Technology and Design have really opened my mind to thinking about things in different ways and from different angles.  It’s meant a whole new way of approaching and solving problems.

One idea that has stuck with me from my summer course is “What exactly is technology and how do we use it?”  Let me clarify.  One chat that happened over Twitter and in our course discussion thread was about whether hair colour is a technology.  If we look at technology as being cables, cords, wires and processors, then hair colour does not fit.  If we use a broader definition, that technology is a solution to a problem, then hair colour is technology.

So that brings me to this morning, when a colleague asked me to look through a PowerPoint presentation that he was working on to present to his class.  The topic was the Legislative and Judicial Branches of the Canadian Government (Social Studies 9).  His classroom is very well outfitted.  He has a tablet laptop, projector, sound system and a mounted SMART Board.  He began by apologizing to me for not using SMART Notebook software.  The apology came because in my school I have been working with staff on improving their SMART Board skills.  I told him that he didn’t need to apologize.  From my reading/thinking/processing over the summer, one of the truths that I’ve come to believe is that technology is most useful when it can meet a need.  But it’s not only meeting a need that’s important; it’s meeting a need in an efficient and useful manner.

This teacher has interactivity in the classroom.  His students are engaged in conversation, group work, group research, and collaborative learning.  PowerPoint has gotten a bad rap for being a ‘boring’ means of presentation.  There are many other tools one could use (SMART Notebook and Prezi come to mind). However,  is it the teacher or the technology that is going to drive this lesson?  His slides had humour, great infographics, and fantastic images.  There was very little text, which means that it was not PowerPoint that was going to drive the lesson, but a real-life teacher.

I’ve come to a conclusion that using technology for the sake of using technology is not always best practice.  Don’t we have times where reaching for a pad and pen is quicker than opening an application on a laptop to make a reminder or jot a phone number?  Does fancier/flashier software make for a better lesson or teacher?  Aren’t there times when using a whiteboard marker to solve an equation is easier than starting up a computer and using an interactive white board?  What about phoning parents vs e-mail?  Are there times when a phone conversation is more useful than getting into a misunderstood e-mail battle?

This blog entry may seem ‘anti-technology’, but that is far from the truth.  I love technology; I use it all the time in teaching and in my personal life.  I love exploring new web tools and applications and seeing how they can be used to improve the status quo.  My point is this: technology for technology’s sake is not always best practice.  It’s important that we choose the right tool for the right job and make sure that we use our ‘human’ talents and efficiencies in harmony with technology.


Removing the Barriers

In June 2010, Alberta Education released the document Action on Inclusion.  This was a metamorphosis of ‘Setting the Direction’, which took a look at special needs education in the province.  What the department found was that good teaching practice is good teaching practice.  If there is a modification, technology, or assistance that can be given to one student, can that not be best practice for all students?

The document identifies that some students will need long term support, but also that some students may only need short term support.  To help us through understanding Action on Inclusion, Kathy Howery from the University of Alberta came and gave a presentation to both the Elementary and Junior High AISI cohorts.   She is an inspiring person and is passionate about Universal Design for Learning (UDL).

Kathy said that many of the technologies we use today were born out of assistive technologies.  Think about predictive text on our mobile devices.  Predictive text was initially created as an assistive technology and has become commonplace.  What about sidewalk ramps?  They were initially created as access for wheelchairs but have gone much further.  People with baby carriages, wagons, and small children on bikes use them.  Think about closed captioning on television.  Initially created for the hearing impaired, closed captioning is now used in gyms, and has probably saved many marriages where there is a television in the bedroom.  Spell check is again something that was created for the margins but has been adopted by the mainstream.  She also makes the argument that some of these tools have become so commonplace that to remove them would seem unnatural, like removing someone’s eyeglasses.

UDL is research-based and rests on three principles.

1) The ‘What’ of Learning – present information in different ways (multiple means of representation).

2) The ‘How’ of Learning – differentiate the ways students express what they know.

3) The ‘Why’ of Learning – provide multiple means of engagement.

In her presentation Kathy used an example of a novel study for high school English.  I believe the novel was Huckleberry Finn.

So we have the traditional model: Teacher assigns novel –> Students read novel –> Students synthesize novel (formal assessment).

UDL would take a different spin on this.  Depending on the individual’s need, ‘reading’ the novel could mean listening to an audio book on an iPod, using a Coles Notes version, watching the movie, reading a synopsis on Wikipedia, or using 60secondrecap, just to name a few.  Then students may choose a different way to show their work: they could do a VoiceThread, make a video, a dramatization, a photo story, a concept map, a dance, a traditional report, or even a blog!

What UDL recognizes is that we as teachers need to take a proactive and inclusive approach to the diverse student population that is before us.  We cannot continue to teach to that elusive ‘middle ground’ one-size-fits all student, because they do not exist.  We need to create pathways and remove barriers to student learning.  Students need choice, options and flexibility in their schooling.

A real obstacle facing UDL (and Action on Inclusion) is curriculum, more specifically the current Program of Studies.  The current POS has a narrow focus and constructs barriers.  As a result of Action on Inclusion, the province of Alberta has embarked on the next phase, Action on Curriculum.  AOC seeks to completely re-think curriculum and its delivery in Alberta.  Action on Curriculum has the power to completely transform our current model and what we think about ‘school’.  Click this link and watch the short video.  I found it inspiring and forward-thinking; we may just do what’s best for kids yet!

Digital Agendas


This fall marks the beginning of the end of our paper agendas for Junior High Students in our district.  Our current agendas are probably similar to what others have: a student handbook at the beginning covering school policies, the September to June Calendars for students to write in,  and a reference section at the back with math formulae, periodic table,  the map of Canada, etc.

In September, a limited number of agendas will be available for purchase at the main office; they will not be handed out to every student on the first day of school.  These agendas will not include the section on school policies and procedures, which will instead be published on our school website and D2L homepage.  In 2012-2013, no paper agendas will be ordered.  Click here to link to my district’s press release.

How is this going to work?  D2L has the ability to take events from different courses and populate those items in a single homepage calendar for each student.  For example, if I make an event for a math project, I can put it in the course calendar, and it will be pushed down to the individual calendars of the students who are enrolled in the course.  D2L has a homepage widget called ‘Events’ which organizes what is coming up ‘Today’, ‘Tomorrow’, and ‘This week’.  This event box is front and centre when students log on to D2L.  The user’s individual calendar is a hybrid of course events pushed down from instructors and user-generated events.
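The fan-out idea described above — a course event copied into the personal calendar of every enrolled student, then filtered by an ‘Events’ widget — can be sketched roughly like this.  To be clear, this is not D2L’s actual API; every class and function name below is invented purely to illustrate the pattern.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Event:
    title: str
    due: date
    course: str

@dataclass
class Student:
    name: str
    calendar: list = field(default_factory=list)  # pushed + user-generated events

def push_course_event(event, enrolment):
    """Fan a course event out to the calendar of every enrolled student."""
    for student in enrolment.get(event.course, []):
        student.calendar.append(event)

def events_this_week(student, today):
    """A rough 'Events' widget: items due within the next seven days."""
    return [e for e in student.calendar if 0 <= (e.due - today).days < 7]

# Hypothetical data: one math project pushed to two enrolled students.
alice, bob = Student("Alice"), Student("Bob")
enrolment = {"MATH9": [alice, bob]}
push_course_event(Event("Math project", date(2012, 9, 5), "MATH9"), enrolment)
print([e.title for e in events_this_week(alice, date(2012, 9, 3))])
```

The instructor creates the event once, and each enrolled student’s calendar picks it up; the widget then filters each student’s hybrid calendar down to what is imminent.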

Moving to digital agendas has some positives and some areas of concern.


Positives:

1) Digital agendas are never lost.  Lost agendas have been a real problem.  D2L offers a “one stop shop” for events, content, and grades.

2) Digital agendas are easily accessible.  D2L offers a traditional computer log-in as well as a mobile portal.  Students can access their calendar on their mobile devices and personal (or school) computers.

3) Calendar events can be directly linked to course content.  For example, if a student has a science exam on Wednesday, that event can be linked to multiple sources of content: study sheets, YouTube videos, practice questions, etc.

4) Parents will have consistent access.  No longer will it be the scenario of “Show me your agenda”; parents will have the ability to log into their child’s account and track their child’s events.


Areas of concern:

1) With whom does responsibility lie?  With traditional agendas, it is up to the student to record appointments, so the majority of the responsibility lies with the student.  With digital agendas, it is the instructor who adds events to the calendar.  Does this take away responsibility for student time management?  Can it become a shared responsibility?  How do we teach and model time management?

2) Teachers must consistently use the calendar and events tools.  This relates to the above point.  Does a teacher creating the event take away from the student’s responsibility for his or her own learning?  That is a concern for my staff.  Also, what if a teacher is not “pulling their weight”?  What expectations are placed on teachers to input their events?

3) Digital agendas cannot be used as a behavioural tool.  In the past, paper agendas were used for parent communication, hall passes, bathroom monitoring, etc.  We will have to explore new ways of managing students, as well as using our LMS for parent communication.

4) Parents must be informed and guided through the transition.  School administration, my AISI cohort, and I will run a P.R. campaign to inform parents and help them transition to the digital agenda.

Have you been using digital agendas in your school?   I would love to hear your thoughts and feedback.


Gone But Not Forgotten

Three years ago I was digging through some dusty old photo albums at my grandparents’ house.  These old albums were windows to yesteryear: seeing my dad as a boy, and seeing my grandparents other than ‘old’.  One photo that struck me was of a beautiful Chinese woman in her mid-twenties.  She had medium-length thick black hair and fair features, and was dressed in a black dress with pearl earrings and a necklace.  I had no idea who this was, so I asked some questions.  It turns out it was my Bak-hoo (great-grandmother), who at the time was still alive and approaching her 100th birthday.

This past February she passed away at 100 years of age.  What an incredible life!  Can you imagine what you would have seen in 100 years?  Death always takes us to a place of reflection and introspection.  Being two generations removed, and with a language barrier, I never really got to know her.  Everything I do know was passed down from my grandparents or my dad.  I knew that she had courage, spunk, and sass.  I knew that she risked everything to leave mainland China to make a better life for her family, of which I am a part.  However, I never really got to know HER.  (Here is one of the few digital items left behind by my great-grandparents (Chu).)

I caught a TED Talk by Adam Ostrow.  The topic was “After Your Final Status Update”. His talk is what inspired me to write this post.  Living in the media age, are we going to live in a shroud of mystery? I don’t think so.  A complete life profile will take shape by collating our digital ‘bread crumbs’.  Our lives will not be a series of facts passed down through stories, but a rich history and chronology of our time on Earth.  Our posts/tweets/updates are our voice; they are more than factual.  They include our tone, our position, and our emotions.

I can only imagine what Chu Yek-Seen would have tweeted 70 years ago. What would I be able to piece together today from her life if social media had been around?

This has significant implications for our life today and digital citizenship.

1)   What we do and say today will be around long after we’re gone.  Social media is creating a digital time capsule of our lives.  Every time we create a new profile for a website, tweet, blog, or post, we are donating one more artifact to our digital archive.

2)   Digital personae are created before we arrive.  Parents are posting pictures of sonograms, names, and details of their children before they arrive.  I’ve even known someone who created a Facebook page for their unborn child (they did an excellent job, by the way).

3)   Last Will and Testament.  Beyond our estate, each of us should think about how we want our digital persona to live on after we have gone.  In saying this, we may not have a choice in the matter.  All we can control is how we would like our SM sites to be ‘shut down’ – the who, what, where, and when of that shutdown.  Do we leave a final ‘blog’, ‘tweet’, or ‘status update’ from the great beyond?  Complete digital representations of us will most likely be created by software from a synthesis of our digital artifacts.  Will we speak beyond the grave?  Will our avatars live beyond us?

When we teach and model Digital Citizenship to our students, it is important that they understand both the positive and negative aspects of the digital content they create.  I’m not saying that we need to discuss death and mortality with students, but it is important that they realize, good or bad, what they’re saying and doing on-line is creating an archive of their lives.  A colleague of mine has a poster up in his classroom that says “Your grandma just saw the picture you posted on-line”.  I’d like to flip that and say, “Your grand-kids are going to see that picture you posted on-line.”


Letting Go of the Reins

In my last post I wrote about Dan Pink and mentioned his three key points of engagement: Autonomy, Mastery, and Purpose.  I want to take a closer look at Autonomy.

Are we ready to let go of the reins?

For many teachers there is a perceived correlation between control over students’ learning and classroom management.  There are fears that if students are left to their own choices, mayhem will ensue.  Who amongst us hasn’t had that back-to-school dream of the class that just won’t listen? I know I have, many times.

How do we foster constructivist learning within the confines of modern curriculum?

I want to share with you my reflection on my Independent Music Project.

This assignment came about from my trying to fill that time in June in the post-festival and post-concert season.  I decided to do an independent music project.  In a nutshell, I told students that they were in charge.  They needed to seek out a musical selection they liked and perform it solo or in a small group.  They were also able to do their performance on an instrument of their choosing; it did not have to be their band instrument.  The sole stipulation was that whatever they chose, it had to be appropriate to a school setting (no foul language).

This assignment had a high degree of engagement.  One student chose not to do the assignment but the majority dove right in.  What was interesting to me was how much more the students got out of this assignment than I thought they would.

I had students choose to arrange their own music, which included transposition, theory, and music notation – all three of which we only scratch the surface of in class.  However, because there was a need for these skills, they had to teach themselves and seek out the information.  Some students decided to compose their own music.  This was exciting for me!  Composition is difficult (or maybe just difficult for me, and I’ve been viewing composition through my own lens).  I also had students seek out their own music – through the internet or through music stores – either way, they had to filter through a vast repertoire of music to find the right level for them to play as well as something that they would enjoy playing.  Students who decided to perform as a group also learned valuable time management and organizational skills – it’s hard to get a group of five people together.  Half of the performances were memorized, and memory was not even a requirement of the assignment.  All students also had to learn to deal with performance anxiety, which is something we all face as musicians.

I was inspired by the vast array of talent these students had that I had never seen.  I usually only see the students in one dimension – on their band instrument.  Who knew that the back-row flute player had such a beautiful voice?  That my lead trumpet player could play Metallica on the drums? That my saxophone player could play classical piano and was about to take her RCM exam?

I also found their sense of community and support for each other amazing.  There was little to no teasing, huge applause and support (even when there was a breakdown – breakdowns are inevitable), and an intense curiosity of “What are they going to do?  What’s coming next?”.  Their shared experience of the project brought them to a true place of empathetic support for each other.

I had been scared to let go of the reins, but when I did two things happened:

First, the class did not break into mayhem. Second, there was engaged learning happening that I was removed from.  Next steps are to see if I can find similar ways/methods/assignments to bring this to my other courses (most specifically my math class).  If you have thoughts or ideas, I’d love to hear them!


Rewards and Engagement

The ever-present struggle in teaching has been the idea of rewards – more specifically, the “If you do this, you’ll get that” mentality.  Rewards have been cast in every light from virtuous to villainous.  Is there a happy medium? Are rewards themselves ‘bad’, or is it what we’re rewarding that’s ‘bad’?  What we know to be real is that motivation and engagement, good or bad, are affected by rewards.

Brought to my attention by two of my classmates (I will link to their blogs at the end of this post) was a TED talk from Dan Pink.  Here is his TED video from TEDGlobal 2009.

To summarize, Pink’s position is that by offering rewards we stifle creativity and problem solving. Incentives play a strong role in simple procedural tasks, but when it comes to high-level thinking and problem solving, rewards hinder progress and limit views and pathways to solutions.  He speaks of autonomy, mastery, and purpose as being strong motivators – true engagement is a result of self-direction.  He presents the work model of ROWE (Results Only Work Environment), in which people are without schedules.  Work (objectives) has to be completed, but the how, when, and where is up to the employee.  Are we as teachers willing to give students the same affordances?  I believe there is no black and white answer to this.  Teachers will know which of their students this model works best for and which it may not.  There are several brain development issues at play in young people (especially in the middle years), and they may not have the self-discipline for complete autonomy.  Careful guiding and crafting may be needed.

I did some investigation and found another TED talk, from Tom Chatfield.  His TED Talk on “7 Ways Games Reward the Brain” offered some interesting evidence in support of rewards.  Here’s the video below.

Chatfield has this comment on rewards: “The very intense emotional rewards that playing games offers to people both individually and collectively… When people play games, they have the ‘wanting’ and ‘liking’ processes.”

Games measure what you do (collecting data points) and create a reward schedule. The reward schedule keeps players engaged and coming back over and over again.  Game software uses probability and data to maintain engagement.

Chatfield’s idea of measuring progress through ‘status bars’ and a ‘reward schedule’ is substantiated by Driscoll, who says, “Data can be kept on a learner’s path through the program – what information has been visited and how much time the learner has interacted with that information.  Such data can also show when learners have achieved certain benchmarks.”  What this means is that students are motivated by their own progress.  Success begets success.  If students track meaningful progress, and see that they are ‘getting somewhere’, they stay on the path and continue.

Both Dan Pink and Tom Chatfield have this common thread: the strongest learning (productivity) comes through peer-to-peer interactions.  Strong intrinsic motivation comes about when students see what other students are doing.  Who amongst us wants to be the one who is ‘behind’ or ‘left out’?  We have an intrinsic desire to “keep up” and “hold our own”.

So back to rewards.  On the one hand you have Pink saying that rewards (for the most part) stifle creativity and reduce engagement.  On the other you have Chatfield saying that there are strong ties to rewards from game play that can encourage productivity, creativity, and engagement.  Who is correct?

When you sit back and synthesize both ideas, you realize they both are!  One thing we need to appreciate about Chatfield’s idea of rewards is that it is not the outcome that is to be rewarded but the effort – that people should be credited for what they try to do.  It must also be made clear that the rewards he’s talking about are intrinsic rewards – levelling up, finishing the game, killing the monster, increasing status bars – and it is those intrinsic motivators that can be translated into teaching and learning.



Paul Webster’s blog (cpbw)

Barclay Batiuk’s blog

Tom Chatfield

Dan Pink