Monday, April 16, 2012

Reflection #1: Administration and Evaluation

So far I have been thinking tentatively about an evaluation plan for my e-learning initiative, though I haven't yet determined its exact scope. Part of this initiative is a new online course that I am developing within my university's educational technology program. The course, entitled “Educational Innovations,” focuses on school change and professional development, with a strong emphasis on using Web 2.0 tools and understanding how Web 2.0 can prompt teachers’ professional development. This course will probably comprise the focus of my evaluation plan. However, the course itself is situated within a larger program that will likely become increasingly online, with several of the core courses being offered in a fully online or hybrid format. In that sense, I am also interested in developing an evaluation plan with a broader scope that focuses on the whole program.
From the group activity we completed this past week, I became increasingly aware of the importance of beginning with a set of clearly identified goals and using those goals to give shape to the evaluation plan. Though this makes sense, I suspect that a common pitfall of many initiatives is that the evaluation is actually carried out ad hoc, without being clearly derived from the learning goals of the program. So I will need to give thought to the essential goals of my initiative, especially as they pertain to online teaching and learning. Overall, I would like students to gain confidence, efficacy, and an increasing disposition to use a variety of Web 2.0 tools to fuel their own professional development. The evaluation plan will need to emphasize the process of students exploring a variety of tools as they become more globally connected in their discussions of teaching and professional development. I know I will need to give more coherent shape to these notions so they can be crafted into measurable evaluation questions. In that regard, I am learning that the role of the evaluator and the role of the researcher are very similar: to systematically collect evidence around a clear set of goals or questions and then to analyze that data in terms of the original questions. The framework in Sanders and Sullins (2006) is very helpful in identifying a technical process for developing these questions and embedding them in a systematic data collection plan.

1 comment:

  1. Hi Peter,

    I'm very interested in the course you are developing at Loyola, and I agree that any evaluation is better planned up front. Otherwise, you are trying to fit your evaluation plan to the data you have, which may or may not be what you need to determine the efficacy of your project.

    I had this conversation with one of your classmates who was also doing a course evaluation. She was struggling with measuring change in the learners' behavior as well as evaluating elements of the course development itself. She was conceiving of it as an either/or proposition, but we discussed how you can validate the design of the course itself and then ultimately measure that against a metric of learner performance.

    In your case, you can measure that student behavior in a variety of ways: a survey of behavior change, or an instrument such as the Concerns-Based Adoption Model (CBAM) (Hall & Hord, 1987; Hord, Rutherford, Huling-Austin, & Hall, 1987; Loucks-Horsley & Stiegelbauer, 1991). You can also set behavior-change benchmarks that you can verify through observation.

    But as you mentioned, by thinking about how you want to measure success ahead of time, you are much more likely to construct a course that meets your desired outcomes.

    Thanks for the post,
    Chris
