So far I have been thinking tentatively about an evaluation plan for my e-learning initiative, though I haven't yet determined its exact scope. Part of this initiative is a new online course that I am developing within my university's educational technology program. The course, entitled “Educational Innovations,” focuses on school change and professional development, with a strong emphasis on using Web 2.0 tools and understanding how Web 2.0 can prompt teachers' professional development. This course will probably be the focus of my evaluation plan. However, the course itself is situated within a larger program that will likely become increasingly online, with several of the core courses being offered in a fully online or hybrid format. In that sense, I'm also interested in developing an evaluation plan with a broader scope that focuses on the whole program.
From the group activity we completed this past week, I became increasingly aware of the importance of beginning with a set of clearly identified goals and using those goals to give shape to the evaluation plan. Though this makes sense, I suspect that a common pitfall of many initiatives is that the evaluation is actually performed ad hoc, without being clearly derived from the learning goals of the program. So, I'll need to give thought to the essential goals of my initiative, especially as they pertain to online teaching and learning. Overall, I would like students to gain confidence, efficacy, and a growing disposition to use a variety of Web 2.0 tools to fuel their own professional development. The evaluation plan will need to emphasize the process of students exploring a variety of tools as they become more globally connected in their discussions of teaching and professional development. I know I will need to give more coherent shape to these notions so they can be crafted into measurable evaluation questions. In that regard, I'm learning that the role of the evaluator and the role of the researcher are very similar: to systematically collect evidence around a clear set of goals or questions and then to analyze that data in terms of the original questions. The framework identified in Sanders & Sullins (2006) is very helpful for laying out a technical process for developing these questions and embedding them in a systematic data collection plan.