Evaluating the Impact of Professional Development

Well, it is winter quarter in my master’s program at Seattle Pacific University, and we are focusing on ISTE Coaching Standard 5: Professional Learning Facilitator:

Coaches plan, provide and evaluate the impact of professional learning for educators and leaders to use technology to advance teaching and learning.

What stood out to me about this standard is the idea of evaluating the impact of professional learning. I have taken many professional development courses over my career, and I often find that the evaluations at the end tend to be generic. So, this standard got me wondering about evaluation practices and how they might connect to educational technology. This led me to my driving question for this blog post:

How can educational technology help coaches effectively evaluate the impact of professional development, and what are best practices in evaluating professional development?


In looking for answers to my question I came across many different resources, all talking about what makes effective professional development (PD). One of the resources I gravitated towards is the ISTE white paper Technology, Coaching, and Community. I was drawn to this resource first by its title, since it is clearly about how technology, coaching, and community can be used to provide impactful professional development, and I wanted to know more. If we want to provide impactful PD, what should it look like and what do we need to incorporate? One thing that ISTE recommends is to incorporate “a three-pronged methodology to achieve 21st-century professional learning experiences, which will better prepare teachers to effectively help students learn. This methodology embraces: an effective coaching model, online communities for greater collaborative idea sharing; and a fully embedded use of technology” (ISTE, 2011, p. 5).

This three-pronged approach highlights that impactful PD is not a one-size-fits-all affair. In fact, PD should be continuous and relevant to the learner to truly be impactful. When designing PD, it is important to consider how technology can and should be used, how we are creating opportunities for individuals to grow their professional learning communities, and what support (coaching) we are providing along the way.

If a technology-rich environment is a given, offering job-embedded PD and coaching as a scaffold for ongoing support and growth will allow teachers an opportunity for low-risk practice and lots of feedback. And when teachers can work collaboratively to share ideas and improve teaching practices, a community of practice can emerge to provide a scaffold for support and growth.

ISTE, 2011, p. 7

So how can we evaluate PD that is continuous and job-embedded? What would an evaluation of this sort of PD look like? These were the questions I started asking myself after reading the ISTE resource above. In continuing to look for answers to these questions and my driving question, I came across an article by Thomas Guskey. While the article itself is from 2002, it still has valuable ideas for evaluating PD. According to Guskey (2002), a good evaluation does not have to be complicated; however, it should provide useful information about whether the PD had value and whether its goals were met. There are many different forms that evaluation can take. Many people think of a survey at the end of PD when they think about evaluation, but that doesn’t have to be the only way. An evaluation might be observational data collected, portfolios, video or audio recordings, anything really that provides valuable information about whether or not the PD is achieving its goals. In the article, Guskey describes five levels for evaluating professional development.

Level 1 is all about the participants’ reactions to the PD. While Guskey lists things like “were the chairs comfortable,” it is important to think about how some of these questions change over time and with current realities. It might be more important now to ask participants about the use of breakout rooms during the meeting, opportunities for movement instead of just sitting in front of the screen, or even opportunities for virtual collaboration.

Level 2 evaluates the learning. Did participants learn something or gain new skills because of the PD? This is a good opportunity to use some of the educational technology from the PD itself. Maybe participants use a video resource like FlipGrid to reflect on what they learned. How are we providing an opportunity for them to practice and reflect on their learning?

Level 3 provides an opportunity to evaluate how well the PD aligns with building or district goals. While this can be harder to evaluate, it is important because a “Lack of organization support and change can sabotage any professional development effort, even when all the individual aspects of professional development are done right” (Guskey, 2002).

Level 4 happens over time. This level of evaluation is all about how participants are using the knowledge or skills they learned in the PD. This sort of evaluation unfolds over time and can take many forms. “The most accurate information typically comes from direct observations, either with trained observers or by reviewing video- or audiotapes” (Guskey, 2002). This is also where ongoing support or coaching comes into play. How are participants building community and getting coaching support to continue their learning?

Level 5 brings everything back to student learning. How did the PD impact student learning? When looking at the impact on student learning, it is important to look at the full picture. While there may be a positive impact on student learning in some areas, it is also important to look for any unintended outcomes that result.

All five of these levels are important parts of evaluating PD, and different information is gathered at each level. However, one thing that Guskey (2002) mentions is the idea of planning backwards. It is important to start with the end results in mind: what are the student learning outcomes that we want to achieve with this PD? Then we can look at the instructional practices in which we want to see participants utilizing the new knowledge and skills, and at how those practices connect to building/district goals. From there we can think about the experiences and knowledge that we want participants to walk away with in order to achieve the outcomes desired in levels 3-5.

So what does this all mean? When thinking back to my driving question,

How can educational technology help coaches effectively evaluate the impact of professional development, and what are best practices in evaluating professional development?

I am still left with lots of questions, but also with new ideas around what impactful professional learning is. Everything boils down to a few key ideas for me.

  1. Professional development is not a one-time thing. It must be ongoing and done in community. Collaboration is important to learning, so how are we embedding it into PD?
  2. Technology should be embedded both in the PD and within the evaluation. How am I leveraging educational technology in order to make lasting impacts on the learning experience? How am I continuing to provide opportunities for participants to use the technology both in their classrooms and in the evaluation process? Are there opportunities for virtual collaboration, digital portfolios, or video reflections?
  3. Everything comes back to student learning. When designing and evaluating PD, I need to be sure that I am focused on how it impacts student learning outcomes.

As I continue to think about how I plan and evaluate professional development, I will remember to plan backwards by starting with student learning and to create opportunities that allow for continued support, embedded technology, and community.


References

Guskey, T. R. (2002). Does it make a difference? Evaluating professional development. Educational Leadership, 59(6), 45–51.

ISTE. (2011). Technology, coaching, and community (White paper). International Society for Technology in Education.

5 thoughts on “Evaluating the Impact of Professional Development”

  1. Great post, Megan, thank you! I was not familiar with that particular ISTE white paper, so thank you for bringing that to my attention. I appreciate how you are expanding our thinking around how we review PD/PL, i.e., not just at the end of the training itself but looking at the bigger picture in terms of long-term implementation and results. The five levels referenced are incredibly helpful because they provide tangible guidelines and a good place to start in terms of reflecting on how a training might be designed. Thank you for sharing. Great insights and thinking!

  2. Megan, what powerful ideas! I really appreciate how Guskey broke down evaluation into 5 levels. By looking at these different areas we should get a much better idea of the impact of our PD, especially compared to the one-page paper surveys that are given out at most PD sessions. I loved the idea of doing job-embedded PD and supporting teachers throughout the process by coaching, collaboration, and online communities. There are so many great ways to record the ongoing professional growth, like portfolios, student work samples, video reflections, etc. I feel like a very small percentage of schools stick to a single PD topic for long enough to truly evaluate teachers’ growth all the way to mastery and how it impacts students’ learning. I am saving your blog and will return to it next time I am planning PD. Thank you!

  3. This is an interesting topic, Megan, thank you. Now that you’ve brought it to my attention, I think I have only ever filled out generic surveys as evaluation. You provide a great list of alternative evaluation formats such as “observational data collected, portfolios, video or audio recordings” which could offer rich feedback about the effectiveness of PD. I also think that these types of evaluations stimulate great conversations about teaching and contribute to a collaborative culture of learning.

  4. Megan, I really enjoyed reading your post. Your quote, “there are many different forms that evaluation can take. Many people think of a survey at the end of PD when they think about evaluation, but that doesn’t have to be the only way,” really struck me. When I think of a course evaluation or PD evaluation, my brain immediately relates it to a survey. Last school year our building was focusing on implementing number talks into our classrooms as a means to increase math growth on the SBAC. This is one of the only PDs for which I don’t remember having to take some form of survey. Instead, they had us record the times we implemented the number talk month to month. Then, when the facilitators came the following month, we would discuss how we worked on it and see photos of other teachers in action.

    Looking back, this was an incredibly useful professional learning experience because I knew I was being held accountable for trying out this low-stakes strategy, but also, we adjusted our methods based on feedback from other teachers.

    Great post! Thank you for sharing!
