As the COVID-19 pandemic continues to disrupt most face-to-face professional training, many organizations are acknowledging that they must view remote learning as more than a temporary measure. For them, and for organizations that had long considered a transition to remote training delivery, this is a good time to assess current remote learning activities and make the shift from “making the best of it” to “making it the best.”

Key discoveries that our partnering organizations have made this year on this subject include the following:

  • Presenters must engage and interact frequently with training participants. A remote meeting may be only one of several that participants attend that day, and neuroscientists have found that “remote meeting fatigue” is very real. Trainers should break up the content with frequent comprehension checks, such as polls that confirm understanding before natural breaks for the bathroom or for catching up on email – the “ticket out the door” approach.
  • Trainers should present content in ways that are appropriate for different learning styles, with a constantly varying mix of teaching methods for reinforcement.  
  • They should conduct a thorough review of all training content currently delivered remotely to ensure that comprehension checks and content variety are fully incorporated.
  • Technology platforms that enable measurement of participant engagement (i.e., did they remain focused on the training platform, or minimize it and multitask?) are imperfect but useful. Depending on the size of the training, it is helpful to use an audience view (participants on camera), with a member of the training team tasked with periodically monitoring the group for signs of disengagement.
  • As a corollary, trainers should communicate in advance the level of interactivity participants should expect, so participants can prepare their workspaces and schedules to remove interruptions and distractions (recognizing that this may be impossible for some). Advance notice also makes trainers more comfortable reaching out to participants during comprehension checks, since they need not worry about “putting participants on the spot.”
  • Since even normally reliable technology platforms can fail, organizations should present participants with a comprehensive backup plan in advance. That way, if the remote learning system (or the lead trainer!) loses connectivity, disruption is kept to a minimum.
  • Trainers should assess remote learning at least as rigorously, and as frequently, as in-person content delivery. This should include an assessment of the technical platforms’ performance and participants’ self-reported levels of engagement. The best approach is to provide a link to a web-based questionnaire for the specific event during the training’s concluding elements, and to build in scheduled time (such as an early finish) for participants to complete the instrument.
  • The measurement cycle should be short, to provide as much opportunity as possible for event-to-event learning and for improvements to content or delivery methods. Trainers should participate in developing the satisfaction instrument or, at a minimum, should understand how to apply the results to make quick revisions between trainings.

Cloudburst’s instructional designers and training experts are available to review training content that has either been repurposed for remote delivery or was originally planned for that but needs to be refreshed or improved. We specialize in content development and delivery for housing and health grants programs, including complex trainings on reporting requirements for performance and financial data.