Caselaw that supports learning science
Caselaw that sets standards for LE training and is backed by learning science.
Most performance objectives are written incorrectly or built on the wrong foundations. This webinar covers common mistakes, including applying the wrong learning domains to all training.
Everyone has bad experiences in their jobs. Turning those bad experiences into good practices makes you more marketable, dependable, and consistent. Here are a few ways my bad experiences have helped me.
Consistency in training is necessary for learners to know what is expected of them. Building that consistency relies on something called heuristics.
New instructional designers always want to know the best tools for doing their work. I answer that question right here!
How do we determine which employees to observe and analyze when creating training? I submit that we are doing it wrong if we analyze and base our training on top performers. Here’s why.
Businesses and stakeholders measure their best performers to develop training, but that means they are studying only the survivors, introducing survivorship bias into their research.
Taking a page from my years as a freelance graphic designer, I developed a “Scope of work performed” matrix to help my Instructional Design team set boundaries and expectations for projects. I use what I call the “Produce, Design, Create” model.
We should answer three “why” questions with our training, especially given the changing generational demographics of the workforce.
Can anyone develop training for anyone for any situation? I say yes, with certain caveats and expectations. I am a pro-anyone-ness advocate.