I've been nursing some half-baked ideas around analytics for some time now.
I hand these to the Data Whisperer annually - mostly hoping he can work miracles with my unformed mess of stuff.
THIS time, I also handed him the overview version of the TDRp white paper.
I figured I'd take advantage of the efforts of folks much better at numbers and maths and ROI and data and stuff. In this case - Frank Anderson (former head of DAU), Josh Bersin (Bersin and Associates), and Jack Phillips (ROI Institute), among others.
I've been keeping track of this project since the SkillSoft / Knowledge Advisor sessions back in May 2011.
Part 1, Part 2, Part 3.
They have the basic reporting framework built. The next couple of years will be spent fine-tuning definitions, models and recommendations.
The three TDRp reports (the Quarterly Summary report for the senior execs, plus the Monthly Program report and Monthly Operations report for the learning execs) draw their information from three statements:
- the Outcome statement - Goals and the impact of training on those goals
- the Learning Efficiency statement - L&D cost + opportunity cost, cost reduction, "butts in seats" and courses used
- and the Learning Effectiveness statement - whether folks (students and managers) liked the class, planned to apply the stuff from the class, and estimates of value and impact
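To make that relationship concrete, here's a minimal sketch of how the three statements could roll up into one of the reports. The field names and the roll-up math are my own invention for illustration, not TDRp's official definitions:

```python
from dataclasses import dataclass

# Hypothetical field names - not TDRp's official measure definitions.
@dataclass
class OutcomeStatement:
    goal: str                   # the business goal
    training_impact: float      # estimated contribution of training to that goal

@dataclass
class EfficiencyStatement:
    total_cost: float           # L&D cost + opportunity cost
    cost_reduction: float
    participants: int           # "butts in seats"
    courses_used: int

@dataclass
class EffectivenessStatement:
    satisfaction: float         # did folks like the class? (e.g. 1-5)
    intent_to_apply: float      # planned application of the material
    estimated_impact: float     # estimated value/impact

def quarterly_summary(outcome, efficiency, effectiveness):
    """Roll the three statements up into one exec-level summary."""
    return {
        "goal": outcome.goal,
        "training_impact": outcome.training_impact,
        "cost_per_participant": efficiency.total_cost / efficiency.participants,
        "satisfaction": effectiveness.satisfaction,
    }
```

The point isn't these particular fields - it's that each report is just a different slice and roll-up of the same three statements.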
I've been staring at the white papers over the past couple of days trying to figure out what we would need to come close to this model in our learning analytics.
I came up with the following:
- Objectives. On both a university-wide level and on an individual "intervention" level (projects, technologies, process improvement initiatives and, yes, training)
- Measurable goals mapped to those objectives. This would include a measure of "where we are now".
- A way to track who took which class and how they did (traditionally the realm of the LMS).
- A survey tool. I am lumping evaluation tools in here. Ideally - it would be one tool linked into the LMS so I can put this all together in one spot. It may require 2 separate tools - one that scores (the evaluation tool) and one that just collects information without judgement.
I'm debating whether I want the ability to detach my survey tools from individual learning objects. Why might I not want it attached to a particular learning object? Many of the interventions I design these days are not "courses" but collections of on-demand resources that may or may not be connected to a "course" as we understand them. Those resources may also cross programs. I want information from the aggregate of those interventions as well as from the individual pieces. Need to think on this more...
- A way to run pre and post metrics (before and after the intervention). Which system we'd pull those from would depend on the objective and the goal - OR we'd need one Data Warehouse to rule them all.
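Here's a rough sketch of how the first two items on that list and the pre/post metrics might hang together as data. The class names and fields are purely illustrative assumptions on my part, not any real system's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    baseline: float     # "where we are now"
    target: float       # where we want to be

@dataclass
class Objective:
    name: str
    level: str          # "university-wide" or "intervention"
    goals: list = field(default_factory=list)  # measurable goals mapped to it

@dataclass
class PrePostMetric:
    """A measurement taken before and after an intervention, tied to a goal."""
    goal: Goal
    pre: float
    post: float

    def delta(self):
        return self.post - self.pre

    def progress_toward_target(self):
        """Fraction of the baseline-to-target gap closed so far."""
        span = self.goal.target - self.goal.baseline
        return (self.post - self.goal.baseline) / span if span else 0.0
```

So a goal with a baseline of 50 and a target of 80, measured at 65 after the intervention, would show half the gap closed - which is the kind of number the Outcome statement wants.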
Our organization is not there yet. Actually - let me know if your organization has a Data Warehouse that successfully integrates ALL of your ERP, Finance, Payroll, HR, etc. systems and lets end-user folks easily spit out reports with no aggravation. (Hey - and if we really want to get buzzwordy, maybe we can even talk about Tin Can here and having ALL of these systems output in the same language, so we might have a fighting chance of putting something like that together :) )
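For what it's worth, Tin Can (now the Experience API, or xAPI) does define exactly that common language: every system emits "actor - verb - object" statements in one shared JSON shape, which is what would give us a fighting chance at aggregating across systems. A minimal example - the learner, course name, and URLs are placeholders, not real endpoints:

```python
import json

# A minimal Tin Can / xAPI-style statement: actor - verb - object.
# The verb id below is a real ADL verb URI; everything else is made up.
statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "A Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "http://example.com/courses/analytics-101",
        "definition": {"name": {"en-US": "Analytics 101"}},
    },
}

print(json.dumps(statement, indent=2))
```

If the LMS, the survey tool, and the HR systems all spoke in statements like this, the "one Data Warehouse to rule them all" problem gets a lot less scary.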
I think the TDRp has a good model for what the reports should look like coming out the other end.
At least - it is a great starting point for further discussion.
I'm hoping that the end-point gives the Data Whisperer and me enough to sink our teeth into...even without objectives and measurable goals.