Monday, November 06, 2006

Are we Overcomplicating our eLearning?

In his comments to my last post, Alvaro pointed to an interview he did with Prof. Daniel Gopher at SharpBrains. Dr. Gopher notes:

The need for physical fidelity is not based on research, at least for the type of high-performance training we are talking about. In fact, a simple environment may be better in that it does not create the illusion of reality. Simulations can be very expensive and complex, sometimes even costing as much as the real thing, which limits access to training. Not only that, but the whole effort may be futile, given that some important features cannot be replicated (such as gravitation-free, tilted, or inverted flight), and may even result in negative transfer, because learners pick up on specific training features or sensations that do not exist in the real situation.

For high-end game developers, Dr. Gopher suggests that the emphasis on "realism" may be misguided. He cites a side-by-side comparison of a simple computer game and a sophisticated, graphically rich flight simulator, noting that the simple game was the more effective trainer.

For those of us who create low-budget, quick-turnaround eLearning, these findings are very encouraging. His recommendation: analyze the cognitive skills involved, then develop a simulation that trains those skills.

Common sense - but how many times have you seen eLearning that is pretty to look at yet useless for training?

Thank you, Alvaro, for the lead and your kind comments!

1 comment:

Alvaro said...

Wendy, thanks for such a nice summary. Happy that you enjoyed the interview.

I wanted to let you and your readers know that the US-based professor doing some replication studies based on Prof. Gopher's work came to the site and left a very useful comment, which I copy here:

"Your excellent interview with Dr. Gopher reminded me why so many of us have followed his lead in training complex skills. I hope that your interview inspires others. They will find that he is generous with his ideas, time, energy, and infectious positive spirit. Working with him to replicate experiments and extend ideas is both productive and enjoyable.
Your interview includes a reference to an article by my colleagues and me. I want to update the reference and provide a related reference to a Web-based Archive.

Shebilske, W. L., Volz, R. A., Gildea, K. M., Workman, J. W., Nanjanath, M., Cao, S., & Whetzel, J. (2005). Revised Space Fortress: A validation study. Behavior Research Methods, 37, 591-601.

Volz, R. A., Johnson, J. C., Cao, S., Nanjanath, M., Whetzel, J., Ioerger, T. R., Raman, B., Shebilske, W. L., & Xu, D. (2005). Fine-grained data acquisition and agent-oriented tools for distributed training protocol research: Revised Space Fortress. Technical supplement, Psychonomic Society Web-based Archive (see 37, 591-601).

The journal article’s abstract describes both:

Abstract
We describe briefly the redevelopment of Space Fortress (SF), a research tool widely used to study training of complex tasks involving both cognitive and motor skills, to execute on current generation systems with significantly extended capabilities, and then compare the performance of human participants on an original PC version of SF with the Revised Space Fortress (RSF). Participants trained on SF or RSF for 10 sets of 8 3-min practice trials and 2 3-min test trials. They then took tests on retention, resistance to secondary task interference, and transfer to a different control system. They then switched from SF to RSF or from RSF to SF for two sets of final tests and completed rating scales comparing RSF and SF. Slight differences were predicted based on a scoring error in the original version of SF used and on slightly more precise joystick control in RSF. The predictions were supported. The SF group started better, but did worse when they transferred to RSF. Despite the disadvantage of having to be cautious in generalizing from RSF to SF, RSF has many advantages, which include accommodating new PC hardware and new training techniques. A monograph that presents the methodology used in creating RSF, details on its performance and validation, and directions on how to download free copies of the system may be downloaded from www.psychonomic.org/archive/.

The extended capabilities for RSF include a) being executable on current generation platforms, b) being written in a mostly platform-independent manner, c) being executable in a distributed environment, d) having hooks built in for the incorporation of intelligent agents to play various roles, such as partners and coaches, e) providing a general experiment definition mechanism, f) supporting teamwork through being able to flexibly assign different input controls to different members of a team, g) maintaining all data in a central database rather than having to manually merge data sets after the fact, and h) having playback capability, which enables researchers to review all actions that occurred during an experiment and to take new measures. Experimenters can design measures before an experiment to test specific hypotheses with a rigorous laboratory task. They can also use playback to discover and explore unanticipated events. Although simpler and more complex synthetic task environments can be advantageous for some goals, Danny Gopher, our colleagues, and I believe that Space Fortress remains an important tool for scientists and trainers. Please feel free to contact me (wayne.shebilske@wright.edu) for additional help downloading and using RSF."

Regards,

Alvaro