I've been thinking about
learning analytics again lately and how to do more of it in the projects I'm involved in.
OnExamination.com has always been powered by analysis of activity - sequencing test items based on prior performance. However, learners are rarely as fully engaged as they are when running through mock questions for a high-stakes exam. Although I had a great time helping to design the learning analytics for exam preparation, most learning is not so intensely focussed. It is more work-based, opportunistic and social.
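To be concrete about the kind of sequencing I mean, here is a minimal sketch of "ask next whatever the learner is currently weakest on". The data shapes and names are my own invention for illustration, not OnExamination's actual engine:

```python
from collections import defaultdict
import random

def next_item(question_bank, history):
    """Pick the next question from the topic the learner is currently weakest on.

    question_bank: dict mapping topic -> list of question ids (illustrative shape)
    history: list of (topic, was_correct) tuples from earlier attempts
    """
    # Rolling accuracy per topic; unseen topics default to 0 so they get asked early.
    correct = defaultdict(int)
    attempts = defaultdict(int)
    for topic, was_correct in history:
        attempts[topic] += 1
        correct[topic] += int(was_correct)

    def accuracy(topic):
        return correct[topic] / attempts[topic] if attempts[topic] else 0.0

    weakest = min(question_bank, key=accuracy)
    return weakest, random.choice(question_bank[weakest])

# Example: the learner keeps missing cardiology items, so cardiology comes up next.
bank = {"cardiology": ["q1", "q2"], "renal": ["q3", "q4"]}
hist = [("cardiology", False), ("renal", True), ("cardiology", False)]
print(next_item(bank, hist))
```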
In clinical medicine, learning analytics would benefit most from objective measures of personal performance that could drive recommendations and present the data back to the learner. These outcomes are hard to capture and, even when you can capture them, hard to analyse because there is so much variability. The easy things to measure aren't necessarily the most important things to observe.
Using electronic health records would be good. Case mix, common diagnoses, common prescriptions, and common investigations and findings would be an ideal basis for automatically designing a syllabus. Heuristics could be defined to spot quality issues. But how to get in on that gig? I could work more with
BMJ Informatica to link individual GP performance to bespoke learning, I suppose. My background is secondary care, however, and the last time I asked to look at and explore solutions for individual physician performance at my local NHS Trust I hit a dead end.
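As a sketch of what I have in mind, assuming a deliberately simplified record shape (the field names and the "expected investigation" rule are illustrative, not any real EHR schema or BMJ Informatica API):

```python
from collections import Counter

def draft_syllabus(encounters, top_n=5):
    """Rank the most frequent diagnoses, prescriptions and investigations in a
    set of clinical encounters, as a crude starting point for a personal syllabus.

    encounters: list of dicts with 'diagnoses', 'prescriptions', 'investigations' lists.
    """
    counts = {"diagnoses": Counter(), "prescriptions": Counter(), "investigations": Counter()}
    for enc in encounters:
        for field, counter in counts.items():
            counter.update(enc.get(field, []))
    return {field: counter.most_common(top_n) for field, counter in counts.items()}

def flag_missing_workup(encounters, expected):
    """A trivial quality heuristic: flag encounters where a diagnosis lacks the
    investigation that would normally accompany it.

    expected: dict mapping diagnosis -> expected investigation (made-up rule set).
    """
    flags = []
    for i, enc in enumerate(encounters):
        for dx in enc.get("diagnoses", []):
            needed = expected.get(dx)
            if needed and needed not in enc.get("investigations", []):
                flags.append((i, dx, needed))
    return flags
```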
I've worked on the underlying design of a learning tool,
tblable.com, which represents knowledge. Looking at patterns of errors in the quiz tools that can be created from it may help identify areas of focus for individuals, and suitable starting points for novices or experts. Have a look at this
example tblable on MODY. I think this is too niche at the moment though. Too narrow a cognitive tool. I've got over a number of hurdles, but it is a solution without a defined problem.
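The error-pattern idea could start as simply as the sketch below. The attempt-log format is an assumption on my part rather than a description of tblable's actual internals:

```python
from collections import defaultdict

def error_hotspots(attempts, min_attempts=3):
    """Summarise error rates per knowledge node in a quiz attempt log.

    attempts: list of (node, was_correct) tuples, where node names a concept
    in the knowledge representation (e.g. "MODY subtypes").
    Returns nodes sorted by error rate, skipping nodes with too few attempts.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for node, was_correct in attempts:
        totals[node] += 1
        errors[node] += int(not was_correct)
    hotspots = [
        (node, errors[node] / totals[node])
        for node in totals
        if totals[node] >= min_attempts
    ]
    return sorted(hotspots, key=lambda pair: pair[1], reverse=True)
```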
I also analysed millions of Tweets using tools from the
GrabChat idea via the different Twitter APIs, but haven't managed to glean anything particularly inspiring. There's a lot of guff about sentiment analysis of tweets (e.g.
how it bombed in stock picking), but it doesn't capture much beyond the binary of emotionally positive or negative within particular communities and topics.
Analysing tweets is great for finding interesting links and people but it still needs human filtering and a lot of spam gets in there. Nowhere near a tool that would be useful for learning analytics.
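For what it's worth, here is roughly what that positive/negative signal amounts to once you strip away the hype. The word lists are made up, and real tools use larger lexicons or trained models, but the output is still essentially this coarse:

```python
# A caricature of lexicon-based tweet sentiment: count positive vs negative words.
POSITIVE = {"great", "love", "excellent", "helpful", "win"}
NEGATIVE = {"awful", "hate", "useless", "fail", "scam"}

def tweet_sentiment(text):
    """Return 'positive', 'negative' or 'neutral' from a crude word count."""
    words = {word.strip(".,!?#@").lower() for word in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(tweet_sentiment("Great teaching session on MODY today, love it"))  # positive
```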
So, all in all, I'm feeling like a bit of a frustrated
skunk worker. Will have to experiment some more.