A meta-analysis of studies on decision support in health care from a Canadian team published in the BMJ.
http://www.bmj.com/content/346/bmj.f657
"We identified several factors that could partially explain why some systems succeed and others fail. Presenting decision support within electronic charting or order entry systems are associated with failure compared with other ways of delivering advice. Odds of success were greater for systems that required practitioners to provide reasons when over-riding advice than for systems that did not."
So, a system that automatically offers an opinion - like the challenging vision of artificial intelligence of the future - seems to fail, whereas one that asks you to think and to document your (presumably valued) exceptions seems to succeed.
I think this makes sense. Perception of the machine alters how it is used.
I imagined being on a ward round or in a clinic with a colleague who blurted out evidence and guideline citations. Obviously, some of what they said would be correct, but perhaps a lot would be irrelevant, or would cover areas that I had already considered and would love to explain but that weren't directly relevant to the case in front of us. I'd probably have a hard time and need coffee earlier than usual. However, if I had a constructive colleague who asked open questions such as 'so why did you do that instead of the usual?', I think I would have a more enjoyable time and engage in some meaningful thought and discussion.
Maybe we need more workplace psychologists involved in human interface design. What is cool to a developer, or a priority to management, might not be the best way of solving the problem of getting health professionals to interact with IT systems.