Sunday, 31 March 2013

This weekend has been a bit of a diabetes hack.

Started a project to try to "crowdsource" data on insulin prices across the world. Insulin was discovered nearly 100 years ago, and the team who discovered it handed their rights over to the University of Toronto so that it could be available to all who needed it. Despite this, lack of access to insulin is the leading cause of death globally for children with Type 1 Diabetes.

The reasons for this are complex, involving different priorities across national health systems, the global market for insulin development and its supply chains, and individual healthcare choices. However, it is the poorest children who develop diabetes who suffer: if their families cannot get them insulin then, literally, their treatment goes back 100 years. We can do better than this.

I felt strongly that, since I am well connected to people with an interest in diabetes across the world, we might be able to gather data and make the variations transparent. Only when the variations are known can the barriers be addressed.

Please visit for more information, share the links, and complete the survey.

Thanks to Laurie for making a most excellent video!

Wednesday, 20 March 2013

Tapping the Twitter brain

So I've set up a Twitter Streaming API client on a server and started consuming the large amounts of data that come through it.

It feels a bit like that famous (to medics at least) Gary Larson cartoon with the mosquito who has hit an artery.

It is clearly possible to run some fancy types of analysis on Twitter data, such as collecting the top Twitter posters and URLs for a particular hashtag (e.g. GrabChat's #NICE2012), or the more recent text analysis of Diabetes UK's #dpc13.
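To give a flavour of the sort of tallying involved (this is not GrabChat's actual code, and the authors, URLs, and hashtag archive below are made up for illustration), counting top posters and URLs from a small hashtag sample might look like:

```python
from collections import Counter

# Hypothetical sample of (author, text) pairs from a hashtag archive.
tweets = [
    ("alice", "Great session at #NICE2012 http://example.org/a"),
    ("bob",   "Agree! #NICE2012 http://example.org/a"),
    ("alice", "Slides here http://example.org/b #NICE2012"),
]

# Tally how often each author posted.
top_posters = Counter(author for author, _ in tweets)

# Tally every whitespace-delimited token that looks like a URL.
top_urls = Counter(
    word for _, text in tweets
    for word in text.split() if word.startswith("http")
)

print(top_posters.most_common(1))  # [('alice', 2)]
print(top_urls.most_common(1))     # [('http://example.org/a', 2)]
```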

The output from the streaming API, though, is another order of magnitude, and it promises to be useful for identifying, literally, 'trending' resources or individuals within a particular topic. I've just run 48 hours of the keyword 'diabetes' and got nearly 40,000 Tweets.
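For the curious, here is a minimal sketch of what consuming such a keyword stream can look like, assuming the tweets arrive as newline-delimited JSON (as the Streaming API delivers them). The function name and sample data are mine, not part of any real client library:

```python
import io
import json

def consume_stream(stream, keyword):
    """Yield tweets from a newline-delimited JSON stream mentioning a keyword.

    The Streaming API sends one JSON object per line; this sketch reads from
    any file-like object standing in for the live HTTP connection.
    """
    for line in stream:
        line = line.strip()
        if not line:  # the stream interleaves blank keep-alive lines
            continue
        tweet = json.loads(line)
        if keyword.lower() in tweet.get("text", "").lower():
            yield tweet

# Stand-in for the live connection: two fake tweets and one keep-alive line.
sample = io.StringIO(
    '{"text": "New research on diabetes care", "user": {"screen_name": "a"}}\n'
    '\n'
    '{"text": "Totally unrelated post", "user": {"screen_name": "b"}}\n'
)
matches = list(consume_stream(sample, "diabetes"))
print(len(matches))  # 1
```

A real client would wrap this around an authenticated, long-lived HTTP connection and handle reconnects, but the per-line parsing is the heart of it.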

Tapping into the global discussion of diabetes is one thing, but filtering out the good stuff is another challenge. I have a few competing algorithms running on the data to see which works best.

I've been filtering tweets for some time, sharing them with followers and relaying them through to our diabetes diploma course, but this promises a whole new level - a more systematic approach. What's surprising is the enormous amount of spam and re-tweeting of low-level health and nutrition material that goes on. Thankfully, through the normal interfaces of Twitter, these are tweets you do not see. Hidden among them are useful ones, and the trick is to pick them out from the noise.
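One of the competing approaches could be as simple as a hand-tuned scoring heuristic. The rules, weights, and sample tweets below are entirely made up to show the shape of the idea, not any algorithm actually in use:

```python
# Hypothetical heuristic filter: score each tweet, keep the highest scorers.
SPAM_WORDS = {"cure", "miracle", "weight", "loss", "free"}

def score(tweet):
    words = set(tweet["text"].lower().replace(",", " ").replace("!", " ").split())
    s = 0.0
    s -= 2.0 * len(words & SPAM_WORDS)                   # penalise spammy vocabulary
    s -= 1.0 if tweet["text"].startswith("RT") else 0.0  # penalise bare retweets
    s += 1.5 if "http" in tweet["text"] else 0.0         # reward linked resources
    s += 0.1 * tweet.get("followers", 0) ** 0.5          # mild authority signal
    return s

tweets = [
    {"text": "Miracle diabetes cure, free trial!", "followers": 10},
    {"text": "New NICE guidance on type 1 diabetes http://example.org",
     "followers": 400},
]
best = max(tweets, key=score)
print(best["text"])  # the guidance tweet wins; the spammy one scores well below zero
```

Hand-written rules like these are brittle, of course, which is exactly why running several candidates against the same data and comparing them is worthwhile.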

Saturday, 9 March 2013

Visualising Twitter Chats

Been experimenting with different ways of visualising text analysis of Twitter hashtags. Got GrabChat to use document clustering and display the results with HTML5, allowing them to be read on mobile devices.

Ran an example using a recent #CMEchat, doing the usual GrabChat of the hashtag but then including the text analysis.

Here's a screenshot of the text analysis output. Each cell is clickable and sends you back to Twitter to see more discussions through specific (and therefore deeper) searches.

This is an interesting way of archiving and analysing hashtags on Twitter. It's more than trawling through all the tweets or just numbers of hits and tweeters. I think it is probably most useful for 'feeling the pulse' of a big meeting - as it is progressing - or capturing the key concepts quickly and easily from a typical Twitter chat.

Here's another of #ACC13. Click on the image to go to the GrabChat text analysis and then click on the topics that take your interest.

As an idea this has come a long way from the semantics and homophily I talked about last year. Well, for the semantics anyhow. Must look more at the homophily.

Friday, 1 March 2013

Why do some computerised decision support systems fail?

A meta-analysis of studies on decision support in health care from a Canadian team published in the BMJ.

"We identified several factors that could partially explain why some systems succeed and others fail. Presenting decision support within electronic charting or order entry systems are associated with failure compared with other ways of delivering advice. Odds of success were greater for systems that required practitioners to provide reasons when over-riding advice than for systems that did not."

So, a system that automatically offers an opinion - like the challenging vision of the artificial intelligence of the future - seems to fail, whereas one that asks you to think and document your (valued) exceptions seems to succeed.

I think this makes sense. Perception of the machine alters how it is used.

I imagined being on a ward round or in a clinic with a colleague who blurted out evidence and guideline citations. Some of what they said would obviously be correct, but perhaps a lot would be irrelevant, or cover areas that I had considered and would love to explain but that weren't directly relevant to the case in front of us. I'd probably have a hard time and need coffee earlier than usual. However, if I had a constructive colleague who asked open questions such as 'so why did you do that instead of the usual?', I think I would have a more enjoyable time and engage in some meaningful thought and discussion.

Maybe we need more workplace psychologists involved in human interface design. What is cool to a developer or a priority to management might not be the best way of solving the problem of getting health professionals to interact with IT systems.