I’m guilty too. There are precious few times when I put away the blackberry: yoga class (@Karma), dancing, and, I suppose, bed (usually).
Taking a break is obviously a good idea, but it doesn’t address the core problem: what we really need are information tools that adequately respect and value our attention, and that give us more control over it. I think there are three characteristics that many applications, old and new, need to adopt in order to let us focus better without going cold turkey: context, feedback, and prioritization.
Context
Traditionally, we had different physical places, times, and things for different tasks. To work, you went to the office. To read, you went to the library. If you were reading for work, you read the business section. If you were slacking off, you’d read the entertainment section.
We’re now free of such physical limitations, which is wonderful, but the result is that all of the streams of our activities flow to us simultaneously, and are presented to us merged.
We need to be able to set our context, and then have our software be humane and intelligent enough to respect it. Our IM status, for example, has busy / do-not-disturb settings. Such settings ought to apply across the whole gamut of information inputs, not just IM.
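A context setting that gates every stream could be as simple as a lookup from the current context to the set of streams allowed to interrupt. Here’s a minimal sketch in Python; all of the context and stream names are invented for illustration:

```python
# Hypothetical sketch: one "context" setting gating every information
# stream, not just IM. Context and stream names are invented.

CONTEXTS = {
    "working": {"allow": {"email-work", "calendar"}},
    "reading": {"allow": {"ebook-notes"}},
    "off":     {"allow": set()},  # do-not-disturb across the board
}

def should_deliver(context: str, stream: str) -> bool:
    """Deliver a message only if its stream is allowed in the current context."""
    return stream in CONTEXTS.get(context, {"allow": set()})["allow"]

print(should_deliver("working", "email-work"))   # True
print(should_deliver("working", "im-personal"))  # False
```

The point of the sketch is that the gate lives in one place, so every application consults the same context rather than each maintaining its own do-not-disturb flag.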
Feedback and Monitoring
Our information tools ought to help us monitor where our attention is going. Tools for analyzing clickstreams, such as the early work from AttentionTrust and perhaps Atten.tv, might help us see where attention is going. RescueTime is another neat application that lets us see where time/attention is going (or being wasted). And news.ycombinator has a noprocrastination setting that cuts one off after checking too many times. Making the behavioural changes to focus attention and lock out the attention thieves is much easier if software can provide the right sorts of feedback and incentives.
Prioritization
Software needs to do a better job of figuring out which messages are important enough to justify a disruption, and which can wait. The blackberry does a nice job of letting one configure different behaviours for different types of messages, such as ‘Level 1’ alerts that match a list of senders. But these tools are crude, and make use of little of the data that they could.
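As a hedged sketch of what richer prioritization might look like: instead of a single sender list, score each message on several signals and interrupt only above a threshold. The fields, weights, and threshold below are all hypothetical:

```python
# Hypothetical sketch of richer message triage than a plain sender list:
# combine several signals into a score and interrupt only above a threshold.

def priority(msg: dict, vip_senders: set) -> int:
    """Score a message using a few illustrative signals."""
    score = 0
    if msg["sender"] in vip_senders:
        score += 3                      # the blackberry-style "Level 1" rule
    if msg.get("direct", False):
        score += 2                      # addressed to me, not a mailing list
    if msg.get("thread_replied", False):
        score += 1                      # I'm already participating in the thread
    return score

def should_interrupt(msg: dict, vip_senders: set, threshold: int = 3) -> bool:
    return priority(msg, vip_senders) >= threshold

msg = {"sender": "boss@example.com", "direct": True}
print(should_interrupt(msg, {"boss@example.com"}))  # True
```

A real system would learn the weights from behaviour (which messages I open immediately, which sit unread) rather than hard-coding them.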
Humans are fallible, and media are often designed, and evolve, to steal attention, because attention is valuable. Our software needs to be designed to recognize our finite cognitive limitations, avoid abusing our attention, and help us stay disciplined.
Now back to real work!
As all hell breaks loose in the markets, it’s probably healthy to step away from twitter for just a couple of minutes to take a longer-term perspective. I had the pleasure of attending the HBS Healthcare Club conference on Saturday, and thought I’d collect and post my notes on some of the big picture issues in healthcare and biotech that were discussed. I tended to stay at the product and technology focussed sessions, rather than those focussed on services and reimbursement, because, well, I’m a science guy at heart.
Outgoing Lilly CEO Sidney Taurel gave the keynote just after lunch, and he laid out the risks and changes facing the pharmaceutical industry (besides patent expirations!) and how Lilly is adapting to meet them.
- Global aging trend / ‘inversion of the age pyramid’
- Emergence of “Health Technology Assessment” agencies
- limiting access to new technologies
- Patent issues in developing countries
- Legislative risks: government interference with Medicaid in the US
- US is the last bastion of free markets
- Changing perceptions of risk vs. benefit
- Perceptions of drug costs
- ~ 10% of healthcare spend, but consumers are much more exposed to drug costs through copays than to other costs in the system
Changes that are going to help address these challenges:
- Individual patient outcomes
- Measuring outcomes instead of outputs; and focussing on individuals
- pharmaceuticals will become partly an information product
- Better (adaptive) clinical trials; Regulators are slow to adapt.
- Openness of information
- R&D moving towards a virtual organization
- FIPNet (a fully integrated pharmaceutical network) rather than FIPCo (a fully integrated pharmaceutical company)
- Outsourcing examples at Lilly:
- med chem moving to China and India (ChemExplorer in Shanghai doing 20% of global chemistry)
- Big opportunities to use IT to track outcomes after drugs come on the market
- Open source models for biomarkers
- Systems Biology (As a computational biologist myself, I’m somewhat skeptical that this is going to come through for the industry in time to save it from patent expirations.)
- Sales: fewer people; more competent people
- Merging of pharma, biotech (Biologicals are already a huge part of Lilly)
Given what Mr. Taurel had said about the importance of increasing openness and better post-approval outcome monitoring, I was very tempted to ask how the industry was increasing transparency and whether Lilly did everything it could to make data available about drugs like Zyprexa, but unfortunately we ran out of time for questions.
Trends in Medical Devices
The most interesting thing to me was the near consensus across the panel about the inevitable convergence of drugs, devices, and data.
Dr. Stephen Oesterle (one of the most provocative speakers, and certainly the funniest) suggested a few specific companies to watch. In drug delivery, Tempo Pharmaceuticals is developing nanoparticle delivery systems for (small molecule?) drugs that allow control over the release rate and timing of drug combinations. (Tempo just did a Series B for $8.1M from Polaris, Bessemer, and Lux Capital.) In biologics, he suggested having a look at Alnylam, which has most of the IP on RNAi therapeutics locked up. The importance of RNAi is motivated by a simplified model of disease: all disease is caused by too much or too little protein. For those rare diseases caused by too little protein, we can give it back, via protein therapeutics, cell therapy, or gene therapy. Too much protein, however, and we need RNAi, antisense, or antibodies. Biologics are going to require clever new delivery systems, and delivery technologies for biologics are still a big long-term opportunity. Finally, Oesterle suggested keeping an eye on the spine. Back pain is huge, and current therapies (spinal fusion) are lacking, yet the spine is relatively simple and very accessible surgically, and therefore low-hanging fruit for new devices and approaches.
Georg Nebgen said trends to watch included disposable devices, battery-powered devices, and greater automation; diagnostics moving to point-of-care and away from central labs; and acquisitions happening later.
Another interesting proposed trend is the vanity of an aging population – a market that patients happily pay for out of pocket.
Communication between devices is going to be big: we’re going to see networks of monitors, and we’ll start to close the loop between diagnostics, treatment, and discovery. Better monitoring is going to let us give drugs episodically, exactly when they are needed and effective, instead of all the time. Implanted sensors are going to start to happen. All these sensors and diagnostics are going to generate data that will require security, storage, and analysis of the resulting signals. Big opportunities; totally unclear who is going to do it yet. (Doctors? Google/Microsoft? Payors? NewCos?)
On the more mundane side of things, we’re going to continue to see big investments in informatics for clinical trials. Still a huge source of pain, with way too much paper.
The closing keynote was by Dr. Robert Langer, whom somehow I’d never heard speak before. Langer offered up a nice checklist of scientific characteristics of successful biotech startups, and then backed them up with examples of successful companies he had started, with the caveat that business issues – like the right team – generally matter more.
- Platform technology – applicable to multiple products
- Ideally a product company
- Seminal paper (Science / Nature)
- Seminal and blocking patents
- In vivo proof of principle
Overall, the conference was fantastic – congrats to the organizers for putting together a great panel of speakers and running everything so smoothly.
Medicine today has entered its B-17 phase. Substantial parts of what hospitals do—most notably, intensive care—are now too complex for clinicians to carry them out reliably from memory alone. I.C.U. life support has become too much medicine for one person to fly. Yet it’s far from obvious that something as simple as a checklist could be of much help in medical care [...] Pronovost and his colleagues monitored what happened for a year afterward. The results were so dramatic that they weren’t sure whether to believe them: the ten-day line-infection rate went from eleven per cent to zero. So they followed patients for fifteen more months. Only two line infections occurred during the entire period. They calculated that, in this one hospital, the checklist had prevented forty-three infections and eight deaths, and saved two million dollars in costs.
Checklists do seem like a remarkably simple way to improve the performance of a process as complicated as critical care. Checklists (or more complex algorithms) help to address both individual cognitive limits (attention, short-term memory) and problems in social organization (reluctance to challenge hierarchy). If a checklist is good, I wonder whether a more complicated decision-support system would be better, i.e. one where the checklist items depend on the observed data. Such a system might be harder to implement, and perhaps harder to convince people to use, which could outweigh a slight improvement in performance. Moreover, rather than guiding actions, the greatest bang-for-the-buck of a checklist is in preventing critical errors and oversights.
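As a sketch of that distinction, a data-dependent checklist can be modeled as a list of items, each paired with a predicate over the observed data; a plain checklist is just the special case where every predicate is always true. The items and data fields below are invented for illustration, not taken from Pronovost’s actual protocol:

```python
# Hypothetical sketch of a data-dependent checklist: each item carries a
# predicate over observations, so the items shown depend on patient state.
# Item names and observation fields are invented for illustration.

checklist = [
    ("Wash hands",                        lambda obs: True),
    ("Clean skin with antiseptic",        lambda obs: True),
    ("Use full sterile drapes",           lambda obs: True),
    ("Consider alternative line site",    lambda obs: obs.get("site") == "femoral"),
    ("Reassess line necessity",           lambda obs: obs.get("line_days", 0) > 3),
]

def active_items(observations: dict) -> list:
    """Return only the items whose condition holds for this patient."""
    return [item for item, applies in checklist if applies(observations)]

print(active_items({"site": "subclavian", "line_days": 5}))
```

The unconditional items behave exactly like a paper checklist; the conditional ones are where the decision-support complexity (and the adoption risk) comes in.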
This got me thinking about potential under-appreciated applications of checklists in other high-risk endeavours, such as investing and trading. Many people have processes for screening and selecting stocks with greater and lesser degrees of rigor. Externalizing a process into a checklist makes it easier to execute in a disciplined manner. Indeed, there is already someone selling a software package called Checklist Investor, which comes with a set of checklists outlining investing workflows such as Graham’s Intelligent Investor. One can take the algorithm-driven approach to its logical extreme and use pure quant strategies, but there’s clearly a place for art and instinct in investing, just as there is in the ICU. I think there is a potentially useful distinction between using algorithms for the constructive part of a process (such as stock selection) and using them as checks to avoid situations that could turn into major disasters – and avoiding those major disasters can have an enormous effect on performance.
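The “checks, not construction” idea might look like this in code: whatever process selects a candidate, run it through a veto list before committing capital. The criteria and thresholds are made up for illustration:

```python
# Hypothetical sketch of disaster-avoidance checks: the constructive stock
# selection happens elsewhere; this only vetoes. Criteria are invented.

VETO_CHECKS = [
    ("Debt/equity above 3",         lambda s: s.get("debt_to_equity", 0) > 3),
    ("Negative operating cash flow", lambda s: s.get("op_cash_flow", 0) < 0),
    ("Position exceeds 5% of book",  lambda s: s.get("position_pct", 0) > 5),
]

def vetoes(stock: dict) -> list:
    """Names of any disaster checks this candidate trips."""
    return [name for name, trips in VETO_CHECKS if trips(stock)]

candidate = {"debt_to_equity": 4.2, "op_cash_flow": 10}
print(vetoes(candidate))  # ['Debt/equity above 3']
```

Because the veto list never picks stocks, it leaves full room for art and instinct in selection while still catching the situations most likely to become disasters.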
I spent much of Sunday at Devhouse Boston, hacking away at a clustering project for alluvio.
It’s amazing how productive one can be while crammed into a room full of other people working… I ought to spend more time in the library.
Also learned about another interesting startup on the other coast, persai.
Facebook is strong magic, and is weaving itself deep into daily life. I have been delighted to discover how eager old friends are to re-connect, and I’m pleased that I’m not the only one.
Why does Facebook so effectively capture our attention? And are there general principles to learn that might be applicable to other sorts of open, collaborative social media? While sociological and marketing factors were critical, it runs deeper than that. I’ll sketch out four characteristics of the facebook ‘protocol’ that I think contribute to its success:
- scalable message size
- asynchronicity
- information asymmetry
- randomness
Scalable message sizes
The ‘Poke’ is the ultimate microchunked social interaction. It’s a 1-bit message that manages to eliminate all explicit semantic content, yet remains gratifying because it pushes our built-in attention-seeking buttons. But one of the magical things about facebook is that it scales seamlessly from an add, to a poke, to a wall post, to a message, to more messages, to flying across the continent to catch up with an old friend (or flame).
Asynchronicity
Asynchronous communication is key to why facebook works (as Zuckerberg mentioned in his f8 talk). Scott Karp proposes that Facebook fills a niche by providing asynchronous one-to-many communication. It’s more than just providing an effective platform, though – what Facebook (and other things, like Twitter) does is take interactions that would once have been synchronous and private and make them asynchronous and public. Status updates, for example, might once have been passing spoken comments. But by broadcasting these casual comments into persistent public view, the probability that something will be of interest to someone else gets integrated over people and time. Similarly, many wall posts might once have been emails, but are now gratuitously and permanently public.
Information asymmetry
Information is highly asymmetric in facebook. While friendship is reciprocal, searching (or rather, ‘facebook stalking’) is not, and I have a hunch that such searches are not entirely uncommon. (It’s not just me, right?) Information asymmetry facilitates discovery, but by letting users hold back other data, the scarcity of nonpublic data creates value in being facebook friends. By giving users full and gradated control over profile privacy, facebook allows each user to find and define their own optimal level of disclosure while preserving some of their value for actual friends.
Randomness
Much (most?) of the information one reads in the news feed is very random. Why do I care that my friend in Japan is baking Laugenbrötchen right now? Yet I do care. Encouraging random data provides two things: a starting point for low-energy-barrier, casual conversations, and a noise level that provides a constant stream of data which occasionally resonates (such as when I realized that a friend was going to be in LA at the same time as me, and we ended up catching up for drinks at Bar Lubitsch). This randomness needs to be controlled, however; as much as I’d like to keep up with all my friends, I can’t deal with my blackberry buzzing with every single baking project. (Yet somehow I can sometimes quite obsessively check the news feed…) Random data works much better asynchronously, at least until it is intelligently prioritized and geographically filtered.
None of the above characteristics is necessarily unique to facebook, nor is this list complete, but perhaps it offers a few dimensions along which to understand and design social media – even in very different domains. What else am I missing?