The mission of science is to improve quality of life.
Accelerating our ability to build knowledge - to generate new insights and innovations - is one path towards discoveries that improve quality of life. Another is to support better quality of life for oneself, in order to have the space to focus on creative enterprises like discovery and innovation.
Both these threads have to do with information: managing it, processing it, creating it, sharing it, and sometimes, getting away from it to create more of it.
Most of us understand that, as a society, we eat too much, and that health problems related to being even just overweight are costing us dearly - both in our quality of life now, and especially down the road. Knowing that, why don't we just change? Eat carrots instead of cake?
1. Change is Pain. Physiologically, it hurts - it means our brains have to spend energy and fuel resources to adapt.
2. We're wired to go for richer foods when stressed.
So change is hard, and even harder when it goes against ancient wiring.
What if, instead of telling people what is good for "them", we actually designed smart information systems to support what they do, where they do it, when they do it? What if, by that, we could virally introduce alternatives based on current practice, and get the resources to do that right into each other's hands?
In the UK, Jamie Oliver and Co. have been developing programs to help inner-city school kids eat better, and folks across the nation cook better at home. But there's only one Jamie. What if we could provide that kind of support to enhance what we're eating and how we prepare it, in a safe, easy way - to reduce threat, reduce the pain of change, and improve uptake, one step at a time?
We already have much of the technology and the information to take what someone's eating, how they like to eat, and what they like to bring home, and use that as a foundation for (a) what to do with it, (b) how to enhance it for healthfulness, and (c) how to give as much step-by-step guidance around it as possible.
Here's a challenge that has driven me, and that a bunch of us agreed to put forward at a recent NSF workshop on information seeking strategies:
Imagine a single mother on a limited income with a couple of children to raise. She comes home and says to her computer, that's it, i want a better job. Help.
Right now, how would the computer, the internet, the Web, help this person in her quest for a better job? We can imagine that all sorts of useful information is online: about courses that could be done from home; about types of careers that may be of interest; about what kinds of job opportunities there would be, what the job requirements and hours are like, or what being entrepreneurial might be like. Sure, lots of information is out there on the Web, but how does someone get at it? How might it be better integrated, so that if the computer knows about, say, education from one source and personal interests from another, it can do what a professional skills trainer might: put some of this information together to say, aha! here are some programs on X that would help get you to Y, where you could do Z.
If this person interacts with the Web even now, the computer has a log of the kind of information that intrigues her, or at least that she visits regularly. Software could potentially use that as the basis for soliciting interests; engaging with her to narrow these down; exploring training opportunities from government and private-sector sources; and giving the reality breakdown on paths for progression, expected hours, and annual income.
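As a very rough sketch of that first step - soliciting interests from a visit log - imagine the browser history had already been run through some topic classifier (the classifier, the log format, and the topics here are all invented for illustration):

```python
from collections import Counter

# Hypothetical visit log: (url, topic) pairs, as if each visited page had
# already been labeled with a topic by some assumed classification step.
visit_log = [
    ("example.org/courses/nursing-intro", "healthcare"),
    ("example.org/jobs/medical-assistant", "healthcare"),
    ("example.org/articles/start-a-business", "entrepreneurship"),
    ("example.org/courses/nursing-online", "healthcare"),
]

def solicit_interests(log, top_n=2):
    """Rank recurring topics as candidate interests to confirm with the user."""
    counts = Counter(topic for _url, topic in log)
    return [topic for topic, _count in counts.most_common(top_n)]

print(solicit_interests(visit_log))  # ['healthcare', 'entrepreneurship']
```

The point is only that the log seeds a conversation - the system proposes candidate interests for the person to confirm or reject, rather than presuming to know them.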
This kind of problem has data integration aspects to it, but it is also a huge interaction problem. By interaction, i mean: how will the information be presented to the person so she can interact with it, engage with it, explore it - to build new knowledge?
We have at the moment what WSRI founders call the Web of Linked Data: vast arrays of information across heterogeneous data sources, but it is not yet sufficiently linked up to support these kinds of related queries across heterogeneous sources in meaningful ways. The cost of doing this kind of job/training/possibility exploration is so high, so time consuming, that the person it would best enable doesn't have that commodity - time - to give it. This is what we mean by the benefit of accelerating access to information in a meaningful, useful way. This is the kind of interaction i want to help deliver.
My work has been looking at information from four perspectives:
Within these explorations have been underlying questions about methods: how to capture requirements for, and design tools for, atypical (in HCI) processes, like science experiments and making tea, or, more recently, atypical (in HCI) considerations, such as designing to support creativity, as described by Ben Shneiderman.
The above is a review of the basis of my research. The following is a quick overview of some of the questions of interest to me in newer work.
Since HAL in *2001: A Space Odyssey*, we've had anthropomorphic, conversational agents imagined as *the* interface for computers. It's hard to find examples in science fiction of files, folders and desktops as the Future Paradigm for computing, but avatars are rife: the Librarian in Stephenson's Snow Crash, the computer Voice in Star Trek: The Next Generation, the AI in any Gibson novel, the Ship in Banks. But what do these Voices actually *do* as interfaces?
We're exploring the idea of the Personal Assistant to see how interactions between PA and Boss work and iterate on certain exchanges, and to see whether we're at a point where similar interactions and negotiations can be translated to our personal computers. Is it so hard, for instance, to say to the computer, "remind me of this next time i see Xiao"? That's one lightweight but useful example.
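The "remind me next time i see Xiao" interaction can be sketched in a few lines, assuming some presence-detection layer exists that can tell us when a person is "seen" (the class and method names here are invented, and presence detection itself is not shown):

```python
from collections import defaultdict

class Reminders:
    """Minimal sketch: notes keyed on a person, fired when that person
    is next 'seen'. Presence sensing is an assumed external service."""

    def __init__(self):
        self.pending = defaultdict(list)

    def remind_when_seen(self, person, note):
        """The 'remind me of this next time i see X' request."""
        self.pending[person].append(note)

    def on_seen(self, person):
        """Called by the (assumed) presence sensor; returns due reminders,
        clearing them so each fires only once."""
        return self.pending.pop(person, [])

r = Reminders()
r.remind_when_seen("Xiao", "ask about the draft")
print(r.on_seen("Xiao"))  # ['ask about the draft']
print(r.on_seen("Xiao"))  # [] - already fired
```

The hard part, of course, is not the bookkeeping but the sensing and the negotiation around it - which is exactly the PA/Boss interaction we want to study.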
While voice interaction as conversation may still be some way off, our hypothesis is that there is now enough information in the environment, across the strata of personal, social, and private information, that sufficient context is available to join up and make more PA-like, supportive interactions possible.
So what do they look like, and how do we engage with them?
In previous projects such as Signage and Coolbeans, we've looked at how awareness in a work context might enhance social interaction. In other words, does knowing about the availability of others via some digital representation encourage real-time *physical* interaction, to get at the benefits of social interactions at work? Related to this, we're investigating how such approaches might work in other awareness-monitoring contexts.
In work just starting with Desney Tan and Ryen White of MSR, along with PhD student Paul Andre, we're looking at how we might present health status information to folks. The idea is that if people can monitor the status of their health, and hook that up to knowledge sources about what that status means - via, perhaps, a perfect digital health assistant/trainer - they will be better able to improve their health.
Imagine the conundrum of being able to see that one's heart rate is high, that one is verging from overweight into obesity, that there's little time to make a meal, and that the stuff in the fridge is showing up in the monitor as anything but what one might call healthy.
This is not a simple problem: solutions involve helping to interrogate priorities and create space for new ones; showing what better food choices within a budget might be; and connecting those choices with fast recipes for that food. This scenario represents an important information integration problem as well as a strategy problem: how to help the person identify an approach to better health that will work for them, and then provision the tools to support those choices.
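The integration piece alone can be sketched as a join over three invented sources - a recipe knowledge base, a fridge inventory as a monitor might report it, and a time budget (all of the data and names below are illustrative, not from any real system):

```python
# Hypothetical knowledge source: recipes annotated with time, ingredients,
# and a (much-simplified) healthiness flag.
recipes = [
    {"name": "veg stir fry", "mins": 15,
     "needs": {"broccoli", "rice"}, "healthy": True},
    {"name": "mac & cheese", "mins": 20,
     "needs": {"pasta", "cheese"}, "healthy": False},
]

# Hypothetical fridge inventory, as a monitor might report it.
fridge = {"broccoli", "rice", "cheese"}

def suggest(recipes, fridge, max_mins):
    """Join health knowledge, inventory, and time budget into suggestions."""
    return [r["name"] for r in recipes
            if r["healthy"] and r["mins"] <= max_mins
            and r["needs"] <= fridge]  # all ingredients on hand

print(suggest(recipes, fridge, max_mins=20))  # ['veg stir fry']
```

The sketch deliberately ignores the harder strategy problem - budgets, priorities, taste, trust - which is where the real design work lives.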
In other words, the question is not just what the effect of being able to view health status may be, but also: once we know what information desires that awareness triggers, how can those desires be supported?
In another context, with Ken Woods at MSR Cambridge and PhD student Dan Smith, we're looking at peripheral awareness of one's own ideas. This is based on Ken's work displaying notes taken on a tablet and re-presented on one's screen saver. This presentation has already proven useful for folks, but what are the features of interaction that would support better utilization of an idea when it's redisplayed?
This kind of question is an extension of the earlier Hunter Gatherer work, which looked at capturing components of web pages by address rather than by copying, in order to create rapid collections of whatever factoids were of interest for sharing out. But not everything lives on the web or in the cloud. What would we want in order to support gathering for ourselves, and versioning for sharing? How can we anticipate possible tasks? When tagging is usually too heavyweight, how can we find a lightweight way to bookmark the future?
These are questions Steve Drucker of Live Labs and i have been considering.
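The capture-by-address idea at the heart of this can be sketched simply: store a pointer (address plus selector) and an optional forward-looking note, rather than a copy of the content. Everything below - function name, record fields, the selectors - is invented for illustration:

```python
from datetime import datetime, timezone

def gather(url, selector, note=""):
    """Record a pointer to a page component (address + selector) instead
    of a copy, keeping the collection light and the source authoritative.
    The optional note is a lightweight 'bookmark the future' hint."""
    return {
        "url": url,
        "selector": selector,          # e.g. a CSS selector or anchor
        "note": note,                  # why future-me might want this
        "gathered_at": datetime.now(timezone.utc).isoformat(),
    }

collection = [
    gather("https://example.org/report", "#figure-3", note="for Tuesday's talk"),
    gather("https://example.org/news", "p.summary"),
]
print([item["url"] for item in collection])
```

The note field is the interesting design question: it must cost almost nothing to write now, yet be enough to resurface the item at the right future moment.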
All of the above topics plug into notions of improving Quality of Life - to automate better, to involve us less in the mundane, and to liberate us more to create, discover and innovate. While investigating quality of life, i've become increasingly interested in the components involved in being an effective physical human being, and in how physical wellness ties into mental ability. To this end, i've been studying physiology, including both physical and nutritional performance. I've also been learning correct techniques for training folks physically and for coordinating diet with activity for well-being. The tangible outcomes of these activities are a number of certifications, like the NSCA CSCS, and several activities in our school more generally, such as the IAMGeekFit blog, forum and mailing list. The motivation is simple: if we want to design to support better quality of life, we need to know how to achieve it - model it, perhaps - before we can design effective support for it, no?
photo: dec 2011