Overreactions and decisions

The SILS MSIS curriculum requires a master's paper or project, and the professors of even the core required classes encourage students to begin thinking early about likely topics. Fortunately, you can review a database of previous master's papers from SILS graduates to gauge the scope and treatment of various topic areas. As a result, I'm always on the prowl for good topics, for others if not for myself (I may have my own gem of a topic, but it's too early to talk about it now). Earlier this year, I ran across the following Schneier on Security blog posting on the public overreaction to rare risks, written in response to the Virginia Tech shootings. It's a sobering testament to how human we are--which is a mixed blessing, in this case.

I was especially struck by the following comment on the post:

As a student of behavioral decision making, I see irrational decisions made on a regular (and unfortunately, in many cases, predictable) basis. And as you alluded to, the reactions to these can often lead to ridiculous policies and unproductive debate over preventing the effects, not the causes. However, there is something so human about these errors that seems to be impossible to overcome. The real next frontier, in my opinion, is to understand these biases better, and to use them (perhaps through policy) to aid in productive, positive decision making.

The world of economics has its own problems with this, since so many of its models assume rational consumers. Define "rational." (Today I spent half an hour in Circuit City looking at stuff so I could spend a $25 gift certificate, only to discover at the counter that it was a Best Buy certificate.)

So, in relation to research for a master's paper, consider: how much information does a user need to absorb before making a decision? That topic has surely been done to death. But even if you take in just enough information, not too much, when does information overrule emotion in the decision-making process? Can it ever? How can you measure the before and after of an emotional (i.e., unconscious or reactive) decision? Or could you build an interface or algorithm that either accommodated users' unique mixes of rational/irrational, naive/experienced, emotional/logical, etc., or confronted them with the results of their choices? How do you build in bias when the user wants it, but leave it out when the user needs it left out?