The last few weeks, I’ve been very busy with the logistics of organizing a mid-size conference that will be held this summer. Some of the decisions, like the city and the approximate timing, have been made for me, but I’ve had to choose between competing hotels and make a schedule of talks, workshops, symposia, keynotes, breaks, and social events. By and large, these are financial decisions: hotels offer different packages, some keynote speakers would have greater travel expenses, and catering offered during breaks can vary a lot in price.
In putting together the conference budget, I find myself having to carefully weigh decisions that all involve spending other people’s money. Do I raise the registration fee by choosing the hotel with the conference package that charges more for meeting rooms, or do I offload more of the cost on the attendees after the fact by choosing the hotel with slightly more expensive guest rooms instead? I know in advance that I’ll have to budget for some of the bare necessities, such as meeting rooms, projection equipment, and coffee – but am I justified in splurging communal funds on frivolities like a nice opening reception, color-printed booklets, or tea? I can tell you exactly what these things cost in Canadian dollars, but what is the actual value of having a nice opening reception, and how should I weigh it against the value of all the other moving parts of a conference budget?
A recent article in Psychonomic Bulletin & Review by Michael Brusovansky, Moshe Glickman, and Marius Usher takes this question to the next level: suppose that we know, exactly, the utility value and relative weight of each attribute of two competing options. That is, suppose that I am choosing between the competing conference packages of two hotels.
The table below shows a comparison of two hotels on four relevant attributes. The “weight” column shows the importance I attach to each attribute, and the two rightmost columns show how high each hotel scores on these attributes. Note that on the most important attribute, the cost of the package, the two hotels score the same, but on the other attributes they differ – sometimes by a lot.
Given these well-specified weights and utility values, I can easily calculate my preference for Hotel A over Hotel B: it is 10 × (3 - 3) + 6 × (5 - 7) + 3 × (9 - 4) + 1 × (7 - 2) = 0 - 12 + 15 + 5 = 8. Since the result is positive, Hotel A wins. This is the normatively correct approach, known as weighted additive utility (WADD).
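As a sanity check on that arithmetic, the WADD rule is easy to write down in code. This is a minimal sketch using the weights and hotel scores from the example above; the function name and the list-based representation are my own, not the authors’.

```python
# Weighted additive utility (WADD): sum the weighted attribute differences.
# Weights and scores are taken from the hotel comparison above.
weights = [10, 6, 3, 1]
hotel_a = [3, 5, 9, 7]
hotel_b = [3, 7, 4, 2]

def wadd_preference(weights, option_x, option_y):
    """Weighted utility of option_x minus that of option_y.
    A positive result means option_x is preferred."""
    return sum(w * (x - y) for w, x, y in zip(weights, option_x, option_y))

print(wadd_preference(weights, hotel_a, hotel_b))  # 8, so prefer Hotel A
```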
Brusovansky and colleagues considered the scenario where a decision is made under time pressure, so that it is difficult to do the mental arithmetic required to arrive at an optimal choice. Humans performing under these conditions in the lab can exhibit a handful of alternative, heuristic strategies. One such strategy is to consider only the most important attribute on which the alternatives differ. In the hotel example, the most important attribute is “Cost of package”, but since the hotels tie on it, we move on to “Quality of meeting facilities” and choose Hotel B. This fast and frugal strategy is known as take the best (TTB). Such non-compensatory strategies (i.e., those that consult only a single attribute, so that a strength on one dimension cannot make up for a weakness on another) are computationally easy but can lead to suboptimal decisions, as illustrated in the figure below.
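For contrast, here is a minimal sketch of the TTB heuristic applied to the hotel example; as before, the function name and representation are illustrative, not taken from the paper.

```python
def ttb_choice(weights, option_x, option_y):
    """Take the best (TTB): decide on the single most important
    attribute that discriminates between the options; ignore the rest."""
    # Inspect attributes from most to least important.
    order = sorted(range(len(weights)), key=weights.__getitem__, reverse=True)
    for i in order:
        if option_x[i] != option_y[i]:
            return "x" if option_x[i] > option_y[i] else "y"
    return "tie"  # no attribute discriminates

# Hotel example: cost (weight 10) ties, so the next attribute decides.
# TTB picks Hotel B ("y"), even though WADD favors Hotel A.
print(ttb_choice([10, 6, 3, 1], [3, 5, 9, 7], [3, 7, 4, 2]))  # "y"
```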
However, even under such restrictive conditions, human participants often manage to come to normatively correct decisions: they appear to use a compensatory strategy that is both accurate and fast, performing numerical averaging at high speed with good accuracy. This leads directly to the question addressed by Brusovansky and colleagues: are human participants able to execute rapid and accurate weighted averaging in these multi-attribute speeded-choice scenarios?
Participants in the study were not asked about conference rooms and canapés, but had to choose between two fictional job candidates described on three to five attributes, such as intelligence, work ethic, and creativity. Participants were also given the importance (i.e., the weight) of each attribute for the job and were asked to reach a decision within a short time limit. After the experiment, participants were statistically classified into one of three potential response strategies: the normative WADD, the frugal TTB, and a middle-of-the-road alternative that ignores the attributes’ weights, simply summing the raw scores, but switches to TTB in case of a tie (the equal weights rule with TTB; EQW-TTB).
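To make the middle-of-the-road strategy concrete, here is a minimal sketch of EQW-TTB, again on the hotel numbers; the names and the list-based representation are my own, not the authors’.

```python
def eqw_ttb_choice(weights, option_x, option_y):
    """Equal weights rule with TTB tie-breaking (EQW-TTB):
    compare unweighted sums of attribute scores; only if those
    sums tie, fall back to the take-the-best heuristic."""
    diff = sum(x - y for x, y in zip(option_x, option_y))
    if diff != 0:
        return "x" if diff > 0 else "y"
    # Unweighted sums tie: take the best discriminating attribute.
    for i in sorted(range(len(weights)), key=weights.__getitem__, reverse=True):
        if option_x[i] != option_y[i]:
            return "x" if option_x[i] > option_y[i] else "y"
    return "tie"

# Hotel example: unweighted sums are 24 (A) vs 16 (B), so no tie-break
# is needed and EQW-TTB also picks Hotel A ("x").
print(eqw_ttb_choice([10, 6, 3, 1], [3, 5, 9, 7], [3, 7, 4, 2]))  # "x"
```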
Interestingly, a majority of participants (59%) were classified as using the compensatory WADD strategy even in this complex scenario. Only 29% used the TTB strategy. WADD users showed higher accuracy, and this did not come at the expense of slower response times. The difference between WADD users and TTB users (as classified) occurs where it would be expected: WADD users are well calibrated to the task and use the appropriate attribute weights, whereas TTB users tend to give greater weight to a single attribute.
The article does not end there. Moving more strongly towards model-based inference, the authors extended each strategy’s predictions into the reaction-time domain. Following the logic of each strategy, under TTB the reaction time should depend only on whether the highest-weighted attribute is tied between the two choice alternatives. Conversely, according to WADD the reaction time should depend on the difficulty of the decision (i.e., the total weighted difference between the alternatives across all attributes). Both predictions were borne out in the data: WADD users slow down for difficult choices, and TTB users slow down when there is a tie on the top attribute. Compared to the calibration finding, which was really more of a confirmation that the classifier had done something sensible, this finding is quite a bit more impressive, since the reaction-time data were not used in making the diverging model-based predictions.
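The diverging reaction-time predictions can be sketched as simple difficulty signals. The specific proxies below (number of attributes inspected for TTB, absolute weighted difference for WADD) are my own illustrative choices, not the authors’ model.

```python
def ttb_steps(weights, option_x, option_y):
    """TTB difficulty proxy: how many attributes, from most to least
    important, must be inspected before one discriminates. More steps
    (e.g., a tie on the top attribute) should mean a slower response."""
    order = sorted(range(len(weights)), key=weights.__getitem__, reverse=True)
    for steps, i in enumerate(order, start=1):
        if option_x[i] != option_y[i]:
            return steps
    return len(weights)

def wadd_margin(weights, option_x, option_y):
    """WADD difficulty proxy: the absolute weighted difference between
    the alternatives. A smaller margin should mean a slower response."""
    return abs(sum(w * (x - y) for w, x, y in zip(weights, option_x, option_y)))

# Hotel example: TTB must inspect two attributes (the top one ties),
# while WADD faces a modest weighted margin of 8.
print(ttb_steps([10, 6, 3, 1], [3, 5, 9, 7], [3, 7, 4, 2]))    # 2
print(wadd_margin([10, 6, 3, 1], [3, 5, 9, 7], [3, 7, 4, 2]))  # 8
```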
Classifying participants into response-strategy groups has a number of interesting advantages. Most importantly, it avoids an aggregation fallacy in which qualitatively different behavioral patterns are averaged into a nonsensical pattern that resembles no single human’s behavior. Second, because the response strategies are strongly codified, unique patterns of behavior, each strategy is essentially a predictive model, allowing for stronger inference.
Zooming out to the bigger picture, we can look at the recent development of the parallel constraint satisfaction (PCS) mechanism, which provides a plausible mechanistic account of a fast and automatic decision process. Taking the PCS mechanism together with the current set of behavioral observations makes a strong case for how the intuitive decision maker operates.
For what it’s worth, I chose Hotel A. I preferred it not so much because it had the highest weighted utility but because it was the only option that scored an “adequate” on all dimensions – a “maximin” strategy. The conference is the joint meeting of the Society for Mathematical Psychology and the International Conference on Cognitive Modeling. Register via smp19.ca, or come see us at the next Psychonomics meeting.
Psychonomic article featured in this blogpost:
Brusovansky, M., Glickman, M. & Usher, M. (2018). Fast and effective: Intuitive processes in complex decisions. Psychonomic Bulletin & Review, 25, 1542-1548. DOI: 10.3758/s13423-018-1474-1.