Yuval Salant of Northwestern University’s Kellogg School of Management notes that research by psychologists and behavioral economists has established that humans exhibit predictable biases in making decisions.
He created an algorithm-based mathematical model of how a machine would make choices with limited information.
Some automatons make the same types of predictable errors as humans, including the “primacy” effect (choosing one of the first items on a list) and the “recency” effect (selecting the last item on a list).
One of Salant’s automatons is based on the decision-making strategy known as “satisficing”: establishing in advance the criteria an option must fulfill to be selected.
This type of decision-making may apply when selecting a meal, a residence, a vehicle, a vacation, or a mate.
These three decision-making tendencies might be considered short-cuts, or heuristics, to avoid the exhaustive task of thoroughly analyzing every possible option.
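Satisficing can be sketched in a few lines of code. The following is an illustrative toy, not Salant’s actual model: the chooser scans options in the order presented and accepts the first one that clears a pre-set bar, which is also why it can reproduce the primacy effect.

```python
def satisfice(options, is_acceptable):
    """Return the first option meeting the criterion, else the last one scanned."""
    last = None
    for option in options:
        last = option
        if is_acceptable(option):
            return option
    return last  # nothing met the bar; settle for the final option

# Example: pick the first restaurant rated at least 4.0.
restaurants = [("Diner", 3.5), ("Bistro", 4.2), ("Cafe", 4.8)]
choice = satisfice(restaurants, lambda r: r[1] >= 4.0)
# "Bistro" is chosen and "Cafe" is never examined, even though it is rated higher.
```

Note that the outcome depends on the order of the list, not only on the options themselves, which is exactly the kind of predictable, order-driven error described above.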
As a result, computer scientists surmise that this type of “rational” (thorough) decision making does not scale for large problems, due to limitations of processing power and memory.
The same may be true for human decision-making in light of limitations to “working memory” (correlated with IQ), not to mention inevitable time constraints.
Salant’s most human-like automaton is a “history-dependent satisficer,” which may remember previously considered options and may modify its decision criteria based on the options available.
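A history-dependent satisficer might look like the following sketch. The memory structure and the specific adaptation rule (relaxing the aspiration level after each rejection) are assumptions made for illustration; Salant’s actual automaton is specified differently.

```python
def history_dependent_satisfice(options, initial_threshold):
    """Accept the first option meeting an adaptive threshold; else fall back on memory."""
    seen = []                      # memory of previously considered options
    threshold = initial_threshold
    for value in options:
        seen.append(value)
        if value >= threshold:
            return value, seen
        # Adapt: after rejecting a weak option, relax the criterion slightly.
        threshold *= 0.95
    return max(seen), seen         # settle for the best remembered option

# With an aspiration level of 90, the chooser rejects 60 and 70,
# lowering its threshold each time, and then accepts 85.
choice, history = history_dependent_satisfice([60, 70, 85], initial_threshold=90)
```

The key difference from plain satisficing is the memory: because earlier options are retained, the automaton can revisit and select one of them when nothing later clears the bar.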
He points to examples that support the decision biases he identified: people are more likely to vote for candidates who appear first on a ballot, to order one of the first items on a menu, and to click on options at the top of a computer screen, such as airline or hotel listings.
- What decision biases do you experience?
- How do you neutralize your potential decision biases?