Without first calling attention to context-dependent semantics, and to the difference between static-local and dynamic-aggregate context, it's impossible for a population of our size to have a coherent policy discussion.
Allowing confused semantics in fiscal policy discussions is exactly how we strayed into our current muddle, even after initially going off the gold standard way back in 1933.
Are you frustrated, wondering why we're reliving a problem we solved in 1933?
Are you also wondering why it's so hard to explain the floating value of fiat currency to people fixated on static, commodity value?
Are you becoming aware of the horrifically disorganized semantics displayed in any group discussion of fiscal & monetary policy?
Our goal is coherent system tuning - not heroic, individual effort.
Coherent system tuning depends upon organized campaign design.
A tuning campaign has to first recognize the boundaries of the system to be modified - otherwise the system will respond in ways the campaign couldn't have imagined. Does that sound like the current, degraded state of our politics? There is a better way, so let's look for one.
Let's work backwards through the list above, using the operations of linguistic semantics as a reference point for campaign design.
Campaign design for a growing economy requires Semantic Mobilization.
Since we re-use words & meanings so fluidly, semantic mobilization simply means sharing enough group-wide discourse to know when given semantics apply to what concepts and tasks, local or aggregate-wide. This isn't rocket surgery. Shucks, given rational discourse, it's not even taxing! :)
Let's ignore the vague definitions of both bias & intelligence; the approach itself is subtly misleading. A core, maladaptive mistake occurs in sophistry when broadcaster and audience presume that they're using the same semantic paradigm.
Logic itself is the search for consistency in explaining unpredictable reality - the reliable mapping of theory to discovered operations. In the process it selects sense from nonsense.
The book "Here's Looking at Euclid" provides a useful lesson by comparing findings from math & linguistics. One key point is that many, if not most, "natural" sensory operations parse and encode context by relying upon signal ratios, not absolute calculations based on the arbitrary construct of an equally spaced measuring system. You have to expect that the same lean signal-parsing characteristic occurs for all semantic parsing, not just for number systems.
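The ratio-vs-absolute point can be made concrete with a toy sketch (the function names are mine, not the book's; the log mapping follows the classic Weber-Fechner model of sensory coding):

```python
import math

def absolute_step(a, b):
    """Difference on an equally spaced ('ruler') scale."""
    return b - a

def ratio_step(a, b):
    """Difference on a ratio (logarithmic) scale: equal ratios
    map to equal steps, regardless of absolute magnitude."""
    return math.log(b / a)

# 10 -> 20 and 100 -> 200 are very different absolute steps...
print(absolute_step(10, 20), absolute_step(100, 200))   # 10 vs 100
# ...but identical ratio steps, which is how many sensory systems
# appear to encode change relative to ambient context.
print(math.isclose(ratio_step(10, 20), ratio_step(100, 200)))  # True
```

The point of the sketch: a ruler-based encoding must carry the absolute context along with every signal, while a ratio-based encoding makes the ambient context implicit - a much leaner parse.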
It all boils down to how our brains actually re-map representations of changing contexts. Given that actual neural networks report on pattern associations as a shortcut to formal calculation, it's easy to postulate that the questioners are the responsible parties when it comes to coherent Q&A.
The whole purpose of language & semantics - and parsing them - is to free us from the tyranny of disorganization, not to see who can confuse whom.
When inconsistencies appear in dialogue, it's because we're so frequently asking context-dependent questions using QUESTIONABLE METRICS, usually in an inappropriate paradigm - i.e., those we were arbitrarily over-trained to use, after someone else developed them for some completely different sub-context. Everywhere you look we have dynamically degenerate semantics - rather like using the same 5-to-7 variable names in every software program you write.
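The variable-name analogy is easy to show in miniature (a hypothetical snippet of my own): the same symbol re-used in unrelated contexts, so nothing in the name itself tells a reader which paradigm is live.

```python
def annualize(rate):
    # here 'rate' means interest per month (a dimensionless ratio)
    return (1 + rate) ** 12 - 1

def shipping_cost(kg, rate):
    # here 'rate' means dollars per kilogram (a price)
    return kg * rate

# Identical symbol, incompatible units; only surrounding context
# disambiguates - exactly the "degenerate semantics" problem.
print(round(annualize(0.01), 4))   # 0.1268
print(shipping_cost(4, 2.5))       # 10.0
```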
A famous example is the Nobel acceptance speech by Max Planck. He noted that Ludwig Boltzmann was previously aware of the "Boltzmann Const", but that no one had bothered to define its value. Planck went on to derive its value, in common units, but then went even further. He'd realized that by sampling different, newly defined scientific units - & plotting the position of the Boltzmann ratio within different coordinate sets - you could make Boltzmann's Const be 1, or any value you wanted. This example is instructive for novices surprised to hear that there's a difference between currency issuers and currency users. Yes, Tommy, data really is meaningless without context.
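Planck's unit trick can be sketched in two lines, taking E loosely as the characteristic thermal energy scale:

```latex
% In SI units the thermal energy scale reads:
E \;=\; k_B T, \qquad k_B \approx 1.380649 \times 10^{-23}\ \mathrm{J\,K^{-1}}
% Redefine the unit of temperature (measure T directly in energy
% units, as "natural" unit systems do) and the identical law becomes:
E \;=\; T, \qquad k_B = 1
% Same physics, different coordinate system: the constant's "value"
% lives in the unit choice, not in nature.
```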
The more you look at our use of semi-static semantic variables in very dynamic context-space, the more it's apparent that our own semi-static semantics are rate-limiting for our own aggregate operations, which are diverse & dynamic.
Amazingly, there are multiple old books on this very topic as well, e.g., "The Tyranny of Words" by Stuart Chase. He was one of the architects of the FDR New Deal, and realized that much of their policy confusion stemmed from non-scalable semantics. Politics as we know it won't scale simply because it's trying to map semi-static, personal semantics to dynamic, aggregate contexts. Everywhere you look, we need to "dynamically re-define our terms" for multiple, overlapping and changing contexts which can simultaneously scale from local to aggregate-wide.
À la Max Planck, there's a conclusion here that is completely orthogonal to the paradigm suggested by cognitive psychologists. Smart is a vague term implying hyper-aware context scouts, not dependable, predictable Luddites. A resilient system needs both, just as explorers need a conveniently unchanging foundation to stand upon. Even though there's no consensus, smart people are generally considered to be those with more distributed neural-net mappings & associations. Yes, they're more aware of & hence more easily recruited to subtle or weak associations. The underlying reason, however, is not a mystical "bias." Rather, it's due to the wider breadth and mismatched kinetics of waxing/waning synaptic connection patterns across our huge population of CNS neurons. Similar patterns - though not yet as robust - are expressed in the patterns of discourse expressed nationwide.
Resilient group intelligence gracefully handles semantic exceptions.
Semantic "bugs" in our neural-net operations have been recognized as sophic tricks since the time of Socrates & the Sophists. Adaptive Q&A depends on the distribution & adoption of context-dependent semantics. Managing the utility of that distribution is very similar to appropriately identifying fixed vs dynamic addresses & values in complex software programs. Much information in any optimally engineered system is made implicit by context, thereby achieving substantial resiliency and lower overhead. In an adaptive race, there has not yet been any escape from the need to engineer in as many - but no more - short cuts & hacks as we can get away with.
At any rate, there's a general solution for people constantly bothered by the sloppiness of translating across old-to-emerging and local-to-aggregate slang & formal semantics.
For anyone asking a question, it's a key mistake to presume that the audience is sharing your semantic paradigm. There's many a slip 'twixt the ear & the interpretation, not to mention the calculated implications.
All this still doesn't tell us which came first, psychology of individual stupidity, or the non-scalability of some psychology-specific paradigms. Yet it does make us better aware of the distributed subtlety of our policy context!
For example, for non-psychologists, the article "Cognitive Sophistication Does Not Attenuate the Bias Blind Spot" needs a more generally useful title. The following seems more useful to a diverse aggregate. "If you ask sloppily phrased questions of audience members from diverse contexts, using questionable semantics, you'll get a broad range of initial answers." However, that title might not sell as many books for psychologists. That brings up the topic of willfully - not just innocently - broken semantics, i.e., Bernays propaganda.
As Max Planck noted, ~100 years ago, parsing & leveraging coherent information exchange depends upon adaptive selection of an optimal coordinate system for signal coding & decoding.
Where does this leave 312 million people trying to seamlessly fuse diverse, developing personal paradigms with an aggregate paradigm that is also dynamic and developing? Hopefully with one starting reference which we can build upon as a bedrock principle. We're constantly mixing semi-static local semantics AND dynamic aggregate semantics. That's a problem mathematically inherent to every growing network, organization, culture or market economy. By not recognizing and mitigating that cost-of-coordination, we constrain our own rate of mobilization and limit our aggregate coordination. Bypassing our return-on-coordination is the primary cause of our ballooning Output Gap.
We don't have to wring our hands over imagined cognitive biases, or console ourselves that ignorance represents things "no one could have predicted." To reduce our Output Gap, we need to be comfortably aware that diverse semantics is a minor cost of a resilient culture. Comically, psychologists also know this task by another name, the "Fallacy of Composition*," which is more aptly named the "Fallacy of Scale." Nevertheless, more advanced solutions come not from psychology, but from comparing the perspectives of other "uniquely blind" professions groping the same context from different perspectives.
The common solution to selecting group intelligence, avoiding fallacies of scale, and coordinating degenerate semantics is just adequate Open Source information exchange.
For your info, this was & is widely discussed in biology, systems theory & neural/analog network fields too. Consensus is that no component can ever fully sample, know or wield all the knowledge available from its aggregate. Therefore, "group intelligence" is always held in the body of group-wide discourse, and cannot possibly be expressed by any one citizen or subgroup.
Utility of group discourse is, of course, dependent on group practice, above all else. If a group doesn't actively probe & explore its situation & capabilities, it doesn't sample the possible mappings between the two, and doesn't get good at actually doing the mapping. And it never masters the adaptive use of dynamic semantics. No practice, no semantic mobilization.
We only have to let American policy sample and know more of what Americans know. Yet tempo counts. Since national intelligence is held in the body of population-wide discourse, it's also clear that our national intelligence can wax and wane very quickly, with changing patterns, extent & sampling of group discourse. Our group intelligence can also fluctuate regardless of how much insight or knowledge is stockpiled in un-leveragable subsets of the population. Like any other analog network, the genius of a democratic culture is very much a use-it or lose-it talent.
* example Fallacy of Scale: If one person stands up at a sports stadium, they can get a better view. Ergo, if everyone stands up, they'll ALL get better views! (False)