Wednesday, June 13, 2012


Without first calling attention to context-dependent semantics, and static-local vs dynamic-aggregate context, it's impossible for a population of our size to have a coherent policy discussion.

Allowing confused semantics in fiscal policy discussions is exactly how we strayed into our current muddle, even after initially going off the gold standard way back in 1933.

Are you frustrated, wondering why we're reliving a problem we solved in 1933?

Are you also wondering why it's so hard to explain the floating value of fiat currency to people fixated on static, commodity value?

Are you baffled by orthodox economists refusing to discuss fraud?  Or cost-of-coordination?  Or return-on-coordination?   All are truly ancient topics in other disciplines.

Are you becoming aware of the horrifically disorganized semantics displayed in any group discussion of fiscal & monetary policy?

There's no need to feel alone, or to feel frustrated. Sensing and sampling all aspects of a distributed tuning task is simply a prerequisite for solving it. To tune a complex system, the 1st requirement is an adequately broad conceptual model. Otherwise, all efforts will make things worse rather than better - barring blind luck. Can you imagine wasting frustrating days trying to tune an engine by working on one piston, before realizing that there are 8 pistons involved? (Or even 10 or 12?) Imagine radio astronomers trying to tune coherent signals from a large array of receivers.

Our goal is coherent system tuning - not heroic, individual effort.
System tuning is both difficult & trivially easy. Every sensory system in our body produces coherent signals from large arrays of dumb cells, distributed as sensory receptors, whether tactile, retinal, cochlear, vestibular, taste or smell. Their inter-connectivity wiring just follows some organized principles.

Coherent system tuning depends upon organized campaign design.
A tuning campaign has to first recognize the boundaries of the system to be modified - otherwise the system will respond in ways the campaign couldn't have imagined. Does that sound like the current, degraded state of our politics? There is a better way, so let's look for one.

Let's work backwards through the list above, using the operations of linguistic semantics as a reference point for campaign design.

Campaign design for a growing economy requires Semantic Mobilization.

Since we re-use words & meanings so fluidly, semantic mobilization simply means sharing enough group-wide discourse to know when given semantics apply to what concepts and tasks, local or aggregate-wide.  This isn't rocket surgery.  Shucks, given rational discourse, it's not even taxing! :)

The many inconsistencies producing Sophistry from adaptive semantics are well known, and are frequently mislabeled as mysterious "cognitive biases." In such essays, psychologists note that the frequency of so-called cognitive biases correlates only weakly with so-called measures of individual intelligence.

Let's ignore the vague definitions of both bias & intelligence. The approach itself is subtly misleading. A core, mal-adaptive mistake occurs in sophistry when broadcaster and audience presume that they're using the same semantic paradigm.

Logic itself is the search for consistency in explaining unpredictable reality - the reliable mapping of theory to discovered operations. In the process it selects sense from nonsense.

The book "Here's Looking at Euclid" provides a useful lesson, by comparing findings from math & linguistics. One key point is that many if not most "natural" sensory operations parse and encode context by relying upon signal ratios, not absolute calculations based on the arbitrary construct of equally-spaced measuring systems.   You have to expect that the same, lean signal-parsing characteristic occurs for all semantic parsing, not just for number systems.
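As a hedged sketch of that ratio-encoding point (the log function and the numbers below are illustrative assumptions, not anything from the book): under a ratio-based code, doubling a signal is the same-sized step no matter where you start, while an absolute, equally-spaced code sees the two changes as wildly different.

```python
import math

# Illustrative sketch: ratio-based (logarithmic) encoding vs absolute encoding.
def ratio_step(old, new):
    # A ratio code registers the *proportional* change, not the difference.
    return math.log(new / old)

# Under the ratio code, 1 -> 2 and 100 -> 200 are the same perceived step...
assert abs(ratio_step(1, 2) - ratio_step(100, 200)) < 1e-12
# ...while an absolute code sees them as very different steps.
assert (2 - 1) != (200 - 100)
```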

It all boils down to how our brains actually re-map representations of changing contexts. Given that actual neural networks report on pattern associations as a shortcut to formal calculation, it's easy to postulate that the questioners are the responsible parties when it comes to coherent Q&A.

The whole purpose of language & semantics - and parsing them - is to free us from the tyranny of disorganization, not to see who can confuse whom.

When inconsistencies appear in dialogue, it's because we're so frequently asking context-dependent questions using QUESTIONABLE METRICS, usually in an inappropriate paradigm - i.e., those we were arbitrarily over-trained to use, after someone else developed them for some completely different sub-context. Everywhere you look we have dynamically degenerate semantics - rather like using the same 5-to-7 variable names in every software program you write.
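That software analogy can be made concrete. A minimal sketch (the function names and meanings are invented for illustration): the same token, `rate`, bound to three incompatible semantics in three contexts.

```python
# The same variable name in every program: "rate" means three different
# things in three scopes, and only context disambiguates.
def travel_rate(distance_km, hours):
    rate = distance_km / hours        # here "rate" is a speed, km/h
    return rate

def loan_cost(principal, rate):
    return principal * rate           # here "rate" is an interest fraction

def downsample(signal, rate):
    return signal[::rate]             # here "rate" is an integer stride

assert travel_rate(100, 2) == 50.0
assert loan_cost(1000, 0.05) == 50.0
assert downsample([0, 1, 2, 3, 4, 5], 2) == [0, 2, 4]
```

A reader who carries one fixed meaning of "rate" across all three contexts mis-parses two of them - the semantic shortcut works until it doesn't.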

A famous example is the Nobel acceptance speech by Max Planck. He noted that Ludwig Boltzmann was previously aware of the "Boltzmann Const", but that no one had bothered to define its value. Planck went on to derive its value, in common units, but then went even further. He'd realized that by sampling different, newly defined, scientific units - & plotting the position of the Boltzmann ratio within different coordinate sets - you could make Boltzmann's Const be 1, or any value you wanted. This example is instructive for novices surprised to hear that there's a difference between currency issuers and currency users. Yes, Tommy, data really is meaningless without context.
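The Boltzmann point can be sketched numerically (the 300 K example is an illustrative assumption; the constants are the exact 2019 SI definitions): the constant's numeric value is a property of the chosen coordinate system, not of the physics.

```python
K_B_SI = 1.380649e-23            # Boltzmann's constant in J/K (exact, SI)
EV_PER_JOULE = 1 / 1.602176634e-19

k_in_si = K_B_SI                 # ~1.38e-23 in joules per kelvin
k_in_ev = K_B_SI * EV_PER_JOULE  # ~8.62e-5 in electron-volts per kelvin
k_natural = 1.0                  # "natural" units: measure T in energy units

# Same physical quantity (thermal energy scale at 300 K), different encodings:
energy_si = k_in_si * 300        # in joules
energy_ev = k_in_ev * 300        # in eV
assert abs(energy_ev - energy_si * EV_PER_JOULE) < 1e-12
```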

The more you look at our use of semi-static semantic variables in very dynamic context-space, the more it's apparent that our own semi-static semantics are rate limiting for our own aggregate operations, which are diverse & dynamic.

Amazingly, there are multiple old books on this very topic as well, e.g., "The Tyranny of Words" by Stuart Chase.   He was one of the architects of the FDR New Deal, and realized that much of their policy confusion stemmed from non-scalable semantics. Politics as we know it won't scale simply because it's trying to map semi-static, personal semantics to dynamic, aggregate contexts. Everywhere you look, we need to "dynamically re-define our terms" for multiple, overlapping and changing contexts which can simultaneously scale from local to aggregate-wide.

For a complex system to grow, members eventually have to know their multiple audiences!  Otherwise we never know if we're broadcasting more confusion than clarification.

À la Max Planck, there's a conclusion here that is completely orthogonal to the paradigm suggested by cognitive psychologists. Smart is a vague term implying hyper-aware context scouts, not dependable, predictable Luddites. A resilient system needs both, just as explorers need a conveniently unchanging foundation to stand upon. Even though there's no consensus, smart people are generally considered to be those with more distributed neural-net mappings & associations. Yes, they're more aware of & hence more easily recruited to subtle or weak associations. The underlying reason, however, is not a mystical "bias." Rather, it's due to the wider breadth and mismatched kinetics of waxing/waning synaptic connection patterns across our huge population of CNS neurons. Similar patterns - though not yet as robust - are expressed in the patterns of discourse expressed nationwide.

Resilient group intelligence gracefully handles semantic exceptions.
Semantic "bugs" in our neural-net operations have been recognized as sophic tricks since the time of Socrates & the Sophists. Adaptive Q&A depends on the distribution & adoption of context-dependent semantics. Managing the utility of that distribution is very similar to appropriately identifying fixed vs dynamic addresses & values in complex software programs. Much information in any optimally engineered system is made implicit by context, thereby achieving substantial resiliency and lower overhead.  In an adaptive race, there has not yet been any escape from the need to engineer in as many - but no more - short cuts & hacks as we can get away with.
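The fixed-vs-dynamic analogy can be sketched in a few lines (the names and rates here are invented for illustration): a constant bound once, versus a name re-resolved from context on every use.

```python
TAX_RATE = 0.5                   # fixed: bound once for the whole program run
context = {"rate": 0.5}          # dynamic: looked up per call, free to change

def fixed_total(price):
    return price * (1 + TAX_RATE)

def dynamic_total(price):
    return price * (1 + context["rate"])  # re-resolved on every call

assert fixed_total(100) == 150.0
assert dynamic_total(100) == 150.0
context["rate"] = 0.25                    # the context shifts...
assert fixed_total(100) == 150.0          # ...the fixed binding doesn't notice
assert dynamic_total(100) == 125.0        # ...the dynamic binding tracks it
```

Much of the engineering skill lies in deciding which values may safely stay implicit in context, and which must be pinned down.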

At any rate, there's a general solution for people constantly bothered by the sloppiness of translating across old-to-emerging and local-to-aggregate slang & formal semantics. 

If you want an appropriate answer, just rigorously "DEFINE YOUR TERMS" when asking a question which you actually want the answer to.  Otherwise audiences will always answer the first "question" that their context-specific semantic shortcuts indicate to them is the one that you're presumably asking. It's not an inherent bias or stupidity, it's a short cut that gets them through most instances they've encountered, and one that they will continue to use until some out-of-range responses require closer attention. Getting diverse, initial responses from members of a large, diversely bored or engaged audience should never surprise us.

For anyone asking a question, it's a key mistake to presume that the audience is sharing your semantic paradigm. There's many a slip 'twixt the ear & the interpretation, not to mention the calculated implications.

All this still doesn't tell us which came first, psychology of individual stupidity, or the non-scalability of some psychology-specific paradigms. Yet it does make us better aware of the distributed subtlety of our policy context!

For example, for non-psychologists, the article "Cognitive Sophistication Does Not Attenuate the Bias Blind Spot" needs a more generally useful title. The following seems more useful to a diverse aggregate. "If you ask sloppily phrased questions of audience members from diverse contexts, using questionable semantics, you'll get a broad range of initial answers." However, that title might not sell as many books for psychologists. That brings up the topic of willfully - not just innocently - broken semantics, i.e., Bernays propaganda.

As Max Planck noted, ~100 years ago, parsing & leveraging coherent information exchange depends upon adaptive selection of an optimal coordinate system for signal coding & decoding.

Where does this leave 312 million people trying to seamlessly fuse diverse, developing personal paradigms with an aggregate paradigm that is also dynamic and developing? Hopefully with one, starting reference which we can build upon as a bedrock principle. We're constantly mixing semi-static local semantics AND dynamic aggregate semantics. That's a problem mathematically inherent to every growing network, organization, culture or market economy. By not recognizing and mitigating that cost-of-coordination, we constrain our own rate of mobilization and limit our aggregate coordination. Bypassing our return-on-coordination is the primary cause of our ballooning Output Gap.

We don't have to wring our hands over imagined cognitive biases, or console ourselves that ignorance represents things "no one could have predicted." To reduce our Output Gap, we need to be comfortably aware that diverse semantics is a minor cost of a resilient culture. Comically, psychologists also know this task by another name, the "Fallacy of Composition*," which is more aptly named the "Fallacy of Scale." Nevertheless, more advanced solutions come not from psychology, but from comparing the perspectives of other "uniquely blind" professions groping the same context from different perspectives.

The common solution to selecting group intelligence, avoiding fallacies of scale, and coordinating degenerate semantics is just adequate Open Source information exchange.

For your info, this was & is widely discussed in biology, systems theory & neural/analog network fields too. Consensus is that no component can ever fully sample, know or wield all the knowledge available from its aggregate. Therefore, "group intelligence" is always held in the body of group-wide discourse, and cannot possibly be expressed by any one citizen or subgroup.

Utility of group discourse is, of course, dependent on group practice, above all else.  If a group doesn't actively probe & explore situation & capabilities, it doesn't sample the possible mappings between the two and doesn't get good at actually doing the mapping.  And it never masters the adaptive use of dynamic semantics.   No practice, no semantic mobilization.

We only have to let American policy sample and know more of what Americans know. Yet tempo counts. Since national intelligence is held in the body of population-wide discourse, it's also clear that our national intelligence can wax and wane very quickly, with changing patterns, extent & sampling of group discourse.  Our group intelligence can also fluctuate regardless of how much insight or knowledge is stockpiled in un-leveragable subsets of the population. Like any other analog network, the genius of a democratic culture is very much a use-it or lose-it talent.

What does this all mean for the semantic mess confusing all discussions of fiscal, monetary and general policy?  Without first calling attention to context-dependent semantics, and static, local vs dynamic, aggregate context, it's impossible to have a coherent policy discussion.  That set-up paves the way for all subsequent discussion.

* example Fallacy of Scale: If one person stands up at a sports stadium, they can get a better view. Ergo, if everyone stands up, they'll ALL get better views! (False)

Thursday, May 24, 2012

The "Evolving Aggregate's Task"

How fast is the USA dissolving, from the inside out, as the ratio of engagement to disengagement declines? And what can we not just do, but BEST DO about it?

For quick, situational awareness, let's consider a recent, mini case-report.

Vallejo, Calif., once bankrupt, is now a model for cities in an age of austerity

If this is a model experience & response, then American cities - and states - are still in shockingly deep trouble. Vallejo, CA's unfolding story reminds me more of a pathetic Lord of the Flies than a stirring story of insanely impressive innovation & evolution. The thrust of the Vallejo article is that this is one model for how to re-organize in an "age of austerity." A more audacious response would be to refute the defeatist strategy of austerity, and instead commit to shaping the supposed "age" that our dispirited aggregate seems to be resigning itself to.

The pattern I see is:

1) dismal context awareness of US citizens and aggregates (the whole world is always changing, but most of us, individually & collectively, are woefully out of paradigm - and ongoing "paradigms" are not static, but continue to change)

2) initial consequence is that far too many not only didn't see it coming, but didn't even get the license # of the SituationDeliveryVan (SDV) that just ran over them.

3) given the still-lagging situational awareness, many are still sitting listlessly in the same road, vulnerable to the same and new SDVans & their increasingly frequent "stocking" runs. (SD, Inc. is accelerating expansion of their fleet, if you hadn't yet noticed.)

What are we NOT seeing? Why AREN'T we seeing these things, soon enough?

Where are the scouts contributing the info required to build civic, situational awareness (SA)?

Where are the local command & review teams, staff, councils & delegated officers RAPIDLY summarizing SA from that info?  When our kids are resorting to spontaneous attempts at reorganization - through gangs - you know we're too disengaged.  If we don't take  our kids to work, or elsewhere, soon enough, they'll look for secondary options.

Where are the agile, strategic responses - that should have been initiated 30 years ago?

What the hell happened to American ingenuity & distributed resilience?

And above all else, where is the will, initiative & automatic instinct to act locally without letting communities dissolve?  Prevention is a lot easier than - repeatedly - reconstituting losses.

Martial law from above is a last gasp ploy when civic law isn't self-organizing from below.

So why isn't bottom-up civic-law self-organizing, in more places, sooner?

It's hard to put one's finger on what critical step is missing when a previously self-aggregating aggregate fails to continue organizing - or why it even started slowing its rate of mobilization.

Ecologists, biologists, historians, military-strategists and system-scientists have long studied this general issue - how aggregates alter rates of adaptive mobilization at any scale - and there are some obvious, if initial, clues as to what's lagging.

Aggregate Instrumentation
Every growing aggregate can generate many combinations of more components and higher component transaction-rates. To preserve aggregate agility in any new organizational pattern, a rapidly increasing amount of situational awareness information must be passed through existing and/or new inter-component, communication channels. In short, a growing aggregate must scale aggregate self-instrumentation much faster than its actual organic growth. In general, if the size of an aggregate doubles, to maintain similar agility, the amount of info-passing across all components may initially have to increase combinatorially - the count of potential pairwise channels alone grows as n(n-1)/2.
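A quick sketch of why self-instrumentation must outpace organic growth (a toy count, assuming one channel per pair of components): even the simplest pairwise wiring grows quadratically, so doubling the components far more than doubles the channels.

```python
def channel_count(n):
    # Potential pairwise communication channels among n components.
    return n * (n - 1) // 2

assert channel_count(4) == 6
assert channel_count(8) == 28
# Doubling components from 4 to 8 multiplied the channel count by ~4.7, not 2:
assert channel_count(8) / channel_count(4) > 4
```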

That's why brain-size grows faster than body size as one compares different sized animals with similar agility. And if you want increasing agility simultaneously with aggregate growth, then even further, genius-level innovation is required. In the USA, civic-coordination services have to grow at a rate faster, not slower than population and/or economic growth. If they don't, the USA will regress, not prosper.

Aggregate Selective Tuning
To re-adapt to changing situations, a growing aggregate must be constantly re-tuned, very cleverly. The general pattern seen in all model systems that succeed is 2-step re-iteration. Surviving systems periodically re-connect everything to everything, then relax to the minimum connectivity patterns needed for that particular situation. Then they do it all over again. That's the basic pattern seen in the dramatic sexual recombination of genes, in the chemical gradients generated as an embryo grows from one to billions of organized cells, and in the development of neural organization in the developing brain. It's also what happens in tribes, armies, neighborhoods, markets, nations and international markets. New components (i.e., kids/students) go out and meet many peers, then briefly settle into defined adult roles before making room for a succeeding wave of adaptive components able to re-connect & re-tune into novel patterns. Explore, tune in, re-spawn & bow out.
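That two-step pattern - re-connect everything to everything, then relax to the minimal pattern the situation needs - can be sketched as a toy graph cycle (the node names and the "situation" set are invented for illustration):

```python
def reconnect_all(nodes):
    # Step 1: full recombination -- every pair becomes a candidate link.
    return {(a, b) for i, a in enumerate(nodes) for b in nodes[i + 1:]}

def relax(candidates, needed):
    # Step 2: keep only the links this particular situation actually uses.
    return {pair for pair in candidates if pair in needed}

nodes = ["a", "b", "c", "d"]
full = reconnect_all(nodes)
assert len(full) == 6                        # 4*3/2 candidate links
situation = {("a", "b"), ("c", "d")}
assert relax(full, situation) == situation   # minimal pattern; then repeat
```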

There are too many sub-steps involved to possibly list & review in this article, but even these two, simple observations above allow us to ask further, pertinent questions.

Why are our systems failing?  Why is our adaptive rate declining?
Why are we not adequately scouting all the emerging data lines affecting our aggregate? There are many headlights coming down all roads converging on our unfolding situations. What distributed incentives are we failing to adjust, thereby allowing diverse SDVans to slip through more often, to run over parts of our aggregate with increasing frequency?

Why aren't we adequately reviewing & summarizing our own, distributed knowledge, fast enough? Why, as situations, our size, and our complexity change, are we not re-connecting everything to everything ... frequently enough & quickly enough? Why should an aggregate with our aggregate knowledge ever be in a position of effectively saying "no one could have predicted we wouldn't listen to some of our own people" - or even not knowing that such diverse scouting reports exist?

Institutional hubris simply reflects lack of situational practice, which in turn follows failure to review scaling data. In short, we have too many people over-analyzing insufficient data, when we already know that all data is meaningless without context, and that most data is irrelevant even with context. 

Simple conclusion is that we're spawning options faster than we're exploring them, and are getting distracted in the process. That's inherently a failure to rank options and prioritize pursuit of the largest returns. We're failing to cross our own chasm!

This is a recurring, general problem which I've previously named the "Traveling Entrepreneur's Task." Seeing as how the task scales up to be a rate-limiting task for all evolving aggregates, this task may be better named the "Entrepreneurial Aggregate's Task."

The general expression of this task is to select the optimal path through a dynamic "space" where options constantly expand or shrink as a consequence of any action taken. The general solution is to painstakingly map and take that path which continually grows the number of available, aggregate options. In a separate post I'll lay out some of the task parameters that have emerged, from multiple fields.
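One toy rendering of that path-selection rule (the move graph below is a made-up example, not a model of any real economy): at each state, prefer the step that leaves the most onward options open.

```python
# Hypothetical option space: each state lists the moves it leaves available.
moves = {
    "start": ["narrow", "broad"],
    "narrow": ["dead_end"],
    "broad": ["a", "b", "c"],
    "dead_end": [], "a": [], "b": [], "c": [],
}

def option_preserving_step(state):
    # Greedy heuristic: take the successor with the most onward options.
    return max(moves[state], key=lambda s: len(moves[s]))

assert option_preserving_step("start") == "broad"  # 3 onward options beat 1
```

A full solution would look further ahead than one step; the greedy version just illustrates the selection criterion.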

As Warren Mosler astutely asks, "How do you get people to explore their options?"   Why aren't we even adequately aware of the relative returns on all our spiraling options? Or that there are many, and that the number increases every day?  Heck, if an aggregate is not familiar with its various options, it seems inescapable that, as a start, they're not exploring enough of them before choosing which to spend time digging in to.

Why aren't we at least reporting, listing, evaluating and ranking all our aggregate options? Tribal groups throughout history have faced this question, and it seems to have limited any & all colony sizes, from termites to humans. 

One answer is that we're simply not discussing our aggregate options widely, deeply or quickly enough. Why not? Perhaps we're simply not allowing ourselves enough time for group discussion. In all prior tribal systems, inordinate amounts of time were spent discussing aggregate options, precisely so that they could be ranked & pursued wisely, not randomly. Natural selection strongly favored those aggregates that measured & considered not just twice, but many times, before cutting into emerging situations. The same lesson is echoed in much business literature, as "slow down, and choose wisely, not randomly." If not, your aggregate never crosses the chasm to achieve its full Output potential.

Why do growing aggregates cycle in & out of optimally organized, aggregate behavior patterns? One obvious answer involves the ratio between distributed methods and emerging success rates. Aggregate strategy is, of necessity, the sum of an aggregate's distributed tactics or methods. Distributed incentive structures work adequately only if not saturated, or unduly frustrated. Obviously, an aggregate may fail to scale due to too much success, leading to distributed apathy & component "obesity."  Or, it may fail to scale due to a "systemic shock" response, where something triggers too much hoarding of resources in some model of the "central" organs of culture. The obvious problem is that reduced circulation of any sort of resource, from commodities to information, reduces aggregate agility.  In either case, the underlying causality is that the distributed methods - & incentives for using them - weren't adjusted with enough systemic agility.

In that case, the solution, if one even appears, occurs in an indirect, nested system response.  Some previously non-existent or seemingly negligible aggregate component becomes part of newly significant aggregate tuning instrumentation.

When a systemic shock reflex kicks in - for whatever reason - it's a sign of an overwhelmed system's desperate attempt to use outmoded reflexes to solve situations it has failed to adequately scout, consider, probe, surf and prepare for.

Sometimes a shock reflex may work, and sometimes it may lead to a self-induced death spiral not absolutely dictated by context. The overall lesson is that an ounce of adaptive prevention prevents multiples of pain, cure & lost output. It's far better to never fall into a systemic shock reflex, because it reveals a system already in a state of confusion, randomly falling back on old vs emerging strategies.

That's the absolutely last behavior an agile aggregate EVER wants to find itself expressing in a novel situation.

Systemic shock - i.e., rising disparity - is a graphic admission that an aggregate has been dozing on the watch, and has no idea whether another SituationDeliveryVan may be about to arrive, with or without its headlights on.

Slow change in our distributed incentive systems may thus lead us to cycles of accelerating or declining aggregate growth rates.  The question is what to do about it?  How do we continually add protective sub-loops preemptively protecting us from such system bugs? 

Again, answers are already apparent from previously studied model systems. Every known process in biology or biochemistry seems to be simultaneously expressed in at least triplicate, affected by short, medium and long-term feedback mechanisms - call them checks & balances if you will.  This critical lesson, apparent in densely engineered, prior systems, may be exactly what we're failing to pay enough attention to as our aggregate explores its own, new frontiers of scale.

Parallel feedback systems with diverse time constants provide a general framework for smoothing out adaptive responses over multiple situations.  An aggregate employing this strategy is less likely to fall into the trap of over-adapting to any one situation while reducing its chances of transitioning to the next, inevitably different situation.  Other evolved aggregates have shown us that survival of Entrepreneurial Aggregates means taking a middle path, where minimally-adequate adaptation to each situation allows adequate preparation for the next situation.
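A minimal sketch of parallel feedback with diverse time constants (the smoothing factors and the signal below are illustrative assumptions): three exponential moving averages tracking one signal at short, medium and long timescales.

```python
def ema_update(prev, signal, alpha):
    # One feedback channel: exponential moving average with gain alpha.
    return prev + alpha * (signal - prev)

fast, medium, slow = 0.0, 0.0, 0.0
for signal in [0, 0, 10, 10, 10, 0, 0, 0]:   # a brief pulse, then quiet
    fast = ema_update(fast, signal, 0.9)     # reacts (and forgets) in ~1 step
    medium = ema_update(medium, signal, 0.3) # smooths over a few steps
    slow = ema_update(slow, signal, 0.05)    # carries the long-run memory

assert fast < 0.1    # the fast channel has already forgotten the pulse
assert slow > 0.5    # the slow channel still carries it
```

No single time constant handles both the pulse and the long run; the triplicate channels together do.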

Is our culture adequately instrumented, from local to national levels?

In our nation, states, counties, cities, towns, schools, neighborhoods and families - do we have enough, different feedback systems with adequately differing time-constants? 

Are we letting the signal strength from those feedback channels scale automatically?   

Do we prepare our citizens - through actual practice - to listen and respond to that multiple and changing signal spectrum?  

As our aggregate size and complexity grows rapidly, are we spawning enough feedback systems with enough different time constants to keep our aggregate agile? 

To even achieve that last step, are we modeling our combined map of aggregate-feedback-methods-to-context, so that we can adequately FOCUS on aggregate tuning, which always delivers the highest return? 

It seems certain that we can always do that, but if we only remind ourselves that the highest return in any organized aggregate is ALWAYS the return-on-coordination.

In fact, to survive, we need to move on to a more powerful version of that message.  The highest return in any organized aggregate is ALWAYS the return-on-rate-of-coordination.

Seems obvious that success follows adaptive rate.  It's not sufficient to adapt, if your aggregate always finds that another aggregate has already done so a decade before, has moved on, and is already out of sight, disappeared into a future your aggregate won't share.

Methods for recursively tuning rate-of-return-on-coordination therefore offer themselves as the general, dynamic solution to the continually scaling "Entrepreneurial Aggregate's Task" or, more directly, the Evolving Aggregate's Task.

As promised, a follow-on post will begin to list the diverse parameters subtly affecting the "Entrepreneurial Aggregate's Task." There's no way that such a listing task will ever end. No one person or component of any system can ever know them all, at any one time ... and our list will never stop growing .. unless the USA dies.

Friday, May 18, 2012

Needed, investment in info-channels to tune return-on-coordination.

Sanjeev Kulkarni emailed some comments about viewing multiple nations as an organized eco-system.  That, of course, weakly implies that each given nation is an organized eco-system.  How organized is our nation?  Are we being everything we could be?  Or are we fat, sick, drunk & passed out on the curb?  If we're somewhere in between those tolerance limits, which are we closer to? These questions open up a bigger can of worms, and unleash queries which span bacteria, amoeba, social insects, and humans.  No worthwhile question is easily answered, so, I'm going to query multiple disciplines for some perspective.

Why fuse input from different disciplines?  Because too much of academia has become academic, and the potential relevance of channel-specific insights is no longer propagating quickly enough to the cultural power zones where our moment-of-adaptation is actually held hostage.  Those power zones include lobbies, industrial & business centers, and our Fed/state/local congresses.  Outside those zones, we have backlogs of both data and knowledge building up, in info-channels choked by sclerosis.  Luckily, we have other adaptive power zones, being held in reserve.  Why are we waiting to access our adaptive reserves?  And what, exactly, could we be doing to unleash them sooner rather than later?

The core task is to drive organization on a larger scale, and mobilization rates as well.  Solving that task depends on key methods, not just desire.  For us, most roads affecting this outcome quickly lead to the practice, training & education our electorate receives, so let's just jump to that part.

So far, efforts to improve modern education inevitably push too much overly-fragmented information into increasingly uncoordinated info-channels.

We've known this for a long time.  And that sentiment led us to define the "liberal arts" education.  Unfortunately, our methodology hasn't scaled up very well, and is now a rather neglected topic.  Our own education-school graduates are no longer palpably aware of the following insight.

"There are two educations. One should teach us how to make a living and the other how to live."    John Adams

Or, just fuse the diverse education channels sooner?   The necessity of that fusion is implicit in all prior aggregates, from before bacteria evolved all the way up to human tribal cultures.  In prior examples, info-channel fusion is mediated through direct, inter-component or inter-personal connectivity methods.

How do other, model systems, eventually coordinate the increasing scale of emerging behaviors spawned by their own adaptation rates?   That process eventually creates new species, but since methods drive results, it's the adaptive methods we're interested in, so that we can use those examples to re-examine our own adaptive methods, or lack thereof.

To scale up larger populations, any aggregate has to constantly titrate how little key info must be distributed in interleaved channels, in order to reap return on coordination.  The complexity of that map-reduce fusion task quickly scales up to, and beyond, the complexity of thermodynamic statistics within & between channels in an organized aggregate.  It's awesome to imagine, yet we have to tune it even further.

Any ideas? :)

Adaptive tuning of complex systems obviously happens, yet mostly by distributed trial & error.  We can't personally tune our national outcome in realtime, but we can tinker with all the channel processes, wait for the distributed feedback, and then parse it.

To analyze & use that trial & delayed error reporting, no matter how distributed, one suggestion is that we need a core matrix of aggregate "Welfare of the People" signals.   A background hum, so to speak, that unsettles us when it dwindles.   It equates to maintaining homeostasis.  In your own body, for example, no part can survive without the others, and a spectrum of signals are used - in any combination - to trigger homeostasis-preserving behaviors.  Yet in a scaling national culture, we're never completely free from the idea that some subgroup can strike out on their own as a colony somewhere else.  That's fine, but we've let that dissociated thinking get out of hand, expressed as rampant social disparity.  The outcome is a sum acting as less, rather than more, than the sum of its parts, and an astoundingly large Output Gap.

Is aggregate awareness of our Output Gap just another info-channel to manage?  Is that awareness very different from whatever is internally perceived when a honeybee colony reaches a crescendo and decides to swarm?  The point is that bee colonies do this in a reserved, organized way.  If they all go their own way too soon, they all die.

In honeybee colonies, there are subtle sub-processes which drive worker-behavior to compete with queen-behavior.  In good times, the workers induce & nurture new queen larvae & let 'em hatch.  They may even let the old queen kill most of them (how selectively?) but not all.    What subtle process titrates the workers to scale queen protection until the ONE colonist-queen (or a few competitors) is chosen & kept alive?   They may also quit feeding the old queen, so she slims down enough to fly.  (Is that the core reason the "ruler" leaves?  Just to escape starvation?)

In what at first seems a surprise twist to us, it's the old queen & experienced workers that set out to form a colony somewhere else.   That leaves an inheritance to the inexperienced young'uns, in a method that works well for bees.  Back in the original hive, there may or may not be a succession-fight.  How much input do the workers exert on who becomes the next queen?  Do they have a strong union, called the Wobblies? :)

Researchers like Tom Seeley may be able to tell us something about the molecular or hormonal substrates that allow different bee species to manage the particular info-channels that sum to trigger swarm behavior.  It would be particularly interesting to know the degrees of freedom governing interactions between info-channels.  What varies across the bee species which can support markedly different maximum hive sizes yet still express colonizing swarm behavior?  Much is context-specific, but there's also inter-species variance.

Meanwhile, the way Americans scale up exploration of our own emerging options is largely based upon completely different methods, although the outcome is recognizably similar.  Even the US Constitution does not deliver the tight organization of a bee colony, where there are few, if any, analogies to unemployment and/or economic disparity.  As with honeybees, much of our organizational state is highly context-specific, but there are obviously extreme differences in the methods supporting organization in bees & humans.  As far as I know, disparity in native bee populations is expressed largely between hives, not within.  Also, I've never heard of immigration/emigration between hives, other than total war (are foreign drones allowed into other hives to feed?).  Mammals generally allow more degrees of freedom in how Romeo & Juliet find one another.

Far as I know, there's no example of diverse bee colonies fusing to create empires.   That level of organization is beyond them, so far.   Humans are organizing on a greater scale, but we're still VERY inefficient at it, and obviously still working out the kinks.

The social amoebae Dictyostelia provide a remarkably pertinent analogy for fusing & dissolving human lineages into and out of culture.

Nevertheless, we don't seem to have any human institutions capable of accelerating adequately distributed awareness of lessons from other model species as warning examples, or at least not fast enough.

The main point is that we don't need answers to these questions to ONLY be published for academic credit.  We need emerging methodological options to be practiced & explored - at faster tempo - in our state & federal Congresses, or in indirect practices which bypass our existing Congresses.

We need academia to be more relevant, and less academic.  That's obvious.

How?  Why don't we have more academics studying how to be less academic?  Or more entrepreneurs providing that service to academics, faster?

We may need to spawn more info-coordinating channels, ones that cross all existing channels, akin to agile standards groups.   An interesting prototype was the German General Staff, which was remarkably effective - an early prototype for Open Source as a cultural method, not an abstract philosophy.  Ironically, the concept of a novel "General Staff" running parallel to existing info-channels was first undermined by the Treaty of Versailles (the fallout Keynes predicted in his 1919 The Economic Consequences of the Peace), later destroyed by the Nazis, and finally ostracized by the winning Allies, simply by association with the losing Germans.  Go figure.   Why does it often seem that a highly adaptive, emerging regulatory option is universally opposed - via a mal-adaptive selection process - by the very institutions which would thereby be saved from themselves?    The Economic Consequences of the Peace occurred precisely because an aggregate was not listening to its own, emerging info-channels!

It seems that every example of organizing on a larger scale requires an emerging information channel specifically for selecting from the emerging, indirect options being spawned.    That task is palpable in, say, software programming.  It's often said that every seemingly insurmountable programming problem has a solution, and that the solution involves another level of indirection.
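That programming folklore (often attributed to David Wheeler: "all problems in computer science can be solved by another level of indirection") can be shown in miniature.  This is a generic sketch, not tied to anything else in the post:

```python
# Without indirection: the caller is hard-wired to one concrete source.
def fetch_hardwired():
    return "data-from-source-a"

# With one level of indirection: callers go through a registry, so new
# sources can be added & selected without rewriting any caller.
registry = {}

def register(name, fn):
    registry[name] = fn

def fetch(name):
    return registry[name]()   # the extra lookup "hop" is the indirection

register("a", lambda: "data-from-source-a")
register("b", lambda: "data-from-source-b")
```

The registry is the "emerging information channel" in code form: a layer whose only job is selecting among spawned options, so the options can multiply without breaking what already works.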

Yet individual or even team programming doesn't compare to the complexity of market or cultural adaptation rates.   Our nation spawns indirect options at a rate that far outpaces our population growth.   How do we create - quickly enough - specific info-coordinating channels allowing us to select net adaptation from all the emerging options being explored?   That sounds like a tongue twister, and it is.  We've been solving this problem for 3.5 billion years, at least.  How do we accelerate that process, from our current context?

Anything less is pathetically boring in comparison.

We can obviously do this.  The first step is to describe the spectrum of emerging tasks in existing hieroglyphics.  The next step is to crowd-source the leanest, adaptive re-statement of that task, by a process of distributed trial & error.

To love your kids, you have to first love the future.   If you love the future, you have to let it go and be distributed, by indirect paths.   Then you have to accelerate its distribution, with a nudge out the door.

To paraphrase Darwin and Boltzmann, as well as the USMC:
Adaptive survival follows the quality and pace of distributed decision-making.
and ...
"We generate pace by decentralizing decision-making."
To which I'll add:
To prepare for the future, openly accelerate our distribution of adaptive preparations.
To have co-citizens, you have to be one.   To help tune, you have to be tuned.

To help tune an aggregate, faster/cheaper/better, you have to invest in tuning instrumentation.  Right now, the moment of aggregate tuning in our nation appears to be hovering over Open Source methods.  We need to extend those methods faster/cheaper/better, to more info-channels.   To reap the insanely large margins from return on coordination, we have to write off the relatively negligible cost of coordination.

Friday, May 4, 2012

A call for more experimentation in early education.

In keeping with the need to identify and articulate challenges, the Open Operations Forum is now formally calling for participants in a nationwide process of accelerating EDUCATIONAL EXPERIMENTATION.  Early success will presumably be achieved by networks of existing home-schoolers.

We NEED hyper-mutation at the level of elementary school education, and throughout K-12 schools - and ways to regulate selection from its output.


Why?  Because we do not have an adequate cultural equivalent to a modern immune system.  A culture consists of immense entanglements of densely-engineered transaction chains.  Discriminating which parts of our own, randomly emerging variation harm vs help ... is a parsing task that gets harder every day.

Not only are we slow to generate options and solicit change, we don't have adequate mechanisms to adaptively regulate & select early rudiments of emerging cultural immune processes.  Statistics alone dictate that most attempts at cultural protection will produce auto-immune stresses.  Experimentation goes nowhere without parallel improvements in the selection process.

Our present cycle is that some changes occur despite all efforts.  Some complain that the change is bad, and that change itself must be regulated.  Then we go off on a tangent about whether to change or not, and how fast or slow.  All irrelevant. We must always tune to changing context.  We need appeal to no higher authority, process or pace.

If sexual recombination generates the stem-cells (new humans) of any culture, then we can make a case that schools are analogous to the spleen or thymus.  Education & training develop kids' ability to parse cultural immunity (discriminating sense from non-sense), plus the ability to regulate tolerance limits - so that we actively generate adaptive cultural change, but don't shoot ourselves in the foot en masse.

Therefore, the key challenge for all evolving cultures is the rate of developing methods for discriminating emerging variants [most of which are nonsense :) excuse the pun].  Parallel to system evolution, nested selection of "grading metrics" occurs apace - as value metrics & parsing catalysts.  We're always using old grading methods in attempts to select novel innovations.  How do we discover cheaper/faster/better ways to parse emerging group-variance for emerging adaptive value?  The answer, in the end: "we" can't change faster than our grading methods do.  And it starts in elementary, dear Watson.

Reacting to external change with {internal changes plus selections from internal changes} constitutes two, continuously emerging, parallel re-MAPPING tasks.  We're always remapping our selection criteria in order to remap ourselves to context, fast enough.  That involves co-optimizing two co-factor sets simultaneously in one polynomial.  

In short, we need education & training systems to turn out citizens who are increasingly prepared to quickly adapt to evolving demands - i.e., kids able to tune their traditions to context, as fast as needed, not just argue over the absolute need for change vs tradition.

Anyone who's discussed perturbation & settling in analog computing networks won't be surprised - but this core task is completely foreign to the bulk of our existing electorate, who will heatedly argue for hours out of personal allegiance to CONSERVING traditional, completely unexamined - & always partially obsolete - value maps, rating systems & grading systems!  Our net electorate is actually convinced that we mustn't change, and we're busy perfecting our ability to churn out more students of the same mindset!

Tradition can be a delicate, even taboo subject, just like national suicide.

Only in ritualized sports, orchestra, dance, theater, construction, etc do students routinely learn that, to succeed, everything has to continually change.  

Ironically, attention-seeking toddlers & adult Control Frauds both learn that the only way to personally "win" is willingness to break any & all rules [for personal, not group, benefit].  By actively selecting for the confluence of {personal success + group tradition}, we unwittingly select for increasing cultural fraud as the ONLY possible outcome.  This Catch-22 insanity starts in schools, and is most easily prevented there.

The best way to change that may be to continually change the education system at a faster pace, so that "components" in our electoral analog-computing-network become more flexible, adaptive & thereby resilient.  Not too much, but enough to meet emerging demands of group context.  We need neither too little nor too much change, but we always need change tuned to both the magnitude & pace of changing context demands.

Return-on-coordination trumps all other returns, by far.  It's why evolution works, and social-species dominate.

Therefore, the goal is to continually re-map growing sets of dynamic tolerance limits to dynamic context.  To manage the exponentially growing inter-dependencies that come with cultural growth, students must be comfortable constantly & quickly trading SOME local options for SOME group options that produce net benefit.  That means being less fixated on personal success, less wary of group change, and hence better able to pursue emerging return-on-coordination.  We need to change, in order to actively select for more, not less, cultural resilience.  We ALWAYS need to change how we train our kids, so they're even more resilient than we were.  To do that, we need to explore & select from training options - faster.

Below is just ONE of endless examples why:

Roger Erickson, May 2012; mmt-discuss
Subject:  re: painful to watch - Ron Paul vs Paul Krugman video debate 
it's like a re-enacted debate between Ptolemaic vs pre-Copernican physicists.
Americans shouldn't let Americans settle for archaic philosophies OR operations!
Ok, I'll extend my original statement. RP doesn't understand fiat monetary systems, PK doesn't understand banking operations, and OUR ELECTORATES DON'T KNOW THE DIFFERENCE!!!

This is a sad commentary on complex human cultures. How much complexity can be constrained by remarkably pinheaded constraints? Every system has its bugs. We've identified one permutation. It's not the only one that emerges entirely as a cultural phenomenon, "in the cloud" so to speak. Something has to be recursively placed in our educational system to provide future generations a bit more immunity to this form of cultural auto-immune disease.

Wednesday, January 18, 2012

OpenOperations "Visual Economy" HighSchool Challenge

Open Operations is issuing a Challenge to EVERY HIGH SCHOOL IN THE USA, heck, in the world, to develop and present simple system visualizations in two categories:

1) Visual model for how Personnel System Methods make an Adaptive Organization.

2) Visual models for how Credit, currency, criminology & policy Methods make a Scalable National Economy. (5 subsections)

For more details, see below.

Why this challenge, now?  In many domain areas, we have endless criticism of things that are wrong, and many illogical things to correct.  Take CREDIT, CURRENCY, CRIMINOLOGY AND FISCAL/TAX POLICY in general.   Despite growing numbers of critics, gridlock abounds, and OUR COUNTRY IS BECOMING A LESS RATHER THAN A MORE PERFECT UNION.  Identifying what's wrong is no longer rate-limiting.  Our biggest problem is in coordinating coherent responses to what are still very distributed perceptions.  Our group awareness of context is lagging context change, and is never adequately close to being coherent.

What to do?  Target aggregate context awareness.  How?  Here's our idea.

I just had a long talk with Peter Groen of CosiTech about the evolution of domain-specific examples of "campaign momentum."

Peter is an IT person focused on distributing enabling IT tools, starting with OpenSource Health IT. He's basically the co-father of the VA's VistA open-source healthcare system.

I tried to impress upon Peter that there's a developmental step - widely discussed in biology - that is prior to coherent demand for tools.  Peter grasped it, and agreed in principle.

In health, for example, the bulk of our population had, after decades of education, arrived at a coherent view of personal & public health as systems.

The elements of that context awareness?  Recognition of a system, with components like vitamins, trace minerals, organs, blood pressure, infectious agents, immune responses, draining swamps, clean water, epidemiology, etc, etc.   Few now argue about the details, and instead share a common view.

With that common view came coherent context awareness.  Subsequently, citizens & clinicians began recognizing and creating tools to fill holes where the context model demanded data.  Health IT tools became obviously demanded, as one obligatory part of SYSTEM INSTRUMENTATION, but only once group awareness of a system was established.

Similarly, specific types of navigation tools proliferated AFTER more people realized that the world was round and circled the sun, not before.  For personal/public health, the response was rapidly spreading demand for all the diagnostic-tracking tools we know today, from charting of diets/exercise/toxins/nutrition all the way up to shareable electronic health records & statistics from public databases.

In the case of other systems?  We're just beginning to build coherent public awareness.  Take:
  military personnel systems;
  currency systems;
  public policy systems ... .


Why? First, we're not teaching the prior need for coherent context awareness to students.  Second, we don't provide enough ways for youth to PRACTICE the overriding return-on-coordination that scales up to trump any & all heroic but unscalable local practices.  Why memorize data BEFORE knowing what it's for?  Most data is irrelevant, most of the time.   Data can be generated or retrieved upon demand.  Adaptation primarily involves creating & capturing new methods for determining what tiny fraction of data matters, when it finally matters.

This is a distributed problem where adults have all the relevant data, but where AMERICA DOES NOT KNOW WHAT AMERICA KNOWS.   Worse yet, we don't know how little we need to know!   When that occurs, it's up to youth to come up with their own, coherent group-awareness of group-context.

Since DATA IS MEANINGLESS WITHOUT CONTEXT, we must first build group context-awareness before trying to utilize all the data we've been producing like projectile vomit.

Ergo, for neglected domains where coordination is now rate-limiting, our absolute first need is to disseminate simple models for visualizing & picturing system and system-component models.  

In our current context, we need simple models letting any Trudy, Nick or Harriet visualize how personnel (not just personal) methods make an organization, and how credit/currency/criminology policy methods make a national economy.  

With an iterative approach to such frameworks, people can fit in more detailed visualizations of sub-systems like credit systems, currency systems, criminology systems, and policy systems ... that are actually coherent visualizations of actual systems, rather than uncoordinated and unscalable ideologies. 
   The necessary visualization models need to be as simple as the currently popular but absolutely culturally worthless mobile-phone video games. :)

We have an emerging electorate that visualizes Grand Theft Auto better than they do Grand Theft Economy!   We can fix that.  The way to fix it soonest is to go through a logical process.

1) CREATE multiple, simple visualizations of context.
     Is the world flat or round?   
     Is an economy a dynamic equilibrium among distributed, conflicting, forces .... or a centrally commanded feudal system of, by & for the 1%, narrow-minded ideologues?

2) ADEQUATELY DISTRIBUTE competing, summary visualizations of as many models as possible.  Let juries of our peers - in Open Source courts or "selection markets" - converge to what is or isn't coherent.

3) CONVERGE to adequately shared context awareness. Then let the explosion of Open Source tools begin.  

If we can have Open Source Health tools & practices, then we can also have Open Source Policy tools & practices ... but only after achieving adequately shared public awareness of an economy as a coherent system.  If we as land and sea-faring nomads can invent celestial navigation tools and practices, then we, as Context Nomads, can also invent tools for navigating any context whatsoever.  To navigate, you first have to model which signs are most useful, and then monitor and leverage them appropriately.

Hence, Operations Institute, the OpenOperations Forum and Cosi Open Business & Economics are collaborating on a Challenge to EVERY HIGH SCHOOL IN THE USA, heck, in the world, to develop and present simple system visualizations in two categories:

1) Visual model for how Personnel System Methods make an Adaptive Organization.

2) Visual model for how Credit, currency, criminology & policy Methods make a Scalable National Economy.
Upon reflection, it seems necessary to divide scalable economics into 5, overall stages, especially for students being introduced to scalable systems theory.
     a) credit operations
     b) monetary operations
     c) regulatory operations
     d) national policy operations [public ambitions]
     e) a->d combined (adjusted, actual or "dynamic" national policy).

All models submitted for review must include a metric representing the sum "quality of distributed decision-making."   That metric will drive coherency as a selection criterion, thereby building in adaptive rate as the continuous goal, regardless of any pattern of context inputs.
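The Challenge leaves the metric itself open, so here is one hypothetical stand-in a student team might start from (the formula, names, and numbers are my assumptions, not part of the Challenge): score a batch of distributed decisions by their mean outcome, discounted by incoherence (dispersion), and scaled by tempo.

```python
from statistics import mean, pstdev

def decision_quality(outcomes, tempo):
    """Hypothetical metric: mean outcome, discounted by dispersion
    (incoherence) across decision-makers, scaled by decision tempo."""
    coherence = 1.0 / (1.0 + pstdev(outcomes))
    return mean(outcomes) * coherence * tempo

# Same mean skill, same tempo - the more coherent group scores higher.
coherent   = decision_quality([0.9, 1.0, 1.1], tempo=10)
incoherent = decision_quality([0.2, 1.0, 1.8], tempo=10)
```

Whatever form a team chooses, the point is the same: the metric must reward coherence and pace together, not either one alone.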

All team members must be named, and - to practice coherent coordination - at least 4 disciplines must be represented.  For example: art student, choreography student, physics student, engineering student, etc, etc.

We suggest that all visualizations be submitted as YouTube videos and/or mobile phone apps.  We'll work with partners to create appropriate channels.

Open Operations Forum and collaborators will begin developing a distributed network of rewards for participating student teams.  Our initial approach will be to connect students to professions in any & all fields targeted above, and also to professionals in those fields who contribute by participation.   Thereby, participation will be its own reward.

Those with additional suggestions for further student incentives are free to contact us.

Saturday, January 14, 2012

More on Maintaining the Quality of Distributed Monetary Operations Decision-making

Any complex system that follows the general methodology recipe laid out in the prior post has a chance to grow.

Any system that exceeds coherency tolerance limits has a chance to implode, regardless of size or past.

As we lecture to engineers, methods drive results, but only with constant addition of new methods for retaining coherence among existing methods. Just avoid screwing up existing methods, and new options will always become obvious. Our bigger race is over tempo. There are always very few options that will scale.  It boils down to who will recognize & explore emerging options first - and converge soonest to the inevitable?

Metaphorically, whenever some sub-group decides it's okay to change the tires, without consulting drivers/passengers in a moving car, that's when things can go horribly wrong. Pick your imagined permutations of screwups by stakeholder groups in any system.

Then imagine what underlying method they might always use to maintain coherence within all their nested tolerance limits, thereby allowing system growth.

While data & direction & tactics/strategies/policies are all meaningless without context, there has to be one, overall method that is constant regardless of context.

We're always juggling emerging variables, within coherency tolerance limits, while searching for permutations that maximize further options. That is the general solution to the Traveling Entrepreneur Task. It's a constant parsing problem. Since expanding options are synonymous with more levels of indirection, it's also synonymous with system growth, which simply amounts to re-ordering from less-flexible to more flexible and adaptive systems.

That just brings us back to endlessly parking available energy, and the concept of reverse-entropy. Why? Just statistical fallout from the rules originating with the last Big Probability Event (supposedly a Bang). All methods we know are just permutations of those given rules.

To be logically consistent with reverse-entropy, monetary operations, as one minor part of cultural growth, are simply a bookkeeping method for coherently denominating increasingly distributed transaction patterns. If you keep in mind that the overall goal is a 2-stage optimization task, then you can always help maintain a coherent, steadily growing nation. The 2-stage optimization is simple. Keep the components - people - well maintained, AND grow the nation. Anything less gets boring very quickly. Small minds think distracting intoxication, big minds think cultural thermodynamics.
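That 2-stage optimization can be written as a toy objective (the floor, penalty, and numbers are invented for illustration, not a claim about real policy weights): system growth counts, but letting any component fall below maintenance is penalized steeply enough that lopsided growth loses.

```python
def two_stage_score(components, growth, floor=1.0, penalty=10.0):
    """Toy co-optimization: reward system growth, but charge a steep
    penalty for every component left below its maintenance floor."""
    shortfall = sum(max(0.0, floor - c) for c in components)
    return growth - penalty * shortfall

balanced = two_stage_score([1.2, 1.1, 1.0], growth=3.0)  # all maintained
lopsided = two_stage_score([2.5, 1.4, 0.3], growth=4.0)  # faster growth, one neglected
```

Under this scoring, the lopsided case loses despite its higher raw growth, which is the whole point of keeping both stages in one objective.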

How, exactly, do we juggle existing/emerging variables, look for new options, and avoid the worst repercussions ... and do all that faster than others? The simple answer is by using highly connected human systems. Every group, by definition, is an analog computing network, exchanging messages between nodes. Group intelligence is held in the body of discourse, not in the component people. Group intelligence grows by constantly adjusting the inter-connectivity patterns and interaction rates, to increase net autocatalysis.
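A minimal sketch of that "group as analog computing network" picture (my own construction, with toy numbers): nodes repeatedly average their state with their neighbors', and denser inter-connectivity makes the group's states converge - cohere - faster.

```python
def step(states, edges):
    """One message-exchange round: each node moves halfway toward
    the mean of its neighbors' states."""
    new = states[:]
    for i, nbrs in edges.items():
        nbr_mean = sum(states[j] for j in nbrs) / len(nbrs)
        new[i] = (states[i] + nbr_mean) / 2
    return new

def spread(states):                  # incoherence = range of node states
    return max(states) - min(states)

start  = [0.0, 1.0, 2.0, 3.0]        # four nodes, divergent starting states
sparse = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}                  # a chain
dense  = {i: [j for j in range(4) if j != i] for i in range(4)}  # all-to-all

s_sparse, s_dense = start[:], start[:]
for _ in range(5):
    s_sparse = step(s_sparse, sparse)
    s_dense  = step(s_dense, dense)
```

After five rounds the all-to-all group has nearly converged while the chain still disagrees - the connectivity pattern and interaction rate, not the individual nodes, carry the group intelligence.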

Autocatalysis rates (innovation) are managed by maintaining open & rapidly collaborative systems. Highly recombinant systems are, by definition, intelligent and agile. We need highly recombinant "culturetics" far more than recombinant genetics.

In the trivial case of monetary operations, how do we make fiscal/monetary/tax/criminology policies optimally serve national growth? Seems rather simple.

First, always maintain & adequately distribute enough fiat currency to allow quality of distributed decision-making to proceed without undue constraints. Any excessive distribution disparity is as stupid as generals hoarding most weapons and thinking they still command a useful army.

Second, regulate patterns of bookkeeping use in ways that inhibit fraudulent decisions, and encourage productive decisions.

Third, tax specific transaction patterns in order to further tune distributed actions to fit group policy goals.

Everything else constitutes local, tactical details. In that regard, the general rule is for policy staff to stay out of most, but not all, local tactics, and not meddle overmuch in the quality of distributed decision-making.

How do we Maintain the Quality of Distributed Monetary Operations & Decision-making?

First, always remember that currency is a tool created by an issuing group, and that all fiscal/monetary/tax issues involve how best to use that tool. The better the tool, the more options you have for both using and misusing it.

Separating tool & tool use is a good way to look at what goes wrong everywhere. For now, I'm going to attribute all usage problems to Cargo-Cult Process Management.  How we misuse monetary operations is similar to how we mismanage anything.

Instead of being careful, we're being stupidly lazy and making our nation one big skunk-works. We are, by definition, always in an ongoing aggregation process, yet it seems our thinking is typically stuck somewhere between:
1) personal gain,
2) system gain, or
3) co-optimizing both components + system.

Our greatest need? Refined methods for keeping higher proportions of our electorate practiced at stage 3 thinking.

What can we best use monetary operations for?  Obviously, to advance Public Purpose, which boils down to a 2-stage optimization task of maintaining components (people) as well as system (nation).  That goal is easily stated, but difficult to practice.   How do we actually tune monetary operations to help rather than hinder Public Purpose?  To answer that, we need to step back and look more broadly at context.

When naive people recognize a process for the first time, there's a temptation to think that its use is adequately defined by a few variables ... when, in fact, there's a polynomial list of variables involved and no easy way to see which permutations scale across multiple contexts. Otherwise, it wouldn't have required 3.5 billion years on Earth to produce the densely-engineered, incredibly complex system called "us".

Take something as mundane as orange juice.
Some mechanical engineers & organic chemists say, "How come the growers/grocers have all the orange juice? It's mostly sugars, ascorbic acid, and some volatile oils. That's not stable. Let's fractionate it for separate storage, then recombine on-demand." Voila! Cargo-cult context management!

Once deoxidized & fractionated, the purified parts that remain stable are hoarded in bulk storage.

Eventually, though, such overly simplistic Cargo-Cult thinking causes apoplexy among nutritionists, biochemists, physiologists, diverse clinicians, statisticians and general system scientists ... not to mention gourmets & chefs, and ultimately parents, educators & diverse developmental specialists.  Defeat was chosen from the beginning, because all the tool users weren't adequately linked to why they were using their tools.

We're always novices at emerging context. We presume that one-pass sampling of a complex system provides enough predictive power to cavalierly tinker with the whole system. To improve, we always have to explore more group options faster, so tinkering is needed, but must initially be safely isolated until beginners learn from most of the mistakes that others would see coming, if asked.  After that, all tinkering always needs full-group review.  There is no known mathematical alternative that can define the quality of distributed decision-making.  Decision quality, like data, is meaningless without full-group defined context.

Our greatest need - even in monetary operations - is to extend the sanity-check process to better sample all emerging sub-processes.  Here's how.
1) Have a rough model or checklist of all stakeholder repercussions (contingency tables).
2) Have a rough model for what constitutes adequate sampling of stakeholders (actually use the contingency tables).
3) Have a rough model for the ratio of disruptive/adaptive momentum (the summary from reading extended contingency tables).

Basically, don't imbalance groups faster than you can re-organize them. Otherwise, we're sawing off the limb we're sitting on.

[Note: It's not really as bad as it seems. Sub-processes more than ~5 layers deep in any system converge to stability. New options are statistically shielded from black boxes more than ~5 fractals in the past?]

What are Monetary Operations For?

How do we use distributed monetary operations to advance Public Purpose, sooner rather than later?

Briefly, organizations do what they must to survive. That includes inventing social species. The human social species survives by organizing scalable success faster than other species. Our moment of adaptation now floats among human cultures, as an unfolding success path tracking the combined pace & quality of distributed decision-making.

To optimize national returns, we generate as many distributed options as possible, then select from them as wisely as pace allows.  Success is continually minimizing constraints on decision quality. For currency, we do that in large part by separating two concepts - quantity and price of bookkeeping currency.  Across diverse decisions co-optimizing both local and group outcomes, it is price, and not access to bookkeeping, that should be the only burden on local degrees of freedom. Focusing on bookkeeping price vs quantity guarantees all people infinite pathways for contributing to group benefit. Limiting access to fiat currency simply limits the volume of numerals that students may utilize as they progress through math studies.   Once virtual bookkeeping is free, there is no public utility in limiting its utilization.

In order to scale agility of large groups, currency supply should grow, on-demand, while available options should compete per ROI. That refinement is precisely what allows us to parse scaling options.  Evolving groups need the confidence to explore increasing options, yet still want to choose wisely.  Constraining the quantity of distributed bookkeeping is a useless distraction, whereas floating local pricing - NOT price stability - retains and focuses utility.  Stringing together organized chains of distributed decisions is EXACTLY how agile groups outpace more constrained groups.

Populations removed currency quantity as a constraining variable by transitioning to "fiat" currency standards, where currency supply follows rather than constrains the number of distributed transaction decisions that can be made. Fiat currency, in its most general definition, is not convertible upon demand to any particular commodity, only indirectly to public initiative. Further, its volume is allowed to float, as a function of public initiative. Finally, its exchange rate with other currencies is also allowed to float (floating Fx). Together, these measures remove currency supply as a constraint when optimizing the quality of distributed decision-making. Fiat currency is simply a virtual measure of unbounded public initiative. The currency-access side of distributed decision-making is never the place to impose market discipline, since currency supply has become obsolete as a limiting variable.

Fiat currency, as only one of many tools we use, is created and distributed via public initiative (appropriated by Congress, agent for the electorate), destroyed through taxation or the return of fiat "profits" to our Treasury, the currency issuer (through its agents, the IRS & Central Bank), and primarily utilized in distributed transactions. Other tools we use are regulatory services that serve primarily to manage tolerance limits for all aspects of self-organization, from preparatory education standards to control of fraud rates.
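The create-spend-tax cycle just described is, at bottom, simple arithmetic. Here's a minimal sketch of that flow as a toy ledger - the function name and all the numbers are invented for illustration, not any agency's actual accounting:

```python
# Toy ledger for the fiat cycle: issuer spending creates currency,
# taxation destroys it; whatever remains is held by everyone else.

def run_fiat_cycle(events):
    """Net currency outstanding after issuer spending (+) and taxation (-)."""
    outstanding = 0
    for kind, amount in events:
        if kind == "spend":      # appropriation: issuer credits accounts
            outstanding += amount
        elif kind == "tax":      # taxation: issuer debits accounts
            outstanding -= amount
        else:
            raise ValueError(f"unknown event: {kind}")
    return outstanding

# A period in which the issuer spends 100 and taxes back 90
# leaves 10 units as net non-issuer holdings.
events = [("spend", 60), ("tax", 40), ("spend", 40), ("tax", 50)]
print(run_fiat_cycle(events))  # → 10
```

The point of the sketch is only that the "supply" is a running balance of issuance minus taxation, not a stock of anything convertible.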

Monetary operations involve the fiscal, monetary, taxation and criminology fields - all serving public policy. As only one of multiple tools, monetary operations are useful only when coordinated to optimize the quality of distributed decision-making.

Developmental statistics, freak accidents & psycho-social variance will always prevent us from 100% utilization of our population at any time. However, requiring "revenue" before granting access to minimal local currency supplies is an obsolete concept once the transition to a virtual or fiat currency standard occurs. There is no downside for the issuing group, since currency supply follows net group initiative and is not convertible upon demand to anything else. The upside includes more options to pursue full utilization of human capital as asynchronous group options occur.

The only reason for a group to deny a member access to minimal "maintenance" amounts of fiat currency is the lazy admission that "we can't come up with anything worthwhile for you to do." No one knows the potential value of others, given 308 million people. Rather, to explore unpredictable options, we either suggest things to be done, or follow individuals' suggestions for novel things worth doing. Since fiat currency supply is infinite, we need only practice selecting ROI from - not constraining - emerging, distributed initiative.

There are no better places to orient to modern monetary operations than the books by Mosler, "The 7 Deadly Innocent Frauds of Economic Policy" and Mitchell, "Full Employment Abandoned".

Thursday, January 12, 2012

Currency & Monetary Operations, And How To Use Them.

This blog kicks off with three questions, addressed in successive posts.

1) What are monetary operations, and

2) what are they for? [We forget, and even forget to ask.]

3) How do we seek a general method for answering those unending questions, while keeping emerging monetary Q&A meaningful to all?

As you'll see, these questions can't be separated, only revisited in a useful, variable sequence.

Initial answers, up front, are, respectively:
for accelerating cultural growth; and
by simultaneously optimizing [people+nation], not either separately.

Some needn't read any further.

For the less astute, here is post #1: what is money, and what are monetary operations?

Money, or "currency," is what people use to denominate distributed transactions. It's that simple. If you have no need for widely separated transactions, then you have no need for currency. Adam & Eve didn't mention currency, just fig leaves. Who knows what they were really hiding.

In small groups, organization can track affinity bonds, and currency can be considered to be "backed" by Central Stores of arbitrary commodities. Yet as separation between transactions scales - with population size and/or market complexity - currency becomes a more distributed form of bookkeeping. A currency transition occurs, from primarily a store of local value, to primarily a unit of account for coordinating widely separated events in increasingly complex transaction chains. Transitions occur because net value to an organized group trumps any local definition of value. Strategic, organizational agility can't co-scale with population without a transition to fiat currency methods.
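That transition - from stored value to a unit of account coordinating widely separated events - can be made concrete with a toy multilateral-clearing sketch. Obligations denominated in one common unit net out to a single position per participant, no matter how long the transaction chain grows (all names and amounts here are invented for illustration):

```python
from collections import defaultdict

def net_positions(obligations):
    """obligations: list of (debtor, creditor, amount) in a common unit of account."""
    positions = defaultdict(int)
    for debtor, creditor, amount in obligations:
        positions[debtor] -= amount   # debtor owes the unit of account
        positions[creditor] += amount # creditor is owed the unit of account
    return dict(positions)

# A three-link chain of separated transactions:
chain = [("Alice", "Bob", 30), ("Bob", "Carol", 30), ("Carol", "Alice", 20)]
print(net_positions(chain))  # → {'Alice': -10, 'Bob': 0, 'Carol': 10}
```

With barter or commodity stores, each link would need its own physical settlement; with shared bookkeeping, only the net positions matter - which is why currency-as-accounting scales with transaction-chain complexity while currency-as-commodity does not.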

Someone smarter can always take your Central Stores, whether your men, your women, your horses, your gold, or any other commodity. Therefore, currency is always backed by group initiative. Affinity bookkeeping can't scale with population, so organized states require more distributed logistics accounting. In larger populations, currency must be backed more & more directly by distributed public initiative.

Why is state money better than gold? Because the highest return is always the return-on-coordination. Scaling up the ability to explore large-group options requires scalable large-group agility, scalable large-group intelligence & coherent alignment to emerging options. That's the same reason that species & armies are never resource constrained. The biggest constraint is always organizational ability. That means that only state-money denominations are agile enough to keep up with the kinetic demands of uncontrollable public initiative. Commodity-money was thoroughly tested, and was found inadequate. Its valuation has to be constantly re-scaled, simply because populations & their options scale faster than the magnitude of any commodity store. If that's the case, just simplify and cut the commodity out of the re-scaling loop that links organizational ability to group outcomes.

Currency concepts differ for local-, mid- and broad-minded people. Minds-in-training start with personal transactions, while experienced minds also consider nationwide coordination. Somehow, that spectrum sums to a coordinated whole greater than the sum of its parts. It's no wonder that Ben Franklin, an early proponent of the return-on-coordination, was also an early student of fiat currency. For now, let's just agree that money or currency tends to mean different things to different people at different stages of their development.

To review the history of money, there are no better places to start than Wray's book Understanding Modern Money [a historical guide] and Graeber's book Debt: The First 5000 Years.