6th Complexity in Business Conference

Presented by the

October 30 & 31, 2014

This file is a compilation of all of the abstracts presented at the conference.

They are ordered alphabetically by the first author/presenter’s last name.

Aschenbrenner, Peter

Managing the Endowment of Child Entities in Complex Systems:

The Case of National Banking Legislation, 1781-1846

Peter Aschenbrenner

From 1781 through 1846, officials (acting under Constitutions I and II) wrestled with the problem of creating national banking institutions that would serve the needs of the national government (among other constituencies). These were the most prominent of the ‘child entities’ created in the interval 1777-1861, and the most controversial.

A different perspective is offered: I treat a (generic) national bank as a problem faced by legislators with respect to the kinetics (the dirty details) of endowing a child entity. This approach centers analysis on the Act of Congress creating/contracting with the entity. I then modestly generalize from this ‘act’ of creation. The parent solves a perceived governance problem by creating/refining service missions and assigning them to the child structure to fulfill.

Examples of service missions (1789-1861): funding internal improvements, promoting local and state education through resource-based funding, advancing science and technology, enhancing public knowledge, and procuring developed talent to officer the armies of the new republic. Congress ‘learned by doing’ when it came to endowing public, private and semi-public child entities with service missions. This is not surprising: the national government (once independence was declared) was obliged to replicate many centuries of accumulated mother-country experience in creating and managing parent interactions with (new) child entities.

The twenty-nine official events relevant to national banking arrangements (legislation, presidential approvals/vetoes, court cases), along with their sources and dates, are divided into ten discrete event states as the national government attempted to charter or recharter these institutions. Each may be understood as an ‘information exchange’ and coded as a time step in an agent-based model (ABM) if the investigator graphically simulates the ten exchanges, as I will do in my presentation.

In this case I investigated difficulties arising from assumptions made by some but not all parent actors/bodies. The most problematic assumption was that written instructions (semi-regimented = legal language) which parents did not craft would assist those parents in endowing child entities that would operate more successfully in the real world (= the causal inference). In my model I treat the process of endowing a child entity as a probability space in which many factors are at play, including benefit distribution, constituency demands, costs, and the revenue available for endowment. I construct my model to allow testing ‘what ifs’ with these factors in play.
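
As a rough illustration of such a what-if exercise, a Monte Carlo pass over the factor space might look like the sketch below. All distributions, ranges, and the success rule are hypothetical assumptions made for the sketch, not figures from the paper.

```python
import random

def endowment_outcome(benefit_share, constituency_demand, cost, revenue):
    """Toy score for one endowment scenario: positive means the child
    entity's service mission is adequately funded (hypothetical rule)."""
    return revenue * benefit_share - cost - constituency_demand

def what_if(n_trials=10_000, revenue=100.0, seed=0):
    """Monte Carlo over the factor space; returns the estimated probability
    that an endowment succeeds under the assumed factor distributions."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        benefit_share = rng.uniform(0.2, 0.8)       # benefit distribution
        constituency_demand = rng.uniform(0, 40)    # constituency demands
        cost = rng.uniform(10, 60)                  # cost of the charter
        if endowment_outcome(benefit_share, constituency_demand, cost, revenue) > 0:
            successes += 1
    return successes / n_trials

# Vary the revenue available for endowment to test a 'what if':
for rev in (60, 100, 140):
    print(rev, what_if(revenue=rev))
```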

My secondary research thesis is that an inverse relationship is at work: service-mission fulfillment by the child entity is degraded by appeals to ‘template’ language that (should have) governed parent behavior when the parent endowed the child. I suggest that the operation of procurement models in complex systems is optimized when adherence to the ideology of pre-fabricated endowments is minimized.

What remains: how can degradation of performance at the parent and child level be measured? Complexity theory is offered as a framework to structure the investigation.

Babutsidze, Zakaria

A Trick of the Tail: The Role of Social Networks in Experience-Good Market Dynamics

Zakaria Babutsidze and Marco Valente

Among the many changes brought by the diffusion of the Internet is the possibility of accessing the opinions of a vastly larger number of people than those within the range of physical contact. As a result, the strength of network-externality effects in individual decision making has been increasing over the last 20 years. In this paper, we study the effect of the increasing size of local social networks in decision-making setups where the opinions of others strongly affect individual choice.

In this work we explore the question of whether and how the increased number of sources providing opinions is reshaping the eventual distribution of consumers' choices as reflected in market shares. An apparently obvious consequence of the increasing size of social networks is that any one source of information (i.e., a single social contact/friend) is less relevant than in sparser societies. As a consequence, one might expect larger networks to produce more evenly distributed choices.

In fact, the fall of blockbuster titles was predicted a few years ago by Internet thinkers. The prediction was that the rise of information technologies, together with the sharp increase in the number of titles offered on the market, would result in a longer tail (i.e., a larger number of small niche titles) at the expense of high earners. Reality only partly confirmed these expectations. True, the industry showed an increased number of niches. However, concentration at the top end of the market, which was expected to pay the price of the lengthening tail, has actually increased instead of decreasing: the sales of the top-selling movies have unequivocally grown.

In other words, over time the market-share distribution across titles has become polarized, and the "middle" of the distribution (i.e., average earners) has been squeezed out. This polarization effect (consumers concentrating on the top- or bottom-end of options ranked by sales) is quite puzzling and deserves close attention.

In this paper we present a stylized model providing a stripped-down representation of an experience-good market. The model is populated by a number of consumers who need to choose one movie among many on offer and who rely exclusively on information obtained from members of their social network. To isolate the structural contribution of the size and topology of the network to the observed outcomes, we assume that consumers have identical preferences and that all movies are potentially identically appreciated. In so doing, we remove the possibility that other factors generate whatever results we obtain.
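
A minimal sketch of such a stripped-down market is given below, assuming a random k-neighbour network and a simple imitation rule; the specific choice rule, network type, and parameter values are illustrative assumptions, not the paper's specification.

```python
import random
from collections import Counter

def simulate(n_consumers=1000, n_movies=50, k=8, steps=20, seed=0):
    """Stripped-down experience-good market: identical consumers each pick
    one movie, guided only by what their k network neighbours chose so far.
    The imitation rule below is an illustrative assumption."""
    rng = random.Random(seed)
    # Random k-neighbour network (directed, for simplicity).
    neighbours = [rng.sample(range(n_consumers), k) for _ in range(n_consumers)]
    choice = [None] * n_consumers
    undecided = list(range(n_consumers))
    for _ in range(steps):
        rng.shuffle(undecided)
        still_undecided = []
        for i in undecided:
            seen = [choice[j] for j in neighbours[i] if choice[j] is not None]
            if seen:
                choice[i] = rng.choice(seen)          # imitate an informed neighbour
            elif rng.random() < 0.05:
                choice[i] = rng.randrange(n_movies)   # rare independent trial
            if choice[i] is None:
                still_undecided.append(i)
        undecided = still_undecided
    return Counter(c for c in choice if c is not None)

shares = simulate()
print(shares.most_common(5))   # the blockbusters
print(len(shares))             # surviving titles (length of the tail)
```

Re-running with larger k lets one probe how network density alone reshapes the distribution of market shares.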

Such a simple model is able to reproduce the widely observed phenomenon of increasing polarization with increasing size of consumers' social networks. We test the model under a wide variety of network types and parameterizations, showing that increased network density is able, on its own, to account for the disappearance of mid-sized titles and for the increase in both the share of business generated by blockbusters and the share generated by niche titles.

Bakken, David

Failure to Launch:

Why Most Marketers Are Not Jumping on the Agent-based Modeling Train

David Bakken

Most marketing decisions (such as whether to launch a new product) are made in relative ignorance of the complex interactions that will ultimately determine the outcomes of those decisions. Marketers rely on simple models of processes like the adoption and diffusion of innovations. These models often ignore or assume away heterogeneity in consumer or buyer behavior. The well-known Bass model of new product adoption is one such model. Other relatively simple models are used to make decisions about marketing mix and sales-force deployment.
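
For reference, the Bass model mentioned above fits in a few lines; this discrete-time sketch uses textbook-style illustrative parameter values (not figures from the paper) and shows how little interaction structure such a model carries.

```python
def bass_adopters(m=1_000_000, p=0.03, q=0.38, periods=20):
    """Discrete-time Bass diffusion: new adopters per period.
    m = market size, p = innovation coefficient, q = imitation coefficient
    (parameter values here are illustrative)."""
    cumulative, path = 0.0, []
    for _ in range(periods):
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        path.append(new)
    return path

path = bass_adopters()
print(max(range(len(path)), key=path.__getitem__))  # period of peak adoption
# Note: one aggregate equation; no consumer heterogeneity, no network structure.
```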

The pharmaceutical industry offers a good example of a market with complex interactions between agents (e.g., physicians, patients, payers, and competing drug manufacturers). However, critical business decisions (such as which clinical endpoints to pursue in clinical trials) are made on the basis of fairly simple multiplicative models (for example: "size of indication market X % of patients treated X % of patients treated with drug = peak patient volume").
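
A worked instance of that multiplicative forecast, with hypothetical inputs invented purely for illustration:

```python
# The multiplicative forecast quoted above, with illustrative numbers:
indication_market = 2_000_000   # patients with the indication (assumed)
pct_treated       = 0.60        # share receiving any treatment (assumed)
pct_on_drug       = 0.25        # share of treated patients on this drug (assumed)

peak_patient_volume = indication_market * pct_treated * pct_on_drug
print(peak_patient_volume)      # 300000.0 -- no interactions, no dynamics
```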

The author has been promoting agent-based models for marketing decision-making for about 10 years (e.g., his article "Vizualize It" in Marketing Research, 2007). Despite the boom in individual-level data that reveals heterogeneity in preferences and behavior (such as the growth in hierarchical Bayesian models for consumer choice), as well as expressed curiosity about agent-based modeling, the author has observed few implementations of agent-based modeling to develop the system-level insights that would lead to decisions based on a more complete picture of a market.

In this paper I'll discuss, based on my own experience, the factors that keep marketers from adopting ABM despite its many potential advantages for informing decision-making. Even though there is plenty of evidence that the models used to make many marketing decisions are not particularly effective (they don't consistently lead to better outcomes), marketers seem unwilling to invest much in developing alternative models.

First and foremost is the tactical nature of most marketing decisions. That is accompanied by a relatively short time horizon (e.g., 2-5 years) that places a premium on making a "good enough" decision.

A second factor is the preeminence of statistical modeling in academic marketing research.

Agent-based models require, at least initially, more effort to develop and test, and because their insights often come from emergent behavior, they can seem a little scary to the average marketer.

I'll share my experience in helping clients take some agent-based modeling baby steps and suggest ways to overcome some of the barriers that keep marketers from adopting and using agent-based modeling. In his famous paper on managerial "decision calculus," John D. C. Little listed six characteristics that a model needs to satisfy.

Burghardt, Keith, et al.

Connecting Data with Competing Opinion Models

Keith Burghardt, William Rand and Michelle Girvan

In this paper, we attempt to better model complex contagions, a hypothesis well known in sociology, and opinion dynamics, well known among network scientists and statistical physicists.

First, we will review these two ideas. The complex contagion hypothesis states that new ideas spread between individuals much as diseases pass from a sick person to a healthy one, except that, unlike most biological diseases, individuals are highly unlikely to adopt a product or controversial idea unless exposed to it multiple times. In comparison, simple contagions, like viruses, are thought to be caught easily with as little as one exposure.

Opinion dynamics is the study of competing ideas via interactions between individuals. Although the prototypical example is voting for a political candidate, anything from competing products and companies to language diffusion similarly relies on local interactions to compete for dominance.

The problems we will address in each field are the following. The complex contagion hypothesis has few models that can match the behavior seen in empirical data, and is therefore in need of a realistic model. Unlike complex contagions, opinion dynamics does not suffer from a lack of quantitative models, although the models contradict one another when trying to describe very similar behavior in voting patterns; this, while not necessarily incorrect, suggests that there may be a deeper model that can unify the empirical observations. Lastly, agents in opinion dynamics models can reach the same opinion quickly (i.e., the timescale T_cons on random networks typically scales as N^a, where N is the number of agents and a <= 1). Reaching one opinion is known in previous work as reaching "consensus", and we adopt this language in the present paper. Depending on how we define time steps, this is in disagreement with the observation that competing political parties have lasted a long time (e.g., > 150 years in the US).

We address these issues with what we call the Dynamically Stubborn Competing Strains (DSCS) model, introduced in this paper. Our model is similar to the Susceptible-Infected-Susceptible (SIS) model well known in epidemiology, in which agents can be "infected" (persuaded) by their neighbors and recover (return to having no opinion). Our novel contributions are:

- Competing opinions ("strains").

- Ideas spread outward from agents that have just "caught" a new idea.

- Agents are increasingly less likely to change their opinion the longer they hold it.

We interpret the recovery probability as the probability of randomly becoming undecided (or experiencing "self-doubt"). These differences are essential for creating a minimal, effective complex contagion model that agrees with real data.
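
To make the mechanics concrete, here is a minimal sketch of a DSCS-style update based on one plausible reading of the description above; the pull-based update, the 1/(1 + s * tenure) stubbornness form, and all parameter values are assumptions for the sketch, not the authors' exact specification.

```python
import random

def dscs_step(opinion, tenure, neighbours, beta=0.4, delta=0.05, s=0.5, rng=random):
    """One sweep of a DSCS-style update. opinion[i] is None (undecided) or a
    strain label; tenure[i] counts how long agent i has held its opinion."""
    n = len(opinion)
    for i in rng.sample(range(n), n):
        # Self-doubt: an opinionated agent may randomly become undecided.
        if opinion[i] is not None and rng.random() < delta:
            opinion[i], tenure[i] = None, 0
            continue
        j = rng.choice(neighbours[i])
        if opinion[j] is None or opinion[j] == opinion[i]:
            continue
        # Persuasion probability decays with tenure: dynamic stubbornness.
        if rng.random() < beta / (1.0 + s * tenure[i]):
            opinion[i], tenure[i] = opinion[j], 0
    for i in range(n):
        if opinion[i] is not None:
            tenure[i] += 1

n = 200
rng = random.Random(1)
neighbours = [[(i - 1) % n, (i + 1) % n] for i in range(n)]  # ring network
opinion = [rng.choice([0, 1, None]) for _ in range(n)]
tenure = [0] * n
for _ in range(500):
    dscs_step(opinion, tenure, neighbours, rng=rng)
print(opinion.count(0), opinion.count(1), opinion.count(None))
```

Because stubbornness grows with tenure, entrenched camps of both strains can coexist for long stretches instead of collapsing quickly to consensus.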

In the following sections, we will show agreement between our model and several empirical studies. Firstly, we will show agreement between our model and a complex contagion. Secondly, we will show that our model can help describe the collapse of the probability distribution of votes among several candidates when scaled by v_0^(-1), where v_0 is the average number of votes per candidate. Next, we will argue that, because we believe our model approaches the Voter Model Universality Class (VMUC) in certain limits, it exhibits the same long-range vote correlations observed in several countries. Lastly, we will show that the model parameters allow for arbitrarily long times to reach opinion consensus.
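
The scaling-collapse test in the second step can be illustrated with synthetic data; the lognormal samples below are fabricated purely to show the rescaling mechanics, not real vote data.

```python
import numpy as np

def rescale_by_mean(votes):
    """Rescale candidate vote counts by their mean v0. If distributions from
    different elections collapse onto one curve in v / v0, they share a
    common functional form (the test described above)."""
    votes = np.asarray(votes, dtype=float)
    return votes / votes.mean()

# Two synthetic 'elections' of very different scale but identical shape:
rng = np.random.default_rng(0)
small = rng.lognormal(mean=2.0, sigma=1.0, size=500)
large = rng.lognormal(mean=5.0, sigma=1.0, size=500)
for name, sample in (("small", small), ("large", large)):
    print(name, np.percentile(rescale_by_mean(sample), [25, 50, 75]))
# Similar quantiles after rescaling indicate a scaling collapse.
```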

A question that may arise at this point is why we look at complex contagions and opinion dynamics with the same model. Surely, one may propose, there is no competition in complex contagions? The argument we make is that humans have limited cognition and thus can only focus on a few ideas at a given time (in economics, this is known as "budget competition"). We take the limit in which only one idea is being actively spread (alternatively, we can make the time steps small enough that there is only one idea we can focus on). Therefore, although we may adopt several coding languages, for example, we can only focus on one (e.g., Python or Matlab but not both) at a given time, which introduces competition into complex contagions.

Next, we may ask what motivates our model parameters. Why is there increasing stubbornness or a notion of self-doubt, aside from convenient agreement with data? Our model is based on a few intuitive observations about voting patterns. Conservatism (whose name implies "resistance to new ideas") increases with age, implying that dynamic stubbornness may exist among voters. Furthermore, a large, stable fraction of "independent" voters not tied to a single party suggests that an "unopinionated" state exists in real life. A reduction in poll volatility has been observed before elections, which we can understand as a reduction in self-doubt as agents increasingly need to make a stable decision before they go to the polls. Lastly, rumors and opinions seem to spread virally, meaning the cumulative probability of adopting an idea increases with the number of exposures to that idea; therefore a realistic opinion model should have "viral" dynamics. Notice that nothing in our model explicitly implies the idea of complex contagions. In other words, the complex contagion behavior we see is not ad hoc, but a natural outcome of our model.

Chica, Manuel

Centrality Metrics for Identifying Key Variables in System Dynamics Modelling for Brand Management

Manuel Chica

System dynamics (SD) provides the means for modelling complex systems such as those required to analyse many economic and marketing phenomena. When tackling highly complex problems, modellers can substantially increase their understanding of these systems by automatically identifying the key variables that arise from the model structure.

In this work we propose the application of social network analysis centrality metrics, such as degree and closeness centrality, to quantify the relevance of each variable. These metrics can assist modellers in identifying the most significant variables of the system.
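
A minimal sketch of this kind of ranking, using networkx on a toy causal graph; the variable names below are illustrative and are not the model from the paper.

```python
import networkx as nx

# Toy SD model structure: directed edges are causal links between variables.
G = nx.DiGraph([
    ("advertising", "awareness"), ("awareness", "viewers"),
    ("viewers", "word_of_mouth"), ("word_of_mouth", "awareness"),
    ("viewers", "revenue"), ("revenue", "advertising"),
])

# Rank variables by standard centrality metrics:
for name, metric in [("degree", nx.degree_centrality(G)),
                     ("closeness", nx.closeness_centrality(G)),
                     ("betweenness", nx.betweenness_centrality(G))]:
    ranking = sorted(metric, key=metric.get, reverse=True)
    print(name, ranking[:3])  # top-ranked candidate key variables
```

Variables ranked highly by several metrics are natural candidates for the key variables on which strategic actions are tested.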

We have applied our proposed key-variable detection algorithm to a brand management problem modelled via system dynamics. Specifically, we have modelled and simulated a TV show brand management problem. We have followed Vester's sensitivity model to shape the system dynamics and structure. This SD methodology is well suited to sustainable processes and enables analysts to simplify real-world complexity into a simulation and consensus system.

After applying the algorithm and extracting the key variables of the model structure, we have run different simulations to compare the global impact of injecting strategic actions only into the top-ranked key variables. Simulation results show that changes in these variables have a noteworthy impact on the whole system relative to changes in other variables.

Darmon, David

Finding Predictively Optimal Communities in Dynamic Social Networks

David Darmon

The detection of communities is a key first step to understanding complex social networks. Most methods for community detection map a static network to a partition of nodes. We propose using dynamic information via predictive models to generate predictively optimal communities. By better understanding community dynamics, managers can devise strategies for engaging with and creating content for social media, providing them with a powerful way to increase brand awareness and eventually sales.

Klemens, Ben

A Useful Algebraic System of Statistical Models

Ben Klemens

This paper proposes a single form for statistical models that accommodates a broad range of models, from ordinary least squares to agent-based microsimulations. The definition makes it almost trivial to define morphisms that transform and combine existing models to produce new ones. It offers a unified means of expressing and implementing methods that are typically given disparate treatment in the literature, including transformations via differentiable functions, Bayesian updating, multi-level and other types of composed models, Markov chain Monte Carlo, and several other common procedures. It offers particular benefits for simulation-type models, because of the value of being able to build complex models from simple parts, easily calculate robustness measures for simulation statistics, and, where appropriate, test hypotheses. Running examples will be given using Apophenia, an open-source software library based on the model form and transformations described here.
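
The paper's running examples use Apophenia, a C library. As a loose illustration of the underlying idea (a model reduced to a uniform interface, plus morphisms that transform and combine models), here is a minimal Python analogue; this is not Apophenia's API, and every name below is invented for the sketch.

```python
from dataclasses import dataclass
from typing import Callable, Sequence
import math

@dataclass
class Model:
    """A statistical model reduced to one interface: a log-likelihood over
    (parameters, data). A deliberately minimal stand-in for the paper's
    richer model form (estimation, RNG draws, etc. are omitted)."""
    loglike: Callable[[Sequence[float], Sequence[float]], float]

def normal_loglike(params, data):
    """Normal model: params = [mu, sigma]."""
    mu, sigma = params
    return sum(-0.5 * math.log(2 * math.pi * sigma ** 2)
               - (x - mu) ** 2 / (2 * sigma ** 2) for x in data)

def fix_parameter(model, index, value):
    """Morphism: transform a model by pinning one parameter."""
    def ll(params, data):
        full = list(params)
        full.insert(index, value)
        return model.loglike(full, data)
    return Model(ll)

def mixture(m1, m2, w):
    """Morphism: combine two models into a weighted mixture."""
    def ll(params, data):
        k = len(params) // 2   # assumes equal-length parameter vectors
        return sum(math.log(w * math.exp(m1.loglike(params[:k], [x]))
                            + (1 - w) * math.exp(m2.loglike(params[k:], [x])))
                   for x in data)
    return Model(ll)

normal = Model(normal_loglike)
std_normal = fix_parameter(fix_parameter(normal, 0, 0.0), 0, 1.0)  # mu=0, sigma=1
print(std_normal.loglike([], [0.0]))                # log N(0 | 0, 1)
print(mixture(normal, normal, 0.5).loglike([0, 1, 0, 1], [0.0]))  # same value
```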

Lamba, Harbir

How Much 'Behavioral Economics' is Needed to Invalidate Equilibrium-Based Models?

Harbir Lamba

The orthodox models of economics and finance assume that systems of many agents are always in a quasi-equilibrium state. This (conveniently) implies that the future evolution of the system is decoupled from its past and depends only upon external influences. However, there are many human traits and societal incentives that can cause coupling between agents' behaviours, potentially invalidating the averaging procedures underpinning such equilibrium models.