Modeling Cryptoeconomic Protocols as Complex Systems - Part 2
In part one, we discussed how, as cryptoeconomic designers, we get to choose the rules, interfaces, and incentives of our protocol in order to design a system that fulfills its intended purpose. To evaluate such designs, the traditional economic toolkit offers us top-down models that fail to account for complexity, or bottom-up models that are computationally intractable for non-trivial numbers of participants.
In this post, we will introduce two modeling and simulation techniques, widely used in engineering and the social sciences, that attempt to deal with the specific challenges posed by complexity. We will also discuss how BlockScience will be applying these tools to model The Graph.
Dynamic Systems Models and Controls Systems Engineering
Dynamic Systems Models (DSMs) allow us to represent systems of interacting components that change over time. They are a general-purpose tool, equally suited to mechanical, electrical, or any other dynamic systems.
It’s common to visualize dynamic systems using purpose-specific diagramming techniques, such as with electronic circuits.
Four ways of diagramming systems. Top left: A circuit diagram of an electronic oscillator. Top right: A block diagram of an oscillator. Bottom left: A causal loop diagram of supply and demand. Bottom right: A stock and flow diagram of fish populations.
It’s also common to use block diagrams, causal loop diagrams, or stock and flow diagrams. The first two allow us to model any cause-and-effect relationship between different components of the system, such as currents and voltages changing on a circuit board or the kinetic and potential energy of a rope-and-pulley system. Stock and flow diagrams allow us to model some type of resource, such as money or water, moving through the different parts of a system.
Going back to our discussion of mid-20th century general system theorists in the previous post, you may notice that the above systems are medium number systems—the very ones that were said to be difficult to model.
Indeed, while we can represent the above models mathematically as systems of differential equations, as soon as the number of interacting components increases past a certain point, it becomes untenable to solve them analytically.
That’s where simulation software like SPICE or MATLAB’s Simulink comes into play. It allows us to do what the scientists of the mid-20th century could not: leverage computers to simulate and approximate the behavior of various dynamic systems.
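To make this concrete, here is a minimal sketch of simulating a dynamic system numerically: a damped harmonic oscillator (think of a mass on a spring, or an RLC circuit) integrated step by step with a semi-implicit Euler method. All parameter values are illustrative, not drawn from any real circuit.

```python
def simulate_oscillator(x0=1.0, v0=0.0, k=1.0, c=0.2, m=1.0,
                        dt=0.01, steps=5000):
    """Numerically integrate m*x'' = -k*x - c*x' one small step at a time."""
    x, v = x0, v0
    trajectory = [x]
    for _ in range(steps):
        a = (-k * x - c * v) / m   # acceleration from spring force + damping
        v += a * dt                # update velocity first (semi-implicit Euler)
        x += v * dt                # then position
        trajectory.append(x)
    return trajectory

traj = simulate_oscillator()
# Damping dissipates energy, so the oscillation amplitude shrinks over time.
```

Solving this system analytically is easy; the point is that the same step-by-step approach keeps working when we couple together dozens of components and the closed-form solution stops being tractable.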
Engineers do more than just model dynamic systems. They also control them.
That is to say, they design controllers that take readings from one or more sensors, compare them against one or more reference signals, and output a control variable that drives an actuator, with the goal of steering some output of the system toward a desired value.
For example, imagine a self-driving car trying to stay in its lane: its sensors measure how far the car is from the center of the lane, the reference signal says to track the lane center, and the actuator is the steering system that adjusts the car’s heading in response to the error, i.e., the distance between the car’s position and the center of the lane.
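A toy version of this feedback loop can be sketched in a few lines using a simple proportional controller. The one-dimensional car model and the gain value are illustrative assumptions, not a real autonomy stack.

```python
def lane_keeping(initial_offset=2.0, gain=0.5, steps=50):
    """Each step: sense the lateral offset, then steer against the error."""
    offset = initial_offset           # meters from lane center (the error)
    history = [offset]
    for _ in range(steps):
        steering = -gain * offset     # controller: output opposes the error
        offset += steering            # actuator: steering shifts the car back
        history.append(offset)
    return history

history = lane_keeping()
# The error shrinks toward zero as the sense -> control -> actuate loop repeats.
```

With this gain the error halves every step; crank the gain past 2.0 and the same loop overshoots more each step instead, which is exactly the kind of behavior the damping-ratio figure below illustrates.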
An image showing control system responses with different damping ratios. An “overdamped” system approaches its target value sluggishly, while an “underdamped” system overshoots and oscillates before settling!
Designing controller algorithms and parameterizing them optimally is the domain of control systems engineering, and there are immediate parallels to cryptoeconomic engineering.
Consider how governance sets parameters in the Maker protocol.
In its essence, this is just another feedback loop with a sensor, controller, and actuator. The sensor is the MKR holders observing the price of DAI on various exchanges and observing the error between that and the target price of DAI ($1 USD). In response to the error, the MKR holders act as a controller that produces the control variable, the Stability Fee.
In response to changes in the Stability Fee (the cost of borrowing), borrowers in the system act as actuators by opening and closing Collateralized Debt Positions (CDPs) to borrow or pay back DAI. The resulting fluctuation in the circulating DAI supply in turn lowers or raises the price of DAI, respectively. MKR holders observe the new price, and the cycle repeats. In the recently launched Multi-Collateral Dai, a similar dynamic exists with the Dai Savings Rate (DSR), which is designed to increase the demand to hold DAI. Check out this article for a deeper look at Maker through a control theory lens.
You may have noticed a key difference between the self-driving car and Maker examples. In the first dynamic system, all the components, whether electronic or mechanical, are governed by the laws of physics. While there may be imperfections in any given component, their behavior is entirely mechanistic.
In the Maker example, meanwhile, several of the components—such as the MKR holders setting the Stability Fee or the borrowers opening and closing CDPs—are actually diverse groups of human agents making decisions, based on their individual incentives, using partial information and bounded rationality. Furthermore, these agents may learn and modify their behavior over time.
To model the parts of our system that are governed by human behavior, we need to adopt a different approach.
Agent-Based Models
In Agent-Based Models (ABMs), rather than taking statistical averages of how agents behave, the behaviors of individual agents are treated as a first-class concern. By simulating the local interactions of heterogeneous agents exhibiting many different types of behavior, we can observe which macroscale effects emerge.
The goal is not to model reality perfectly but rather to demonstrate how our system will behave given varying assumptions, including external factors, the types of agent behaviors, and the spatial topology in which agents interact.
From left to right: A Schelling Segregation Model simulation run at t=0, t=10 and t=40. Agents are “happy” if at least 3 of their 8 neighbors are similar to them. At each time step, happy agents stay put, while unhappy agents randomly move to a new location on the grid. By t=40, the grid is almost completely segregated, despite agents being happy when more than half their neighbors are dissimilar to them!
One classic ABM is Thomas Schelling’s Segregation Model, in which agents are laid out on a grid and, in each round, may choose to move or stay based on the demographic makeup of their neighbors on the grid. The model showed that even societies comprising relatively tolerant individuals—those with only a slight preference for neighbors of their own demographic—could exhibit segregation at the macroscale. This was a surprising result! It challenged the conventional thinking at the time that segregation at the societal scale must be the result of intolerance at the level of individuals.
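A compact version of the model is easy to write down. The sketch below follows the rules in the caption above—agents of two types are happy if at least 3 of their (up to) 8 neighbors share their type, and unhappy agents move to a random empty cell each round—while the grid size, density, and wrap-around topology are illustrative choices of ours.

```python
import random

random.seed(42)
SIZE, EMPTY_FRAC, THRESHOLD = 20, 0.1, 3

def neighbors(grid, r, c):
    """Collect the 8 surrounding cells (grid wraps around at the edges)."""
    cells = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                cells.append(grid[(r + dr) % SIZE][(c + dc) % SIZE])
    return cells

def is_happy(grid, r, c):
    me = grid[r][c]
    return sum(1 for n in neighbors(grid, r, c) if n == me) >= THRESHOLD

def step(grid):
    """One round: every unhappy agent relocates to a random empty cell."""
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is None]
    moved = 0
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and not is_happy(grid, r, c):
                dest = random.choice(empties)
                empties.remove(dest)
                empties.append((r, c))
                grid[dest[0]][dest[1]], grid[r][c] = grid[r][c], None
                moved += 1
    return moved

# Random initial grid of two agent types ("A"/"B") plus some empty cells.
grid = [[None if random.random() < EMPTY_FRAC else random.choice("AB")
         for _ in range(SIZE)] for _ in range(SIZE)]
for t in range(40):
    if step(grid) == 0:   # stop once everyone is happy
        break
```

Note that there is no “segregate” instruction anywhere in the code—clustering emerges purely from local moves, which is the defining lesson of agent-based modeling.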
While ABMs have been greatly aided by advancements in computing, they need not be overly complex to be insightful. After all, Schelling’s Segregation Model was initially computed by hand. The defining quality of ABMs is that they represent a bottom-up simulation based on useful assumptions about the agents in our system.
One of my favorite ABMs from recent memory is Ray Dalio’s “How The Economic Machine Works” video. In it, he builds a model for how debt cycles in the economy happen based on the interactions of individual buyers, sellers, lenders, and borrowers. While this is a video and not technically a simulation, you can bet that it reflects the sorts of models that Dalio’s Bridgewater—one of the most successful macro investing hedge funds in the world—uses internally to make their investments.
We’ve now explored two powerful tools for modeling different types of systems. How can we apply these to design our cryptoeconomic protocols, now that we understand the unique challenges presented by complexity?
This is where BlockScience’s cadCAD comes in.
BlockScience is a firm that specializes in researching, analyzing, and engineering complex systems. One of their key insights is that we can apply the dynamic systems modeling approach to validating protocol designs. In place of a deterministic controller in our feedback loop, however, we can leverage agent-based models to simulate how agents might update the system state in response to their perception of the current state of the system.
In the Maker example, we couldn’t model the feedback loop using traditional dynamic systems methods because it was the result of human actions. With the BlockScience approach, we can account for human actions in our model.
To facilitate this new type of modeling, BlockScience has developed a state-of-the-art modeling tool, cadCAD, which stands for complex adaptive dynamics Computer-Aided Design.
We’ve talked about complexity, but what does “adaptive” refer to? Simply that the agents in our system are capable of learning in response to their interactions with the system. Not only that, but the rules of our protocol are capable of evolving in response to the behavior of agents, through processes like governance.
In other words, the dynamics of our system can themselves change over time. It’s imperative that our models account for this.
The three main concepts in cadCAD are policies, states, and state updates. Policies encompass human and autonomous agents, as well as the internal rules of the system, that affect the state of the system and might trigger state updates. For the agents in the system, cadCAD supports using a network model to specify how agents interact.
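To illustrate the pattern, here is a hand-rolled sketch of the policy / state / state-update structure. To be clear, this is not cadCAD’s actual API—just a minimal illustration of the idea: policies read the current state and propose actions, and state-update functions fold those actions into the next state. The borrower and governance behaviors are invented for illustration.

```python
def borrower_policy(state):
    # agents borrow more when the fee is low (illustrative behavior)
    return {"new_debt": max(0.0, 100.0 * (0.10 - state["fee"]))}

def governance_policy(state):
    # governance nudges the fee up as total debt grows (illustrative rule)
    return {"fee_delta": 0.0001 * state["debt"] - 0.001}

def update_debt(state, actions):
    return state["debt"] + actions["new_debt"]

def update_fee(state, actions):
    return max(0.0, state["fee"] + actions["fee_delta"])

def run(initial_state, policies, updates, timesteps):
    state, history = dict(initial_state), []
    for _ in range(timesteps):
        actions = {}
        for policy in policies:            # collect every policy's actions
            actions.update(policy(state))
        state = {var: fn(state, actions)   # apply all state updates at once
                 for var, fn in updates.items()}
        history.append(state)
    return history

history = run({"debt": 0.0, "fee": 0.01},
              [borrower_policy, governance_policy],
              {"debt": update_debt, "fee": update_fee},
              timesteps=50)
```

The separation matters: because behaviors live in policy functions and mechanics live in state-update functions, we can swap in different agent behaviors—or a network of interacting agents—without touching the rules of the system itself.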
With this setup, we can run a variety of different simulations depending on the questions we are trying to answer.
For example, if we’re curious about how robust our system design is, we might run a Monte Carlo simulation in which we introduce stochastics (randomness) into specific parts of the system, such as the behaviors of certain agents, and do a statistical analysis of many runs with the same system parameters. We can use so-called “fat-tail” stochastic distributions to help us identify emergent phenomena, such as black swan events, that the rules of our system might produce.
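As a sketch of what such a robustness check looks like: the toy model below takes fat-tailed (Pareto-distributed) demand shocks, applies a corrective feedback pull toward a $1 peg, and records the worst deviation in each run. The model, distribution parameters, and run counts are all illustrative assumptions.

```python
import random
import statistics

random.seed(7)

def one_run(steps=100, alpha=1.5):
    """One toy run: fat-tailed shocks vs. corrective feedback toward $1."""
    price, worst = 1.0, 0.0
    for _ in range(steps):
        # Pareto draw shifted by its mean, giving a roughly centered but
        # heavy-tailed shock (infinite variance for alpha < 2)
        shock = random.paretovariate(alpha) - alpha / (alpha - 1)
        price += 0.01 * shock
        price += 0.5 * (1.0 - price)       # feedback pulls price to the peg
        worst = max(worst, abs(price - 1.0))
    return worst

# Monte Carlo: many runs with identical parameters, different random draws
worst_deviations = [one_run() for _ in range(1000)]
print(statistics.median(worst_deviations), max(worst_deviations))
```

With a fat-tailed distribution, the worst run deviates far more than the typical run—precisely the black-swan behavior that averaging over runs would hide.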
Furthermore, we can run A/B tests to compare how our system behaves given different sets of assumptions. We can also perform parameter sweeps to run many simulations in which a control variable or initial conditions are modified slightly to see how much they influence the overall behavior of the system. This is called sensitivity analysis.
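A parameter sweep can be as simple as rerunning a model across a range of one parameter and comparing an outcome metric. In the illustrative sketch below, we sweep the gain of a simple proportional feedback loop and measure how many steps it takes to settle; the model and gain values are our own assumptions.

```python
def settling_time(gain, initial_error=1.0, tolerance=0.01, max_steps=1000):
    """Steps until a proportional loop pulls the error within tolerance."""
    error = initial_error
    for step in range(max_steps):
        if abs(error) < tolerance:
            return step
        error -= gain * error          # feedback correction each step
    return max_steps                   # never settled within the budget

# Sweep the controller gain and record the settling time for each value
sweep = {gain: settling_time(gain)
         for gain in (0.05, 0.1, 0.25, 0.5, 0.9, 1.5, 1.95)}
for gain, steps in sweep.items():
    print(f"gain={gain:>4}: settles in {steps} steps")
```

Small gains settle slowly, moderate gains settle fast, and gains approaching 2.0 oscillate and barely settle at all—the behavior of the system is highly sensitive to this one parameter, which is exactly what a sensitivity analysis is meant to reveal.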
Importantly, our task is not finished when we’ve modeled our system once and launched our protocol. We may also choose to revisit, validate, and fine-tune our models with real-world data collected from our system running in the wild.
In part one, we discussed how cryptoeconomic designers get to choose the rules, interfaces, and incentives of our protocol. Incentives influence the behavior of agents in our system while rules and interfaces constrain said behavior. The agents in our system are acting with partial information and bounded rationality and modify their behavior over time through learning, so we cannot “average them out” in some top-down statistical model. In other words, we are designing complex adaptive systems, which requires a more rigorous approach than is offered by the traditional economic toolkit.
In this second post, we showed how dynamic systems models can help us model systems of interacting components and even how we might control such systems. Agent-based models fill an important gap by allowing us to account for the aggregate behavior of humans exhibiting diverse behaviors in our models. We also covered how cadCAD, a cutting-edge tool for modeling complex adaptive systems, seamlessly synthesizes these two approaches.
We’re proud to be working with the BlockScience team to validate The Graph’s cryptoeconomic design. BlockScience has demonstrated itself to be a thought leader in this emerging field. With Dr. Michael Zargham leading, they’ve assembled a strong interdisciplinary team drawing from fields as diverse as network science, the social sciences, and controls engineering. They have a research partnership with the Research Institute for Cryptoeconomics in Vienna, with whom they recently co-authored a paper on cryptoeconomics and complex systems. I recommend checking it out.
As a project with real usage and a strong community ahead of the launch of our decentralized network, we feel The Graph is well-positioned to be an excellent case study for incorporating these emerging best practices into the full design and engineering lifecycle of the protocol.
We look forward to sharing more on this topic in the future. Thanks for reading!
- Graph Protocol
- Brandon Ramirez
- January 14, 2020