## Indexer Allocation Optimization: Part II

*This article was co-authored with Howard Heaton from Edge & Node and Hope Yen from GraphOps.*

## TL;DR

Analytically optimizing to maximize indexing rewards may seem like a straightforward solution, but it oversimplifies the complexities involved in an indexer’s decision-making process. In this post, we’ll discuss how we enable indexers to input their preferences into the Allocation Optimizer and how those preferences can impact the Allocation Optimization problem. We’ll also explore how we incorporate a gas fee to make the optimization problem more realistic, which results in a non-convex problem.

## Overview

In Part I, we provided an overview of the Indexer Allocation Optimization problem and discussed how we can optimize for a basic version of it. However, indexers have unique preferences and constraints that cannot be easily represented in the math model. Additionally, we need to factor in gas costs, which are a per-allocation fee that indexers incur when they allocate. In this section, we’ll address both indexer preferences and gas costs, one at a time. It’s important to note that while we will use some math concepts, we’ll strive to explain them in a way that’s accessible to readers with a college undergraduate level of math knowledge, by which we mean a basic knowledge of calculus and linear algebra.

## Optimizing Over Gas

Gas costs have long been a challenging issue for those seeking to solve web3 optimization problems, with a variety of proposed solutions. For instance, Angeris et al.’s paper on optimal routing for constant function market makers[1] proposes one approach using convex relaxation. In this section, we’ll offer our own take by framing the problem of optimal allocation with fixed transaction costs as a sparse vector optimization problem. Specifically, we aim to find the set of optimal sparse vectors and then select the specific sparse vector that yields the highest profit. We define profit as the total indexing reward minus the total gas cost. In the following section, we’ll go through the background math at a very high level. For the most part, we use images to make our points. If you’d rather not dig into how the algorithm works, feel free to skip ahead to the section “Indexer Profits.”

### The Math At A Glance

Let’s take a look at an example. Say we want to minimize the function

$f(x_1, x_2) = (x_1 - 2)^2 + (x_2 - 5)^2$

You should be able to see that the minimum occurs at $(2, 5)$. Plug those numbers in for $x_1$ and $x_2$ if you can’t; the result is $0$. Finding this by inspection is easy, but let’s see if we can do it in a more automated way.

One way to do this is gradient descent. Here’s how it works. Pick some point $x^{(1)}$ to start the algorithm from. The gradient of the function, $\nabla f$, gives you the direction of steepest ascent. Since we want to find the minimum, we put a negative sign in front of it and step in the direction of the negative gradient with step size $\eta$. Thus, our update rule becomes:

$x^{(n+1)} = x^{(n)} - \eta \nabla f(x^{(n)})$

**Note:** If you aren’t interested in the math, feel free to skip ahead to the paragraph above the next image.

Let’s plug in our $f$ and see this in action.

$f(x_1, x_2) = (x_1 - 2)^2 + (x_2 - 5)^2$

We’ll split this into two parts – solving for $x_1$ and solving for $x_2$ independently. Let’s start with $x_1$. First, we find the partial derivative of $f$ with respect to $x_1$.

$\frac{\partial f}{\partial x_1} = 2(x_1 - 2) = 2x_1 - 4$

Now, we use the formula for gradient descent and start plugging in numbers.

$x^{(n+1)}_1 = x^{(n)}_1 - \eta \frac{\partial f}{\partial x_1}$

$x^{(n+1)}_1 = x^{(n)}_1 - \eta (2x^{(n)}_1 - 4)$

$x^{(n+1)}_1 = x^{(n)}_1 - 2\eta x^{(n)}_1 + 4\eta$

$x^{(n+1)}_1 = (1 - 2\eta)x^{(n)}_1 + 4\eta$

We can now replace $x^{(n)}_1$ with the same gradient descent formula applied to $x^{(n-1)}_1$.

$x^{(n+1)}_1 = (1 - 2\eta)\left[(1 - 2\eta)x^{(n-1)}_1 + 4\eta\right] + 4\eta$

$x^{(n+1)}_1 = (1 - 2\eta)^2x^{(n-1)}_1 + \left[(1 - 2\eta)4\eta + 4\eta\right]$

I grouped certain terms together on purpose. Hopefully, you can start to see the pattern at this point. If not, I recommend working through the algebra again. Regardless, if we want to write $x^{(n+1)}_1$ in terms of $x^{(n-2)}_1$, we can do this as

$x^{(n+1)}_1 = (1 - 2\eta)^3x^{(n-2)}_1 + \left[(1 - 2\eta)^24\eta + (1 - 2\eta)4\eta + 4\eta\right]$

And if we keep doing this, we get

$x^{(n+1)}_1 = (1 - 2\eta)^nx^{(1)}_1 + \left[(1 - 2\eta)^{n-1} + (1 - 2\eta)^{n-2} + \dots + 1\right]4\eta$

You should hopefully recognize the term in the square brackets as a geometric series. Summing it gives us

$x^{(n+1)}_1 = (1 - 2\eta)^nx^{(1)}_1 + \frac{1 - (1 - 2\eta)^n}{1 - (1 - 2\eta)}4\eta$

Notice here that we must have $|1 - 2\eta| < 1$. Otherwise, the $(1 - 2\eta)^n$ terms blow up to infinity. If we select $\eta$ so that this inequality is satisfied, every $(1 - 2\eta)^n$ term goes to $0$ as $n \to \infty$. Thus, as $n \to \infty$, our formula becomes

$x^{(n+1)}_1 = \frac{1}{2\eta} \cdot 4\eta = 2$

Thus, our optimal $x_1 = 2$. Notice this is exactly what we found by inspection earlier; the only difference is that we’ve now used a more automated algorithm. If our function $f$ satisfies certain properties, gradient descent is guaranteed to find the global minimum. The full list of properties is beyond the scope of this post, so we’ll just focus on one – convexity.
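To make this concrete, here’s what that loop looks like in code – a minimal Python/NumPy sketch of gradient descent on our example $f$ (the names are ours, purely for illustration):

```python
import numpy as np

def grad_f(x):
    # Gradient of f(x1, x2) = (x1 - 2)^2 + (x2 - 5)^2
    return 2 * (x - np.array([2.0, 5.0]))

def gradient_descent(x, eta=0.1, iters=100):
    # Repeatedly step against the gradient: x <- x - eta * grad_f(x)
    for _ in range(iters):
        x = x - eta * grad_f(x)
    return x

x_opt = gradient_descent(np.array([0.0, 0.0]))
```

With $\eta = 0.1$ we have $|1 - 2\eta| = 0.8 < 1$, so the iterates converge to $(2, 5)$, just as the derivation predicts.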

A convex function $f$ is a function that always curves upwards. This means that for any two points $a$ and $b$, the line segment connecting $f(a)$ to $f(b)$ is always on or above the curve of $f$. So $x^2$ is convex; $x^3$ is not. $-x^2$ is concave, which is fine because we can just run gradient ascent to find the maximum instead of gradient descent to find the minimum. We’ll come back to convexity a bit later.

Now that we know we can use gradient descent to minimize an objective function, let’s make things a bit more interesting by adding constraints on our solution. Say I have a maximum of 10 units to divide between $x_1$ and $x_2$. This is totally fine, because $2 + 5 = 7$, which is less than 10, so the unconstrained minimum still satisfies the constraint. Now say instead that I have just 1 unit to allocate. Would I be better off dividing this between $x_1$ and $x_2$, or would it be better to go all in on one? This is no longer as obvious.

In Part I, we talked about using the Karush-Kuhn-Tucker conditions to solve this problem, and this is what gave us our analytic solution. Now, we want to continue on this theme of finding a more general solution. In this case, we can use projections.

This new constraint – that $x_1 + x_2 = 1$ with $x_1 \geq 0$ and $x_2 \geq 0$ – has a name: the simplex constraint. The simplex constraint has a corresponding simplex projection.[2] We’ll use this here. I won’t talk through the math again; if you’re interested, feel free to work it out for yourself. I’ll just give you the new update rule.

$x^{(n+1)} = T(x^{(n)} - \eta \nabla f(x^{(n)}))$

In other words, we take our normal gradient descent step, and then we project the result of that onto the simplex, which is denoted by the operator $T$. What does this look like?

We revisit our familiar problem $f(x_1, x_2) = (x_1 - 2)^2 + (x_2 - 5)^2$. Now we add a unit-simplex constraint ($x_1 + x_2 = 1$; $x_1 \geq 0$, $x_2 \geq 0$). In the above figure, we set $x^{(1)} = (0, 0)$. From here, we first project this onto the unit simplex. The white line shows the feasible region of the unit-simplex constraint, meaning all the possible combinations of $(x_1, x_2)$ that satisfy the expressions from before. The projection operation $T(x)$ simply finds the closest $x'$ that is in the feasible region (on the white line). You can see this happening in our plot. First $x^{(1)} = (0, 0)$ gets projected onto $(0.5, 0.5)$. Then, we take a step of gradient descent, pulling us off the feasible region in the direction of the true optimum, which we know from before to be $(2, 5)$. Then our projection brings us back onto the feasible region. We repeat this until we converge to the optimum of our new problem, $(0, 1)$.
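In code, the only change from plain gradient descent is the projection step. Here’s a minimal Python sketch using the sorting-based simplex projection from Chen and Ye [2] (our own illustration, not the SemioticOpt implementation):

```python
import numpy as np

def project_simplex(v, z=1.0):
    # Euclidean projection onto {x : x >= 0, sum(x) = z} via sorting
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (z - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def grad_f(x):
    # Gradient of f(x1, x2) = (x1 - 2)^2 + (x2 - 5)^2
    return 2 * (x - np.array([2.0, 5.0]))

x = project_simplex(np.array([0.0, 0.0]))  # (0, 0) projects to (0.5, 0.5)
for _ in range(200):
    # Gradient step, then project back onto the feasible region
    x = project_simplex(x - 0.1 * grad_f(x))
```

The iterates converge to $(0, 1)$, matching the constrained optimum described above.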

Let’s add another layer of complexity again. Let’s say we know for a fact that our solution is sparse – meaning the solution is either $(0, 1)$ or $(1, 0)$. Can we do any better than this back-and-forth stepping? We treat this as a new constraint, which we call a sparsity constraint – the number of nonzero elements of $x$ should be at most $k$. In this case, $k = 1$. Sparse projections onto the simplex are again a common type of problem, so we’ll refer you to the cited paper if you want to learn more.[3] In any case, we can now use the Greedy Selector and Simplex Projector (GSSP) algorithm in place of where we’d used the simplex projection before.

Using GSSP, the iteration converges much more quickly, but it actually gets stuck at the wrong point: $(1, 0)$ rather than $(0, 1)$. We can check that this is a worse solution by plugging both points back into our objective function: $f(1, 0) = 26$ and $f(0, 1) = 20$. Since we’re minimizing, the latter is better. Is there anything we can do here?

Of course! We know the optimal unconstrained point – we were even able to just look at the plot and see that it is $(2, 5)$. We’ll use this as an anchor point in Halpern iteration.[4] Again, we won’t go through the mathematical details of the algorithm other than to point you to the paper. Instead, we’ll just talk through the algorithm visually.

At a high level, Halpern iteration will try to anchor our solution to some anchor point $x_a = (2, 5)$. What this means is that we’ll first pull the solution towards $(2, 5)$, and then use GSSP to project the solution back onto the simplex. As the number of iterations increases, the pull of the anchor point lessens to the point at which it drops off completely, and we’re left with just the vanilla projected gradient descent with GSSP that we talked about before. This is equivalent to adding a new operator $H$ that executes Halpern iteration during our projected gradient descent update.

$x^{(n+1)} = T(H(x^{(n)} - \eta \nabla f(x^{(n)})))$
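Putting the pieces together, here’s a Python sketch of the full iteration on our example: a gradient step, a Halpern-style pull toward the anchor with a weight decaying as $1/(n+1)$ (one common schedule; the paper’s exact schedule may differ), and a GSSP projection:

```python
import numpy as np

def project_simplex(v, z=1.0):
    # Sorting-based Euclidean projection onto {x : x >= 0, sum(x) = z}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (z - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def gssp(v, k, z=1.0):
    # Greedy Selector and Simplex Projector: keep the k largest entries,
    # project them onto the simplex, and zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(v)[::-1][:k]
    out[idx] = project_simplex(v[idx], z)
    return out

anchor = np.array([2.0, 5.0])  # the unconstrained optimum

def grad_f(x):
    return 2 * (x - anchor)

x = np.array([0.5, 0.5])
eta, k = 0.1, 1
for n in range(1, 201):
    lam = 1.0 / (n + 1)               # anchor pull fades as n grows
    y = x - eta * grad_f(x)           # gradient step
    y = lam * anchor + (1 - lam) * y  # Halpern: pull toward the anchor
    x = gssp(y, k)                    # sparse simplex projection
```

With the anchor at $(2, 5)$, the iteration lands on the better sparse point $(0, 1)$ instead of getting stuck at $(1, 0)$.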

### Indexer Profits

Back to our problem. Instead of our dummy $f(x)$, now we use the indexing reward.

$\min_{\Omega_i} \quad -\sum_{j \in \mathcal{S}}\frac{\Omega_{ij}}{\sum_{k \in \mathcal{I}} \Omega_{kj}}\cdot\Phi\frac{\psi_j}{\sum_{l \in \mathcal{S}} \psi_l}$

$\textrm{s.t.} \quad \omega_k \geq 0 \quad \forall \ \omega_k\in\Omega_i$

$\quad \sum_{j\in\mathcal{S}}\Omega_{ij} = \sigma_i$

If this notation is unfamiliar, please refer back to Part I. We will call the above formulation **A**. Notice that our constraints form a simplex constraint – they take the form $\sum_i x_i = C$ with $x_i \geq 0 \ \forall i$. This means we can solve this problem using the simplex projection we talked about in the previous section: use gradient descent to step in the direction of an optimum, then use the simplex projection to bring ourselves back onto the feasible region. As in the last section, we can use the optimal allocation computed analytically in Part I as our anchor for Halpern iteration. This is, in fact, what the Allocation Optimizer does. Well, almost.

So far, we’ve talked about indexing rewards. We are actually concerned with indexer profits. Our optimization problem is now

$\min_{\Omega_i} \quad -\sum_{j \in \mathcal{S}}\frac{\Omega_{ij}}{\sum_{k \in \mathcal{I}} \Omega_{kj}}\cdot\Phi\frac{\psi_j}{\sum_{l \in \mathcal{S}} \psi_l}+ 2g\sum_{j\in\mathcal{S}}\eta_j$

$\textrm{s.t.} \quad \omega_k \geq 0 \quad \forall \ \omega_k\in\Omega_i$

$\quad \sum_{j\in\mathcal{S}}\Omega_{ij} = \sigma_i$

We’ll call this formulation **B**.

**B** is a problem. Our objective function, which was convex in **A**, is no longer so. This is because we’ve added a new term depending on $\eta_j$, a binary variable that takes the value $\eta_j = 1$ if $\omega_j > 0$ and $\eta_j = 0$ otherwise. This is a big problem for us. To see why, consider the picture below.

The plot above shows a non-convex function. Consider the problem of using gradient descent to minimize it. As a reminder, gradient descent follows the slope down; once the slope shifts from negative to positive, gradient descent is done. In the above plot, we have two minima, one at $x>0$ and one at $x<0$. Let’s imagine we start gradient descent from $x>1$. What happens? Gradient descent will slide leftwards until it reaches the rightmost minimum. Now imagine that we start from $x<-2$. What happens now? Gradient descent will step rightwards until it hits the leftmost minimum. Except this isn’t the best we could have done – we actually could have gone further and found a better solution. Here’s the problem we now face, put more directly: we can use gradient descent just fine on non-convex functions, but how do we know we’ve actually reached the global minimum rather than a local minimum? How do we know we couldn’t do better than the solution we found? As mentioned before, this style of problem, in which a convex function becomes non-convex due to gas fees, is everywhere in web3. Let’s talk about one of the potential solutions to this.

For the sake of argument, let’s pretend there were 10 subgraphs on the network. If an indexer wanted to allocate, they could allocate on one subgraph, or two subgraphs, or three subgraphs, all the way up to ten subgraphs. Each subsequent allocation vector (from 1 nonzero to 10) would have increasing gas costs corresponding with the $\eta_j$ term in **B**. What we could do is solve **A** with an additional sparsity constraint.

$\min_{\Omega_i} \quad -\sum_{j \in \mathcal{S}}\frac{\Omega_{ij}}{\sum_{k \in \mathcal{I}} \Omega_{kj}}\cdot\Phi\frac{\psi_j}{\sum_{l \in \mathcal{S}} \psi_l}$

$\textrm{s.t.} \quad \omega_k \geq 0 \quad \forall \ \omega_k\in\Omega_i$

$\quad \sum_{j\in\mathcal{S}}\Omega_{ij} = \sigma_i$

$\quad \sum_{j\in\mathcal{S}}\eta_j \leq k$

We’ll call this formulation **C**. If we solve **C** for each value $k$ from 1 to the number of subgraphs, then we’ll have our optimal allocation strategy if the indexer opens one new allocation, the optimal strategy for two allocations, and so on. Then, we can choose the best of these by picking the one that minimizes the objective function of **B**. We already know how to solve problems with a sparse simplex constraint. Just use GSSP!
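Here’s a sketch of that whole procedure on a toy three-subgraph network (all numbers are invented for illustration; this is not the actual Allocation Optimizer code). We solve **C** for each $k$ with projected gradient descent and GSSP (omitting the Halpern anchoring for brevity), then score each candidate with **B**’s profit objective:

```python
import numpy as np

# Toy network state (hypothetical numbers)
psi = np.array([1.0, 3.0, 6.0])          # signal on each subgraph
omega_other = np.array([1.0, 2.0, 4.0])  # other indexers' allocations
phi, sigma, gas = 10.0, 1.0, 0.3         # rewards, our stake, gas per allocation

def reward(w):
    # Indexing reward: sum_j w_j / (O_j + w_j) * phi * psi_j / sum(psi)
    return np.sum(w / (omega_other + w) * phi * psi / psi.sum())

def grad_neg_reward(w):
    # d/dw [w / (O + w)] = O / (O + w)^2
    return -(omega_other / (omega_other + w) ** 2) * phi * psi / psi.sum()

def project_simplex(v, z=1.0):
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (z - css) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def gssp(v, k, z=1.0):
    out = np.zeros_like(v)
    idx = np.argsort(v)[::-1][:k]
    out[idx] = project_simplex(v[idx], z)
    return out

def solve_C(k, eta=0.1, iters=500):
    # Formulation C: maximize reward subject to the simplex constraint
    # plus a sparsity constraint of at most k nonzero allocations.
    w = gssp(np.full(len(psi), sigma / len(psi)), k, sigma)
    for _ in range(iters):
        w = gssp(w - eta * grad_neg_reward(w), k, sigma)
    return w

# Solve C for every k, then pick the candidate with the best B-profit,
# i.e. reward minus gas charged per open allocation.
candidates = [solve_C(k) for k in range(1, len(psi) + 1)]
best = max(candidates, key=lambda w: reward(w) - gas * np.count_nonzero(w))
```

Every candidate respects the stake budget by construction; the final `max` is the step that compares sparsity levels against each other.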

When you run the Allocation Optimizer with `opt_mode = "fast"`, this is exactly what the code does! In summary: solve **C** for each $k$ using projected gradient descent with GSSP and Halpern iteration. Then, for each of these candidate allocation vectors, compute the value of the objective function of **B**. Finally, return the allocation strategy with the lowest value.

### Results

| Gas Cost | 100 GRT | 1000 GRT | 10000 GRT |
| --- | --- | --- | --- |
| Current Profit | 191525.88 | 183425.88 | 102425.88 |
| Optimized Profit | 540841.27 | 469017.12 | 333127.37 |
| % Increase | 282% | 255% | 325% |

If you recall from Part I, we were originally concerned that the analytic solution allocated to far more subgraphs than indexers actually did. This was partly because the analytic solution does not respect gas costs. Here, we demonstrate that the new algorithm does, and that it also outperforms the indexer’s existing allocation strategy across a variety of gas costs. Beyond gas costs, the Allocation Optimizer also enables indexers to pare this down further via other entries in the configuration file. Let’s run through those.

## Indexer Preferences

There is no way our optimizer could compensate for the full diversity of thought behind how different indexers choose to allocate. Some may prefer shorter allocations. Others may prefer to keep their allocation open for a maximum of 28 epochs (roughly 28 days). Some indexers may have strong negative or positive feelings towards specific subgraphs. In this section, we talk through how the Allocation Optimizer accounts for this information.

### Filtering Subgraphs

Indexers often don’t care to index some subgraphs. Maybe a subgraph is broken. Maybe it’s massive, and the indexer just won’t be able to sync it in a short amount of time. Whatever the reason, we want to ensure that indexers have the ability to *blacklist* subgraphs. In the previous post, we used the notation $\mathcal{S}$ to represent the set of all subgraphs. To remove blacklisted subgraphs from the optimization problem, all we do is choose some set $\mathcal{S}' = \mathcal{S} \setminus \mathcal{B}$ where $\mathcal{B}$ is the set of blacklisted subgraphs.

Since indexing a subgraph takes time, many indexers also have a curated list of subgraphs that they already have synced or mostly synced. They may only want to allocate to subgraphs in that list. Therefore, indexers can also specify a *whitelist* in the configuration file. To see how the whitelist $\mathcal{W}$ modifies the math, all we have to do is replace all instances of $\mathcal{S}$ with $\mathcal{W}$.

Let’s say an indexer is already allocated on a subgraph and does not want the Optimizer to try to unallocate from it. The indexer can specify this in the *frozenlist*. Subgraphs in the frozenlist are treated as though they are blacklisted. However, they differ from blacklisted subgraphs in that the indexer’s stake allocated to these frozen subgraphs, $\sigma_f$, is not available for reallocation. Therefore, $\sigma' = \sigma - \sigma_f$.

The final list we allow for is the *pinnedlist*. This is mostly meant for so-called backstop indexers – indexers who are not necessarily there to make a profit from the network, but rather to ensure that all data is served. Pinned subgraphs are treated as whitelisted, since the Optimizer may decide to allocate to them anyway. However, if the Optimizer doesn’t, we manually add 0.1 GRT back onto each of these subgraphs so that the indexer keeps a small, nonzero allocation on the pinned subgraphs.

Beyond the lists, indexers may not be interested in subgraphs with signal below some amount, even if allocating to them would be optimal. Thus, they can specify a *min_signal* value in the configuration file, and the Optimizer will filter out any subgraphs with signal below this amount.

### Other Preferences

As we mentioned before, indexers may also choose to allocate for a variety of different time periods. Thus, they can specify the *allocation_lifetime* in the configuration file. This determines how many new GRT are minted from token issuance during that time frame, which is given by the classic $P(r^t - 1)$, where $P$ is the current number of GRT in circulation, $r$ is the issuance rate, and $t$ is the allocation lifetime in blocks. As $t$ decreases, fewer tokens are issued, which means the effect of the gas term in our objective function becomes stronger.
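As a quick illustration with invented numbers (the real supply and issuance rate live on-chain; nothing below is an actual protocol value):

```python
# New GRT minted over an allocation lifetime of t blocks: P * (r^t - 1)
P = 10_000_000_000.0   # hypothetical GRT in circulation
r = 1.0000000122       # hypothetical per-block issuance rate
BLOCKS_PER_DAY = 7200  # illustrative block-time assumption

def new_tokens(days):
    t = days * BLOCKS_PER_DAY
    return P * (r ** t - 1)

# A shorter lifetime mints fewer tokens, so the fixed gas cost weighs
# relatively more heavily against the indexing reward.
week, month = new_tokens(7), new_tokens(28)
```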

The final preference that indexers can specify is *max_allocations*. For whatever reason, an indexer may prefer to allocate to fewer than the optimal number of subgraphs. Maybe this is due to hardware constraints on their side, or perhaps it reduces the mental load of tracking subgraphs. In any case, once an indexer specifies this number, the Optimizer uses it as $k_{max}$. In other words, instead of solving formulation **C** for $k \in \{1, \dots, \textrm{num\_subgraphs}\}$, it will solve it for $k \in \{1, \dots, k_{max}\}$.

## Conclusion

In Part II of the Allocation Optimizer series, we’ve demonstrated how the Allocation Optimizer accounts for indexer preferences and how the `Val(:fast)` mode solves the non-convex optimization problem using GSSP and Halpern iteration. We also demonstrated some results for the algorithm, showing that it outperforms a current indexer’s existing allocations. However, we did not demonstrate optimality. Spoiler alert: the Allocation Optimizer also has a `Val(:optimal)` flag. The solution we talked about here is fast, and will often outperform manually choosing allocations. However, we can do better. We’ll leave the details to our next blog post though.

One final note: if you’re interested in playing around with the techniques described in this blog post – gradient descent, projected gradient descent, the simplex projection, GSSP, and Halpern iteration – check out our Julia package SemioticOpt. The Allocation Optimizer and the code that generated the plots in this blog post both use SemioticOpt.

### Further Reading

- [1] Optimal Routing for Constant Function Market Makers by Angeris, Chitra, Evans, and Boyd
- [2] Projection Onto A Simplex by Chen and Ye
- [3] Sparse projections onto the simplex by Kyrillidis, Becker, Cevher, and Koch
- [4] Fixed Points of Nonexpanding Maps by Halpern

## An Overview of Automatic Market Maker Mechanisms

*This article explores the principles and mechanisms behind the many popular Automatic Market Maker designs currently used in production. While the mathematical details of these designs are fascinating in their own right, this article seeks to instead focus on graphical representations and high-level concepts, allowing for a more approachable and exhaustive exploration of the space.*

*Watch our related Devcon 2022 talk here: https://www.youtube.com/watch?v=KxuyHfmLHP0*.

## Introduction

Historically, order books run by professional market makers have been the dominant method used to facilitate exchange. On-chain, maintaining an order book using traditional market-making tactics is prohibitively expensive, since storage and computation on distributed ledgers are in short supply and high demand. For this reason, Automatic Market Makers (AMMs) have emerged as an efficient class of methods to facilitate the exchange of crypto assets on distributed ledgers. An AMM leverages smart contracts to allow permissionless participation in market-making by individuals seeking yield. These individuals passively provide liquidity to the contract, which can then use a predetermined function to automatically facilitate exchanges between buyers and sellers. The design of this procedure involves many tradeoffs that ultimately affect the utility of the platform to both liquidity providers and traders.

To understand AMM design tradeoffs, we must first understand that the ultimate goal of a market maker (both traditional and automatic) is to generate yield on a portfolio of assets by providing liquidity for trades in a market [1]. Yield will be the net result of fee/spread revenue earned on trading volume minus the change in the value of the portfolio. In AMMs this portfolio value change manifests as a so-called “impermanent loss”, since the portfolio value will decrease as the price deviates from the initial price, but will revert if the price moves back towards the initial one. Oftentimes this concept of a portfolio losing value as price moves is framed as a unique problem with AMMs, but this is only because AMMs allow it to be described and analyzed mathematically. Generally, losing portfolio value as the price moves is always a problem with market making, since by definition as a market maker you are allowing traders to exchange the less desirable asset for the more desirable asset. In this way, you will naturally receive more of the asset that is decreasing in value, and less of the one that is increasing. The market maker turns a profit when the revenue from spreads and/or fees outweighs this portfolio value change.

In AMMs this spread/fee revenue is typically collected by applying a near-one multiplier on the input or output of each trade. For instance, if an AMM wanted to charge a 0.3% fee, it could multiply the input by 0.997 and keep 0.003 * input as revenue for its liquidity providers. Setting this value can have a large impact on liquidity provider returns, and a tradeoff must be found between profiting as much as possible from each trade (higher fees) and maximizing the number of trades (lower fees). The optimal value will typically depend on the type of assets in the portfolio, as well as the type of AMM being used (the subject of the rest of the article). For more stable assets where portfolio risk is lower, lower fees are typically optimal, while more volatile assets typically demand higher fees to compensate for the risk. Many AMM implementations will add additional logic for dynamic fees, which can increase or decrease depending on fluctuating risk factors (e.g. price volatility can lead to inaccurate pricing).
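For instance, the 0.3% input-fee arithmetic described above looks like this (a toy sketch, not any particular AMM’s implementation):

```python
def apply_input_fee(amount_in, fee=0.003):
    # The trade executes on (1 - fee) * amount_in;
    # fee * amount_in accrues to liquidity providers.
    effective_in = amount_in * (1 - fee)
    lp_revenue = amount_in * fee
    return effective_in, lp_revenue

effective_in, lp_revenue = apply_input_fee(1000.0)  # ≈ (997.0, 3.0)
```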

## Basic Automatic Market Makers

The first and most well-known AMM is the Constant Product Market Maker (CPMM), first released by Bancor in the form of bonding curves within “smart token” contracts, and then further popularized by Uniswap as an invariant function [2][3]. The CPMM spreads liquidity out equally between all prices, automatically adjusting the price in the direction of each trade made. This makes it an extremely general solution, allowing CPMMs to facilitate exchange between any arbitrary pair of assets. This generality though also turns out to be the primary weakness of CPMMs: providing equal liquidity at all exchange rates means that only a small portion of the liquidity can be provided at the current exchange rate. As a result, a CPMM can only facilitate trade sizes that are a small fraction of its total value locked (TVL) before experiencing significant price impact, making it very capital inefficient. A better intuition for these mechanics can be achieved by walking through a few trade examples, as is done here [4]. Thus, while the CPMM has proven itself an essential member of the AMM ecosystem (especially for long-tail assets), a wide range of AMMs have been created since the CPMM’s inception that seek to improve upon this design.
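To see the capital-inefficiency point concretely, here’s a toy CPMM swap in Python (hypothetical pool balances; the constant-product formula itself is standard):

```python
def cpmm_swap(dx, x, y, fee=0.003):
    # Constant-product invariant: (x + dx')(y - dy) = x * y,
    # where dx' is the input after the fee is taken.
    dx_eff = dx * (1 - fee)
    return y * dx_eff / (x + dx_eff)

# Hypothetical pool: 100 ETH and 200,000 USDC (spot: 2,000 USDC per ETH)
small = cpmm_swap(1.0, 100.0, 200_000.0)   # a bit under the 2,000 spot value
large = cpmm_swap(50.0, 100.0, 200_000.0)  # far below the 100,000 spot value
```

The larger trade, at half the pool’s ETH reserves, executes at a much worse average price than the small one.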

One such design is the Constant Mean Market Maker (CMMM), first introduced by Balancer Protocol [5]. The CMMM generalizes the CPMM by allowing the liquidity provider to specify desired portfolio weights, such as a 20%-80% split. In comparison, the CPMM lacks this customization and therefore always enforces an equal 50%-50% portfolio split. This generalization provides more flexibility to liquidity providers to adjust their market-making portfolios, but the underlying design and results are still the same as the CPMM discussed above. For the sake of simplicity, the rest of the article will typically refer to the CPMM, but the discussion will generally also apply to the CMMM.

To outperform the CPMM, some prior assumptions about the market exchange rate of the assets must be made. This is not without risks, since making the wrong assumption can cause the market maker to exchange a large portion of its portfolio at the incorrect rate, causing the change in portfolio value to outweigh any fees collected. The extreme case of this is a Constant Sum Market Maker (CSMM), which facilitates exchange at a fixed rate regardless of the current portfolio the market maker holds. This allows for perfect capital efficiency at the selected exchange rate, but also quickly leads to the market maker losing all of the more valuable assets the moment the price deviates from the preselected value (at which point the portfolio is worth significantly less, and can no longer facilitate trades). For those familiar with Bayes’ theorem, many parallels can be seen here [6]. Using an outside oracle price (such as a 1:1 exchange rate for stablecoins) can be seen as using a prior, while adjusting the price in response to swaps (changes to your portfolio) can be seen as the likelihood. Under this lens, the CPMM can be seen as only using the likelihood with an infinitely diffuse prior, while a CSMM can be seen as utilizing only the prior and ignoring the likelihood. In the absence of any reasonable prior/oracle price (as is often the case in DeFi), the CPMM is a great choice. The CSMM only makes sense if you have complete trust in your prior/oracle price, which is rarely a good idea. Thus, the best option is usually a mixture of the two, which is what the bulk of AMM research focuses on.
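For contrast, a CSMM quote is trivial to write down – it exchanges at the fixed prior rate until it runs out of the asset being bought (again, a toy sketch):

```python
def csmm_swap(dx, x, y, rate=1.0):
    # Constant sum: exchange at a fixed rate, capped by the remaining
    # reserves of the output asset (the market maker can be fully drained).
    return min(dx * rate, y)

partial = csmm_swap(50.0, 100.0, 100.0)   # 50.0: filled at the fixed rate
drained = csmm_swap(500.0, 100.0, 100.0)  # 100.0: the pool gives up everything
```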

## Hybrid Automatic Market Makers

For the reasons stated above, CSMMs are rarely used in practice. In theory, they sound great for facilitating trades between assets that should theoretically have the same value, such as two stablecoins. But, in practice, small fluctuations in desirability between these assets will still leave the market maker with only the less desirable asset (not to mention the risk of one de-pegging completely). This was some of the motivation behind Curve introducing a more efficient solution to this problem called StableSwap, our first example of a hybrid AMM [7]. Like a CPMM, a StableSwap market maker will adjust its exchange rate as its portfolio changes, but, unlike a CPMM (and like a CSMM), it will do so very slowly at first, maintaining close to a 1:1 exchange rate for longer. As the price deviates farther and farther from the 1:1 exchange rate, the rate at which the price is adjusted accelerates, such that the market maker is never left holding only one asset. The rate at which this process accelerates is adjustable via an amplification coefficient parameter set for each pool. This model has become the industry standard for assets pegged to the same external asset but has not seen much use outside of this specific use case.

An alternative algorithm for swapping stable assets was more recently introduced by Solidly, which builds on Uniswap V2 by adding an option to create a pair specialized for stable assets [8]. Unlike a normal Uniswap V2 pair, this pair utilizes a quartic sum invariant instead of the usual quadratic invariant (but retains the same interface). This flattens the curve around the 1:1 exchange rate and produces a very similar effect to StableSwap, but lacks an adjustment parameter similar to the amplification coefficient mentioned above. Empirically, the curve seems to provide similar output to StableSwap with an amplification coefficient of 2 (see below figure), which is much lower than is typically used in StableSwap pools. This makes it a bit lower risk (without additional tuning) for a wider range of stable assets that may have larger fluctuations around their peg, but it also may be slightly less efficient for those that do not.
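To make the quartic-invariant idea concrete, here’s a toy sketch that preserves $x^3y + xy^3 = k$ and solves for the output numerically with bisection (production implementations typically use Newton’s method instead):

```python
def stable_invariant(x, y):
    # Solidly-style stable-pair invariant: x^3*y + x*y^3 = k
    return x**3 * y + x * y**3

def swap_out(dx, x, y, tol=1e-12):
    # Find y_new such that the invariant is preserved after adding dx.
    k = stable_invariant(x, y)
    x_new = x + dx
    lo, hi = 0.0, y  # the invariant is increasing in y, so bisect on [0, y]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if stable_invariant(x_new, mid) < k:
            lo = mid
        else:
            hi = mid
    return y - (lo + hi) / 2

# Balanced pool of two pegged assets: a small trade executes near 1:1,
# whereas a CPMM with the same reserves would quote 100 * 1 / 101 ≈ 0.990.
out = swap_out(1.0, 100.0, 100.0)
```

This shows the flattening effect described above: near the 1:1 point, the quartic invariant gives a much better rate than the constant product would.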

Other AMMs similarly aim to interpolate between a constant sum and constant product model, but also expand their scope beyond pegged assets. Dodo develops a Proactive Market Maker (PMM) algorithm that aims to flatten the price curve around an oracle price [9]. This oracle price can be set to 1 for pegged asset pairs, but can also utilize Chainlink oracles for volatile pairs. At a high level, the PMM algorithm aims to maintain target reserve values (how much of each asset it would like to have) and gives decreasing rates the farther away the real reserves deviate from these values. The PMM exposes a slippage parameter, k, that allows the algorithm to interpolate between a constant product market maker (k = 1) and a constant sum/fixed rate market maker (k = 0). The math for calculating trades with this setup gets a bit messy, requiring integration under some scenarios, and solving for quadratic roots under others. Regardless, it has proven itself to be an effective mechanism to provide capital-efficient liquidity.

Clipper develops an AMM that has a similar result but gets there in a very different way [10]. Clipper more explicitly derives a solution that interpolates between constant sum and constant product by combining the constant sum model, wherein assets are exchanged at a fixed price, and a constant product model, where assets are exchanged according to the ratio of reserves held by the market maker. Similar to Dodo, these two extremes are combined on a continuous spectrum parameterized by a slippage parameter, k. The resulting interpolation is a bit different, however — the most concrete difference is that, unlike Dodo, a Clipper AMM is actually capable of giving away all of one asset (like a constant sum market maker). This can become particularly dangerous for configurations closer to a CSMM (k close to 0) with an unreliable oracle.

Curve, the originator of StableSwap, has also released a new AMM for volatile assets called CryptoSwap [11]. CryptoSwap expands on the StableSwap algorithm by adding another parameter enabling quicker switching into price discovery mode (constant product) as the price moves away from the peg. This makes the algorithm better at handling a dynamic peg, allowing it to be used for more volatile assets. Curve’s CryptoSwap implementations also include a dynamic fee and an internal oracle system, making it unique in that respect since most other solutions use fixed rates or Chainlink oracles [12]. Implementing this AMM requires solving cubic, sextic, and higher-degree equations, which is typically done in practice using Newton’s Method [13]. This combined with its internal oracle and dynamic fees makes it one of the most complex AMMs currently in use. (For this reason, a CryptoSwap graph is omitted.) It is still early in its life cycle, so it remains to be seen whether this extra complexity translates into a better-performing AMM. So far it has mostly seen its use limited to one pool filled with the highest market cap assets on a given chain. Recently, though, it has been getting rolled out in more pools, growing its reach in the AMM space.
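To make the numerical machinery concrete, here is a minimal Newton's method loop of the kind used to solve such invariants (a generic sketch; on-chain implementations work in fixed-point arithmetic with bounded iteration counts and explicit convergence checks):

```python
def newton(f, df, x0, tol=1e-12, max_iter=255):
    """Solve f(x) = 0 by Newton's method: repeatedly step x -= f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: solve a toy cubic y**3 = 8 (a stand-in for the higher-degree
# equations that arise when inverting CryptoSwap-style invariants).
y = newton(lambda y: y**3 - 8, lambda y: 3 * y**2, x0=3.0)
```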

## Virtual Reserve Automatic Market Makers

An alternative way to achieve greater capital efficiency is to use virtual reserves, a concept first introduced by KyberSwap [14]. Virtual reserve AMMs utilize the CPMM model but multiply the real balances of each asset by an amplification factor. The result is that a much higher capital efficiency can be provided over the CPMM (bringing in more volume and fee revenue), but the market maker loses the CPMM’s ability to never run out of either asset. In this way, the amplification factor allows a tradeoff between capital efficiency and extra portfolio risk. Generally, a high amplification factor can work well for stable pairs but becomes riskier the more volatile a pair gets.

Uniswap V3 puts a very interesting spin on the concept of virtual reserves [15]. Instead of imposing one constant amplification coefficient on the entire pool, Uniswap V3 allows each liquidity provider to pick their own price range (and implicitly their own amplification parameter to span that range). Liquidity providers take on more portfolio risk with tighter price range positions (implicitly this means higher amplification coefficients), but also accrue proportionally more fees for trades within this range. In this way, Uniswap V3 can be seen as using incentives (higher yield for picking the true price range as precisely as possible) to source the best tradeoff in capital efficiency and portfolio risk at a given point, even for two volatile assets. This model has proven extremely effective, but also generally requires liquidity providers to be more active and informed in order to receive a good yield. Naive liquidity provision over the entire price range will typically result in a much lower fee share, while aggressive but mismanaged liquidity provision will typically lead to portfolio value loss outweighing any fee revenue. The development of intelligent automated liquidity managing strategies will likely continue to make these (and similar) AMMs more effective, and the corresponding yield opportunities more competitive.

## Request For Quote Systems

As a footnote, it is worth mentioning Request For Quote (RFQ) systems. RFQ mechanisms allow for private off-chain pricing with on-chain settlement and portfolio custody, allowing the gap between DeFi and traditional finance to be bridged. To utilize such a system, a centralized API must be queried for a quote, which can then be used on-chain to execute the trade. Off-chain pricing has the advantage of allowing for the use of off-chain information. This can include any of a range of private market-making tactics, including a marginal rate structure (used by Hashflow), or more traditional order book methods used by centralized exchanges [16]. Alternatively, high-frequency tuning of AMM equations can be done to achieve better performance than their on-chain counterparts. One example of this is Clipper’s Formula Market Maker (FMM). Clipper’s FMM uses a rapidly updating live price feed as the oracle price in their AMM formula (discussed above), allowing them to shift closer to a CSMM and achieve greater capital efficiency [17].

## Conclusions

While the above list of AMMs gives an overview of the space and covers many of the major DEXs that currently hold a high TVL, there remain several significant AMMs that were not covered (not to mention the many more currently being built). Recently, a new class of DeFi projects has even started designing paradigms that aim to generalize to any curve. Primitive Finance makes use of a Replicating Market Maker (RMM), which is able to construct an AMM curve from any of a large range of possible liquidity provider payoffs [18]. Shell protocol recently introduced a new AMM called Proteus — it is constructed from conic sections and contains 6 parameters, giving it the ability to be fit to a very wide range of desired curves [19]. While out of the scope of this article, there is also an ever-growing selection of protocols that offer financial derivatives, such as options and perpetuals.

With the enormous variation in AMM models and fragmentation in the liquidity used to facilitate trades, DEX aggregators have become an essential part of the DeFi ecosystem. DEX aggregators collect information about all liquidity sources on a given chain (including all AMM models discussed above) and then attempt to find the best combination of actions in order to get the best rate for a given trade. Oftentimes, the optimal route can contain many hops and complex splitting, making it an extremely hard problem to solve. Odos.xyz is a new DEX aggregator that is able to search more complex solutions than existing platforms, allowing for atomic multi-input trades and better rates for its users.

[1] https://www.investopedia.com/terms/m/marketmaker.asp

[2] https://cryptorating.eu/whitepapers/Bancor/bancor_protocol_whitepaper_en.pdf

[3] https://uniswap.org/whitepaper.pdf

[4] https://jfin-swufe.springeropen.com/articles/10.1186/s40854-021-00314-5#Sec5

[5] https://balancer.fi/whitepaper.pdf

[6] https://en.wikipedia.org/wiki/Bayes%27_theorem

[7] https://curve.fi/files/stableswap-paper.pdf

[8] https://pontem.network/posts/everything-you-need-to-know-about-solidly-the-latest-project-by-andre-cronje

[9] https://docs.dodoex.io/english/dodo-academy/pmm-overview/the-mathematical-principle-of-pmm

[10] https://github.com/shipyard-software/market-making-whitepaper

[11] https://classic.curve.fi/files/crypto-pools-paper.pdf

[12] https://research.chain.link/whitepaper-v2.pdf

[13] https://en.wikipedia.org/wiki/Newton%27s_method

[14] https://files.kyber.network/DMM-Feb21.pdf

[15] https://uniswap.org/whitepaper-v3.pdf

[16] https://docs.hashflow.com/hashflow/

[17] https://www.shipyardsoftware.org/post/what-is-a-fmm

[18] https://stanford.edu/~guillean/papers/rmms.pdf

[19] https://shellprotocol.io/static/Proteus_AMM_Engine_-_Shell_v2_Part_1.pdf

## Indexer Allocation Optimization: Part I

*This article was co-authored with Howard Heaton from Edge & Node and Hope Yen from GraphOps.*

### TL;DR

Indexers within The Graph Protocol are rewarded via an indexing reward. How can indexers optimise their allocations so as to maximise the reward they receive? In this blog post we formalise the problem in terms of a reward function and use convex optimization to find a solution.

### Overview

The Graph’s goal is to decentralize the API and query layer of web3, enabling users to query blockchain data without a centralized service provider. Part of that process is to have people who can serve queries to consumers. These people are called “indexers.” In this blog post, we focus on the problem of how an indexer should choose which data to index, and, thereby, which data to serve. The outline of this blog post is as follows:

- In the first section, we dive deeper into the allocation problem.
- Next, we formalise the problem and express it as a primal form convex optimisation problem.
- Subsequently, we show how to use convex optimisation for finding an optimal solution.
- Finally, we present some results demonstrating the benefit of our approach by comparing the reward received when manually allocating to the reward received when using the optimal allocation for an existing indexer.

For more details regarding The Graph, indexers, and the other roles present on The Graph, please refer to the excellent blog posts by Brandon Ramirez: Part I and Part II.[3]

Please note that this blog post provides only essential details of the solution. A complete discussion can be found in the associated yellowpaper.[1]

### The Indexer Allocation Problem

To understand the problem of indexer allocation optimization, we need to understand The Graph from an indexer’s perspective. Subgraphs are collections of data extracted from blockchain transactions. In order to serve data to consumers, indexers must index subgraphs, which essentially involves storing the data from a subgraph in a Postgres database. Obviously, this takes time, effort, and compute, so indexers want to be selective about which subgraphs they index. How then does an indexer know which subgraph(s) to index?

As a rule, the more a subgraph is queried, the more indexers want to be indexing a subgraph. This is because they’d be able to earn more money from query fees. Predicting which subgraphs will have high query volumes is a tough problem unto itself, so we don’t force indexers to try to do this on top of their existing responsibilities. Instead, we rely on curation. The outputs of the curation process are subgraph signals, which should be roughly proportional to the volume of queries on the corresponding subgraphs. In other words, the higher the signal on a subgraph, the higher we expect the query volume on that subgraph to be. Thus, when deciding which subgraphs to index, indexers who want to maximise their indexing reward ought to allocate to subgraphs with the greatest signal.

There’s still one more consideration to make. Say you allocate 30 GRT on subgraph A, and I allocate 10 GRT on that same subgraph. Should we be rewarded the same amount? We would be if we were rewarded only as per subgraph signal, but that would be unfair. In the next section, we’ll make these qualitative statements about signal and fairness into a quantitative equation over which indexers can optimise.

### The Indexing Reward Function

Let’s now formalise the previous section into the indexing reward, which is paid out of the new token issuance $\Phi$. An indexer $i$ has some stake $\sigma_i$. The indexer must decide how much of its stake to allocate to each subgraph $j$. Each subgraph has some signal $\psi_j$ on it, which should have a strong, positive correlation with the query volume on subgraph $j$. An indexer $i$'s allocation to subgraph $j$ is defined as $\Omega_{ij}$. We can use this to construct an allocation matrix $\Omega$ across all indexers, in which the $i$th indexer's allocation to the $j$th subgraph corresponds to the entry $\Omega_{ij}$. The indexing reward is then given as:

$$R_i(\Omega) = \sum_{j} \Phi \, \frac{\psi_j}{\sum_{k} \psi_k} \, \frac{\Omega_{ij}}{\sum_{i'} \Omega_{i'j}}.$$
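For readers who prefer code, the reward can be transcribed directly into NumPy (a sketch under our notation; it assumes every subgraph has at least one allocator, so no denominator is zero):

```python
import numpy as np

def indexing_rewards(omega: np.ndarray, psi: np.ndarray, issuance: float) -> np.ndarray:
    """Per-indexer indexing rewards.
    omega: (indexers x subgraphs) allocation matrix; psi: subgraph signals.
    Each subgraph earns issuance in proportion to its signal share, and each
    indexer earns that subgraph's slice in proportion to its allocation share."""
    signal_share = psi / psi.sum()            # one entry per subgraph
    alloc_share = omega / omega.sum(axis=0)   # per indexer, per subgraph
    return issuance * (signal_share * alloc_share).sum(axis=1)
```

Note that when every subgraph carries a nonzero allocation, the rewards across all indexers sum to the full issuance.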

### Intuition

Let’s consider the following scenario. Say we have two indexers trying to allocate to two subgraphs. We’re playing the game from indexer 1’s perspective, so for us, indexer 2 and the signals on the two subgraphs are out of our control. All we can do is modify our allocation. Let’s first consider how subgraph signal changes how we should allocate.

Let’s read the above plot from an indexer’s perspective. Say we know that the signal on subgraph 1 is 0. In that case, we should read across from 0 on the y-axis to find the point on the x-axis that is brightest. In this case, we’d want to allocate 0 to subgraph 1. Similarly, say subgraph 1’s signal is 5. Reading across from 5 on the y-axis, we’d want to allocate roughly 5 to subgraph 1. Generally speaking, the intuition is that the amount we should allocate to a subgraph is a linear function of the signal on that subgraph.

However, if we only look at subgraph signal, we’re missing an important part of the indexing reward – the amount other indexers have allocated to a subgraph. Let’s now hold the subgraph’s signal constant and see how the amount we should allocate will vary with how much other indexers have allocated to this subgraph.


Generally speaking, this isn’t anywhere near as nice a relationship as the one we had between signal and our allocation. Let’s start to build some intuition by looking at the bright spot in the top-right corner of the plot. In this case, all other indexers have allocated to subgraph 1. No one is allocated to subgraph 2. This is great for us! As long as we put even 0.1 GRT on subgraph 2, we’ll get all of the reward that is available on subgraph 2. This means that our best strategy is to put a fractional amount of GRT on subgraph 2 and to maximise the amount we put on subgraph 1. This way, we get all of the reward available on subgraph 2, and we also maximise how much of the reward available on subgraph 1 we receive.

This exemplifies how we should allocate with respect to the allocations of other indexers. We want to try to place just enough stake to capture the maximum reward we can receive off a given subgraph. In other words, when the marginal reward we’d receive for increasing our stake on subgraph 1 is less than the marginal reward we’d receive for increasing our stake on subgraph 2, we should allocate to subgraph 2.

In reality, we don’t have nearly as much freedom as in this diagram when choosing how to allocate because there is often an order-of-magnitude difference between existing allocations on a subgraph and the amount of stake that an indexer has, but this example is illustrative for intuition.

### Optimizing The Indexing Reward

The reward function enables us to formulate the optimization problem as:

$$\min_{\Omega_{i}} \; -R_i(\Omega) \quad \text{subject to} \quad \sum_{j} \Omega_{ij} = \sigma_i, \qquad \Omega_{ij} \ge 0 \;\; \forall j,$$

where $\Omega_{i}$ are the allocations of indexer $i$.

In plain English, we want to minimise the negative indexing reward (equivalently, maximise the indexing reward) subject to the constraints that the amounts we allocate must sum to the stake we have available and that we can’t allocate a negative amount to any subgraph. Why the negative indexing reward?

Take a look at the plot above. In it, for each trace, we hold the network constant, meaning we hold the signal and the allocations of all other indexers constant, and we sweep the allocations of the indexer we are optimizing. As you can see, the indexing reward is a concave function of our allocations. Since the negative of a concave function is convex, we can use convex optimization to solve for the optimal allocation!

We leave a more complete discussion of how we solve this optimization problem to our yellowpaper, but, at a high level, we consider the dual of the allocation optimization problem:

$$\max_{\nu} \; \min_{\Omega_i \ge 0} \; -R_i(\Omega) + \nu \left( \sum_j \Omega_{ij} - \sigma_i \right).$$

We then use the Karush-Kuhn-Tucker conditions to find an analytic solution for the dual variable $\nu$:

$$\sqrt{\nu} = \frac{\sum_{j} \sqrt{\psi'_j \, \Omega_{\neg ij}}}{\sigma_i + \sum_{j} \Omega_{\neg ij}},$$

where $\psi'_j = \Phi \psi_j / \sum_k \psi_k$ is subgraph $j$'s share of issuance, $\Omega_{\neg ij} = \sum_{i' \ne i} \Omega_{i'j}$ is the stake other indexers have allocated to it, and the sums run over the subgraphs receiving a nonzero allocation. With $\nu$ in hand, we can then plug it in to solve for our optimal allocations using

$$\Omega_{ij} = \max\left(0, \; \sqrt{\frac{\psi'_j \, \Omega_{\neg ij}}{\nu}} - \Omega_{\neg ij}\right).$$
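Putting the closed-form pieces together, the solve can be sketched as a small active-set search (our illustrative implementation, not the production Allocation Optimizer): rank subgraphs by their marginal reward at zero allocation, solve for the dual variable on each prefix of that ranking, and keep the largest prefix whose allocations stay nonnegative.

```python
import numpy as np

def optimal_allocation(psi_p: np.ndarray, omega_others: np.ndarray,
                       stake: float) -> np.ndarray:
    """Closed-form KKT allocation (sketch).
    psi_p[j]: subgraph j's issuance-weighted signal share.
    omega_others[j]: stake other indexers have allocated to subgraph j (> 0)."""
    order = np.argsort(-psi_p / omega_others)      # best marginal reward first
    best = np.zeros_like(psi_p)
    for k in range(1, len(order) + 1):
        idx = order[:k]
        # analytic dual variable for this candidate active set
        sqrt_nu = (np.sqrt(psi_p[idx] * omega_others[idx]).sum()
                   / (stake + omega_others[idx].sum()))
        alloc = np.sqrt(psi_p[idx] * omega_others[idx]) / sqrt_nu - omega_others[idx]
        if np.all(alloc >= 0):                     # active set still consistent
            best = np.zeros_like(psi_p)
            best[idx] = alloc
    return best
```

By construction, the allocations on the active set sum exactly to the indexer's stake.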

### Results

Running our optimiser for an existing indexer, we see an improvement in indexing rewards received from 208,608.87 GRT to 240,739.72 GRT. That’s an improvement of 15%!

It may be visually obvious from the above plots, but the optimal allocation spreads stake across far more subgraphs than this indexer has chosen by hand. To quantify this: the indexer has currently allocated to 70 subgraphs, while the optimal allocation tool allocates to 144.

### Conclusion

The above problem formulation and optimization focus on a simplified version of the problem. One thing we haven’t yet considered is that indexers have to pay gas to allocate to subgraphs. Furthermore, we mentioned earlier that indexers want to be selective in how they allocate based on compute or other resource constraints. These will turn our nice, convex problem into a non-convex problem. The next post in the series will discuss how we can optimise this non-convex problem. Stay tuned!

### Further Reading

- [1] Yellowpaper (arXiv link to come)
- [2] Convex Optimization by Boyd and Vandenberghe
- [3] Brandon Ramirez’s blog posts about The Graph: Part I and Part II

## Automated Query Pricing in The Graph

*Indexers^{1} in The Graph have control over the pricing of the GraphQL queries they serve, based on each query’s shape. For this task, The Graph created a domain-specific language called Agora^{2,3} that maps query shapes to prices in GRT. However, manually populating and updating Agora models for each subgraph is a tedious task; as a consequence, most indexers default to a static, flat pricing model.*

*To help indexers price in the relative resource cost of serving different query shapes, as well as track the query market price, we are developing AutoAgora, an automation tool that automatically creates and updates Agora models.*

*See our related Devcon 2022 talk here:*

*https://www.youtube.com/watch?v=LRl9uFfmjEs*.

## Query relative cost discovery

This work is based on a few trends that we have observed through analyzing the query traffic.

### Observations on received queries

One of the major problems of GraphQL is its ability to create very computationally expensive queries. In the worst cases, it is even possible to create exponentially hard queries while the query body itself grows only linearly^{4}. We can indeed observe this on queries against the Uniswap V2 subgraph, where query execution times span 4 orders of magnitude.

Simultaneously, we observed that a vast majority of the queries are based on a small number of query shapes. This is to be expected as most queries are generated programmatically from web frontends, such as the official Uniswap v2 statistics website.

Based on the observations above, a reasonable solution for relative query pricing that would cover most of the value being served by an indexer is to estimate the relative cost of the most frequent queries for each subgraph.

### AutoAgora logging and relative costing infrastructure

In this section, we will discuss the implementation of the automated *relative query pricing* discovery feature of AutoAgora, and how it fits in the indexer’s stack.

All monetized queries coming from the Gateway pass through the `indexer-service`. We modified it to output detailed query logs (subgraph hash, query, variables, GRT fees paid, execution time). These logs are then processed to normalize the queries, separating the query values from the query shapes. Shapes are deduplicated and stored in a PostgreSQL table, while the rest of the log information is appended to a logs table that references its query shape through its blake2 hash.
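To make the pipeline concrete, here is a toy version of the normalization step (AutoAgora's actual normalizer works on the parsed GraphQL document; the regex below is a simplification, and the function name is ours):

```python
import hashlib
import re

def normalize_query(query: str):
    """Split a query into (shape, values, blake2 hash of the shape).
    Literal arguments are replaced by numbered $_n placeholders, so queries
    differing only in their values share one deduplicated shape."""
    values = []

    def replace(match):
        values.append(match.group(0))
        return f"$_{len(values) - 1}"

    # crude literal matcher: quoted strings and bare numbers after a colon
    shape = re.sub(r'(?<=: )("[^"]*"|\d+)', replace, query)
    shape = re.sub(r"\s+", " ", shape).strip()
    digest = hashlib.blake2b(shape.encode(), digest_size=16).hexdigest()
    return shape, values, digest
```

Two queries that differ only in their argument values map to the same shape, and therefore to the same hash key in the skeletons table.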

Here’s a sample from the query logs table:

```
autoagora=# select * from query_logs order by random() limit 3;
-[ RECORD 1 ]---+-----------------------------------------------------
id | 723883ae-edbb-4ed5-81ad-f06dcc9195e7
subgraph | QmXDs3UikxJBLwPztj3tLhyFuaV51tEftjSY1AHjBUz3CH
query_hash | \x8e6c32fce280c7bb43081b0e0173b0b1
timestamp | 2022-06-17 07:04:00.581+00
query_time_ms | 56
query_variables | ["0x8289baf639318d9076616d7d231cbfc4f8fdc9bf628b1d43.
|.e5ebb8e4b125a68d", 100, "preparedTimestamp", "desc",.
|. 0, "0x7aa9530f8df705d07a048d8508236e9cdf2a8f1331578.
|.129c918246766962ae1"]
-[ RECORD 2 ]---+-----------------------------------------------------
id | 50546eda-51e5-43ed-8829-a66a9642355b
subgraph | QmXDs3UikxJBLwPztj3tLhyFuaV51tEftjSY1AHjBUz3CH
query_hash | \x35d9bb4ecfd091d6aa75659f9a33a06b
timestamp | 2022-06-17 03:32:35.185+00
query_time_ms | 31
query_variables | ["0x8f80d695bb4f3ac71f7392f930f5f91529a54df1ffc40c06.
|.deb06492af7260d6", "amount", "desc"]
-[ RECORD 3 ]---+-----------------------------------------------------
id | 4505c23a-bbab-49a0-a58c-49428d51380d
subgraph | QmXDs3UikxJBLwPztj3tLhyFuaV51tEftjSY1AHjBUz3CH
query_hash | \x7efa66f59bd19e1c501b091b931a6069
timestamp | 2022-06-17 03:24:37.586+00
query_time_ms | 14
query_variables | ["0x723bfdcd48cf2bdb056a2d23003e7d089596056cea9f1799.
|.68f503661d349fb9", "preparedBlockNumber", "desc", "1.
|.", "Prepared", "0xcada61e60e8c797c61f0a34468224e782f.
|.8dfb8d"]
```

And the corresponding query shapes (also called skeletons in AutoAgora):

```
autoagora=# select * from query_skeletons where hash in ('\x8e6c32fce280c7bb43081b0e0173b0b1', '\x35d9bb4ecfd091d6aa75659f9a33a06b', '\x7efa66f59bd19e1c501b091b931a6069');
-[ RECORD 1 ]---------------------------------------------------------
hash | \x7efa66f59bd19e1c501b091b931a6069
query | query($_0:Bytes$_1:Transaction_orderBy$_2:OrderDirection$_3:Bi.
|.gInt$_4:TransactionStatus$_5:String){transactions(block:{hash:.
|.$_0}orderBy:$_1 orderDirection:$_2 where:{sendingChainId:$_3 s.
|.tatus:$_4 user:$_5}){amount bidSignature callDataHash callTo c.
|.ancelCaller cancelTransactionHash chainId encodedBid encrypted.
|.CallData expiry fulfillCaller fulfillTransactionHash id initia.
|.tor prepareCaller prepareTransactionHash preparedBlockNumber p.
|.reparedTimestamp receivingAddress receivingAssetId receivingCh.
|.ainId receivingChainTxManagerAddress router{id}sendingAssetId .
|.sendingChainFallback sendingChainId status transactionId user{.
|.id}}}
-[ RECORD 2 ]---------------------------------------------------------
hash | \x35d9bb4ecfd091d6aa75659f9a33a06b
query | query($_0:Bytes$_1:AssetBalance_orderBy$_2:OrderDirection){ass.
|.etBalances(block:{hash:$_0}orderBy:$_1 orderDirection:$_2){amo.
|.unt assetId id router{id}}}
-[ RECORD 3 ]---------------------------------------------------------
hash | \x8e6c32fce280c7bb43081b0e0173b0b1
query | query($_0:Bytes$_1:Int$_2:Transaction_orderBy$_3:OrderDirectio.
|.n$_4:Int$_5:Bytes){transactions(block:{hash:$_0}first:$_1 orde.
|.rBy:$_2 orderDirection:$_3 skip:$_4 where:{transactionId:$_5}).
|.{amount bidSignature callData callDataHash callTo cancelCaller.
|. cancelMeta cancelTimestamp cancelTransactionHash chainId enco.
|.dedBid encryptedCallData expiry externalCallIsContract externa.
|.lCallReturnData externalCallSuccess fulfillCaller fulfillMeta .
|.fulfillTimestamp fulfillTransactionHash id initiator prepareCa.
|.ller prepareMeta prepareTransactionHash preparedBlockNumber pr.
|.eparedTimestamp receivingAddress receivingAssetId receivingCha.
|.inId receivingChainTxManagerAddress relayerFee router{id}sendi.
|.ngAssetId sendingChainFallback sendingChainId signature status.
|. transactionId user{id}}}
```

Relative pricing is generated periodically from the logs database. For each subgraph, query shapes that have seen more than a threshold number of queries are selected to be included in the Agora model. For each shape, the average query execution time is computed and used as the query shape cost factor in the pricing model.
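The periodic model build reduces to a small aggregation job. Here is a sketch of how one rule could be generated (function name and the frequency threshold are illustrative, not AutoAgora's exact code):

```python
from statistics import mean, stdev

def agora_rule(shape: str, times_ms: list, min_samples: int = 100):
    """Emit one Agora pricing rule, or None for shapes seen too rarely.
    The shape's average execution time becomes its cost factor, scaled by
    the market-tracking $GLOBAL_COST_MULTIPLIER."""
    if len(times_ms) < min_samples:
        return None
    avg = mean(times_ms)
    stats = (f"# count: {len(times_ms)}\n# min time: {min(times_ms)}\n"
             f"# max time: {max(times_ms)}\n# avg time: {avg}\n"
             f"# stddev time: {stdev(times_ms)}")
    return f"{stats}\n{shape} => {avg} * $GLOBAL_COST_MULTIPLIER;"
```

Shapes that fall below the threshold are priced by the model's `default` rule instead.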

Here is an example of an Agora model generated by AutoAgora for a subgraph:

```
# count: 1303
# min time: 30
# max time: 12994
# avg time: 4978.261703760552
# stddev time: 3374.180130690477
query {
erc721Contract(block: {hash: $_0}, id: $_1) {
transfers(
first: $_2,
orderBy: $_3
orderDirection: $_4
where: {id_gt: $_5}
) {
contract {
id
name
}
from {
id
}
id
timestamp
to {
id
}
token {
id
identifier
uri
}
transaction {
id
}
}
}
} => 4978.261703760552 * $GLOBAL_COST_MULTIPLIER;
# count: 830
# min time: 4
# max time: 1227
# avg time: 44.67831325301205
# stddev time: 82.78835711833459
query {
account(block: {hash: $_0}, id: $_1) {
tokens: ERC721tokens(
first: $_2
orderBy: $_3
orderDirection: $_4
where: {contract_in: $_5, identifier_gt: $_6}
) {
id
identifier
}
id
}
} => 44.67831325301205 * $GLOBAL_COST_MULTIPLIER;
default => $DEFAULT_COST * $GLOBAL_COST_MULTIPLIER;
```

You may have noticed the `$GLOBAL_COST_MULTIPLIER` variable. It is there to adjust the pricing of all the query shapes to the market price, which is the subject of the next section.

## Query market price discovery

### The query market

In The Graph, each subgraph on each gateway (spanning multiple geographies) defines a query market. On that market, end-consumers compete to buy queries (currently automated through user-defined budgets and quality preferences), while indexers compete to serve queries based on their Agora-defined prices and their quality of service.

The indexers’ main objective is to maximize profits. For now, we have simplified the problem down to revenue maximization: for each subgraph, the query market is treated as a black box that takes a `$GLOBAL_COST_MULTIPLIER` input value and outputs the GRT per second earned.

The objective of the AutoAgora price discovery is to find the `$GLOBAL_COST_MULTIPLIER` that optimizes the revenue rate, as well as to continuously adjust it to track market fluctuations.

### AutoAgora absolute price discovery

Finding and continuously tracking the optimal price point requires continuously probing the market “black box” at various price points. However, doing so means that a significant amount of time is spent serving queries at unfavorable price points. This balance between exploration and exploitation is a reinforcement learning problem that has been extensively studied through the multi-armed bandit problem^{5}.

As such, we map the price discovery problem to a continuum-armed bandit problem. The policy is modeled as a Gaussian probability distribution over `$GLOBAL_COST_MULTIPLIER` values, from which values are continuously sampled. The mean and standard deviation of the Gaussian policy are continuously optimized through gradient descent using the Adam^{6} optimizer.
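A toy version of the loop, assuming a synthetic market whose revenue peaks at a multiplier of 1 (the revenue function, learning rate, and plain REINFORCE-style updates here are illustrative stand-ins for AutoAgora's Adam-based implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def market_revenue(multiplier: float) -> float:
    """Hypothetical black-box market: revenue rate peaks at multiplier = 1
    and decays when we over- or under-price."""
    return multiplier * np.exp(-multiplier)

# Gaussian policy over the *log* of $GLOBAL_COST_MULTIPLIER.
mean, log_std, lr = 0.0, 0.0, 0.05

for _ in range(5000):
    std = np.exp(log_std)
    x = rng.normal(mean, std)             # sample a log-multiplier
    r = market_revenue(np.exp(x))         # probe the market at that price
    z = (x - mean) / std
    mean += lr * r * std * z              # REINFORCE-style policy updates
    log_std = max(log_std + lr * r * (z**2 - 1), -2.0)
```

The policy mean drifts toward (the log of) the revenue-maximizing multiplier, while the standard deviation, i.e. the exploration width, contracts as the optimum is located.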

We tested the AutoAgora price discovery on our own indexer (`semiotic-indexer.eth`) on mainnet with great success. The plots below are extracted from our indexer’s Grafana boards, and show the convergence to, and tracking of, the market price over time (“Bandit mean” and “standard deviation”), as well as the resulting increase in revenue rate (“GRT / second per subgraph”) on the UMA subgraph.

## Limitations and Conclusions

Our goal with AutoAgora is to help build sustainable, efficient query markets on The Graph, while also lowering the human operation costs.

While we have shown a successful operation on our own indexer, we are still working on tackling current shortcomings of AutoAgora:

- The relative cost models based only on average query shape execution time are too simplistic. Other hardware impact measurements could be added, so that indexers can express the relative cost of their hardware commodities (CPU, RAM, I/O, Network, etc.).
- The initial convergence speed of the market price bandit model is quite slow, taking many hours. One of the main limitations currently is the speed at which the gateways are updating the indexer’s cost models.
- The price bandit training is unstable on subgraphs with low query volumes. The solution currently being implemented is a simple rule that deactivates training and falls back to an indexer-supplied default price multiplier. This is acceptable, since the revenue expected from a low-volume subgraph is negligible.
- The price bandit training stability is unproven when multiple such agents compete against each other on the same market. We are therefore working on a simulation environment to stress-test this particular scenario.

Therefore, the state of AutoAgora on the date of publication of this piece is highly experimental, and we do not recommend that indexers use the software in production.

The AutoAgora components are open source under the Apache-2 license, and available at: https://gitlab.com/semiotic-ai/the-graph/autoagora

### References

- The Graph Foundation. Indexer. The Graph. Published 2022. Accessed June 17, 2022. https://thegraph.com/docs/en/indexing/
- Zachary B. Cost Model Workshop. Youtube. Published May 24, 2021. Accessed June 17, 2022. https://youtu.be/s7zNzgiL4z4
- Agora contributors. graphprotocol/agora. GitHub. Published 2022. Accessed 2022. https://github.com/graphprotocol/agora
- Wittern E, Cha A, Davis JC, Baudart G, Mandel L. An Empirical Study of GraphQL Schemas. *Service-Oriented Computing*. Published online 2019:3-19. doi:10.1007/978-3-030-33702-5_1
- Slivkins A. Introduction to Multi-Armed Bandits. *FNT in Machine Learning*. Published online 2019:1-286. doi:10.1561/2200000068
- Kingma DP, Ba J. Adam: A Method for Stochastic Optimization. Published online 2014. doi:10.48550/ARXIV.1412.6980