Indexing Overview

Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn indexing rewards and query fees for their services; query fees are rebated according to an exponential rebate function.

GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers also earn rewards on stake delegated to them by Delegators, who contribute to the network in this way.

Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (e.g. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing.

Technical Level Required: ADVANCED

FAQ

What is the minimum stake required to be an Indexer on the network?

The minimum stake for an Indexer is currently set to 100K GRT.

What are the revenue streams for an Indexer?

Query fee rebates - Payments for serving queries on the network. These payments are mediated via state channels between an Indexer and a gateway. Each query request from a gateway includes a payment, and the corresponding response includes a proof of query result validity.

Indexing rewards - Generated via 3% annual protocol-wide inflation, indexing rewards are distributed to Indexers who are indexing subgraph deployments for the network.

How are indexing rewards distributed?

Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.
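
To make the two-step split concrete, here is a minimal sketch (not the protocol implementation; the names and numbers are illustrative) of how issuance flows first to a subgraph by its share of curation signal, then to an Indexer by its share of allocated stake on that subgraph:

// Illustrative sketch of the two-step indexing reward split described above.
// The real calculation happens onchain; all values here are made up.

interface SubgraphRewardInputs {
  subgraphSignal: number;        // curation signal on this subgraph (GRT)
  totalNetworkSignal: number;    // total curation signal across all subgraphs (GRT)
  indexerAllocatedStake: number; // this Indexer's stake allocated to the subgraph (GRT)
  totalAllocatedStake: number;   // all Indexers' stake allocated to the subgraph (GRT)
}

function estimateIndexerReward(totalIssuance: number, inputs: SubgraphRewardInputs): number {
  // Step 1: the subgraph's share of issuance is proportional to its share of curation signal.
  const subgraphRewards = totalIssuance * (inputs.subgraphSignal / inputs.totalNetworkSignal);
  // Step 2: the Indexer's share of the subgraph's rewards is proportional to its allocated stake.
  return subgraphRewards * (inputs.indexerAllocatedStake / inputs.totalAllocatedStake);
}

// Example: 10,000 GRT issued this period, the subgraph holds 2% of all signal,
// and the Indexer holds 25% of the stake allocated to that subgraph -> 50 GRT.
console.log(estimateIndexerReward(10_000, {
  subgraphSignal: 20_000,
  totalNetworkSignal: 1_000_000,
  indexerAllocatedStake: 250_000,
  totalAllocatedStake: 1_000_000,
}));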

Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the Community Guides collection. You can also find an up-to-date list of tools in the #Delegators and #Indexers channels on the Discord server. Here we link a recommended allocation optimiser integrated with the indexer software stack.

What is a proof of indexing (POI)?

POIs are used in the network to verify that an Indexer is indexing the subgraphs they have allocated on. A POI for the first block of the current epoch must be submitted when closing an allocation for that allocation to be eligible for indexing rewards. A POI for a block is a digest for all entity store transactions for a specific subgraph deployment up to and including that block.

When are indexing rewards distributed?

Allocations continuously accrue rewards while they are active, up to the maximum allocation lifetime of 28 epochs. Rewards are collected by the Indexers and distributed whenever their allocations are closed. That happens either manually, whenever the Indexer wants to force close them, or after 28 epochs, when a Delegator can close the allocation for the Indexer, although this results in no rewards. 28 epochs is the max allocation lifetime (right now, one epoch lasts for ~24h).

Can pending indexing rewards be monitored?

The RewardsManager contract has a read-only getRewards function that can be used to check the pending rewards for a specific allocation.

Many of the community-made dashboards include pending rewards values and they can be easily checked manually by following these steps:

  1. Query the mainnet subgraph to get the IDs for all active allocations:
query indexerAllocations {
  indexer(id: "<INDEXER_ADDRESS>") {
    allocations {
      activeForIndexer {
        allocations {
          id
        }
      }
    }
  }
}

  2. Use Etherscan to call getRewards() (a programmatic alternative is sketched after these steps):

  • Navigate to the Etherscan interface for the Rewards contract
  • To call getRewards():
    • Expand the 9. getRewards dropdown.
    • Enter the allocationID in the input.
    • Click the Query button.
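
If you prefer to check pending rewards from a script rather than Etherscan, the sketch below uses ethers (v6). The getRewards(address) ABI fragment and the placeholder addresses are assumptions; verify them against the deployed RewardsManager ABI before relying on the result.

import { ethers } from "ethers";

// Assumed values: substitute a real RPC URL, the deployed RewardsManager address,
// and an allocation ID returned by the subgraph query above.
const provider = new ethers.JsonRpcProvider("<ETHEREUM_RPC_URL>");
const rewardsManager = new ethers.Contract(
  "<REWARDS_MANAGER_ADDRESS>",
  ["function getRewards(address allocationID) view returns (uint256)"], // assumed ABI fragment
  provider,
);

async function pendingRewards(allocationID: string): Promise<string> {
  const rewards: bigint = await rewardsManager.getRewards(allocationID);
  // GRT has 18 decimals, so formatEther gives a human-readable GRT amount.
  return ethers.formatEther(rewards);
}

pendingRewards("<ALLOCATION_ID>").then((grt) => console.log(`Pending rewards: ${grt} GRT`));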

What are disputes and where can I view them?

Indexers' queries and allocations can both be disputed on The Graph during the dispute period. The dispute period varies depending on the type of dispute: queries/attestations have a 7-epoch dispute window, whereas allocations have 56 epochs. After these periods pass, disputes can no longer be opened against allocations or queries. When a dispute is opened, the Fisherman must deposit a minimum of 10,000 GRT, which is locked until the dispute is finalized and a resolution has been given. Fishermen are any network participants that open disputes.

Disputes have three possible outcomes, and so does the Fisherman's deposit.

  • If the dispute is rejected, the GRT deposited by the Fisherman will be burned, and the disputed Indexer will not be slashed.
  • If the dispute is settled as a draw, the Fisherman's deposit will be returned, and the disputed Indexer will not be slashed.
  • If the dispute is accepted, the GRT deposited by the Fisherman will be returned, the disputed Indexer will be slashed, and the Fisherman will earn 50% of the slashed GRT.

Disputes can be viewed in the UI in an Indexer's profile page under the Disputes tab.

What are query fee rebates and when are they distributed?

Query fees are collected by the gateway and distributed to indexers according to the exponential rebate function (see GIP here). The exponential rebate function is proposed as a way to ensure indexers achieve the best outcome by faithfully serving queries. It works by incentivizing Indexers to allocate a large amount of stake (which can be slashed for erring when serving a query) relative to the amount of query fees they may collect.

Once an allocation has been closed the rebates are available to be claimed by the Indexer. Upon claiming, the query fee rebates are distributed to the Indexer and their Delegators based on the query fee cut and the exponential rebate function.

What are query fee cut and indexing reward cut?

The queryFeeCut and indexingRewardCut values are delegation parameters that the Indexer may set along with cooldownBlocks to control the distribution of GRT between the Indexer and their Delegators. See the last steps in Staking in the Protocol for instructions on setting the delegation parameters.

  • queryFeeCut - the % of query fee rebates that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the query fees earned when an allocation is closed with the other 5% going to the Delegators.

  • indexingRewardCut - the % of indexing rewards that will be distributed to the Indexer. If this is set to 95%, the Indexer will receive 95% of the indexing rewards when an allocation is closed and the Delegators will split the other 5%.
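
As a rough sketch of how these cuts translate into GRT amounts when an allocation closes (the sketch below ignores any additional pool mechanics and simply follows the percentage description above):

// Simple illustration of queryFeeCut / indexingRewardCut, following the description above.
// Cuts are expressed in parts per million (PPM) onchain; 950000 PPM = 95%.

const PPM = 1_000_000;

function split(amountGRT: number, cutPPM: number) {
  const indexerShare = (amountGRT * cutPPM) / PPM;
  return { indexer: indexerShare, delegators: amountGRT - indexerShare };
}

// queryFeeCut = 95%: on 1,000 GRT of query fee rebates the Indexer keeps 950 GRT.
console.log(split(1_000, 950_000)); // { indexer: 950, delegators: 50 }

// indexingRewardCut = 60%: on 1,000 GRT of indexing rewards the Indexer keeps 600 GRT.
console.log(split(1_000, 600_000)); // { indexer: 600, delegators: 400 }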

How do Indexers know which subgraphs to index?

Indexers may differentiate themselves by applying advanced techniques for making subgraph indexing decisions but to give a general idea we'll discuss several key metrics used to evaluate subgraphs in the network:

  • Curation signal - The proportion of network curation signal applied to a particular subgraph is a good indicator of the interest in that subgraph, especially during the bootstrap phase when query volume is ramping up.

  • Query fees collected - The historical data for volume of query fees collected for a specific subgraph is a good indicator of future demand.

  • Amount staked - Monitoring the behavior of other Indexers, or the proportion of total stake allocated towards specific subgraphs, allows an Indexer to monitor the supply side for subgraph queries and to identify subgraphs that the network is showing confidence in, or subgraphs that may need more supply.

  • Subgraphs with no indexing rewards - Some subgraphs do not generate indexing rewards mainly because they are using unsupported features like IPFS or because they are querying another network outside of mainnet. You will see a message on a subgraph if it is not generating indexing rewards.

What are the hardware requirements?

  • Small - Enough to get started indexing several subgraphs, will likely need to be expanded.
  • Standard - Default setup, this is what is used in the example k8s/terraform deployment manifests.
  • Medium - Production Indexer supporting 100 subgraphs and 200-500 requests per second.
  • Large - Prepared to index all currently used subgraphs and serve requests for the related traffic.
| Setup    | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
| -------- | --------------- | ------------------------ | ---------------------- | ---------- | ------------------- |
| Small    | 4               | 8                        | 1                      | 4          | 16                  |
| Standard | 8               | 30                       | 1                      | 12         | 48                  |
| Medium   | 16              | 64                       | 2                      | 32         | 64                  |
| Large    | 72              | 468                      | 3.5                    | 48         | 184                 |

What are some basic security precautions an Indexer should take?

  • Operator wallet - Setting up an operator wallet is an important precaution because it allows an Indexer to maintain separation between their keys that control stake and those that are in control of day-to-day operations. See Stake in Protocol for instructions.

  • Firewall - Only the Indexer service needs to be exposed publicly, and particular attention should be paid to locking down admin ports and database access: the Graph Node JSON-RPC admin endpoint (default port: 8020), the Indexer management API endpoint (default port: 18000), and the Postgres database endpoint (default port: 5432) should not be exposed.

Infrastructure

At the center of an Indexer's infrastructure is the Graph Node, which monitors the indexed networks, extracts and loads data per a subgraph definition, and serves it as a GraphQL API. The Graph Node needs to be connected to an endpoint exposing data from each indexed network; an IPFS node for sourcing data; a PostgreSQL database for its store; and Indexer components which facilitate its interactions with the network.

  • PostgreSQL database - The main store for the Graph Node, this is where subgraph data is stored. The Indexer service and agent also use the database to store state channel data, cost models, indexing rules, and allocation actions.

  • Data endpoint - For EVM-compatible networks, Graph Node needs to be connected to an endpoint that exposes an EVM-compatible JSON-RPC API. This may take the form of a single client or it could be a more complex setup that load balances across multiple. It's important to be aware that certain subgraphs will require particular client capabilities such as archive mode and/or the parity tracing API.

  • IPFS node (version less than 5) - Subgraph deployment metadata is stored on the IPFS network. The Graph Node primarily accesses the IPFS node during subgraph deployment to fetch the subgraph manifest and all linked files. Network Indexers do not need to host their own IPFS node; an IPFS node for the network is hosted at https://ipfs.network.thegraph.com.

  • Indexer service - Handles all required external communications with the network. Shares cost models and indexing statuses, passes query requests from gateways on to a Graph Node, and manages the query payments via state channels with the gateway.

  • Indexer agent - Facilitates the Indexer's interactions onchain, including registering on the network, managing subgraph deployments to its Graph Node(s), and managing allocations.

  • Prometheus metrics server - The Graph Node and Indexer components log their metrics to the metrics server.

Note: To support agile scaling, it is recommended that query and indexing concerns are separated between different sets of nodes: query nodes and index nodes.

Ports overview

Important: Be careful about exposing ports publicly - administration ports should be kept locked down. This includes the Graph Node JSON-RPC and the Indexer management endpoints detailed below.

Graph Node

| Port | Purpose | Routes | CLI Argument | Environment Variable |
| ---- | ------- | ------ | ------------ | -------------------- |
| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/..., /subgraphs/name/.../... | --http-port | - |
| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/..., /subgraphs/name/.../... | --ws-port | - |
| 8020 | JSON-RPC (for managing deployments) | / | --admin-port | - |
| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - |
| 8040 | Prometheus metrics | /metrics | --metrics-port | - |
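
As an example of using the indexing status API on port 8030, the sketch below posts a GraphQL query to it with fetch. The indexingStatuses field and its subfields are assumptions based on common Graph Node versions; check your node's schema at the /graphql route if the shape differs.

// Minimal sketch: query the subgraph indexing status API exposed on port 8030 (route /graphql).
// Field names below (indexingStatuses, synced, health, ...) are assumed; verify against your node.

async function indexingStatuses(): Promise<void> {
  const response = await fetch("http://localhost:8030/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: "{ indexingStatuses { subgraph synced health chains { latestBlock { number } } } }",
    }),
  });
  const { data } = await response.json();
  console.log(data.indexingStatuses);
}

indexingStatuses().catch(console.error);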

Indexer Service

| Port | Purpose | Routes | CLI Argument | Environment Variable |
| ---- | ------- | ------ | ------------ | -------------------- |
| 7600 | GraphQL HTTP server (for paid subgraph queries) | /subgraphs/id/..., /status, /channel-messages-inbox | --port | INDEXER_SERVICE_PORT |
| 7300 | Prometheus metrics | /metrics | --metrics-port | - |

Indexer Agent

| Port | Purpose | Routes | CLI Argument | Environment Variable |
| ---- | ------- | ------ | ------------ | -------------------- |
| 8000 | Indexer management API | / | --indexer-management-port | INDEXER_AGENT_INDEXER_MANAGEMENT_PORT |

Setup server infrastructure using Terraform on Google Cloud

Note: Indexers can alternatively use AWS, Microsoft Azure, or Alibaba.

Install prerequisites

  • Google Cloud SDK
  • Kubectl command line tool
  • Terraform

Create a Google Cloud Project

  • Clone or navigate to the Indexer repository.

  • Navigate to the ./terraform directory; this is where all commands should be executed.

cd terraform
  • Authenticate with Google Cloud and create a new project.
gcloud auth login
project=<PROJECT_NAME>
gcloud projects create --enable-cloud-apis $project
  • Use the Google Cloud Console's billing page to enable billing for the new project.

  • Create a Google Cloud configuration.

proj_id=$(gcloud projects list --format='get(project_id)' --filter="name=$project")
gcloud config configurations create $project
gcloud config set project "$proj_id"
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
  • Enable required Google Cloud APIs.
gcloud services enable compute.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable servicenetworking.googleapis.com
gcloud services enable sqladmin.googleapis.com
  • Create a service account.
svc_name=<SERVICE_ACCOUNT_NAME>
gcloud iam service-accounts create $svc_name \
--description="Service account for Terraform" \
--display-name="$svc_name"
gcloud iam service-accounts list
# Get the email of the service account from the list
svc=$(gcloud iam service-accounts list --format='get(email)' \
--filter="displayName=$svc_name")
gcloud iam service-accounts keys create .gcloud-credentials.json \
--iam-account="$svc"
gcloud projects add-iam-policy-binding $proj_id \
--member serviceAccount:$svc \
--role roles/editor
  • Enable peering between the database and the Kubernetes cluster that will be created in the next step.
gcloud compute addresses create google-managed-services-default \
--prefix-length=20 \
--purpose=VPC_PEERING \
--network default \
--global \
--description 'IP Range for peer networks.'
gcloud services vpc-peerings connect \
--network=default \
--ranges=google-managed-services-default
  • Create a minimal Terraform configuration file (update as needed).
indexer=<INDEXER_NAME>
cat > terraform.tfvars <<EOF
project = "$proj_id"
indexer = "$indexer"
database_password = "<database password>"
EOF

Use Terraform to create infrastructure

Before running any commands, read through variables.tf and create a file terraform.tfvars in this directory (or modify the one we created in the last step). For each variable where you want to override the default, or where you need to set a value, enter a setting into terraform.tfvars.

  • Run the following commands to create the infrastructure.
# Install required plugins
terraform init
# View plan for resources to be created
terraform plan
# Create the resources (expect it to take up to 30 minutes)
terraform apply

Download credentials for the new cluster into ~/.kube/config and set it as your default context.

gcloud container clusters get-credentials $indexer
kubectl config use-context $(kubectl config get-contexts --output='name' \
| grep $indexer)

Creating the Kubernetes components for the Indexer

  • Copy the directory k8s/overlays to a new directory $dir, and adjust the bases entry in $dir/kustomization.yaml so that it points to the directory k8s/base.

  • Read through all the files in $dir and adjust any values as indicated in the comments.

  • Deploy all resources with kubectl apply -k $dir.

Graph Node

Graph Node is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via a GraphQL endpoint. Developers use subgraphs to define their schema and a set of mappings for transforming the data sourced from the blockchain; the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.

Getting started from source

Install prerequisites

  • Rust

  • PostgreSQL

  • IPFS

  • Additional Requirements for Ubuntu users - To run a Graph Node on Ubuntu a few additional packages may be needed.

sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
  1. Start a PostgreSQL database server
initdb -D .postgres
pg_ctl -D .postgres -l logfile start
createdb graph-node
  2. Clone the Graph Node repo and build the source by running cargo build

  3. Now that all the dependencies are set up, start the Graph Node:

cargo run -p graph-node --release -- \
--postgres-url postgresql://[USERNAME]:[PASSWORD]@localhost:5432/graph-node \
--ethereum-rpc [NETWORK_NAME]:[URL] \
--ipfs https://ipfs.network.thegraph.com

Getting started using Docker

Prerequisites

  • Ethereum node - By default, the docker compose setup will use mainnet: http://host.docker.internal:8545 to connect to the Ethereum node on your host machine. You can replace this network name and URL by updating docker-compose.yaml.
  1. Clone Graph Node and navigate to the Docker directory:
git clone https://github.com/graphprotocol/graph-node
cd graph-node/docker
  2. For Linux users only - Use the host IP address instead of host.docker.internal in the docker-compose.yaml using the included script:
./setup.sh
  3. Start a local Graph Node that will connect to your Ethereum endpoint:
docker-compose up

Indexer components

Successfully participating in the network requires almost constant monitoring and interaction, so we've built a suite of TypeScript applications to facilitate an Indexer's network participation. There are three Indexer components:

  • Indexer agent - The agent monitors the network and the Indexer's own infrastructure and manages which subgraph deployments are indexed and allocated towards onchain, and how much is allocated towards each.

  • Indexer service - The only component that needs to be exposed externally, the service passes subgraph queries on to the Graph Node, manages state channels for query payments, and shares important decision-making information with clients such as the gateways.

  • Indexer CLI - The command line interface for managing the Indexer agent. It allows Indexers to manage cost models, manual allocations, actions queue, and indexing rules.

Getting started

The Indexer agent and Indexer service should be co-located with your Graph Node infrastructure. There are many ways to set up virtual execution environments for your Indexer components; here we'll explain how to run them on bare metal using NPM packages or source, or via Kubernetes and Docker on Google Kubernetes Engine. If these setup examples do not translate well to your infrastructure there will likely be a community guide to reference, come say hi on Discord! Remember to stake in the protocol before starting up your Indexer components!

From NPM packages

npm install -g @graphprotocol/indexer-service
npm install -g @graphprotocol/indexer-agent
# Indexer CLI is a plugin for Graph CLI, so both need to be installed:
npm install -g @graphprotocol/graph-cli
npm install -g @graphprotocol/indexer-cli
# Indexer service
graph-indexer-service start ...
# Indexer agent
graph-indexer-agent start ...
# Indexer CLI
# Forward the port of your agent pod if using Kubernetes
kubectl port-forward pod/POD_ID 18000:8000
graph indexer connect http://localhost:18000/
graph indexer ...

From source

# From Repo root directory
yarn
# Indexer Service
cd packages/indexer-service
./bin/graph-indexer-service start ...
# Indexer agent
cd packages/indexer-agent
./bin/graph-indexer-agent start ...
# Indexer CLI
cd packages/indexer-cli
./bin/graph-indexer-cli indexer connect http://localhost:18000/
./bin/graph-indexer-cli indexer ...

Using Docker

  • Pull images from the registry
docker pull ghcr.io/graphprotocol/indexer-service:latest
docker pull ghcr.io/graphprotocol/indexer-agent:latest

Or build images locally from source

# Indexer service
docker build \
--build-arg NPM_TOKEN=<npm-token> \
-f Dockerfile.indexer-service \
-t indexer-service:latest .
# Indexer agent
docker build \
--build-arg NPM_TOKEN=<npm-token> \
-f Dockerfile.indexer-agent \
-t indexer-agent:latest .
  • Run the components
docker run -p 7600:7600 -it indexer-service:latest ...
docker run -p 18000:8000 -it indexer-agent:latest ...

NOTE: After starting the containers, the Indexer service should be accessible at http://localhost:7600 and the Indexer agent should be exposing the Indexer management API at http://localhost:18000/.

Using K8s and Terraform

See the Setup Server Infrastructure Using Terraform on Google Cloud section.

NOTE: All runtime configuration variables may be applied either as parameters to the command on startup or using environment variables of the format COMPONENT_NAME_VARIABLE_NAME (e.g. INDEXER_AGENT_ETHEREUM).

Indexer agent

graph-indexer-agent start \
--ethereum <MAINNET_ETH_ENDPOINT> \
--ethereum-network mainnet \
--mnemonic <MNEMONIC> \
--indexer-address <INDEXER_ADDRESS> \
--graph-node-query-endpoint http://localhost:8000/ \
--graph-node-status-endpoint http://localhost:8030/graphql \
--graph-node-admin-endpoint http://localhost:8020/ \
--public-indexer-url http://localhost:7600/ \
--indexer-geo-coordinates <YOUR_COORDINATES> \
--index-node-ids default \
--indexer-management-port 18000 \
--metrics-port 7040 \
--network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \
--default-allocation-amount 100 \
--register true \
--inject-dai true \
--postgres-host localhost \
--postgres-port 5432 \
--postgres-username <DB_USERNAME> \
--postgres-password <DB_PASSWORD> \
--postgres-database indexer \
--allocation-management auto \
| pino-pretty

Indexer service

SERVER_HOST=localhost \
SERVER_PORT=5432 \
SERVER_DB_NAME=is_staging \
SERVER_DB_USER=<DB_USERNAME> \
SERVER_DB_PASSWORD=<DB_PASSWORD> \
graph-indexer-service start \
--ethereum <MAINNET_ETH_ENDPOINT> \
--ethereum-network mainnet \
--mnemonic <MNEMONIC> \
--indexer-address <INDEXER_ADDRESS> \
--port 7600 \
--metrics-port 7300 \
--graph-node-query-endpoint http://localhost:8000/ \
--graph-node-status-endpoint http://localhost:8030/graphql \
--postgres-host localhost \
--postgres-port 5432 \
--postgres-username <DB_USERNAME> \
--postgres-password <DB_PASSWORD> \
--postgres-database is_staging \
--network-subgraph-endpoint http://query-node-0:8000/subgraphs/id/QmUzRg2HHMpbgf6Q4VHKNDbtBEJnyp5JWCh2gUX9AV6jXv \
| pino-pretty

Indexer CLI

The Indexer CLI is a plugin for @graphprotocol/graph-cli accessible in the terminal at graph indexer.

graph indexer connect http://localhost:18000
graph indexer status

Indexer management using Indexer CLI

The suggested tool for interacting with the Indexer Management API is the Indexer CLI, an extension to the Graph CLI. The Indexer agent needs input from an Indexer in order to autonomously interact with the network on behalf of the Indexer. The mechanisms for defining Indexer agent behavior are the allocation management mode and indexing rules. Under auto mode, an Indexer can use indexing rules to apply their specific strategy for picking subgraphs to index and serve queries for. Rules are managed via a GraphQL API served by the agent, known as the Indexer Management API. Under manual mode, an Indexer can create allocation actions using the actions queue and explicitly approve them before they get executed. Under oversight mode, indexing rules are used to populate the actions queue and also require explicit approval for execution.

The Indexer CLI connects to the Indexer agent, typically through port-forwarding, so the CLI does not need to run on the same server or cluster. To help you get started, and to provide some context, the CLI will briefly be described here.

  • graph indexer connect <url> - Connect to the Indexer management API. Typically the connection to the server is opened via port forwarding, so the CLI can be easily operated remotely. (Example: kubectl port-forward pod/<indexer-agent-pod> 8000:8000)

  • graph indexer rules get [options] <deployment-id> [<key1> ...] - Get one or more indexing rules using all as the <deployment-id> to get all rules, or global to get the global defaults. An additional argument --merged can be used to specify that deployment specific rules are merged with the global rule. This is how they are applied in the Indexer agent.

  • graph indexer rules set [options] <deployment-id> <key1> <value1> ... - Set one or more indexing rules.

  • graph indexer rules start [options] <deployment-id> - Start indexing a subgraph deployment if available and set its decisionBasis to always, so the Indexer agent will always choose to index it. If the global rule is set to always then all available subgraphs on the network will be indexed.

  • graph indexer rules stop [options] <deployment-id> - Stop indexing a deployment and set its decisionBasis to never, so it will skip this deployment when deciding on deployments to index.

  • graph indexer rules maybe [options] <deployment-id> — Set the decisionBasis for a deployment to rules, so that the Indexer agent will use indexing rules to decide whether to index this deployment.

  • graph indexer actions get [options] <action-id> - Fetch one or more actions using all or leave action-id empty to get all actions. An additional argument --status can be used to print out all actions of a certain status.

  • graph indexer action queue allocate <deployment-id> <allocation-amount> - Queue allocation action

  • graph indexer action queue reallocate <deployment-id> <allocation-id> <allocationAmount> - Queue reallocate action

  • graph indexer action queue unallocate <deployment-id> <allocation-id> - Queue unallocate action

  • graph indexer actions cancel [<action-id> ...] - Cancel all actions in the queue if no ids are specified; otherwise, cancel the listed ids, separated by spaces

  • graph indexer actions approve [<action-id> ...] - Approve multiple actions for execution

  • graph indexer actions execute approve - Force the worker to execute approved actions immediately

All commands which display rules in the output can choose between the supported output formats (table, yaml, and json) using the -output argument.

Indexing rules

Indexing rules can either be applied as global defaults or for specific subgraph deployments using their IDs. The deployment and decisionBasis fields are mandatory, while all other fields are optional. When an indexing rule has rules as the decisionBasis, then the Indexer agent will compare non-null threshold values on that rule with values fetched from the network for the corresponding deployment. If the subgraph deployment has values above (or below) any of the thresholds it will be chosen for indexing.

For example, if the global rule has a minStake of 5 (GRT), any subgraph deployment which has more than 5 (GRT) of stake allocated to it will be indexed. Threshold rules include maxAllocationPercentage, minSignal, maxSignal, minStake, and minAverageQueryFees.
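
A simplified sketch of that threshold comparison is shown below. It is not the Indexer agent's actual implementation, just an illustration of how a rule with decisionBasis rules might be evaluated against values fetched from the network:

// Illustrative only: evaluate an indexing rule with decisionBasis = "rules" against network data.
// The real Indexer agent merges deployment-specific rules with the global rule and uses more inputs.

interface Thresholds {
  minStake?: number;            // GRT allocated to the deployment by other Indexers
  minSignal?: number;           // curation signal on the deployment (GRT)
  maxSignal?: number;
  minAverageQueryFees?: number; // average query fees earned on the deployment (GRT)
}

interface NetworkValues {
  stake: number;
  signal: number;
  averageQueryFees: number;
}

function shouldIndex(rule: Thresholds, network: NetworkValues): boolean {
  // A deployment is chosen if any non-null threshold is satisfied.
  if (rule.minStake !== undefined && network.stake >= rule.minStake) return true;
  if (rule.minSignal !== undefined && network.signal >= rule.minSignal) return true;
  if (rule.maxSignal !== undefined && network.signal <= rule.maxSignal) return true;
  if (rule.minAverageQueryFees !== undefined && network.averageQueryFees >= rule.minAverageQueryFees) return true;
  return false;
}

// Global rule with minStake of 5 GRT, as in the example above.
console.log(shouldIndex({ minStake: 5 }, { stake: 12, signal: 0, averageQueryFees: 0 })); // true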

Data model:

type IndexingRule {
  identifier: string
  identifierType: IdentifierType
  decisionBasis: IndexingDecisionBasis!
  allocationAmount: number | null
  allocationLifetime: number | null
  autoRenewal: boolean
  parallelAllocations: number | null
  maxAllocationPercentage: number | null
  minSignal: string | null
  maxSignal: string | null
  minStake: string | null
  minAverageQueryFees: string | null
  custom: string | null
  requireSupported: boolean | null
}

IdentifierType {
  deployment
  subgraph
  group
}

IndexingDecisionBasis {
  rules
  never
  always
  offchain
}

Example usage of indexing rule:

graph indexer rules offchain QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
graph indexer rules set QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK decisionBasis always allocationAmount 123321 allocationLifetime 14 autoRenewal false requireSupported false
graph indexer rules stop QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK
graph indexer rules delete QmZfeJYR86UARzp9HiXbURWunYgC9ywvPvoePNbuaATrEK

Actions queue CLI

The indexer-cli provides an actions module for manually working with the action queue. It uses the GraphQL API hosted by the indexer management server to interact with the actions queue.

The action execution worker will only grab items from the queue to execute if they have ActionStatus = approved. In the recommended path actions are added to the queue with ActionStatus = queued, so they must then be approved in order to be executed onchain. The general flow will look like:

  • Action added to the queue by the 3rd party optimizer tool or indexer-cli user
  • Indexer can use the indexer-cli to view all queued actions
  • Indexer (or other software) can approve or cancel actions in the queue using the indexer-cli. The approve and cancel commands take an array of action ids as input.
  • The execution worker regularly polls the queue for approved actions. It will grab the approved actions from the queue, attempt to execute them, and update their status in the db to success or failed depending on the outcome of execution.
  • If an action is successful the worker will ensure that there is an indexing rule present that tells the agent how to manage the allocation moving forward, useful when taking manual actions while the agent is in auto or oversight mode.
  • The indexer can monitor the action queue to see a history of action execution and if needed re-approve and update action items if they failed execution. The action queue provides a history of all actions queued and taken.

Data model:

type ActionInput {
  status: ActionStatus
  type: ActionType
  deploymentID: string | null
  allocationID: string | null
  amount: string | null
  poi: string | null
  force: boolean | null
  source: string
  reason: string | null
  priority: number | null
}

ActionStatus {
  queued
  approved
  pending
  success
  failed
  canceled
}

ActionType {
  allocate
  unallocate
  reallocate
  collect
}

Example usage from source:

graph indexer actions get all
graph indexer actions get --status queued
graph indexer actions queue allocate QmeqJ6hsdyk9dVbo1tvRgAxWrVS3rkERiEMsxzPShKLco6 5000
graph indexer actions queue reallocate QmeqJ6hsdyk9dVbo1tvRgAxWrVS3rkERiEMsxzPShKLco6 0x4a58d33e27d3acbaecc92c15101fbc82f47c2ae5 55000
graph indexer actions queue unallocate QmeqJ6hsdyk9dVbo1tvRgAxWrVS3rkERiEMsxzPShKLco6 0x4a58d33e27d3acbaecc92c15101fbc82f47c2ae
graph indexer actions cancel
graph indexer actions approve 1 3 5
graph indexer actions execute approve

Note that supported action types for allocation management have different input requirements:

  • Allocate - allocate stake to a specific subgraph deployment

    • required action params:
      • deploymentID
      • amount
  • Unallocate - close allocation, freeing up the stake to reallocate elsewhere

    • required action params:
      • allocationID
      • deploymentID
    • optional action params:
      • poi
      • force (forces using the provided POI even if it doesn’t match what the graph-node provides)
  • Reallocate - atomically close allocation and open a fresh allocation for the same subgraph deployment

    • required action params:
      • allocationID
      • deploymentID
      • amount
    • optional action params:
      • poi
      • force (forces using the provided POI even if it doesn’t match what the graph-node provides)

Cost models

Cost models provide dynamic pricing for queries based on market and query attributes. The Indexer Service shares a cost model with the gateways for each subgraph for which they intend to respond to queries. The gateways, in turn, use the cost model to make Indexer selection decisions per query and to negotiate payment with chosen Indexers.

The Agora language provides a flexible format for declaring cost models for queries. An Agora price model is a sequence of statements that execute in order for each top-level query in a GraphQL query. For each top-level query, the first statement which matches it determines the price for that query.

A statement comprises a predicate, which is used for matching GraphQL queries, and a cost expression which, when evaluated, outputs a cost in decimal GRT. Values in the named argument position of a query may be captured in the predicate and used in the expression. Globals may also be set and substituted in for placeholders in an expression.

Example cost model:

# This statement captures the skip value,
# uses a boolean expression in the predicate to match specific queries that use `skip`
# and a cost expression to calculate the cost based on the `skip` value and the SYSTEM_LOAD global
query { pairs(skip: $skip) { id } } when $skip > 2000 => 0.0001 * $skip * $SYSTEM_LOAD;
# This default will match any GraphQL expression.
# It uses a Global substituted into the expression to calculate cost
default => 0.1 * $SYSTEM_LOAD;

Example query costing using the above model (with $SYSTEM_LOAD set to 1):

| Query | Price |
| ----- | ----- |
| { pairs(skip: 5000) { id } } | 0.5 GRT |
| { tokens { symbol } } | 0.1 GRT |
| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
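
To see how those prices arise, here is a tiny sketch reproducing the arithmetic. It simply mirrors the two Agora statements above; it is not an Agora evaluator:

// Reproduces the example pricing above with $SYSTEM_LOAD = 1.
const SYSTEM_LOAD = 1;

// First statement: matches pairs queries with skip > 2000.
const pairsCost = (skip: number) => 0.0001 * skip * SYSTEM_LOAD;
// Default statement: matches any other top-level query.
const defaultCost = () => 0.1 * SYSTEM_LOAD;

console.log(pairsCost(5000));                 // 0.5 GRT for { pairs(skip: 5000) { id } }
console.log(defaultCost());                   // 0.1 GRT for { tokens { symbol } }
console.log(pairsCost(5000) + defaultCost()); // 0.6 GRT when both top-level queries appear together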

Applying the cost model

Cost models are applied via the Indexer CLI, which passes them to the Indexer Management API of the Indexer agent for storing in the database. The Indexer Service will then pick them up and serve the cost models to gateways whenever they ask for them.

indexer cost set variables '{ "SYSTEM_LOAD": 1.4 }'
indexer cost set model my_model.agora

Interacting with the network

Stake in the protocol

The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions.

Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice (OneClickDapp, ABItopic, and MyCrypto are a few other known tools).

Once an Indexer has staked GRT in the protocol, the Indexer components can be started up and begin their interactions with the network.

Approve tokens

  1. Open the Remix app in a browser

  2. In the File Explorer create a file named GraphToken.abi with the token ABI.

  3. With GraphToken.abi selected and open in the editor, switch to the Deploy and run transactions section in the Remix interface.

  4. Under environment select Injected Web3 and under Account select your Indexer address.

  5. Set the GraphToken contract address - Paste the GraphToken contract address (0xc944E90C64B2c07662A292be6244BDf05Cda44a7) next to At Address and click the At address button to apply.

  6. Call the approve(spender, amount) function to approve the Staking contract. Fill in spender with the Staking contract address (0xF55041E37E12cD407ad00CE2910B8269B01263b9) and amount with the tokens to stake (in wei).
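
If you would rather script the approval than use Remix, a minimal ethers (v6) sketch follows. The contract addresses are the ones listed in the steps above and the approve signature is the standard ERC-20 one; treat this as illustrative and double-check amounts before sending.

import { ethers } from "ethers";

// GraphToken and Staking addresses from the steps above (Ethereum mainnet).
const GRAPH_TOKEN = "0xc944E90C64B2c07662A292be6244BDf05Cda44a7";
const STAKING = "0xF55041E37E12cD407ad00CE2910B8269B01263b9";

const provider = new ethers.JsonRpcProvider("<ETHEREUM_RPC_URL>");
const indexerWallet = new ethers.Wallet("<INDEXER_PRIVATE_KEY>", provider);

// Standard ERC-20 approve: allow the Staking contract to pull the tokens you intend to stake.
const graphToken = new ethers.Contract(
  GRAPH_TOKEN,
  ["function approve(address spender, uint256 amount) returns (bool)"],
  indexerWallet,
);

async function approveStake(amountGRT: string): Promise<void> {
  const tx = await graphToken.approve(STAKING, ethers.parseEther(amountGRT)); // GRT has 18 decimals
  await tx.wait();
  console.log(`Approved ${amountGRT} GRT for the Staking contract`);
}

approveStake("100000").catch(console.error);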

Stake tokens

  1. Open the Remix app in a browser

  2. In the File Explorer create a file named Staking.abi with the staking ABI.

  3. With Staking.abi selected and open in the editor, switch to the Deploy and run transactions section in the Remix interface.

  4. Under environment select Injected Web3 and under Account select your Indexer address.

  5. Set the Staking contract address - Paste the Staking contract address (0xF55041E37E12cD407ad00CE2910B8269B01263b9) next to At Address and click the At address button to apply.

  6. Call stake() to stake GRT in the protocol.

  7. (Optional) Indexers may approve another address to be the operator for their Indexer infrastructure in order to separate the keys that control the funds from those that are performing day to day actions such as allocating on subgraphs and serving (paid) queries. In order to set the operator call setOperator() with the operator address.

  8. (Optional) In order to control the distribution of rewards and strategically attract Delegators, Indexers can update their delegation parameters by updating their indexingRewardCut (parts per million), queryFeeCut (parts per million), and cooldownBlocks (number of blocks). To do so, call setDelegationParameters(). The following example sets the queryFeeCut to distribute 95% of query rebates to the Indexer and 5% to Delegators, sets the indexingRewardCut to distribute 60% of indexing rewards to the Indexer and 40% to Delegators, and sets the cooldownBlocks period to 500 blocks.

setDelegationParameters(950000, 600000, 500)
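
The staking call can be scripted the same way. Below is a hedged sketch assuming the Staking contract exposes stake(uint256); confirm the exact signatures of stake, setOperator, and setDelegationParameters against the Staking ABI before use.

import { ethers } from "ethers";

const STAKING = "0xF55041E37E12cD407ad00CE2910B8269B01263b9"; // Staking address from the steps above

const provider = new ethers.JsonRpcProvider("<ETHEREUM_RPC_URL>");
const indexerWallet = new ethers.Wallet("<INDEXER_PRIVATE_KEY>", provider);

// Assumed ABI fragment; the real Staking ABI also contains setOperator() and
// setDelegationParameters(), which can be called through the same contract object.
const staking = new ethers.Contract(
  STAKING,
  ["function stake(uint256 tokens)"],
  indexerWallet,
);

async function stakeGRT(amountGRT: string): Promise<void> {
  // Requires a prior approve() of at least this amount on the GraphToken contract.
  const tx = await staking.stake(ethers.parseEther(amountGRT));
  await tx.wait();
  console.log(`Staked ${amountGRT} GRT in the protocol`);
}

stakeGRT("100000").catch(console.error);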

Setting delegation parameters

The setDelegationParameters() function in the staking contract is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity.

How to set delegation parameters

To set the delegation parameters using Graph Explorer interface, follow these steps:

  1. Navigate to Graph Explorer.
  2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One.
  3. Connect the wallet you have as a signer.
  4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage.
  5. Submit the transaction to the network.

Note: This transaction will need to be confirmed by the multisig wallet signers.

The life of an allocation

After being created by an Indexer, a healthy allocation goes through two states.

  • Active - Once an allocation is created onchain (allocateFrom()) it is considered active. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.

  • Closed - An Indexer is free to close an allocation once 1 epoch has passed (closeAllocation()) or their Indexer agent will automatically close the allocation after the maxAllocationEpochs (currently 28 epochs). When an allocation is closed with a valid proof of indexing (POI), their indexing rewards are distributed to the Indexer and its Delegators (learn more).

Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation onchain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or that have some chance of failing non-deterministically.
