Advanced Subgraph Features
Add and implement advanced subgraph features to enhance your subgraph's build.
Starting from specVersion 0.0.4, subgraph features must be explicitly declared in the features section at the top level of the manifest file, using their camelCase name.
For instance, if a subgraph uses the Full-Text Search and the Non-fatal Errors features, the features field in the manifest should be:
```yaml
specVersion: 0.0.4
description: Gravatar for Ethereum
features:
  - fullTextSearch
  - nonFatalErrors
dataSources: ...
```
Note that using a feature without declaring it will incur a validation error during subgraph deployment, but no errors will occur if a feature is declared but not used.
Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc.
This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
```graphql
type Data @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
  price: BigDecimal!
}

type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
}
```
Timeseries entities are defined with @entity(timeseries: true) in schema.graphql. Every timeseries entity must have a unique ID of the Int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by Aggregation entities. These timeseries entities can be saved in regular trigger handlers, and act as the "raw data" for the Aggregation entities.
Aggregation entities are defined with @aggregation in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval.
Available aggregation intervals:

- hour: sets the timeseries period every hour, on the hour.
- day: sets the timeseries period every day, starting and ending at 00:00.

Available aggregation functions:

- sum: Total of all values.
- count: Number of values.
- min: Minimum value.
- max: Maximum value.
- first: First value in the period.
- last: Last value in the period.
Example query:

```graphql
{
  stats(interval: "hour", where: { timestamp_gt: 1704085200 }) {
    id
    timestamp
    sum
  }
}
```
Note: To use Timeseries and Aggregations, a subgraph must have a spec version ≥ 1.1.0. Note that this feature might undergo significant changes that could affect backward compatibility.
Read more about Timeseries and Aggregations.
By default, indexing errors on subgraphs that are already synced will cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal; to be non-fatal, the error must be known to be deterministic.
Note: The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.
Enabling non-fatal errors requires setting the following feature flag in the subgraph manifest:
```yaml
specVersion: 0.0.4
description: Gravatar for Ethereum
features:
  - nonFatalErrors
...
```
The query must also opt-in to querying data with potential inconsistencies through the subgraphError argument. It is also recommended to query _meta to check if the subgraph has skipped over errors, as in the example:
```graphql
foos(first: 100, subgraphError: allow) {
  id
}

_meta {
  hasIndexingErrors
}
```
If the subgraph encounters an error, that query will return both the data and a GraphQL error with the message "indexing_error", as in this example response:
"data": {"foos": [{"id": "0xdead"}],"_meta": {"hasIndexingErrors": true}},"errors": [{"message": "indexing_error"}]
File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data.
Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found.
This is similar to the existing data source templates, which are used to dynamically create new chain-based data sources.
This replaces the existing ipfs.cat API.
File data sources require graph-ts >=0.29.0 and graph-cli >=0.33.1.
File data sources cannot access or update chain-based entities, but must update file-specific entities. This may mean splitting out fields from existing entities into separate entities, linked together.
Original combined entity:
```graphql
type Token @entity {
  id: ID!
  tokenID: BigInt!
  tokenURI: String!
  externalURL: String!
  ipfsURI: String!
  image: String!
  name: String!
  description: String!
  type: String!
  updatedAtTimestamp: BigInt
  owner: User!
}
```
New, split entity:
```graphql
type Token @entity {
  id: ID!
  tokenID: BigInt!
  tokenURI: String!
  ipfsURI: TokenMetadata
  updatedAtTimestamp: BigInt
  owner: String!
}

type TokenMetadata @entity {
  id: ID!
  image: String!
  externalURL: String!
  name: String!
  description: String!
}
```
If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to the resulting file entity using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities!
This is the data source which will be spawned when a file of interest is identified.
```yaml
templates:
  - name: TokenMetadata
    kind: file/ipfs
    mapping:
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handler: handleMetadata
      entities:
        - TokenMetadata
      abis:
        - name: Token
          file: ./abis/Token.json
```
Currently abis are required, though it is not possible to call contracts from within file data sources.
The file data source must specifically mention all the entity types which it will interact with under entities.
This handler should accept one Bytes parameter, which will be the contents of the file when it is found, and which can then be processed. This will often be a JSON file, which can be processed with graph-ts helpers.
The CID of the file as a readable string can be accessed via the dataSource as follows:
```typescript
const cid = dataSource.stringParam()
```
Example handler:
```typescript
import { json, Bytes, dataSource } from '@graphprotocol/graph-ts'
import { TokenMetadata } from '../generated/schema'

export function handleMetadata(content: Bytes): void {
  let tokenMetadata = new TokenMetadata(dataSource.stringParam())
  const value = json.fromBytes(content).toObject()
  if (value) {
    const image = value.get('image')
    const name = value.get('name')
    const description = value.get('description')
    const externalURL = value.get('external_url')

    if (name && image && description && externalURL) {
      tokenMetadata.name = name.toString()
      tokenMetadata.image = image.toString()
      tokenMetadata.externalURL = externalURL.toString()
      tokenMetadata.description = description.toString()
    }

    tokenMetadata.save()
  }
}
```
You can now create file data sources during the execution of chain-based handlers:
- Import the template from the auto-generated templates
- Call TemplateName.create(cid: string) from within a mapping, where the cid is a valid content identifier for IPFS or Arweave
For IPFS, Graph Node supports v0 and v1 content identifiers, and content identifiers with directories (e.g. bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json).
For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their transaction ID from an Arweave gateway. Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on Irys manifests.
Example:
```typescript
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
// This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.

export function handleTransfer(event: TransferEvent): void {
  let token = Token.load(event.params.tokenId.toString())
  if (!token) {
    token = new Token(event.params.tokenId.toString())
    token.tokenID = event.params.tokenId

    token.tokenURI = '/' + event.params.tokenId.toString() + '.json'
    const tokenIpfsHash = ipfshash + token.tokenURI
    // This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json"

    token.ipfsURI = tokenIpfsHash

    TokenMetadataTemplate.create(tokenIpfsHash)
  }

  token.updatedAtTimestamp = event.block.timestamp
  token.owner = event.params.to.toHexString()
  token.save()
}
```
This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed.
This example is using the CID as the lookup between the parent Token entity and the resulting TokenMetadata entity.
Previously, this is the point at which a subgraph developer would have called ipfs.cat(CID) to fetch the file.
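For contrast, here is a rough sketch of the inline pattern this replaces, with a hypothetical helper name: fetching inside a chain-based handler blocks indexing on the file being available and is not retried if the fetch fails.

```typescript
import { ipfs, json } from '@graphprotocol/graph-ts'

// Hypothetical helper illustrating the deprecated inline pattern: the
// chain-based handler waits for the file and gives up if it is unavailable.
function loadMetadataInline(tokenIpfsHash: string): void {
  const raw = ipfs.cat(tokenIpfsHash) // returns null if the file cannot be fetched
  if (raw) {
    const value = json.fromBytes(raw).toObject()
    // ... copy fields from `value` directly onto the chain-based entity
  }
}
```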
Congratulations, you are using file data sources!
You can now build and deploy your subgraph to any Graph Node >=v0.30.0-rc.0.
File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:
- Entities created by file data sources are immutable, and cannot be updated
- File data source handlers cannot access entities from other file data sources
- Entities associated with file data sources cannot be accessed by chain-based handlers
While this constraint should not be problematic for most use cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph!
Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future.
If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity, and save the Metadata entity using the IPFS hash as an ID.
You can use DataSourceContext when creating File Data Sources to pass extra information which will be available to the File Data Source handler.
If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity (see the sketch below).
We are working to improve the above recommendation, so queries only return the "most recent" version.
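A minimal sketch of that refresh pattern, assuming a DataSourceContext is used to carry the token ID; the TokenMetadata.token field, the derived field on Token that it backs, the context key, and the helper name are illustrative assumptions rather than a prescribed API:

```typescript
import { Bytes, DataSourceContext, dataSource } from '@graphprotocol/graph-ts'
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'
import { TokenMetadata } from '../generated/schema'

// In the chain-based handler: pass the token ID along when spawning the
// file data source, so the file handler can build a unique entity ID.
export function spawnMetadataSource(tokenId: string, cid: string): void {
  let context = new DataSourceContext()
  context.setString('tokenId', tokenId) // 'tokenId' is an illustrative context key
  TokenMetadataTemplate.createWithContext(cid, context)
}

// In the file data source handler: combine the CID with the token ID so each
// refresh produces a distinct, immutable TokenMetadata entity.
export function handleMetadata(content: Bytes): void {
  const cid = dataSource.stringParam()
  const tokenId = dataSource.context().getString('tokenId')
  let metadata = new TokenMetadata(cid + '-' + tokenId)
  metadata.token = tokenId // assumed back-reference used by a @derivedFrom field on Token
  // ... parse `content` with the graph-ts json helpers and set the remaining fields
  metadata.save()
}
```

Because each refresh spawns a new file data source with a new CID, every version of the metadata is stored as its own immutable entity, and the chain-based Token entity can list them through its derived field.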
File data sources currently require ABIs, even though ABIs are not used. The workaround is to add any ABI.
Handlers for file data sources cannot be in files which import eth_call contract bindings, failing with "unknown import: ethereum::ethereum.call has not been defined". The workaround is to create file data source handlers in a dedicated file.
Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
- The event's first indexed argument corresponds to topic1, the second to topic2, and so on, up to topic3, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Token {
    // Event declaration with indexed parameters for addresses
    event Transfer(address indexed from, address indexed to, uint256 value);

    // Function to simulate transferring tokens
    function transfer(address to, uint256 value) public {
        // Emitting the Transfer event with from, to, and value
        emit Transfer(msg.sender, to, value);
    }
}
```
In this example:
- The Transfer event is used to log transactions of tokens between addresses.
- The from and to parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses.
- The transfer function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called.
Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
```yaml
eventHandlers:
  - event: SomeEvent(indexed uint256, indexed address, indexed uint256)
    handler: handleSomeEvent
    topic1: ['0xValue1', '0xValue2']
    topic2: ['0xAddress1', '0xAddress2']
    topic3: ['0xValue3']
```
In this setup:
- topic1 corresponds to the first indexed argument of the event, topic2 to the second, and topic3 to the third.
- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic.
- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic.
- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler.
Example 1: Tracking direct transfers from Address A to Address B

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer
    topic1: ['0xAddressA'] # Sender Address
    topic2: ['0xAddressB'] # Receiver Address
```
In this configuration:
- topic1 is configured to filter Transfer events where 0xAddressA is the sender.
- topic2 is configured to filter Transfer events where 0xAddressB is the receiver.
- The subgraph will only index transactions that occur directly from 0xAddressA to 0xAddressB.
Example 2: Tracking transfers in either direction between two or more addresses

```yaml
eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleTransferToOrFrom
    topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address
    topic2: ['0xAddressB', '0xAddressC'] # Receiver Address
```
In this configuration:
- topic1 is configured to filter Transfer events where 0xAddressA, 0xAddressB, or 0xAddressC is the sender.
- topic2 is configured to filter Transfer events where 0xAddressB or 0xAddressC is the receiver.
- The subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all the addresses.
Note: This is an experimental feature that is not yet available in a stable Graph Node release. You can only use it in Subgraph Studio or your self-hosted node.
Declarative eth_calls are a valuable subgraph feature that allows eth_calls to be executed ahead of time, enabling graph-node to execute them in parallel.
This feature does the following:
- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
- Allows faster data fetching, resulting in quicker query responses and a better user experience.
- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
- Declarative eth_calls: Ethereum calls that are defined to be executed in parallel rather than sequentially.
- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously.
- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel).
Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
Traditionally, these calls might be made sequentially:
- Call 1 (Transactions): Takes 3 seconds
- Call 2 (Balance): Takes 2 seconds
- Call 3 (Token Holdings): Takes 4 seconds
Total time taken = 3 + 2 + 4 = 9 seconds
With this feature, you can declare these calls to be executed in parallel:
- Call 1 (Transactions): Takes 3 seconds
- Call 2 (Balance): Takes 2 seconds
- Call 3 (Token Holdings): Takes 4 seconds
Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call.
Total time taken = max (3, 2, 4) = 4 seconds
- Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
- Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
- Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
Declared eth_calls can access the event.address of the underlying event as well as all the event.params.
Subgraph.yaml using event.address:
```yaml
eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap
    calls:
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
      global1X128: Pool[event.address].feeGrowthGlobal1X128()
```
Details for the example above:
- global0X128 is the declared eth_call.
- The text (global0X128) is the label for this eth_call which is used when logging errors.
- The text (Pool[event.address].feeGrowthGlobal0X128()) is the actual eth_call that will be executed, which is in the form of Contract[address].function(arguments).
- The address and arguments can be replaced with variables that will be available when the handler is executed.
Subgraph.yaml using event.params:

```yaml
calls:
  - ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
```
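On the mapping side, nothing extra is required: the handler keeps using the generated contract bindings, and graph-node reuses the pre-executed declared calls when the handler makes the same call. A rough sketch, assuming generated Pool bindings; the import paths and the result handling are illustrative and depend on your manifest and schema:

```typescript
import { BigInt } from '@graphprotocol/graph-ts'
// Assumed import path; the generated location depends on the data source name in the manifest.
import { Pool, Swap as SwapEvent } from '../generated/Pool/Pool'

export function handleSwap(event: SwapEvent): void {
  // Because the two calls are declared on the event handler in the manifest,
  // graph-node executed them in parallel before invoking this handler, so the
  // binding calls below are answered from the already-fetched results.
  let pool = Pool.bind(event.address)
  let global0X128: BigInt = pool.feeGrowthGlobal0X128()
  let global1X128: BigInt = pool.feeGrowthGlobal1X128()
  // ... store the values on whichever entity the subgraph maintains
}
```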
Note: it is not recommended to use grafting when initially upgrading to The Graph Network.
When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the startBlock defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called Grafting. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.
A subgraph is grafted onto a base subgraph when the subgraph manifest in subgraph.yaml contains a graft block at the top level:
```yaml
description: ...
graft:
  base: Qm... # Subgraph ID of base subgraph
  block: 7345624 # Block number
```
When a subgraph whose manifest contains a graft block is deployed, Graph Node will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.
Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, Graph Node will log information about the entity types that have already been copied.
The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:
- It adds or removes entity types
- It removes attributes from entity types
- It adds nullable attributes to entity types
- It turns non-nullable attributes into nullable attributes
- It adds values to enums
- It adds or removes interfaces
- It changes for which entity types an interface is implemented