
Advanced Subgraph Features


Overview


Add and implement advanced subgraph features to enhance your subgraph's build.

Starting from specVersion 0.0.4, subgraph features must be explicitly declared in the features section at the top level of the manifest file, using their camelCase name, as listed in the table below:

Feature             Name
Non-fatal errors    nonFatalErrors
Full-text Search    fullTextSearch
Grafting            grafting

For instance, if a subgraph uses the Full-Text Search and the Non-fatal Errors features, the features field in the manifest should be:

specVersion: 0.0.4
description: Gravatar for Ethereum
features:
  - fullTextSearch
  - nonFatalErrors
dataSources: ...

Note that using a feature without declaring it will incur a validation error during subgraph deployment, but no errors will occur if a feature is declared but not used.

Timeseries and Aggregations


Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc.

This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.

type Data @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
  price: BigDecimal!
}

type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
  id: Int8!
  timestamp: Timestamp!
  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
}

Defining Timeseries and Aggregations


Timeseries entities are defined with @entity(timeseries: true) in schema.graphql. Every timeseries entity must have a unique ID of the Int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by aggregation entities. These timeseries entities can be saved in regular trigger handlers, and act as the "raw data" for the aggregation entities.
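
For illustration, a regular trigger handler that records one Data point per event could look roughly like the sketch below. The PriceUpdated binding and its params are hypothetical, and the constructor argument is a placeholder, since the id and timestamp columns of timeseries entities are managed by graph-node:

import { Data } from '../generated/schema'
// Hypothetical event binding; substitute your own generated contract types.
import { PriceUpdated } from '../generated/Oracle/Oracle'

export function handlePriceUpdated(event: PriceUpdated): void {
  // The constructor id is a placeholder: timeseries rows receive an
  // autoincremented id and the current block timestamp automatically.
  let point = new Data(0)
  point.price = event.params.price.toBigDecimal()
  point.save()
}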

Aggregation entities are defined with @aggregation in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval.

Available Aggregation Intervals

  • hour: sets the timeseries period every hour, on the hour.
  • day: sets the timeseries period every day, starting and ending at 00:00.

Available Aggregation Functions

  • sum: Total of all values.
  • count: Number of values.
  • min: Minimum value.
  • max: Maximum value.
  • first: First value in the period.
  • last: Last value in the period.

Example Aggregations Query

{
  stats(interval: "hour", where: { timestamp_gt: 1704085200 }) {
    id
    timestamp
    sum
  }
}

Note:

To use Timeseries and Aggregations, a subgraph must have a spec version ≥1.1.0. Note that this feature might undergo significant changes that could affect backward compatibility.

Read more about Timeseries and Aggregations.

Non-fatal errors


Indexing errors on already synced subgraphs will, by default, cause the subgraph to fail and stop syncing. Subgraphs can alternatively be configured to continue syncing in the presence of errors, by ignoring the changes made by the handler which provoked the error. This gives subgraph authors time to correct their subgraphs while queries continue to be served against the latest block, though the results might be inconsistent due to the bug that caused the error. Note that some errors are still always fatal. To be non-fatal, the error must be known to be deterministic.

Note: The Graph Network does not yet support non-fatal errors, and developers should not deploy subgraphs using that functionality to the network via the Studio.

Enabling non-fatal errors requires setting the following feature flag on the subgraph manifest:

specVersion: 0.0.4
description: Gravatar for Ethereum
features:
  - nonFatalErrors
...

The query must also opt in to querying data with potential inconsistencies through the subgraphError argument. It is also recommended to query _meta to check whether the subgraph has skipped over errors, as in the example:

foos(first: 100, subgraphError: allow) {
  id
}

_meta {
  hasIndexingErrors
}

If the subgraph encounters an error, that query will return both the data and a graphql error with the message "indexing_error", as in this example response:

"data": {
"foos": [
{
"id": "0xdead"
}
],
"_meta": {
"hasIndexingErrors": true
}
},
"errors": [
{
"message": "indexing_error"
}
]

IPFS/Arweave File Data Sources


File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.

This also lays the groundwork for deterministic indexing of off-chain data, as well as the potential introduction of arbitrary HTTP-sourced data.

Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found.

This is similar to the existing data source templates, which are used to dynamically create new chain-based data sources.

This replaces the existing ipfs.cat API.
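
As a rough before/after sketch (TokenMetadataTemplate is the file data source template defined later in this guide, and processMetadata is an illustrative helper, not a real API):

import { ipfs } from '@graphprotocol/graph-ts'
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

export function processMetadata(cid: string): void {
  // Old pattern: fetch the file inline, blocking handler execution.
  // let raw = ipfs.cat(cid)

  // New pattern: spawn a file data source; its dedicated handler runs
  // once the file has been fetched, with retries on failure.
  TokenMetadataTemplate.create(cid)
}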

Upgrade guide


Update graph-ts and graph-cli


File data sources require graph-ts >=0.29.0 and graph-cli >=0.33.1.

Add a new entity type which will be updated when files are found


File data sources cannot access or update chain-based entities, but must update file-specific entities.

In practice this means splitting fields out of existing entities into separate entities, linked together.

Original combined entity:

type Token @entity {
  id: ID!
  tokenID: BigInt!
  tokenURI: String!
  externalURL: String!
  ipfsURI: String!
  image: String!
  name: String!
  description: String!
  type: String!
  updatedAtTimestamp: BigInt
  owner: User!
}

New, split entity:

type Token @entity {
  id: ID!
  tokenID: BigInt!
  tokenURI: String!
  ipfsURI: TokenMetadata
  updatedAtTimestamp: BigInt
  owner: String!
}

type TokenMetadata @entity {
  id: ID!
  image: String!
  externalURL: String!
  name: String!
  description: String!
}

If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to a resulting file entity by using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities!

You can use nested filters to filter parent entities on the basis of these nested entities.

Add a new templated data source with kind: file/ipfs or kind: file/arweave


This is the data source which will be spawned when a file of interest is identified.

templates:
  - name: TokenMetadata
    kind: file/ipfs
    mapping:
      apiVersion: 0.0.7
      language: wasm/assemblyscript
      file: ./src/mapping.ts
      handler: handleMetadata
      entities:
        - TokenMetadata
      abis:
        - name: Token
          file: ./abis/Token.json

Currently, abis are required, though it is not possible to call contracts from within file data sources.

The file data source must specifically mention all the entity types which it will interact with under entities. See limitations for more details.

Create a new handler to process files


This handler should accept one Bytes parameter, which will be the contents of the file when it is found; the handler can then process the file. This will often be a JSON file, which can be processed with graph-ts helpers (documentation).

The CID of the file as a readable string can be accessed via the dataSource as follows:

const cid = dataSource.stringParam()

An example handler:

import { json, Bytes, dataSource } from '@graphprotocol/graph-ts'
import { TokenMetadata } from '../generated/schema'

export function handleMetadata(content: Bytes): void {
  let tokenMetadata = new TokenMetadata(dataSource.stringParam())
  const value = json.fromBytes(content).toObject()
  if (value) {
    const image = value.get('image')
    const name = value.get('name')
    const description = value.get('description')
    const externalURL = value.get('external_url')
    if (name && image && description && externalURL) {
      tokenMetadata.name = name.toString()
      tokenMetadata.image = image.toString()
      tokenMetadata.externalURL = externalURL.toString()
      tokenMetadata.description = description.toString()
    }
    tokenMetadata.save()
  }
}

Spawn file data sources when required


You can now create file data sources during execution of chain-based handlers:

  • Import the template from the auto-generated templates
  • Call TemplateName.create(cid: string) from within a mapping, where the cid is a valid content identifier for IPFS or Arweave

For IPFS, Graph Node supports v0 and v1 content identifiers, and content identifiers with directories (e.g. bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json).

For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their transaction ID from an Arweave gateway (example file). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on Irys manifests.

Example:

import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'
import { Token } from '../generated/schema'
// TransferEvent comes from the generated contract bindings; the import path
// depends on your codegen output.

const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
// This example code is for a Crypto coven subgraph. The above ipfs hash is a directory with token metadata for all crypto coven NFTs.

export function handleTransfer(event: TransferEvent): void {
  let token = Token.load(event.params.tokenId.toString())
  if (!token) {
    token = new Token(event.params.tokenId.toString())
    token.tokenID = event.params.tokenId
    token.tokenURI = '/' + event.params.tokenId.toString() + '.json'
    const tokenIpfsHash = ipfshash + token.tokenURI
    // This creates a path to the metadata for a single Crypto coven NFT. It concats the directory with "/" + filename + ".json"
    token.ipfsURI = tokenIpfsHash
    TokenMetadataTemplate.create(tokenIpfsHash)
  }
  token.updatedAtTimestamp = event.block.timestamp
  token.owner = event.params.to.toHexString()
  token.save()
}

This will create a new file data source, which will poll Graph Node's configured IPFS or Arweave endpoint, retrying if it is not found. When the file is found, the file data source handler will be executed.

This example is using the CID as the lookup between the parent Token entity and the resulting TokenMetadata entity.

Previously, this is the point at which a subgraph developer would have called ipfs.cat(CID) to fetch the file.

Congratulations, you are using file data sources!

Deploying your subgraphs


You can now build and deploy your subgraph to any Graph Node >=v0.30.0-rc.0.

Limitations

File data source handlers and entities are isolated from other subgraph entities, ensuring that they are deterministic when executed, and ensuring no contamination of chain-based data sources. To be specific:

  • Entities created by file data sources are immutable, and cannot be updated
  • File data source handlers cannot access entities from other file data sources
  • Entities associated with file data sources cannot be accessed by chain-based handlers

While this constraint should not be problematic for most use-cases, it may introduce complexity for some. Please get in touch via Discord if you are having issues modelling your file-based data in a subgraph!

Additionally, it is not possible to create data sources from a file data source, be it an onchain data source or another file data source. This restriction may be lifted in the future.

Best practices


If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity, and save the Metadata entity using the IPFS hash as an ID.

You can use DataSource context when creating File Data Sources to pass extra information which will be available to the File Data Source handler.
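
A minimal sketch of that flow, reusing the TokenMetadata template from earlier; the 'tokenId' context key is purely illustrative:

import { DataSourceContext, dataSource } from '@graphprotocol/graph-ts'
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

// In a chain-based handler: attach extra information when spawning.
export function spawnMetadataSource(cid: string, tokenId: string): void {
  let context = new DataSourceContext()
  context.setString('tokenId', tokenId)
  TokenMetadataTemplate.createWithContext(cid, context)
}

// In the file data source handler: read the value back.
export function readTokenIdFromContext(): string {
  return dataSource.context().getString('tokenId')
}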

If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity.

We are working to improve the above recommendation, so queries only return the "most recent" version.

File data sources currently require ABIs, even though ABIs are not used (issue). The workaround is to add any ABI.

Handlers for file data sources cannot be in files which import eth_call contract bindings, failing with "unknown import: ethereum::ethereum.call has not been defined" (issue). The workaround is to create file data source handlers in a dedicated file.

Examples

Crypto Coven Subgraph migration

References

GIP File Data Sources

Indexed Argument Filters / Topic Filters


Requires: SpecVersion >= 1.2.0

Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.

  • These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.

  • This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.

How Topic Filters Work


When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.

  • The event's first indexed argument corresponds to topic1, the second to topic2, and so on, up to topic3, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Token {
    // Event declaration with indexed parameters for addresses
    event Transfer(address indexed from, address indexed to, uint256 value);

    // Function to simulate transferring tokens
    function transfer(address to, uint256 value) public {
        // Emitting the Transfer event with from, to, and value
        emit Transfer(msg.sender, to, value);
    }
}

In this example:

  • The Transfer event is used to log transactions of tokens between addresses.
  • The from and to parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses.
  • The transfer function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called.

Configuration in Subgraphs


Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:

eventHandlers:
  - event: SomeEvent(indexed uint256, indexed address, indexed uint256)
    handler: handleSomeEvent
    topic1: ['0xValue1', '0xValue2']
    topic2: ['0xAddress1', '0xAddress2']
    topic3: ['0xValue3']

In this setup:

  • topic1 corresponds to the first indexed argument of the event, topic2 to the second, and topic3 to the third.
  • Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic.
  • Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic.
  • Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler.

Example 1: Tracking Direct Transfers from Address A to Address B

eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleDirectedTransfer
    topic1: ['0xAddressA'] # Sender Address
    topic2: ['0xAddressB'] # Receiver Address

In this configuration:

  • topic1 is configured to filter Transfer events where 0xAddressA is the sender.
  • topic2 is configured to filter Transfer events where 0xAddressB is the receiver.
  • The subgraph will only index transactions that occur directly from 0xAddressA to 0xAddressB.

Example 2: Tracking Transactions in Either Direction Between Two or More Addresses

eventHandlers:
  - event: Transfer(indexed address,indexed address,uint256)
    handler: handleTransferToOrFrom
    topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address
    topic2: ['0xAddressB', '0xAddressC'] # Receiver Address

In this configuration:

  • topic1 is configured to filter Transfer events where 0xAddressA, 0xAddressB, or 0xAddressC is the sender.
  • topic2 is configured to filter Transfer events where 0xAddressB or 0xAddressC is the receiver.
  • The subgraph will index transactions that occur in either direction between multiple addresses, allowing for comprehensive monitoring of interactions involving all addresses.

Declared eth_call

Note: This is an experimental feature that is not yet available in a stable Graph Node release. You can only use it in Subgraph Studio or your self-hosted node.

Declarative eth_calls are a valuable subgraph feature that allows eth_calls to be executed ahead of time, enabling graph-node to execute them in parallel.

This feature does the following:

  • Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
  • Allows faster data fetching, resulting in quicker query responses and a better user experience.
  • Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
Key concepts:

  • Declarative eth_calls: Ethereum calls that are defined to be executed in parallel rather than sequentially.
  • Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously.
  • Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel).

Scenario without Declarative eth_calls


Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.

Traditionally, these calls might be made sequentially:

  1. Call 1 (Transactions): Takes 3 seconds
  2. Call 2 (Balance): Takes 2 seconds
  3. Call 3 (Token Holdings): Takes 4 seconds

Total time taken = 3 + 2 + 4 = 9 seconds

Scenario with Declarative eth_calls


With this feature, you can declare these calls to be executed in parallel:

  1. Call 1 (Transactions): Takes 3 seconds
  2. Call 2 (Balance): Takes 2 seconds
  3. Call 3 (Token Holdings): Takes 4 seconds

Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call.

Total time taken = max(3, 2, 4) = 4 seconds

How it works:

  1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
  2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
  3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.

Example Configuration in Subgraph Manifest


Declared eth_calls can access the event.address of the underlying event as well as all the event.params.

Subgraph.yaml using event.address:

eventHandlers:
  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
    handler: handleSwap
    calls:
      global0X128: Pool[event.address].feeGrowthGlobal0X128()
      global1X128: Pool[event.address].feeGrowthGlobal1X128()

Details for the example above:

  • global0X128 is the declared eth_call.
  • The text (global0X128) is the label for this eth_call which is used when logging errors.
  • The text (Pool[event.address].feeGrowthGlobal0X128()) is the actual eth_call that will be executed, which is in the form of Contract[address].function(arguments).
  • The address and arguments can be replaced with variables that will be available when the handler is executed.

Subgraph.yaml using event.params:

calls:
  - ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
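
On the mapping side nothing changes syntactically: the handler performs the binding call as usual, and because the call was declared in the manifest, graph-node will already have executed it in parallel and can serve the cached result. A sketch, assuming generated Pool bindings for the Swap example above (import paths depend on your codegen output):

import { Pool, Swap as SwapEvent } from '../generated/Pool/Pool'

export function handleSwap(event: SwapEvent): void {
  let pool = Pool.bind(event.address)
  // Declared in the manifest above, so this lookup is served from the
  // results graph-node pre-fetched in parallel, not a fresh sequential call.
  let global0 = pool.feeGrowthGlobal0X128()
}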

Grafting onto Existing Subgraphs


Note: it is not recommended to use grafting when initially upgrading to The Graph Network. Learn more here.

When a subgraph is first deployed, it starts indexing events at the genesis block of the corresponding chain (or at the startBlock defined with each data source). In some circumstances, it is beneficial to reuse the data from an existing subgraph and start indexing at a much later block. This mode of indexing is called Grafting. Grafting is, for example, useful during development to get past simple errors in the mappings quickly or to temporarily get an existing subgraph working again after it has failed.

A subgraph is grafted onto a base subgraph when the subgraph manifest in subgraph.yaml contains a graft block at the top-level:

description: ...
graft:
  base: Qm... # Subgraph ID of base subgraph
  block: 7345624 # Block number

When a subgraph whose manifest contains a graft block is deployed, Graph Node will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on. The base subgraph must exist on the target Graph Node instance and must have indexed up to at least the given block. Because of this restriction, grafting should only be used during development or during an emergency to speed up producing an equivalent non-grafted subgraph.

Because grafting copies rather than indexes base data, it is much quicker to get the subgraph to the desired block than indexing from scratch, though the initial data copy can still take several hours for very large subgraphs. While the grafted subgraph is being initialized, the Graph Node will log information about the entity types that have already been copied.

The grafted subgraph can use a GraphQL schema that is not identical to the one of the base subgraph, but merely compatible with it. It has to be a valid subgraph schema in its own right, but may deviate from the base subgraph's schema in the following ways:

  • It adds or removes entity types
  • It removes attributes from entity types
  • It adds nullable attributes to entity types
  • It turns non-nullable attributes into nullable attributes
  • It adds values to enums
  • It adds or removes interfaces
  • It changes for which entity types an interface is implemented

Feature Management: grafting must be declared under features in the subgraph manifest.
