
Quick and Easy Subgraph Debugging Using Forks

As with many systems processing large amounts of data, The Graph’s Indexers (Graph Nodes) may take quite some time to sync up your Subgraph with the target blockchain. The discrepancy between quick changes made for debugging and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing Subgraph forking, developed by LimeChain, and in this article I will show you how this feature can be used to substantially speed up Subgraph debugging!

Ok, what is it?

              Subgraph forking is the process of lazily fetching entities from another Subgraph’s store (usually a remote one).

              In the context of debugging, Subgraph forking allows you to debug your failed Subgraph at block X without needing to wait to sync-up to block X.
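Conceptually, “lazy fetching” means the local store is consulted first, and the fork is queried only on a miss, with the result cached locally. Here is a minimal TypeScript sketch of that idea (the `ForkingStore` class and its API are illustrative, not graph-node’s actual implementation):

```typescript
// Illustrative sketch of lazy entity fetching; not graph-node's real API.
type Entity = Record<string, string>;

class ForkingStore {
  private local = new Map<string, Entity>();

  // fetchFromFork stands in for a GraphQL query against the fork endpoint.
  constructor(private fetchFromFork: (id: string) => Entity | undefined) {}

  get(id: string): Entity | undefined {
    const cached = this.local.get(id);
    if (cached !== undefined) return cached; // local hit: no remote call
    const remote = this.fetchFromFork(id); // local miss: lazily fetch from the fork
    if (remote !== undefined) this.local.set(id, remote); // cache for next time
    return remote;
  }
}
```

A handler reading an entity at block X therefore sees the remote, synced-up state without the local node ever having indexed the blocks before X.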

How?

When you deploy a Subgraph to a remote Graph Node for indexing and it fails at block X, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced up to block X. That’s great! This means we can take advantage of this “up-to-date” store to fix the bugs arising when indexing block X.

In a nutshell, we are going to fork the failing Subgraph from a remote Graph Node that is guaranteed to have it indexed up to block X, giving the locally deployed Subgraph being debugged an up-to-date view of the indexing state at block X.

Show me some code!

To stay focused on Subgraph debugging, let’s keep things simple and run along with the example Subgraph indexing the Ethereum Gravity smart contract.

              Here are the handlers defined for indexing Gravatars, with no bugs whatsoever:

```typescript
export function handleNewGravatar(event: NewGravatar): void {
  let gravatar = new Gravatar(event.params.id.toHex().toString())
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}

export function handleUpdatedGravatar(event: UpdatedGravatar): void {
  let gravatar = Gravatar.load(event.params.id.toI32().toString())
  if (gravatar == null) {
    log.critical('Gravatar not found!', [])
    return
  }
  gravatar.owner = event.params.owner
  gravatar.displayName = event.params.displayName
  gravatar.imageUrl = event.params.imageUrl
  gravatar.save()
}
```

Oops, how unfortunate: when I deploy my perfect-looking Subgraph to Subgraph Studio, it fails with the “Gravatar not found!” error.

The usual way you would attempt a fix:

1. Make a change in the mappings source, which you believe will solve the issue (even while knowing it won’t).
2. Re-deploy the Subgraph to Subgraph Studio (or another remote Graph Node).
3. Wait for it to sync up.
4. If it breaks again, go back to 1.

It is indeed pretty similar to an ordinary debugging process, but there is one step that horribly slows things down: 3. Wait for it to sync up.

              Using Subgraph forking we can essentially eliminate this step. Here is how it looks:

1. Spin up a local Graph Node with the appropriate fork-base set.
2. Make a change in the mappings source, which you believe will solve the issue.
3. Deploy to the local Graph Node, forking the failing Subgraph and starting from the problematic block.
4. If it breaks again, go back to 1.

Now, two questions arise:

1. What is fork-base?
2. And what about forking?

Answers:

1. fork-base is the “base” URL, such that when the subgraph id is appended, the resulting URL (<fork-base>/<subgraph-id>) is a valid GraphQL endpoint for the Subgraph’s store.
2. Forking is easy, no need to sweat:

```shell
$ graph deploy <subgraph-name> --debug-fork <subgraph-id> --ipfs http://localhost:5001 --node http://localhost:8020
```
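The URL composition from answer 1 can be illustrated in a few lines of TypeScript (`forkEndpoint` is a made-up helper name, not part of any Graph tooling):

```typescript
// How fork-base and the subgraph id compose into the store's
// GraphQL endpoint: <fork-base>/<subgraph-id>.
function forkEndpoint(forkBase: string, subgraphId: string): string {
  return forkBase.endsWith("/") ? forkBase + subgraphId : forkBase + "/" + subgraphId;
}

console.log(
  forkEndpoint(
    "https://api.thegraph.com/subgraphs/id/",
    "QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW"
  )
);
// https://api.thegraph.com/subgraphs/id/QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW
```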

              Also, don’t forget to set the dataSources.source.startBlock field in the Subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
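In subgraph.yaml, that fragment might look like this (a sketch: the contract address is deliberately left as a placeholder, and 6190343 is the problematic block from this walkthrough):

```yaml
# Fragment of subgraph.yaml (illustrative; address omitted on purpose):
dataSources:
  - kind: ethereum/contract
    name: Gravity
    network: mainnet
    source:
      address: "<gravity-contract-address>"
      abi: Gravity
      startBlock: 6190343 # start right at the problematic block
```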

So, here is what I do:

1. I spin up a local Graph Node (here is how to do it) with the fork-base option set to https://api.thegraph.com/subgraphs/id/, since I will fork a Subgraph (the buggy one I deployed earlier) from Subgraph Studio.

```shell
$ cargo run -p graph-node --release -- \
    --postgres-url postgresql://USERNAME[:PASSWORD]@localhost:5432/graph-node \
    --ethereum-rpc NETWORK_NAME:[CAPABILITIES]:URL \
    --ipfs 127.0.0.1:5001 \
    --fork-base https://api.thegraph.com/subgraphs/id/
```

2. After careful inspection, I notice a mismatch in the id representations used when indexing Gravatars in my two handlers: while handleNewGravatar converts the id to hex (event.params.id.toHex()), handleUpdatedGravatar uses an int32 (event.params.id.toI32()), which causes handleUpdatedGravatar to panic with “Gravatar not found!”. I make them both convert the id to hex.
3. After making the changes, I deploy my Subgraph to the local Graph Node, forking the failing Subgraph and setting dataSources.source.startBlock to 6190343 in subgraph.yaml:

```shell
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```

4. I inspect the logs produced by the local Graph Node and, hooray, everything seems to be working!
5. I deploy my now bug-free Subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
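The root cause found in step 2, two different string renderings of the same numeric id, can be reproduced outside of a Subgraph. In this standalone TypeScript sketch, `toHexKey` and `toI32Key` are simplified stand-ins for the `toHex()` and `toI32()` calls in the handlers above:

```typescript
// Simplified stand-ins for the two id renderings used by the handlers.
const toHexKey = (id: number): string => "0x" + id.toString(16);
const toI32Key = (id: number): string => id.toString();

const store = new Map<string, string>();

// handleNewGravatar saves the entity under the hex key...
store.set(toHexKey(42), "gravatar #42");

// ...but handleUpdatedGravatar loads with the decimal key and misses:
console.log(store.get(toI32Key(42))); // undefined -> "Gravatar not found!"

// Once both handlers use the hex rendering, the lookup succeeds:
console.log(store.get(toHexKey(42))); // "gravatar #42"
```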