Postgres for Data and Events

Installation

Environment variables

These environment variables need to be provided for the indexer to work:

  • RPC_HTTP_URL (required): The URL to access the blockchain using HTTP. Sample value (using anvil running on the host): http://host.docker.internal:8545 (when running in Docker).
  • RPC_WS_URL (optional): The URL to access the blockchain using WebSocket.
  • START_BLOCK (optional): The block to start indexing from. The block in which the World contract was deployed is a good choice. Sample value: 1.
  • DEBUG=mud:* (optional): Turn on debugging.
  • STORE_ADDRESS (optional): Only index tables from this World.
  • DATABASE_URL (required): URL for the database, of the form postgres://<host>/<database>.
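
For example, to index a local anvil node into a local PostgreSQL database named postgres, the variables might be set like this (a sketch; the World address is a placeholder, and it assumes anvil serves WebSocket on the same port as HTTP):

# HTTP and WebSocket endpoints of the local anvil node
export RPC_HTTP_URL=http://127.0.0.1:8545
export RPC_WS_URL=ws://127.0.0.1:8545
# Local PostgreSQL database named "postgres"
export DATABASE_URL=postgres://127.0.0.1/postgres
# Optional: start from block 1 and only index this World (placeholder address)
export START_BLOCK=1
export STORE_ADDRESS=0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b
# Optional: verbose MUD debug output (quoted so the shell does not expand the *)
export DEBUG='mud:*'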

Using npx

This indexer dynamically creates a PostgreSQL table every time it encounters a new MUD table. It then decodes the MUD events and stores them in the PostgreSQL table with a matching schema. This approach doesn't scale well to all tables of a chain, but it is a convenient way to index the tables of a particular MUD World and have access to powerful SQL queries on its data.

  1. Start a World to index. An easy way to do this is to use a TypeScript template in a separate command-line window (see the sketch after this list).

  2. Set the environment variables and start the indexer, installing it if necessary.

    export RPC_HTTP_URL=http://127.0.0.1:8545
    export DATABASE_URL=postgres://127.0.0.1/postgres
    npx -y -p @latticexyz/store-indexer postgres-decoded-indexer
  3. Open a separate command line. In it, specify the database and run the indexer frontend, which is responsible for serving the data to the client.

    export DATABASE_URL=postgres://127.0.0.1/postgres
    npx -y -p @latticexyz/store-indexer postgres-frontend
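
As mentioned in step 1, one way to get a World to index is to create and run a project from a MUD template (a sketch; the project name my-world is arbitrary, and the exact prompts and flags may differ between MUD versions):

# Create a new project from a MUD template (the installer asks which template to use)
pnpm create mud@latest my-world
# Run the local chain and deploy the World
cd my-world
pnpm dev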

Docker

The indexer Docker image is available on GitHub.

There are several ways to provide the environment variables to docker run:

  • On the command line you can specify -e <variable>=<value>. This goes after docker run but before the name of the image.
  • You can also write all the environment variables in a file and point to it with --env-file, which likewise goes after docker run and before the image name (see the example after this list).
  • Both Docker Compose and Kubernetes have their own mechanisms for starting Docker containers with environment variables.
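
For example, the --env-file approach might look like this, with the environment variables in a file (the file name indexer.env is made up for illustration):

# indexer.env
RPC_HTTP_URL=http://host.docker.internal:8545
DATABASE_URL=postgres://host.docker.internal/postgres

and the container started with:

docker run --env-file indexer.env -p 3001:3001 \
  ghcr.io/latticexyz/store-indexer:latest \
  pnpm start:postgres-decoded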

The easiest way to test the indexer is to run the template as a World in a separate command-line window.

The command to start the indexer in data and events PostgreSQL mode is pnpm start:postgres-decoded. This command starts both the indexer and the query frontend.

  1. The Docker container identifies itself to PostgreSQL as root. To give this user permissions on the database, enter psql and run this command.

    CREATE ROLE root SUPERUSER LOGIN;

    Note: This assumes a database that is isolated from the internet and only used by trusted entities. In a production system you would use at least a password for authentication and limit the user's authority (see the sketch after this list).

  2. Start the docker container. For example, to index an anvil instance running on the host into the postgres database on the host, use:

    docker run \
      --platform linux/amd64 \
      -e RPC_HTTP_URL=http://host.docker.internal:8545 \
      -e DATABASE_URL=postgres://host.docker.internal/postgres \
      -p 3001:3001  \
      ghcr.io/latticexyz/store-indexer:latest  \
      pnpm start:postgres-decoded
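
As noted in step 1, a production database should require a password and grant the indexer only what it needs. A minimal sketch (the role name mud_indexer and the password are placeholders; because the indexer creates schemas and tables dynamically, it at least needs CREATE on the database):

-- Role with password authentication instead of a passwordless superuser
CREATE ROLE mud_indexer LOGIN PASSWORD 'change-me';
-- Let the indexer connect and create its per-World schemas
GRANT CONNECT, CREATE ON DATABASE postgres TO mud_indexer;
-- DATABASE_URL would then be postgres://mud_indexer:change-me@<host>/postgres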

Using the indexer

See here for how to use the indexer with a MUD client.

Viewing events

You can run this command to verify the indexer is working correctly.

curl 'http://localhost:3001/api/logs?input=%7B%22chainId%22%3A31337%2C%22address%22%3A%220x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b%22%2C%22filters%22%3A%5B%5D%7D' | jq

The result should be nicely formatted (and long) JSON output with all the data change events that happened in that World.

Where does this URL come from?

The URL has these parameters:

  • Server: http://localhost:3001. By default the indexer listens on port 3001.
  • Path: /api/logs. Read log events.
  • input: %7B%22 ... %5D%7D. See below.

The input is the JSON filter that tells the server what we need. It is URL encoded; you can decode it with any online URL decoder.

{
  "chainId": 31337,
  "address": "0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b",
  "filters": []
}

This means the query is for all events in the World at address 0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b, on the chain with chain ID 31337.
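
You don't have to URL encode the input by hand. For example, curl can do the encoding for you using its -G and --data-urlencode options:

curl -G 'http://localhost:3001/api/logs' \
  --data-urlencode 'input={"chainId":31337,"address":"0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b","filters":[]}' \
  | jq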

Reading the data

  1. Run psql.

  2. List the schemas.

    \dn

    Result:

                      List of schemas
                      Name                    | Owner
    --------------------------------------------+-------
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | qbzzt
    mud                                        | qbzzt
    (2 rows)
  3. Switch to the schema for your World.

    SET search_path TO "0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b";
  4. Get the list of tables.

    \dt

    Result (when using the React template):

                                     List of relations
                      Schema                   |           Name            | Type  | Owner
    --------------------------------------------+---------------------------+-------+-------
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | app__tasks                | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | store__resource_ids       | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | store__store_hooks        | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | store__tables             | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__balances           | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__function_selector  | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__function_signatur  | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__init_module_addres | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__installed_modules  | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__namespace_delegat  | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__namespace_owner    | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__resource_access    | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__system_hooks       | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__system_registry    | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__systems            | table | qbzzt
    0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b | world__user_delegation_co | table | qbzzt
    (16 rows)
  5. Read actual data.

    SELECT description FROM app__tasks;

    Result:

        description
    --------------------
    Walk the dog
    Take out the trash
    Do the dishes
    (3 rows)
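
Instead of setting search_path (step 3), you can also qualify the table with the schema name directly. For example:

SELECT description FROM "0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b".app__tasks;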

Clearing the information

If you restart the blockchain, you need to clear all the information stored by the indexer; otherwise, you'll have an inconsistent state.

  1. Get the list of Worlds using this command:

    SELECT DISTINCT address FROM mud.records;
  2. Delete those schemas. For example, if you use one of the TypeScript templates, run this command:

    DROP SCHEMA "0x8d8b6b8414e1e3dcfd4168561b9be6bd3bf6ec4b" CASCADE;
  3. Drop the mud schema.

    DROP SCHEMA mud CASCADE;
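
If the indexer has tracked many Worlds, steps 1 and 2 can be combined with psql's \gexec meta-command, which executes each row of the query result as a statement. This is a sketch; it assumes the addresses stored in mud.records match the schema names exactly:

-- Build one DROP SCHEMA statement per indexed World and execute them
SELECT format('DROP SCHEMA %I CASCADE', address)
  FROM (SELECT DISTINCT address FROM mud.records) AS worlds \gexec

Run this before dropping the mud schema, since it reads mud.records.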