📥 Our Current Indexer
The RFC process is intended to provide consistent rules for people writing inscription indexers. Discussion - https://github.com/evm-ink/docs/issues/2
The inscriptions indexer is intended to be an improvement over the Ethscription protocol. We aim to be stricter and to make indexer development more manageable by providing better rules with examples.
The original idea was to use the transaction calldata to create and transfer the inscription.
This is the creation of an inscription; the inscription id is 0xf9e47da676fc089caa2de5846f3836bc1df2ea184d87fffd9b691de2e8b73ef0.
This is the transfer of the inscription, where the calldata is 0xf9e47da676fc089caa2de5846f3836bc1df2ea184d87fffd9b691de2e8b73ef0 (the id of the inscription).
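To make the encoding concrete, here is a minimal sketch (the "data:,hello" payload is a made-up example; the transfer id is the one from above):

```ts
// Hypothetical illustration of the calldata-based protocol described above.
// Creation: the transaction calldata is the hex-encoded bytes of a Data URL.
const creationCalldata = "0x" + Buffer.from("data:,hello", "utf-8").toString("hex");
// -> 0x646174613a2c68656c6c6f

// Transfer: the calldata is the 32-byte inscription id (the creation tx hash).
const transferCalldata =
  "0xf9e47da676fc089caa2de5846f3836bc1df2ea184d87fffd9b691de2e8b73ef0";
```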
As elegant as it is, this approach creates artificial limitations, because it can only be used by EOAs (externally owned accounts) and not by smart contracts. This means that:
We cannot send inscriptions from our multi-sig wallets
We cannot create multiple inscriptions in one transaction
We cannot perform atomic swaps (exchanging one inscription for another, including swapping an inscription for money)
In short, anything that contracts could help us with is impossible with calldata-only inscriptions.
The current workaround is contract logs; this is how the inscription marketplace works today. The marketplace contract emits a log, and the inscription backend listens for events from that contract. If you have a smart contract that wants to use inscriptions, you have to raise events as well and run another indexer that listens for events from your contract. For users, this defeats the purpose of inscriptions in the first place, because the idea was to build a meta layer on top of the existing layer.
Logs exist to cover the limitation of not knowing what contracts send to each other, but there is a way to do this without logs and make the protocol even more powerful.
Instead of using the transaction input (calldata), we propose to use "internal" transaction calldata. In the EVM this is called a trace, and you can query it using the existing trace_* RPCs, such as trace_replayBlockTransactions. What is nice about this method is that it still works with the old inscription protocol. Under the new protocol, an inscription is identified by the transaction hash plus the internal transaction index, like 0xf9e47da676fc089caa2de5846f3836bc1df2ea184d87fffd9b691de2e8b73ef0:0. If the internal transaction index is missing, it is considered zero.
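As a sketch of how an indexer might consume this, the block can be replayed and each transaction's flat trace list walked. We assume here that the internal transaction index is simply the position in that flat list (which matches the 50-mint example below); RPC_URL is a placeholder and error handling is omitted:

```ts
// Minimal sketch: list internal calldata for every transaction in a block
// using trace_replayBlockTransactions (OpenEthereum/Erigon-style trace API).
const RPC_URL = "http://localhost:8545";

async function listInternalCalldata(blockNumber: number): Promise<void> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "trace_replayBlockTransactions",
      params: ["0x" + blockNumber.toString(16), ["trace"]],
    }),
  });
  const { result } = await res.json();

  for (const tx of result) {
    // The flat trace list starts with the top-level call at index 0,
    // which is why "txHash" alone is treated as "txHash:0".
    tx.trace.forEach((t: any, index: number) => {
      if (t.action?.input) {
        console.log(`${tx.transactionHash}:${index}`, t.action.input);
      }
    });
  }
}
```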
This is an example of a transaction where someone tried to mint an erc-20 multiple times within one transaction; you can see it by clicking "Decode Input Data". And here you can see all the internal transactions, with the internal "calldata" of each call. Under the Ethscription protocol these are not valid Ethscription creations, but our EVM INK protocol recognizes them as valid and records 50 new inscriptions:
0x849d325e36d670ae284d6102d2d5fad94caaa1543f14e016921244133d48713a:1 - data:,{"p":"erc-20","op":"mint","tick":"fair","id":"17560","amt":"1000"}
...
0x849d325e36d670ae284d6102d2d5fad94caaa1543f14e016921244133d48713a:50 - data:,{"p":"erc-20","op":"mint","tick":"fair","id":"17609","amt":"1000"}
Etherscan will show 51 trace records, because the first record is the call to the contract itself.
Data URL encoding
If you read the Data URL RFC, you will notice that it defines reserved characters that must be encoded for a string to be considered a valid Data URL.
Here are the reserved characters:
! # $ & ' ( ) * + , / : ; = ? @ [ ]
Since JSON itself relies on several of these characters (: , [ ] and so on), raw JSON that has not been percent-encoded is, strictly speaking, not a valid Data URL, which would make virtually every JSON inscription invalid by definition.
That is why we do NOT use strict Data URL encoding; we use a slightly more permissive encoding. You can read our implementation and tests here.
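For illustration only (the linked implementation and its tests remain the source of truth), a permissive parser might enforce just the overall data:[<mediatype>][;base64],<data> shape while allowing unescaped reserved characters in the payload:

```ts
// Illustrative permissive Data URL parser: unlike strict RFC 2397, the
// payload may contain unescaped reserved characters such as { } : , ".
// This sketch is NOT the canonical implementation referenced above.
const DATA_URL = /^data:([^;,]*)((?:;[^;,=]+=[^;,]*)*)(;base64)?,(.*)$/s;

function parsePermissive(uri: string) {
  const m = DATA_URL.exec(uri);
  if (!m) return null;
  return {
    mimeType: m[1] || "text/plain", // default per the Data URL RFC
    parameters: m[2],               // e.g. ";charset=utf-8"
    isBase64: Boolean(m[3]),
    data: m[4],                     // raw payload, reserved chars allowed
  };
}

// Accepted here even though the raw JSON is not a strict RFC 2397 Data URL:
parsePermissive('data:,{"p":"erc-20","op":"mint","tick":"fair","amt":"1000"}');
```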
Duplication detection problems
The Data URL standard is quite flexible and lets you do a lot, but the price of that flexibility is complexity. Imagine that we want to inscribe the 'Hello, World!' message. There are many ways to do it, and all of them are valid, as shown below.
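For example (illustrative encodings, all of which decode to the same string):

```
data:,Hello, World!
data:text/plain,Hello, World!
data:text/plain;charset=utf-8,Hello%2C%20World!
data:text/plain;base64,SGVsbG8sIFdvcmxkIQ==
```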
Would you consider them duplicates? Keep in mind that different charsets, such as UTF-16, can be used as well. And what about JSON? The three payloads below are different strings but equal JSON objects; do you consider them the same?
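An illustrative trio (made-up examples, not taken from chain data):

```
data:,{"a":1,"b":2}
data:,{"b":2,"a":1}
data:,{ "a": 1, "b": 2 }
```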
As you can see, we cannot simply hash the data and compare hashes.
Duplication detection steps
Instead of simply applying a hash function to the data, we first "prepare" the data and then hash it (a sketch of the whole pipeline follows these steps). We assume that the input is already known to be a valid inscription.
All parameters (in the format attribute=value, separated by semicolons ';') are stripped; the charset is always assumed to be UTF-8.
If the data is base64 encoded, it is decoded.
We then call the "prepare" function
prepare(mime_type: string, data: string): string
which returns a unified view of the data. We need this because not every data type is easy to compare; first, we must write rules for every mime type. Later, we will provide more rules for different mime types, with many examples, so you can write test cases in your preferred language. We are also planning to propose new mime types for tokens and collections. If the mime type is missing, it defaults to text/plain.
The last step is to hash the prepared data. In principle we could skip this, since the data is already unified, but storing and comparing hashes is simply more practical.
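Putting the steps together, a hypothetical sketch of the pipeline might look like this, assuming SHA-256 as the hash function and a placeholder JSON canonicalization rule (neither is fixed by this document):

```ts
import { createHash } from "node:crypto";

// Hypothetical sketch of the duplicate-detection pipeline described above.
// SHA-256 and the JSON key-sorting rule are assumptions, not the final spec.
function dedupeHash(mimeType: string, isBase64: boolean, data: string): string {
  // Step 1 happens before this function is called: all attribute=value
  // parameters are stripped and the charset is assumed to be UTF-8.

  // Step 2: if the payload is base64 encoded, decode it.
  const raw = isBase64 ? Buffer.from(data, "base64").toString("utf-8") : data;

  // Step 3: prepare() produces a unified view of the data per mime type.
  // A missing mime type defaults to text/plain.
  const prepared = prepare(mimeType || "text/plain", raw);

  // Step 4: hash the prepared data; hashes are cheap to store and compare.
  return createHash("sha256").update(prepared).digest("hex");
}

function prepare(mimeType: string, data: string): string {
  switch (mimeType) {
    case "application/json":
      // Placeholder rule: sort keys and re-serialize so equal JSON objects
      // hash identically regardless of key order or whitespace.
      return JSON.stringify(sortKeys(JSON.parse(data)));
    default:
      // text/plain and any type without a published rule: raw string as-is.
      return data;
  }
}

function sortKeys(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sortKeys);
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>)
        .sort(([a], [b]) => a.localeCompare(b))
        .map(([k, v]) => [k, sortKeys(v)])
    );
  }
  return value;
}
```

With a rule like this, the three JSON payloads from the previous section produce identical hashes even though the raw strings differ.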