Workshop on αW and TokenScript Engine tech choices

This is part of the effort to lift AlphaWallet from a blockchain wallet to a token wallet.

There are a few things I wish to discover through experimentation.

Local (cached) status

In AlphaWallet, can we build tokens as components instead of views? The challenge here is to store a token's state and keep it updated. For example, a) if a user's AAVE token is liquidated, the wallet updates the token's status cheaply and quickly, without requiring a condition such as the user opening the token. b) If the user loses an NFT, the linked token changes owner too.
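
The component idea above could be sketched as follows. This is a minimal illustration, not an existing αW API: all names (`TokenEvent`, `TokenComponent`, `TokenState`) are hypothetical.

```kotlin
// Hypothetical sketch: a token modeled as a stateful component rather than a passive view.

sealed class TokenEvent {
    data class BalanceChanged(val newBalance: Long) : TokenEvent()
    data class OwnerChanged(val newOwner: String) : TokenEvent()
    object Liquidated : TokenEvent()
}

data class TokenState(
    val balance: Long = 0,
    val owner: String = "",
    val liquidated: Boolean = false,
)

class TokenComponent(initial: TokenState) {
    var state: TokenState = initial
        private set

    // Applied whenever an event arrives, even if no view is currently showing the token.
    fun apply(event: TokenEvent) {
        state = when (event) {
            is TokenEvent.BalanceChanged -> state.copy(balance = event.newBalance)
            is TokenEvent.OwnerChanged -> state.copy(owner = event.newOwner)
            TokenEvent.Liquidated -> state.copy(balance = 0, liquidated = true)
        }
    }
}
```

The point of the design is that `apply` runs on every incoming event, so a liquidation or ownership change is reflected in the cached state regardless of what the user is looking at.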

Without such a design, one of two things must happen:

a) when the user isn't connected to the blockchain, such as when the user is on a plane, the AAVE token won't show; or

b) a jarring experience, such as the user opening their AAVE token, seeing a value of $3,000, and watching it update to no value at all within a few seconds.

Without such a design, it's also difficult to write TokenScript or wallet logic triggered by a state update: the logic will not execute unless the user is looking at the token.
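
State-triggered logic could look roughly like this. Again a hypothetical sketch, not the TokenScript engine itself: triggers are registered against a local state store and fire on every update, with no UI attached.

```kotlin
// Hypothetical sketch: wallet/TokenScript logic triggered by state updates.

typealias StateTrigger = (oldBalance: Long, newBalance: Long) -> Unit

class TokenStateStore {
    private val triggers = mutableListOf<StateTrigger>()
    var balance: Long = 0
        private set

    fun onChange(trigger: StateTrigger) { triggers += trigger }

    fun update(newBalance: Long) {
        val old = balance
        balance = newBalance
        triggers.forEach { it(old, newBalance) }  // fires even when nobody is looking
    }
}
```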

Elevator compatibility

The typical scenario is:

  1. A user receives status updates, e.g. portfolio value updates.
  2. The user enters an elevator and the Wi-Fi connection drops.
  3. The user gets out of the elevator; the 4G connection resumes with a new IP address.
  4. From this point on, the status stops updating.

It's unwise to write a connection manager ourselves. Instead, we need a message queue or event stream designed for this scenario.

Combining local (cached) status with elevator compatibility

Ideally, we want a technical framework that provides a) an event stream based on topics, and b) a local database offering a view of token data derived from past events.

Kafka claims to support this, but we came to learn that the support is server infrastructure, not client infrastructure: locally (on a mobile phone), Kafka does not create a database view derived from past events. The lesson is that even Kafka users create a local SQL database to store events for later view queries.
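
The pattern this learning points to is event sourcing: persist raw events in a local store (in practice a SQL or Realm table), then derive the current token view by folding over them. A minimal sketch, with hypothetical names:

```kotlin
// Sketch: store events locally, rebuild the token view by replaying them.

data class PortfolioEvent(val token: String, val delta: Long)

// Replaying past events yields the current view, even when fully offline.
fun buildView(events: List<PortfolioEvent>): Map<String, Long> =
    events.fold(mutableMapOf<String, Long>()) { view, e ->
        view[e.token] = (view[e.token] ?: 0L) + e.delta
        view
    }
```

In a real implementation the event list would be the rows of a local database table, so the view can be reconstructed after an app restart without touching the network.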

Decisions based on the learning

  • A message queue system and a local database should be selected for the αW architecture;
  • Both should be well supported by KMM, so the database access code can be written only once.
  • Since we use AWS extensively, we must consider message queue systems compatible with, or offered by, AWS.

Investigation into suitable databases and message queues

  • For now, Seaborn works with MQTT and the REALM database. KMM supports REALM well, but its support for MQTT is unclear.
  • We must update or redesign the existing REALM database to store local token data.

Considerations for local database

Each token might have its own status and properties, and there are use cases where these are indexed. An example is someone who owns 100 ENS names ranking them by last traded price.
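
The ENS example amounts to sorting on a per-token property. A sketch, with hypothetical names (`EnsName`, `lastTradedPrice`):

```kotlin
// Hypothetical sketch: ranking owned ENS names by their last traded price.

data class EnsName(val name: String, val lastTradedPrice: Double)

fun rankByLastTraded(names: List<EnsName>): List<EnsName> =
    names.sortedByDescending { it.lastTradedPrice }
```

At 100 names an in-memory sort is trivial; the database question is whether such ad-hoc properties can be indexed at all, which motivates the flexible structure discussed next.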

Therefore a pre-defined structure like SQL wouldn't fit. Some non-SQL (NoSQL) databases, such as AWS DynamoDB and Kafka, have a flexible structure: for each record, every piece of data except the identifier can be described as an object (or JSON structure) with no limit on which fields exist. This also creates the flexibility to add new properties to a token without altering a SQL table.
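
The record shape described above can be sketched like this: only the identifier is fixed, and everything else lives in a schemaless property map, so new token properties need no schema migration. `TokenRecord` is an illustrative name, not a DynamoDB or REALM type.

```kotlin
// Sketch of a flexible token record: fixed identifier, schemaless properties.

data class TokenRecord(
    val id: String,                                    // the only mandatory, indexed field
    val properties: MutableMap<String, Any> = mutableMapOf(),
)

fun addProperty(record: TokenRecord, key: String, value: Any) {
    record.properties[key] = value                     // no table alteration needed
}
```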