Mirror has the ambitious vision of redefining publishing online by empowering writers with the tools offered by crypto. We are delighted to share that we've officially closed our first full-stack engineering sprint: the MVP of the Mirror blogging platform. This will allow us to onboard our first writers and serve as the foundation for all the functionality in the future.
We have a lot to figure out over the next few months and years—from discovering new crypto-native business models for creators to supporting basic things like embedding NFTs. We knew that the starting point for the project was supporting a basic publishing experience: the ability to post text and share it with the world.
This post details our journey so far.
Our first commit to the backend API was 29 days ago, on December 7, 2020 — my (Graeme's) first official day at Mirror. Denis had been speaking with Trent about announcing his book on Mirror, and working with Jon-Kyle on designs and UI. We set an ambitious goal to have everything working for Trent by Christmas time — our gift to the world! 🎁
To ship the MVP, we needed to make crucial early technical decisions. Most importantly, how to create a world-class writing tool with a native web3 and crypto foundation.
One of the first challenges was thinking through building the platform from a web3 perspective. Here are the rules of the web3 game (via negativa style):
The architecture we came up with looks something like this:
I will now briefly break down how we solved each of these problems.
We decided to use the Arweave data storage protocol to store user content. Arweave provides permanent storage of data for a one-time cost at upload time. The data posted to Arweave includes all of the information required to retrieve all entries for a publication (including any changes made), and to verify authorship authenticity. (We have plans to publish the protocol specification along with an open-source tool that can assist with this retrieval and verification process, so that it will be trivial to migrate off of Mirror if that time ever comes.)
All entries are signed by a user's signing key (a non-Ethereum keypair that does not hold any direct economic value; more on this later). The signature is then published to Arweave, along with the digest that was signed and the content that went into the digest. The signing key itself is put into a message that asserts authorship rights, which is then signed by the writer's Ethereum address. All of this information is stored on Arweave, along with the entry's content — for every entry and update. Therefore, it's possible (and fairly easy) for anyone to verify whether an entry was authored by a given Ethereum account.
We're still iterating on this model, and it's obviously very early. Consider us in the "testnet" phase of this (along with everything else you read in this post).
We use ENS to provide domain ownership to the writer, who claims this ownership by burning an invite token (called $WRITE; expect more information on this soon) during onboarding. The process involves interacting with the $WRITE ERC20 token contract, which grants approval to an ENS registrar contract to burn the token just before registering the ENS label.
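The approve-then-burn-then-register sequence can be modeled with a toy in-memory simulation. The real $WRITE token and ENS registrar are on-chain contracts; the class names, method names, and one-token amount below are illustrative assumptions, not Mirror's actual interfaces.

```javascript
// Toy in-memory model of the onboarding flow: approve the registrar,
// which burns one $WRITE token and registers the ENS label.
class WriteToken {
  constructor() { this.balances = new Map(); this.allowances = new Map(); }
  mint(owner, amount) {
    this.balances.set(owner, (this.balances.get(owner) ?? 0n) + amount);
  }
  approve(owner, spender, amount) {
    this.allowances.set(`${owner}:${spender}`, amount);
  }
  burnFrom(spender, owner, amount) {
    const key = `${owner}:${spender}`;
    const allowed = this.allowances.get(key) ?? 0n;
    const balance = this.balances.get(owner) ?? 0n;
    if (allowed < amount || balance < amount) throw new Error('not allowed');
    this.allowances.set(key, allowed - amount);
    this.balances.set(owner, balance - amount); // burn: supply shrinks
  }
}

class EnsRegistrar {
  constructor(token) { this.token = token; this.labels = new Map(); }
  register(label, owner) {
    this.token.burnFrom('registrar', owner, 1n); // burn the invite token...
    this.labels.set(label, owner);               // ...then claim the label
  }
}

const write = new WriteToken();
const registrar = new EnsRegistrar(write);
write.mint('0xWriter', 1n);                 // writer holds one invite token
write.approve('0xWriter', 'registrar', 1n); // grant the registrar burn rights
registrar.register('graeme', '0xWriter');   // burn + register
console.log(registrar.labels.get('graeme')); // '0xWriter'
```

The key property this models: registration is impossible without first holding and approving the token, so the label claim doubles as proof of invitation.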
We didn't want writing an Entry on Mirror to be cumbersome — for example, requiring everything to be signed with a Ledger wallet (or Metamask). This matters especially because we imagine allowing users to take smaller, more frequent actions too, like commenting on or liking someone else's entry. We still want superb security, but we deemed requiring a hardware wallet for every piece of writing too dissuasive.
It's considered heresy to store a user's Ethereum private keys directly in the browser, and I would typically agree with that. It's not strictly true that you couldn't build a useful webapp this way (Dharma did it for a while), but the security burden will compound over time. This is mainly because there is no way to create and store a non-extractable Ethereum private key using the native WebCrypto API; it doesn't support secp256k1, the ECDSA curve that Ethereum uses. I don't know how this will play out in the future, but for now, I would say that this is something to avoid.
We avoid this security burden in two ways: day-to-day actions are signed with a separate signing key that holds no direct economic value, and the user's Ethereum wallet is reserved for rare, high-stakes operations like asserting authorship of that signing key and claiming a domain.
I invite criticism on this approach. I am not a cryptography expert, and we're just trying to make an honest effort at a secure and usable platform that lives outside of the conventional standards of web2 that have been failing us. If you want a more detailed explanation of the approach, we will shortly release a more complete spec/RFC about this topic.
Solved by using Arweave! 🎉 Very cheap (at the moment?) — it's costing us about 0.00005 AR per blog post, which amounts to about $0.00015, at the current exchange rate.
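The arithmetic behind those numbers, as a quick sanity check. The ~$3/AR exchange rate is an assumption derived from the two figures above, not a quoted price.

```javascript
// Back-of-the-envelope storage cost, using the figures above.
const arPerPost = 0.00005; // observed cost per blog post
const usdPerAr = 3;        // implied rate: $0.00015 / 0.00005 AR
const usdPerPost = arPerPost * usdPerAr;          // ≈ $0.00015
const postsPerDollar = Math.floor(1 / usdPerPost); // ≈ 6,666 posts per dollar
```

In other words, permanent storage for a million posts would run on the order of $150 at these rates, which is why the one-time-cost model is viable for us.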
Since we have signing keys and a signature-based authenticity model, we don't need to have login sessions, and we don't have anything to recover via email. In the future, it may be useful to allow for email notifications and email digests, but for now, it's interesting that there is no email confirmation required to use Mirror. This is the crypto-native way used by apps like Zerion and Zapper, and is the future of authentication on the web.
(We make up for this lack of friction by adding a transaction confirmation in onboarding! 😂 But even that, we will smooth out over time.)
Mirror is still in its early stages, and we are using Ethereum testnet infrastructure to explore our ideas. This allows us to run through complex protocol flows (like burning tokens, deploying contracts, and registering ENS domains) very cheaply and with fast confirmation times. In particular, we're still exploring what the mechanics of our economic, naming, and ownership protocol will be as demand grows for a more sophisticated system. We're hoping to deploy to mainnet as soon as we know that migrating our early writers to this better protocol will be feasible and easy.
In order to facilitate a good reading and writing experience, we decided to build a backend API that can act as a gateway between client and protocol. This allows us to do things like signature verification before publishing to Arweave, or to pay for the Arweave publication ourselves and cache the entry content for extremely fast response-times. This allows us to have a world-class writing and reading experience, while still achieving all of our goals of content decentralization.
In terms of building this backend API in 2021, there were probably 10+ viable options for where to deploy code and just as many promising languages to develop it in. Picking early tools can be difficult! I ended up choosing something that was strongly within my realm of competency and would also be easy for new engineers on our team to learn. I also chose what I believe to be the path of greatest extensibility and flexibility into the future, which for us meant building our backend in NodeJS on AWS infrastructure. The decision was not obvious, however, and I wanted to take into account the competencies we would be building as an engineering organization over time.
Setting up web applications on AWS is much more time-consuming than using an abstracted service like Heroku or Vercel. Things like environments, databases, security groups, and deployments all have to be configured individually. In the long run, though, it grants more control, and this can be helpful for crypto applications that sometimes have non-standard concerns. It took me about 2 days to set up this infrastructure — including security groups, IAM accounts, databases, and staging and production environments. But from here, I know that I have the world's best infrastructure behind us, without the platform risk of newer, less fully-featured services.
We use Postgres via AWS's RDS service, and VPC and security groups for communicating between ElasticBeanstalk and RDS. We used GraphQL with Apollo for our API endpoints.
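For a sense of what that API surface looks like, here is a minimal GraphQL schema sketch for this kind of entry store. The type and field names are hypothetical, not our actual schema.

```graphql
# Hypothetical schema sketch, not Mirror's actual API.
type Entry {
  digest: String!              # hash of the content, as signed by the author
  content: String!
  publicationAddress: String!  # the author's Ethereum address
  arweaveTransactionId: String # set once the permanent copy is confirmed
}

type Query {
  entries(publicationAddress: String!): [Entry!]!
  entry(digest: String!): Entry
}
```

Keying reads by digest rather than a database ID keeps the API aligned with the signature model: the identifier a client queries by is the same value the author actually signed.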
I'll note here that, for a while before committing to this path, I considered writing the backend in Golang or Rust and using protobufs, since cryptographic signatures require strict typing for correct validation across platforms. But development would have been a lot slower if we had gone down that path. On the other hand, building a competency in Rust from the start would have allowed us to track the growing number of interesting layer-2 integrations (which seem promising).
Our deployments are handled through Github workflows, such that every push to our staging or main branches triggers an action on Github that deploys the application. To be clear, there is a lot that still needs to be improved here — including blocking deploys until the test suite passes, and measuring test coverage. We don't have full integration tests, e.g. against a built Docker container with exposed GraphQL endpoints.
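The workflow has roughly the following shape. The job layout, Node version, and deploy script are illustrative assumptions rather than our actual configuration.

```yaml
# Sketch of a push-triggered deploy workflow; details are illustrative.
name: Deploy
on:
  push:
    branches: [main, staging]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm ci
      - run: npm test              # ideally the deploy is blocked if this fails
      - run: ./scripts/deploy.sh   # hypothetical script targeting the EB environment
```

The branch that triggered the push determines whether the staging or production environment receives the build, which is exactly the mapping that bit us in the incident described below.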
We decided to use Vercel and NextJS on the frontend, which has wildcard subdomain support, as well as support for statically rendered content. This has been great so far, because we get fast-loading, well-cached content that can be read easily by machines like social media platforms and search engines. It's also generally a fantastic product for managing staging environments, deployments, etc.
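One small piece of wildcard-subdomain routing is mapping a request's Host header to a publication. A sketch, assuming a `mirror.xyz` apex domain; the real routing logic may differ.

```javascript
// Map a Host header to a publication name under a wildcard subdomain setup.
const APEX = 'mirror.xyz';

function publicationFromHost(host) {
  const [hostname] = host.split(':'); // drop any port, e.g. from local dev
  if (hostname === APEX || !hostname.endsWith(`.${APEX}`)) return null;
  return hostname.slice(0, -(APEX.length + 1)); // strip the ".mirror.xyz" suffix
}

console.log(publicationFromHost('graeme.mirror.xyz')); // 'graeme'
console.log(publicationFromHost('mirror.xyz'));        // null
```

With static rendering on top, each publication's subdomain resolves to pre-built, well-cached pages that crawlers and social platforms can read without executing JavaScript.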
We were able to get an MVP out really quickly — we made a goal to ship before Christmas, and only started building the backend some ways into December. So overall, it took us mere weeks to build a novel CMS that uses public-private key signatures to verify content, stores data in Postgres and on Arweave, burns a token on Ethereum to claim ownership of an ENS label, validates ownership of that label, iterates on our signing model, and so on.
Here's a few specific examples of these things:
It's exhausting even to think back through the past month and everything we've been able to build out! And the fact that this is basically working is very fun and exciting. I'm glad that we also did a fair amount of this "in public" and have screen recordings of our onboarding sessions.
By tackling so many challenges at once, it sometimes felt like there were an overwhelming number of important issues that urgently needed to be patched. Sometimes, that made it difficult for me to communicate priorities in a linear way, so that we were all on the same page and rationally working together on the same thing. Given that the team hadn't worked together before this project, and we were just learning each other's distinct work and communication styles, there was considerable "under-communication" that we need to improve on.
The fast pace also created some disorder. For example, late one Thursday I was configuring the Github Workflows to deploy our AWS EB environment, and it was working well in staging. We didn't deploy to production over the weekend, and during that time I migrated production to a whole new AWS EB environment. Our first deploy on Monday shipped to the outdated environment, and a bug manifested right before an onboarding session! We had to roll back to prevent live-streaming a failed onboarding, and it was very confusing! We wrote our first post-mortem for this.
We learned that we can really push hard and work quickly to make an ambitious goal a reality within a short timeframe. I think we proved that by hitting the December 25th deadline.
We also learned a lot about each other and our competencies (there are only three of us), and how we need to communicate with each other to build more trust and confidence in how things are going. That includes over-communicating about expectations and responsibilities over the codebase and product, outstanding issues, priorities, and individual goals for any given sprint.
Expect more information on $WRITE soon.
In the meantime, you can read the first entries published on Mirror over the past few weeks: