Web3: The hope for protocols over platforms


In the beginning, there were protocols…

Rather than write about Web3 again, I want to write about Web1: the '90s. At that time, I used something called Communicator. You can think of it as a suite of internet clients and applications. Of course, it had Navigator, a web browser, but also a messenger for email, a news client and even a push system. It was a good example of how the early web worked: multiple protocols for different purposes. You may remember FTP, SMTP, Gopher and Archie, but also XMPP and many, many more.

The cool thing about these protocols is that they made the computer you used irrelevant. They abstracted away the underlying operating system and hardware. These protocols also embraced the Unix philosophy, each focusing on doing one thing well: file sharing, email transmission, push messaging and so forth.

Then, HTTP and HTML won 

The most “abstract” of these protocols was HTTP. Even though it was initially designed for transferring hypertext documents, it quickly became apparent that it was good at transferring pretty much any kind of file. Similarly, HTML quickly saw the emergence of JavaScript as a way to make static documents more dynamic. The web stack was (and still mostly is):

1. Make requests to download HTML, JavaScript and CSS files over HTTP. 

2. The browser “executes” these to render them as fancy websites and applications. 
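The two steps above can be sketched with Python's standard library standing in for both sides: a throwaway server handing back an HTML document over HTTP, and a client downloading it. (Step 2, rendering and script execution, is the browser's job, so it is only hinted at here; the page content is invented for the example.)

```python
import http.server, threading, urllib.request

# A hypothetical document, including a script a real browser would execute.
PAGE = b"<html><body><h1>Hello</h1><script>/* runs in the browser */</script></body></html>"

class PageHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Step 1: answer an HTTP request with an HTML document.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(PAGE)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), PageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The client side of step 1: download the files over HTTP.
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()

# Step 2 would be the browser parsing this HTML and running the script.
print(b"<script>" in body)
server.shutdown()
```

The same request/response shape carries HTML, CSS, JavaScript, images or any other file, which is exactly the versatility described above.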

This meant that other, more specialized protocols could just become applications on top of HTTP and HTML. If you’re using Gmail and sending an email to another person using Gmail, you’re probably not using POP, SMTP or IMAP, but only HTTP and HTML. FTP and XMPP are now known as Megaupload and WhatsApp, for better or worse.

What might surprise you is how hacky HTTP and HTML are. After all, the HTTP spec uses Referer instead of “referrer,” the proper English spelling, and despite all efforts, HTML was never able to conform to XML's requirements. See the irony? HTTP and HTML, both poorly designed compared to other, more academic protocols and formats, eventually took over the whole stack.

Simplicity and versatility are what made HTTP, HTML and JavaScript so powerful: they were adopted everywhere, for everything.

402: Reserved for later use 

Still, the HTTP spec did have a set of interesting features, including HTTP status codes, which tell clients how to handle the resources they request. There are mechanisms to redirect users when a resource has moved, to indicate that the user is not allowed to access it, or that it is no longer available. You’ve probably heard of the infamous 404!

There are dozens of status codes, including 402, which servers are supposed to use to indicate that payment is required. It turns out that this part of the specification is still “reserved for future use.”
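Emitting a 402 is trivial even though its semantics were never standardized. As an illustration, this Python sketch uses only the standard library to run a throwaway server that answers every request with 402 Payment Required (the paths and message are invented):

```python
import http.server, threading, urllib.error, urllib.request

class PaywallHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # 402 Payment Required: a defined code, but "reserved for future use",
        # so what the client should do next was never specified.
        self.send_response(402)
        self.end_headers()
        self.wfile.write(b"Send payment to read this article.\n")

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), PaywallHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/article")
    status = 200
except urllib.error.HTTPError as err:
    status = err.code  # urllib surfaces 4xx responses as exceptions

print(status)  # 402
server.shutdown()
```

The client is told payment is required, but because the spec never said how to pay, every site had to invent its own answer.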

That means that all of the websites and applications (including those that replaced the protocols) built on HTTP and HTML had to figure out monetization by themselves, and that’s how we ended up with banner ads and the attention economy.

Soon, some of these websites and applications realized that in order to be more profitable, they would need to grow bigger. They realized that the more data they collected, the more attention they attracted and the more lock-in they had, the more profitable they could get (not just more revenue!). That’s how platforms wedged themselves into the middle of the internet.

The platforms 

In order to maintain lock-in, platforms _privatized_ protocols and applied their own terms of service on top of them: that’s how Facebook came to _own_ the social graph, and how Google tried (tries?) to force its own syndication format, called AMP, onto publishers. In Web2, the permissionless internet of protocols was replaced by endless intermediaries and gatekeepers in the form of platforms.

Will Web3 let us reinvent protocols? 

The current state of the internet is … disappointing. The governance of our collective brain is being challenged by all kinds of governments, users are more and more frustrated with the behavior of these platforms and the internet is increasingly controlled by a shrinking number of corporations (or individuals like Mark and Elon). 

In the long list of internet protocols, a fairly recent one has been steadily gaining in popularity and awareness: Bitcoin. Don’t roll your eyes just yet. Bitcoin is a protocol for money. It lets people transfer coins in a fully permissionless and decentralized way, like HTTP lets them transfer documents. To understand why Bitcoin represents a new hope for a protocol-driven internet, we need to think about what blockchains are. 

So, what are blockchains good for?

Bitcoin is a distributed ledger. As ledgers go, it’s a bad one, worse than most in pretty much every respect but one: its ability to make people agree on what everyone’s balance is, without a central authority. Bitcoin shows us that blockchains are consensus machines: systems that let us all _agree_ on things, even if we agree on nothing else, and even if some of us try to lie to the others.

Agreeing is nice, but what are we really agreeing on? In software, there are really two types of things: data, often called a “state,” and algorithms. Bitcoin asks us to agree on balances in the ledger: Julien owns 15.4, Hannah owns 1337 and Giselle owns 42. That’s good, but not terribly useful beyond that ledger use case. 
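To make “agreeing on balances” concrete, here is a minimal, purely illustrative Python sketch of a hash-chained ledger, using the names and amounts from the example above. It is emphatically not how Bitcoin works (no transactions, no proof of work), but it shows why tampering with an agreed-upon state is detectable:

```python
import hashlib, json

def block_hash(payload):
    # Hash the block's canonical JSON form; because each block embeds the
    # previous block's hash, any change breaks the chain from that point on.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain, balances):
    prev = chain[-1]["hash"] if chain else "0" * 64
    payload = {"prev": prev, "balances": balances}
    chain.append({**payload, "hash": block_hash(payload)})

def chain_is_valid(chain):
    prev = "0" * 64
    for block in chain:
        payload = {"prev": block["prev"], "balances": block["balances"]}
        if block["prev"] != prev or block["hash"] != block_hash(payload):
            return False
        prev = block["hash"]
    return True

chain = []
append_block(chain, {"Julien": 15.4, "Hannah": 1337, "Giselle": 42})
append_block(chain, {"Julien": 10.4, "Hannah": 1342, "Giselle": 42})
valid_before = chain_is_valid(chain)        # True: everyone agrees
chain[0]["balances"]["Julien"] = 1_000_000  # try to rewrite history
valid_after = chain_is_valid(chain)         # False: the hash no longer matches
print(valid_before, valid_after)
```

The hard part Bitcoin actually solves, which this sketch skips entirely, is getting thousands of mutually distrustful nodes to agree on which chain is the real one.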

In fact, a blockchain can also ask us to agree on processes. These agreed-upon processes are often called smart contracts. They are pieces of code that work in ways that cannot be altered beyond what the code itself codifies. If the only thing a contract does is return the sum of two numbers, it will return the sum of two numbers, and no one will ever be able to change that program without terminating the whole blockchain.
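The “sum of two numbers” contract can be sketched in Python. This is an illustration of the idea only, not how any real chain deploys contracts: the agreed-upon code is pinned by its hash, and every node refuses to execute anything that deviates from it.

```python
import hashlib

# The "deployed" contract: an agreed-upon process, pinned by its hash so
# that no single actor can quietly change what it does.
CONTRACT_SOURCE = "def add_contract(a, b):\n    return a + b\n"
DEPLOYED_HASH = hashlib.sha256(CONTRACT_SOURCE.encode()).hexdigest()

def execute(source, *args):
    # Every node checks the code against the hash the network agreed on
    # before running it.
    if hashlib.sha256(source.encode()).hexdigest() != DEPLOYED_HASH:
        raise RuntimeError("contract changed: the network rejects it")
    namespace = {}
    exec(source, namespace)  # run exactly the agreed-upon code
    return namespace["add_contract"](*args)

print(execute(CONTRACT_SOURCE, 2, 40))  # 42, forever

tampered = "def add_contract(a, b):\n    return a * b\n"  # someone edits the rule
try:
    execute(tampered, 2, 40)
except RuntimeError as err:
    print(err)
```

On a real chain the pinning is enforced by consensus rather than a single `execute` function, but the property is the same: the code you agreed to is the code that runs.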

Maybe you see where I am going: these smart contracts, or collectively agreed-upon processes, are, in fact, protocols. They codify the behavior of actors so that no single actor can arbitrarily change how things work at the expense of everyone else (unless, of course, such a change has itself been codified).

Dead code vs. smart contracts 

But there is one more thing. Usually, protocols are “dead code.” They are specifications, written in English, with lots of MUST and SHOULD, but despite everyone’s best efforts, the translation from English (the lingua franca!) into actual computer code is subject to interpretation, and a lot can be lost in translation. With smart contracts, the protocols are running code. There is no need to interpret English, and maybe even no need for a detailed specification, because the protocol _is_ the smart contract.

It goes even further. Usually, the governance around these dead-code protocols is pretty limited. A small number of large companies spend a few million dollars per year to get a seat at the table of the IETF, the W3C and other governing bodies. Despite lots of good intentions, the process is pretty opaque and full of politics: I’ll let you have your DRM if you agree to my HTTP/2. As a consequence, things are slow to move, and they sometimes move in directions that do not serve small indie developers or internet users at large.

There again, blockchains provide us with an interesting opportunity, because the governance of a protocol is, in fact, a protocol too! Furthermore, a special type of smart contract, called a DAO (decentralized autonomous organization), can provide a fairly good alternative to the “chamber” governance that has prevailed until now.
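One common DAO mechanism is token-weighted voting: a change to the protocol passes only if holders of enough tokens approve it. This Python sketch is a simplified illustration (the names, balances and 50% threshold are invented for the example; real DAOs add quorums, delegation and timelocks):

```python
# Hypothetical token holdings: voting power is proportional to tokens held.
token_balances = {"alice": 60, "bob": 30, "carol": 10}

def proposal_passes(votes_for, balances, threshold=0.5):
    # votes_for: the set of addresses that voted yes.
    # The proposal passes if yes-votes carry more than `threshold`
    # of the total token supply.
    weight_for = sum(balances[addr] for addr in votes_for)
    return weight_for / sum(balances.values()) > threshold

print(proposal_passes({"alice"}, token_balances))         # True: 60% of tokens
print(proposal_passes({"bob", "carol"}, token_balances))  # False: only 40%
```

Because this rule is itself a smart contract, the governance process is as transparent and tamper-resistant as the protocol it governs.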

And now what? 

First, it’s early. 

Then, beware. 

And only then, let’s experiment in ways that let us slowly deconstruct platforms, by replacing some of the core primitives that they own with open protocols that are collectively owned and governed by their own communities. 

For example, the identity primitive is a very important one. Almost every website and platform needs to identify its users. Email addresses and passwords were the norm, but passwords are bad, and asking users to (re)create identities on every single website is just too painful. So we moved into the world of OpenID and OAuth. These are useful ways to reduce the security risks that passwords introduced, but they also moved us from a self-sovereign world (I own my email address and password) to one where we have delegated our identities to Google and Facebook, which is… not ideal.

The cryptocurrency primitives of public/private key cryptography bring us back to a world where we can have a globally shared identity, without passwords and without having to hope that the platforms will keep providing one for us. Sign In With Ethereum is an effort in that direction.
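The shape of such a sign-in is a challenge-response: the site sends a random nonce, the user signs it with their private key, and the site verifies the signature against the public key. To keep the sketch self-contained it uses textbook RSA with tiny, deliberately insecure parameters; real wallets use ECDSA over secp256k1, and the whole thing is an illustration, not an implementation.

```python
import hashlib, secrets

# Classic textbook RSA parameters: insecure on purpose, small enough to read.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept secret)

def sign(message, priv_d):
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, priv_d, n)

def verify(message, signature, pub_e):
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, pub_e, n) == digest

# Challenge-response sign-in: the site sends a nonce, the user signs it.
nonce = secrets.token_bytes(8)
signature = sign(nonce, d)
print(verify(nonce, signature, e))  # True: the user controls the key
```

No password ever leaves the user's machine, and no platform sits in the middle: anyone who can verify the signature can authenticate the user.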

I believe that another core primitive that has emerged on the internet is the concept of membership. Whether it is your paid New York Times access, the fact that you follow me on Twitter, or that Discord role: these are all memberships. Since they’re everywhere, I believe memberships should be normalized so they behave the same.
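“Normalized” could mean something as simple as a shared interface that every kind of membership answers. This hypothetical Python sketch (the class and method names are invented) shows a paid subscription and a follow exposing the same `is_valid` question:

```python
from abc import ABC, abstractmethod
from datetime import datetime, timedelta, timezone

class Membership(ABC):
    # Hypothetical normalized interface: every membership, whatever its
    # origin, answers the same question the same way.
    @abstractmethod
    def is_valid(self, member: str) -> bool: ...

class PaidSubscription(Membership):
    def __init__(self):
        self.expirations = {}
    def grant(self, member, days=30):
        self.expirations[member] = datetime.now(timezone.utc) + timedelta(days=days)
    def is_valid(self, member):
        expiry = self.expirations.get(member)
        return expiry is not None and expiry > datetime.now(timezone.utc)

class Follow(Membership):
    def __init__(self):
        self.followers = set()
    def is_valid(self, member):
        return member in self.followers

news = PaidSubscription()
news.grant("julien")
twitter = Follow()
twitter.followers.add("julien")

# The same check works regardless of what kind of membership it is.
print(all(m.is_valid("julien") for m in (news, twitter)))  # True
```

An open, collectively governed protocol for memberships would play the role of this interface across sites, instead of each platform keeping its own incompatible version.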

The platforms will always have a role. They will provide distribution, curation, differentiated user interfaces and other capabilities. But protocols never act as gatekeepers, because cutting someone off from the network would mean cutting themselves off from it too. Despite its best efforts, Apple will never be able to remove Safari from iOS to fully control the “application” experience on its phones. However, it can (and should!) compete on experience, speed, connectivity or battery life!

Julien Genestoux is the founder of Unlock.