Tuur Demeester's Ethereum claims


Supporting argument:

The only “evidence” of something being hard-forked away is the DAO hack.

Refuting argument:

Network upgrades have been on the roadmap since Ethereum's inception.


Supporting argument:

Bitcoiners believe that the root blockchain should be as simple (or dumb) as possible and that all complex functions should be done on layer 2.

In saying this, I agree that Ethereum has a larger attack surface than Bitcoin but I think this is a feature, not a bug.


Supporting arguments:

Objectively, Ethereum's state bloat is an issue, so I half agree with this. I don't agree, however, that it's difficult for people to run a node.

Refuting arguments:

Active research/development into ways to reduce the amount of bloat on Ethereum:

  • Client optimizations
  • Sharding
  • Charging rent
  • Layer 2 scaling

Ethereum nodes come in different flavours. It's true that running a full archival node is difficult (with the chain size approaching 2TB), but running a full node is relatively easy (the current size is only ~150GB). https://github.com/ethhub-io/ethhub/tree/master/using-ethereum/running-a-node.

A person only needs to run a full archival node if they wish to view the intermediate states of the Ethereum network at any given block height (such as the token balances of an account at a past block). This data is only required by services like block explorers, by Infura, or by certain dApps.
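The distinction above can be illustrated with a toy model (purely illustrative; real clients store state in a Merkle Patricia trie, not a Python dict, and the class and parameter names here are invented for the sketch): a pruned full node keeps only recent state snapshots, while an archival node retains every intermediate state, which is what historical balance lookups need.

```python
# Toy model of state retention: a pruned "full" node vs. an "archival" node.
# Illustrative only; not how real Ethereum clients are implemented.

class Node:
    def __init__(self, archival=False, keep_recent=128):
        self.archival = archival          # archival nodes keep every state
        self.keep_recent = keep_recent    # pruned nodes keep only recent states
        self.states = {}                  # block number -> state snapshot
        self.head = -1

    def import_block(self, state):
        self.head += 1
        self.states[self.head] = state
        if not self.archival:
            # prune snapshots older than `keep_recent` blocks behind the head
            cutoff = self.head - self.keep_recent
            for n in [n for n in self.states if n < cutoff]:
                del self.states[n]

    def balance_at(self, block, account):
        # historical lookups need the intermediate state at that block height
        if block not in self.states:
            raise LookupError("state pruned; an archival node is required")
        return self.states[block].get(account, 0)


full = Node(archival=False, keep_recent=128)
archive = Node(archival=True)
for i in range(1000):
    state = {"alice": i}
    full.import_block(state)
    archive.import_block(state)

print(archive.balance_at(5, "alice"))   # 5 — historical lookup works on the archive
print(full.balance_at(999, "alice"))    # 999 — recent state works on both
```

Both nodes can verify the chain; only the archival one can answer "what was this balance at block 5?", which is why that need is specific to explorers and similar services.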


Refuting argument:

Vitalik is not a central point of failure for Ethereum and does not have any direct authority over the protocol. There are currently 8 independent development teams building Eth2.0 with two prominent teams maintaining Eth1.0 (geth and parity). Parity is even building an Ethereum competitor (Polkadot) yet still supports Ethereum.

The “semi-closed meetings” spurred lots of discussion within the Ethereum community and a middle-ground was found (notes would be taken from meetings but not attributed to any individual).



Where does one begin?

Since Tuur's claim wasn't just that no state channel projects supporting ERC20 tokens are currently live on mainnet (a false claim, as other replies have pointed out), but that it's unclear whether there ever will be any in the future, I would add that there are many more state channel projects in the works that will indeed support them, e.g.:

Counterfactual: https://specs.counterfactual.com/03-peer-protocol#type-definitions-and-global-variables

Celer Network (whitepaper, pg. 53): https://www.celer.network/

Stack: https://blog.coinfund.io/the-state-of-state-channels-2018-edition-f5492134ab96

and many others.

In fact, it would probably be hard to find a state channel project that doesn’t offer support for ERC20 tokens.

… which makes sense, since by definition, if a “state channel” could only support ETH payments, it wouldn't be a state channel at all, but a payment channel. The whole point is that state channels can support more complex (and perhaps arbitrarily complex) transactions; as someone who has done development work with state channels and follows the space pretty closely, I can say that ERC20 tokens are an easy case to handle, with no uncertain or “unclear” difficulties.
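A minimal sketch of why ERC20 support falls out of the design almost for free (this is a toy model, not any project's actual protocol; signatures are simulated with HMAC where real channels use ECDSA over a state hash, and all names are invented): a state channel advances by both parties co-signing arbitrary state, so a token balance is just another field in that state.

```python
# Toy state channel: participants co-sign arbitrary off-chain state updates.
# An ERC20 balance is just another entry in the state, same flow as ETH.
import hmac, hashlib, json

def sign(key: bytes, state: dict) -> str:
    # deterministic serialization so both parties sign identical bytes
    payload = json.dumps(state, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

class StateChannel:
    def __init__(self, keys):
        self.keys = keys                          # participant -> signing key
        self.nonce = 0
        self.state = {"eth": {}, "erc20": {}}     # arbitrary state, not just ETH

    def update(self, new_state, signatures):
        # a state update is valid only if every participant signed it
        candidate = dict(new_state, nonce=self.nonce + 1)
        for name, key in self.keys.items():
            assert signatures[name] == sign(key, candidate), "bad signature"
        self.state, self.nonce = new_state, self.nonce + 1

keys = {"alice": b"alice-key", "bob": b"bob-key"}
chan = StateChannel(keys)
# moving an ERC20 token off-chain is the same signed-update flow as moving ETH
new_state = {"eth": {"alice": 1, "bob": 1},
             "erc20": {"DAI": {"alice": 50, "bob": 50}}}
sigs = {n: sign(k, dict(new_state, nonce=1)) for n, k in keys.items()}
chan.update(new_state, sigs)
print(chan.state["erc20"]["DAI"]["bob"])  # 50
```

A payment channel is the degenerate case where the only state is an ETH balance split; anything richer than that, tokens included, is what makes it a state channel.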

And I would just add that nothing in the link he includes in the tweet supports his claim; in fact, it provides several examples to the contrary.

Frankly, this claim is so egregiously false, I’m genuinely curious as to the backstory of what could possibly have given him the impression that there was any truth to it.


Various tutorials by individuals claim Solidity is as easy as JavaScript.
As a developer, I believe only the syntax of Solidity is easy to learn, since it is similar to JavaScript. Gas optimisation, however, is a concept that JavaScript developers aren't familiar with, and it doesn't count as “easy”.


Ethereum's architecture is indeed different from Bitcoin's, but it is not the opposite. Ethereum is currently based on the same consensus algorithm as Bitcoin (Proof of Work).
Bitcoin does not offer on-chain smart contracts, whereas with an Ethereum smart contract one can create digital assets (tokens). Ethereum's support for smart contracts is covered in its documentation.
If you believe Bitcoin is decentralised, then so is Ethereum.
SoV? Using the power of its community, Ethereum averted the major DAO attack; each voice mattered there.
ETH can be called a “blue chip” because of its huge community and fast-paced research, not because of its price. No other crypto has larger community support than ETH.
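The token point above is easy to make concrete (a toy sketch in Python rather than Solidity, with names loosely echoing the ERC20 interface; this is not real contract code): a smart contract defining a digital asset is, at its core, a balance mapping plus a transfer rule that the contract itself enforces.

```python
# Minimal sketch of a smart contract defining a digital asset:
# an ERC20-style token reduced to a balance mapping and a transfer rule.

class Token:
    def __init__(self, supply, owner):
        self.balances = {owner: supply}   # "contract storage": address -> balance

    def balance_of(self, who):
        return self.balances.get(who, 0)

    def transfer(self, sender, to, amount):
        # the contract enforces the asset's rules; no central issuer needed
        if self.balance_of(sender) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[to] = self.balance_of(to) + amount
        return True

dai = Token(supply=1000, owner="alice")
dai.transfer("alice", "bob", 250)
print(dai.balance_of("bob"))   # 250
```

Bitcoin script cannot express this kind of stateful asset logic on-chain, which is the architectural difference being pointed at.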


I don’t believe this is an accurate representation of an archival node and the specialized hardware requirements to run one.

Péter Szilágyi (EF core dev) has a great comment I refer people to about syncing issues. Here are some main points which, if we assume the client is Geth, do confirm some claims:

The current default mode of sync for Geth is called fast sync. Instead of starting from the genesis block and reprocessing all the transactions that ever occurred (which could take weeks), fast sync downloads the blocks, and only verifies the associated proof-of-works. Downloading all the blocks is a straightforward and fast procedure and will relatively quickly reassemble the entire chain.

Fast sync is “warp-sync” (processing just the block headers) up to a pivot point (a certain number of blocks away from the current block), where the Geth client switches to “full” sync (processing the block headers AND replaying the transactions to calculate the current state).
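The two-phase behaviour described above can be sketched as follows (a toy model, not geth's implementation: the pivot distance is illustrative, the PoW check is a stand-in, and the state download that real fast sync performs at the pivot is omitted):

```python
# Toy sketch of fast sync: verify only headers (PoW) up to a pivot point near
# the chain head, then switch to full processing (replaying transactions to
# build current state). Real fast sync also downloads state at the pivot.

PIVOT_DISTANCE = 64   # illustrative; geth picks the pivot near the head

def verify_pow(block):
    assert block["pow_ok"]                # stand-in for the real PoW check

def fast_sync(chain):
    pivot = max(0, len(chain) - PIVOT_DISTANCE)
    state = {}
    headers_only = 0
    for i, block in enumerate(chain):
        verify_pow(block)                 # cheap check done for every block
        if i < pivot:
            headers_only += 1             # before the pivot: headers only
        else:
            for tx in block["txs"]:       # after the pivot: replay transactions
                state[tx["to"]] = state.get(tx["to"], 0) + tx["value"]
    return headers_only, state

chain = [{"pow_ok": True, "txs": [{"to": "a", "value": 1}]} for _ in range(1000)]
headers, state = fast_sync(chain)
print(headers)        # 936 — most blocks needed only header verification
print(state["a"])     # 64 — state replayed only from the pivot onward
```

The trade-off is visible in the numbers: almost all of the chain is only header-verified, which is why the fast-synced node has weaker guarantees than one that replayed every transaction from genesis.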

Tuur's claim that we rely on warp-sync isn't false, and it's an issue Péter consistently raises and is currently working on. It has a lot to do with the current method of reads and writes to the Merkle Patricia trie database, which is computationally expensive and inefficient (in order to ensure it is secure).

Many people assume full nodes and archival nodes are the same thing, but they are not.


Tuur’s claim:

Let's unpack this first: a warp node (Parity warp or Geth fast) is easy to run, much easier than a full (parity --no-warp or geth full) node, and with far smaller storage requirements. Next, where is the reference to someone claiming Parity warp/Geth fast is as “good as a full node”? Surely Péter has never claimed it, nor Afri. Who has? No one, because it isn't true, and anyone vaguely familiar with the Parity/Geth clients knows this.

To respond to your points: yes, Parity “warp” is a sync mode; in Geth, “fast” is also a sync mode and is the functional equivalent of Parity's warp (they streamline state sync with snapshots). However, in Parity “fast” is a pruning mode (enabled by default), not a sync mode. Running “parity --no-warp” is “fast” (by default, with pruned state), fully verifies all blocks from genesis, and includes recent states (the last 2048, I believe). This is more or less the equivalent of “geth --syncmode=full”, which also prunes state. Again, both of these are full nodes. Parity warp and Geth fast are not.
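For quick reference, the mode mapping above can be laid out as a small table (the geth/parity flags are real; the “node type” labels are informal shorthand, not official terminology):

```python
# Informal mapping of client flags to node type, per the discussion above.
NODE_TYPES = {
    ("geth",   "--syncmode=fast"):                   "fast-synced (not a full node)",
    ("geth",   "--syncmode=full"):                   "full node (pruned state)",
    ("geth",   "--syncmode=full --gcmode=archive"):  "full archival node",
    ("parity", "(default warp sync)"):               "warp-synced (not a full node)",
    ("parity", "--no-warp"):                         "full node (pruned state)",
    ("parity", "--no-warp --pruning=archive"):       "full archival node",
}

# the two clients' full-node modes are equivalent in kind
assert NODE_TYPES[("geth", "--syncmode=full")] == NODE_TYPES[("parity", "--no-warp")]
```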

The network also doesn't “rely on” warp nodes. People run them by choice. It is true that full nodes have higher hardware spec requirements, but it isn't difficult or overly cost-prohibitive for those that need to fully verify the blockchain to do so. I have 3 full nodes running on consumer hardware, synced in the last 3 months from scratch.

No one is equating full nodes with archival nodes. Archival nodes provide the intermediate states, affording you the ability to do balance lookups at historical block heights. That need arises on a use-case-by-use-case basis. The chain can be fully verified from a full node without the need to run an archival node. Some dApps and block explorers will need an archival node, but those are specific use cases. Again, running an archival node is not impossible, and the need is business-model dependent. Some have chosen to trust Infura for this… and that seems to be the issue most have.

No one is debating the state size growth and the storage bottleneck. What we are debating is Tuur’s misrepresentation of the current state and that the Ethereum community is somehow misleading people into thinking “warp nodes” are “just as good as full nodes.”


Putting aside the letter of Tuur's comment and getting to the spirit of it, I believe the valid criticism (with caveats, of course) is that Ethereum's state IS large, and getting synced to full is a problem that is leading to centralization. You say:

The network also doesn’t “rely on” warp nodes. People run them by choice.

This is incorrect. Geth, which has a larger GitHub stat count compared to Parity (a poor metric, sure, but one of the few we have), does run in fast mode by default (the best direct source I could find was the Docker build instructions under “Running Geth”).

The default mode of Geth is fast, and it is not a choice for the majority of people who download it to sync to the network without running --syncmode=full.

I have 3 full nodes running on consumer hardware, synced in the last 3 months from scratch.

A simple Google search for “Geth syncing problems” shows that your personal experience of easily running three full nodes on consumer hardware is probably not the dominant one. One of the reasons Geth runs in fast mode is that less experienced users have issues doing a full sync (I'll cite Péter's GitHub thread post again here).

I’m looking at a particular part of Tuur’s /36 thread, a screenshot I’ve embedded below:


Because it is hard to do a full sync, due to the state of the Ethereum database, we are relying on centralized processes (fast mode, which has weak Sybil resistance; Infura/Metamask; etc.) to overcome this issue, which does result in increased centralization.

I don't think Tuur is being as subtle as this, but he is pointing to a general issue of increasing centralization through the difficulty of full-node syncing (as evidenced by his grabbing this screenshot), which we should acknowledge as valid.