The Ethereum blockchain size has exceeded 1TB, and yes, it's an issue

(TL;DR: It has nothing to do with storage space limits)

Introduction

This is an indirect response to the following article by Afri Schoedon, a developer for the Parity Ethereum client, written less than a year ago:

I want to make clear that I have respect for almost all of the developers in this space, and this is not intended to attack anyone. It's meant to elaborate on what the real concerns are and explain how the original article does nothing to address those real concerns. I would really love to see something that does, because then we can throw it into Bitcoin. That being said, there are some developers who mislead, obscure, ignore, and attack via protocol confusion like what occurred with 2X and the replay protection drama, but most aren't like that. You can't watch something like this or read something like this and hate these developers. They're genuinely trying to fight the same fight as us, and I believe Afri is part of the latter group, not the former.

https://github.com/paritytech/parity/issues/6372

If you've read my other articles you're going to see some small bits of that information repeated. Up until now I wrote primarily about Bitcoin from a "maximalist" perspective (still am) and focused on conflicts within that community. What you may find interesting, if you only watch from the corner of your eye, is that the reason for "conflict" here is exactly the same. I'll even use Proof-of-Stake as further leverage for my argument without criticizing it.

Edit: It seems like people are not reading the subtitle and misunderstanding something. This is not about archival nodes. This is about fully validating nodes. I don't care if you prune the history or skip the line to catch up with everyone else. This is about staying in sync, after the fact. Light nodes aren't nodes.

This has become a two-part article. When you're done with this article you can read the follow-up one:

Index

  • My Argument: Ethereum's runaway data directory size is just the tip.
  • My Prediction: It will all work, until it doesn't.
  • My Suggestion: Switch.

My Argument: Larger blocks centralize validators.

It's that simple. It's the central argument in the entire cryptocurrency community in regards to scaling. Not many people familiar with blockchain protocol really deny this. The following is an excerpt from what I consider to be a very well put together explanation of various "Layer 2" scaling options. (Of which, the only working one is already implemented on Bitcoin.)

https://medium.com/l4-media/making-sense-of-ethereums-layer-two-scaling-solutions-state-channels-plasma-and-truebit-22cb40dcc2f4

That article is written by Josh Stark. He gets it. His company even announced a project that's meant to mirror the Lightning Network on Ethereum. (Which is oddly coincidental given Elizabeth Stark's company is helping build Lightning.)

The problem? Putting everything about Proof of Stake completely to the side, the incentive structure of the base layer is completely broken because there is no cap on Ethereum's blocksize, and even if one was put in place it would have to be reasonable, and then these Dapps wouldn't even work because they're barely working now with no cap. It doesn't even matter what that cap is set at for this argument to hold, because right now there is none in place.

Let's backtrack a bit. I'm going to briefly define a blockchain and upset people.

Here is what a blockchain provides:

  • An immutable & decentralized ledger.
  • That's it.

Here is what a blockchain needs to keep those properties:

A decentralized network with the following prerogatives:

  • Distribute my ledger — Validate
  • Append my ledger — Work
  • Incentivise my needs — Token

Here is what kills a blockchain:

  • Any feature built into the blockchain that detracts from the network's goals.

A blockchain is only a tool for a network. It's actually a very specific tool that can only be used by a very specific kind of network. So much so that they require each other to exist and fall apart when they don't cooperate, given enough time. You can build on top of this network, but quite frankly anything else built into the base layer (L1) that negatively affects the network's ability to do its job is going to bring the entire network to its knees…given enough time.

Here's an example of an L1 feature that doesn't hurt the network: Multisig.

It does require the node to do a bit of extra work, but it's "marginal". The important thing to note is that hardware is not the bottleneck for these (properly designed) networks, network latency is. Something as simple as paying to a multi-signature address won't tax the network any more than paying to a normal address does, because you're paying on a per-byte basis for every transaction. It's a blockchain feature that doesn't damage the network's ability to keep doing its job because the data being sent over the network is (1) paid for per-byte, and (2) regulated via the blocksize cap. Regulated, not "artificially capped". The blocksize doesn't restrict transaction flow, it regulates the amount of broadcast-to-all data being sent over the network. Herein lies the problem.
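To make the per-byte point concrete, here's a minimal sketch. The fee rate and transaction sizes are made-up illustrative assumptions, not live network data; the point is simply that a multisig spend pays more only because it broadcasts more bytes.

```python
# Minimal sketch: the cost of a transaction scales with the bytes it
# broadcasts, whether the output is a plain address or a multisig script.
# All numbers below are illustrative assumptions, not live network data.

FEE_RATE_SAT_PER_BYTE = 5  # hypothetical prevailing fee rate

def tx_fee(tx_size_bytes: int, fee_rate: int = FEE_RATE_SAT_PER_BYTE) -> int:
    """Fee paid = size of the broadcast data * per-byte rate."""
    return tx_size_bytes * fee_rate

simple_payment_size = 225    # rough size of a one-input, two-output payment
multisig_spend_size = 370    # rough size of a 2-of-3 multisig spend

print(tx_fee(simple_payment_size))   # 1125 sats
print(tx_fee(multisig_spend_size))   # 1850 sats
```

Bigger scripts pay proportionally more, so the extra validation work is paid for and the broadcast data stays bounded by the cap.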

When we talk about the "data directory" size, it's a direct reference to the size of the entire chain of blocks from the original genesis block, but taking this at face value results in the standard responses:

  • Disk space is cheap, also see Moore's Law.
  • You can prune the blockchain if you need to anyway.
  • You don't need to validate everything from the genesis block, the last X amount of blocks is enough to trust the state of the network.

What these completely ignore is the data per second a node must process.

You can read my entire article about Moore's Law if you want, but I'll excerpt the important part below. Over in Oz they try and argue "you don't need to run a node, but miners should decide what code is run". It's borderline absurd, but I won't have to worry about that here because Proof of Stake completely removes miners and puts everything on the nodes. (They always were, but now there aren't miners to divert the argument.)

  1. Moore's Law is a measure of integrated circuit growth rates, which averages to 60% annually. It's not a measure of the average available bandwidth (which is more important).
  2. Bandwidth growth rates are slower. Check out Nielsen's Law. Starting with a 1:1 ratio (no bottleneck between hardware and bandwidth), at 50% growth annually, 10 years of compound growth results in a ~1:2 ratio. This means bandwidth scales twice as slow in 10 years, 4 times slower in 20 years, 8 times in 40 years, and so on… (It actually compounds much worse than this, but I'm keeping it simple and it still looks really bad — there's a quick calculation right after this list.)
  3. Network latency scales slower than bandwidth. This means that as average bandwidth speeds increase among nodes on the network, block & data propagation speeds do not scale at the same rate.
  4. Larger blocks need better data propagation (latency) to counter node centralization.
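Here's that quick calculation, using the article's round numbers (60% vs. 50% annual growth). The exact figures don't matter; what matters is how the gap compounds.

```python
# Sketch of the compounding gap described above: hardware capacity growing
# ~60%/year (Moore) vs. bandwidth growing ~50%/year (Nielsen). The rates are
# the round numbers used in the list; the article rounds the resulting
# ratios down to keep its example simple.

def growth(rate: float, years: int) -> float:
    return (1 + rate) ** years

for years in (10, 20, 40):
    hw = growth(0.60, years)   # hardware multiple
    bw = growth(0.50, years)   # bandwidth multiple
    print(f"after {years:2d} years: hardware x{hw:,.0f}, "
          f"bandwidth x{bw:,.0f}, gap x{hw / bw:.1f}")
```

Run it and the gap only widens with time, which is the whole point: the part that scales worst (propagation) is the part big blocks lean on hardest.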

Strictly from an Ethereum perspective, with a future network of only nodes after the switch to Proof of Stake, you'd generally want to ensure node centralization is not an issue. The bottleneck for Bitcoin's network is its blocksize (as it should be), because it ensures the growth rate of network demands never exceeds the growth rate of external (and in some cases indeterminable) limitations like computational performance or network performance. Because of Ethereum's exponentially growing blocksize, the bottleneck is not regulated below these external factors, and as such it results in a shrinking and more centralized network due to network demands that increasingly exceed the average user's hardware and bandwidth.

Bitcoin SPV clients aren't nodes. They don't propagate blocks or transactions around the network, they leech, and all that they leech are the block headers.

Remember this because it's going to become very important later in this article (there's a small sketch of it right after these two bullets):

  • You can put invalid transactions into a block and still create a valid block header.
  • If the network is controlled by 10 FULL-nodes, you only need half of them to ignore/approve invalid transactions so long as the header is valid.
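A toy sketch of those two bullets, using a deliberately simplified block structure (this is not Bitcoin's real serialization or difficulty math): a header-only check happily accepts a block full of garbage, and only full validation catches it.

```python
# Toy model: a header only commits to a merkle root and satisfies a work
# target; nothing about the header proves the transactions behind it follow
# the rules. Simplified on purpose -- not Bitcoin's actual consensus code.

import hashlib
from dataclasses import dataclass

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txs: list[bytes]) -> bytes:
    layer = [h(tx) for tx in txs]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

@dataclass
class Block:
    prev_hash: bytes
    txs: list[bytes]
    nonce: int = 0

    def header(self) -> bytes:
        return self.prev_hash + merkle_root(self.txs) + self.nonce.to_bytes(8, "big")

def spv_check(block: Block, work_prefix: bytes = b"\x00") -> bool:
    # A light client only checks the header against the (toy) work target.
    return h(block.header()).startswith(work_prefix)

def full_check(block: Block, is_valid_tx) -> bool:
    # A full node additionally validates every transaction in the block.
    return spv_check(block) and all(is_valid_tx(tx) for tx in block.txs)

# Stuff a rule-breaking transaction into a block and "mine" a valid header.
bad_block = Block(prev_hash=b"\x00" * 32, txs=[b"spend coins that do not exist"])
while not spv_check(bad_block):
    bad_block.nonce += 1

print(spv_check(bad_block))                                          # True
print(full_check(bad_block, lambda tx: b"do not exist" not in tx))   # False
```

Header-only clients see "True"; only a node doing the full check sees "False". That asymmetry is the entire argument below.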

This is why validating the transactions matters from a network perspective, and why you need a large decentralized network. It doesn't matter from my grandma's perspective and that's fine, but we aren't talking about my grandma. We're talking about ensuring the network of working and actively participating nodes grows, not shrinks.

This node was participating until it got cut off due to network demand growth:

https://www.reddit.com/r/ethereum/comments/58ectw/geth_super_fast/d908tik/

It's not uncommon and it continues to happen:

https://github.com/ethereum/go-ethereum/issues/14647

Notice how the solution is to "find a good peer" or "upgrade your hardware"? Good peers shouldn't be the bottleneck. Hardware shouldn't be either. When all of your peers are hosed by so many others leeching from them (because the good peers are the ones doing the real work), you create a network of masters and slaves that gradually trends towards only one master and all slaves. (If you don't agree with that statement you need to make a case for how this trend won't continue in the future, because currently that's the direction this is going and it won't stop unless a cap is put in place. If your answer is sharding, I address that fairy dust at the end.) It's the definition of centralizing. Unregulated blocks centralize networks. Large (but capped) blocks are only marginally better, but set a precedent for an ever increasing block size, which is equally as bad because it sets a precedent of increasing the size "in times of need", which mirrors the results of unregulated blocksizes. This is why we won't budge on the Bitcoin blocksize.

I tweeted about it a few times but clearly I didn't think that was enough. My Twitter reach doesn't really extend much into the Ethereum space.

That chart is symbolic and not representative of any actual numbers. It only serves to visually express the point I'm trying to make. To clarify, the green curve represents an aggregated average of the various demands of the Ethereum network. At some point your node will fall out of sync because of this, or a blocksize cap will be put in place. It could happen now, or it could happen in 10 years, or in 50, but your node will fall out of sync at some point at this rate. It will never happen in Bitcoin. You can deny it now all you want, but this article will be here for when it happens, and when it does, asinine Dapps like CryptoKitties, Shrimp Farm, Pepe Farm, and whatever comes next will cease to function. This is exactly what happened to Ryan Charles' service Yours.org, which he originally built on Bitcoin. The only difference being Bitcoin already had the cap in place, and Ryan either didn't foresee this from a lack of understanding, or for some reason expected the blocksize to keep getting raised. Instead of reassessing, he doubled down on BCash, meanwhile Yalls.org took his concept and implemented the exact same thing on top of Bitcoin's Lightning Network.

My Prediction: Ethereum will implement a blocksize cap and it will race BCash to both of their deaths.

http://bc.daniel.net.nz/ ← No longer updating statistics, chart is edited & extrapolated using REAL current data.

https://ethereum.stackexchange.com/questions/143/what-are-the-ethereum-disk-space-needs

The chart above isn't even a prediction. This is me filling in the blanks (in yellow) on what was the last remaining graph that compared both chains' data directories, and then extrapolating from it. Here's what we know (a rough back-of-the-envelope sketch follows this list):

  • Bitcoin's future is predictable. The blockchain growth & network demands will always be linear. (Ideal)
  • The amount of data an Ethereum node is required to process per second is through the roof and climbing. (Unideal)
  • If Ethereum on-chain demand freezes where it is now, blockchain growth will continue the linear trend highlighted by that dotted line. (Very bad)
  • If Ethereum on-chain demand continues to grow exponentially, the number of people complaining about their node going out of sync will reach a tipping point. (There's only one option when this occurs.)
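And here's that back-of-the-envelope sketch. Every number in it is an assumption picked for illustration, exactly like the symbolic chart; the only point is that a capped chain grows linearly while compounding on-chain demand does not.

```python
# Symbolic extrapolation matching the bullets above: a chain whose per-block
# demand is capped grows linearly; one with compounding on-chain demand does
# not. Starting sizes and growth rates are illustrative assumptions only.

capped_gb_per_year = 52 * 1.0        # assumed ~1 GB/week under a fixed cap
uncapped_gb_first_year = 200.0       # assumed current uncapped growth
uncapped_annual_growth = 0.5         # assumed 50% yearly growth in demand

capped_total, uncapped_total = 200.0, 1000.0   # assumed starting sizes (GB)
rate = uncapped_gb_first_year
for year in range(1, 11):
    capped_total += capped_gb_per_year
    uncapped_total += rate
    rate *= 1 + uncapped_annual_growth
    print(f"year {year:2d}: capped ~{capped_total:5.0f} GB, "
          f"uncapped ~{uncapped_total / 1000:6.1f} TB")
```

Swap in whatever numbers you like; the shapes of the two curves don't change, and the shape is what breaks nodes.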

That chart I linked above? The owner stopped trying to maintain the node. Physical demands are an issue too, like time constraints in your personal life. Servicing requirements need to be low, not high, not reasonable…low.

Do you know what I do to service my Bitcoin/Lightning node? I leave my laptop on. That's it. If I have to reboot, I shut down the services, reboot, and start them back up again. Day to day I use my laptop for an assortment of other tasks, none of which inhibit its ability to run the node software. With all due respect, if a change was implemented and forced on me that resulted in my node no longer being compatible with the network and unable to maintain a sync, I would flip out over the idiocy that allowed that, if I was a misinformed individual. Fortunately I'm not, and I signed up for a blockchain with foresight (Bitcoin).

The problem? I don't think most of the people running Ethereum nodes are informed enough to know what they signed up for. I don't think they understand the fundamental incentive models, and I don't think they fully realize where and why those models break down with something as simple as not having a blocksize cap. Hopefully this article will succeed at teaching that.

So what happens when that psychological tipping point is reached? Do people give up? How many nodes have to be lost for this to occur? The explorer websites aren't even tracking this data anymore. Etherscan.io is no longer tracking full or fast sync directories, and Etherchain.org says: Error: Not Found

Etherscan also isn't letting you zoom out on the memory pool, the queue of transactions waiting to be included in blocks. The reason fees go up is because this queue builds up. You should be able to see this over time. Here's one that tracks Bitcoin's mempool, side by side with the Etherscan.io one:

Both of these charts are monitoring the rough total pending transaction counts on these networks, and the scales are about the same, 4/5 days respectively. The difference? I can zoom out on the Bitcoin one and see the entire history. Why does this matter? Psychology matters when your network has no regulated upper boundaries. Here's what ours looks like zoomed out:

See what I mean? See how scale matters? What if I zoomed out on Ethereum's mempool and saw that it was at the top of an ever growing mountain? I'm not saying that's where it is today, but I am saying that this data needs to stop being obscured. I'm also saying that if/when it ever is unobscured, it'll be too late and nothing can be done about it anyway. It's already too late now.

Let's take a look at block and transaction delay on Bitcoin's network. Below you'll see two charts. The 1st one is how long it takes for a block to spread across the network, the 2nd is for a transaction. Transactions are processed by the nodes (all 115,000 of them) and held onto until a valid block is created by a miner and announced to the network.

  • Block propagation times have dropped drastically because of very well designed improvements to the software. Transactions are validated when they come in and kept in the mempool. When a new block is received, it's quickly cross-referenced with all the transactions you already have stored, and very rarely includes many transactions you haven't received yet. This allows your node to validate that block extremely fast and send it out to all your other peers. (There's a small sketch of this cross-referencing right after this list.)
  • Transaction times on the other hand have slowly gone up but seem to be stabilizing. They've been "intentionally" allowed to go up as a result of privacy improvements in the software, but that's a worthy tradeoff considering blocks are 10 minutes apart on average anyway, so a delay of 16 seconds is acceptable. I'd imagine that once blocks are consistently full this growth will level off, because transaction fees from the blocksize cap will self-regulate the incoming flow of transactions, assuming no other protocol changes are made.
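Here's that cross-referencing as a rough sketch. It's a simplified model of the compact-block-relay idea, not the actual BIP 152 wire format, and the transaction IDs are placeholders.

```python
# Sketch: when a block announcement arrives, a node that already validated
# most of its transactions while they sat in the mempool only needs to fetch
# the few it hasn't seen. Simplified model of compact-block relay, not the
# real BIP 152 message format.

def reconcile_block(announced_txids: set[str],
                    mempool_txids: set[str]) -> tuple[set[str], set[str]]:
    """Split an announced block into already-validated txs and txs to request."""
    already_have = announced_txids & mempool_txids
    need_to_fetch = announced_txids - mempool_txids
    return already_have, need_to_fetch

mempool = {f"tx{i}" for i in range(2000)}                       # validated as they arrived
new_block = {f"tx{i}" for i in range(1500)} | {"txX", "txY"}    # mostly known txs

have, missing = reconcile_block(new_block, mempool)
print(len(have), sorted(missing))   # 1500 ['txX', 'txY']
```

Because almost all the validation work happened before the block showed up, relaying it onward is nearly free, which is why propagation times fell.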

Keep in mind, none of this data is available for Ethereum:

Bitcoin is designed with this in mind. The transaction count queue goes up but the blocks are regulated. People end up learning how to use this tool we call a blockchain the right way over time, and transaction flow stabilizes. With an unregulated tool you end up with a bunch of people chaotically trying to use that tool all at once for some random "feature" like CryptoKitties that ends up grinding the entire thing to a halt until the backlog is processed. All of the Ethereum full-nodes need to process every single one of these contracts. You might not need to, and they might tell you that you don't need to, but someone does need to. So how many of them are there? What do higher fees do? They deter stupid Dapps like CryptoKitties at the base layer. There is absolutely zero need for them, and larger, more "functional ideas" will just experience the same thing but much worse, because blockchains don't scale.
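A small simulation of that contrast, with made-up arrival numbers: under a cap, a demand spike piles up in the queue and drains at a fixed rate (in reality fee pressure also throttles it), while without a cap every node has to chew through the whole spike the moment it lands.

```python
# Illustrative only: offered transactions per block interval, with a spike.
demand = [2500] * 4 + [9000] * 3 + [2000] * 6
CAP = 3000  # assumed maximum txs a capped block will take

backlog = 0
print("interval | capped block work | uncapped block work | backlog (capped)")
for i, offered in enumerate(demand):
    backlog += offered
    capped_work = min(backlog, CAP)   # every node processes at most CAP per block
    backlog -= capped_work
    uncapped_work = offered           # every node absorbs the whole spike at once
    print(f"{i:8d} | {capped_work:17d} | {uncapped_work:19d} | {backlog:7d}")
```

The capped column never exceeds CAP, so the weakest node's per-block workload is bounded; the uncapped column is whatever the Dapp-of-the-week decides it is.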

These Dapps are crippling your blockchain because it's unregulated:

But that was the promise though, right? That was the dream. That was the entire premise of the Ethereum blockchain: Bitcoin, but better. It's not.

Clearly unregulated blocks don't result in infinite transactions, but the real takeaway here is the network can't even physically handle the current amount; there just aren't enough nodes capable of processing that data and relaying it in a timely fashion. Do you know how many Ethereum nodes there are? Do you really know? The Bitcoin network has about 115,000 nodes, of which about 12,000 are listening-nodes. Almost all of them are participating nodes, because that's regulated too. What a listening-node is, compared to a non-listening one, doesn't matter here because they are all participating in sending and receiving blocks to and from the peers they are connected to. The default is 8 peers, and the client won't even let you get more than 8 unless you add them manually. This was intentionally put in place, and it's recommended you don't add more because it's unhealthy for the network:

https://bitcoin.stackexchange.com/a/8140

Remember this from before?

Find a good peer.

That's not how you fix things. This is a prime example of why a chain that allows participants the liberty to be selfish via lack of regulation is bad. This only has one outcome: Master & Slave nodes, where the limited masters serve all the slave nodes. Sounds decentralized, right? Especially when the financial requirements to be one of those master nodes keep going up…

To be fair, and as an aside: This is the exact criticism the Lightning Network gets, but it's a completely different type of network. Blockchain networks are peer-to-peer broadcast networks. State-channel networks like Lightning are peer-to-peer anycast networks. The way data is being sent is completely different. Your fridge has enough hardware to be a Lightning node. Lightning "Hub & Spoke" criticisms are about channel balance volumes. Hub & Spoke is equivalent to the Master & Slave issue, but with channel balances there is no bottleneck on the data. You simply standardize the Lightning clients to open X amount of channels with X amount of funds in each, the network forms around that standard, completely avoiding hubs or spokes, just like the Bitcoin clients standardize 8 peers. The Lightning Network is new, so we don't know what that standard should be yet because we have almost zero data we can measure. /endlightningdefense

Speaking of zero data we can measure, why are these the only charts for Ethereum node counts? Where's the history? How many of these nodes used fast/warp sync and never fully validated it all? You don't need to store it all because you can prune, but again, how many are fully validated? How many are just light clients syncing only the block headers?

https://www.ethernodes.org/network/1

It's funny how propaganda sites like Trustnodes pushing BCash conspiracies publish pieces like the following one with bold-faced lies, then it gets circulated around and no one outside the flow of correct information questions it:

I'm not linking to a BCash propaganda site.

There are 115,000 Bitcoin nodes and they all fully validate:

http://luke.dashjr.org/programs/bitcoin/files/charts/software.html

So what do you do now? What do you do as an individual who slowly comes to this realization? What do you do as an individual who has no idea what's going on? What happens to a network that is primarily made up of these individuals that slowly leave (not literally, but as a participating node downgrading to a light-node)? How many participating nodes are left? How many nodes hold a full copy going back to the original genesis block? What happens when 5 data centers are serving the entire network of slaves (light-nodes) the chain? Who's validating those transactions when everyone is just syncing the block headers? You can sit there and repeat time and time again that "the network only needs the recent state history to be secure" all you want, but when your network is broken from the bottom up and most nodes can't even keep up with the last 1,000 blocks, how is that secure in any way?

The takeaway from all of this:

  1. Ethereum's blocksize growth is bad because of node processing requirements, not how much they need to store on a hard drive.
  2. To prevent complete collapse of the network, Ethereum will need to implement a reasonable blocksize cap.
  3. Implementing a blocksize cap will raise fees and in return prevent many Dapps from operating, or severely slow them down. Future Dapps won't work.
  4. If Dapps don't work, Ethereum's entire proposition for existing is moot.

Where does BCash fit in?

  1. BCash just increased their blocksize from 8MB to 32MB, and is adding new OP_CODES soon to allow "features" like ICOs and BCash Birdies.
  2. BCash has "room to grow" coming from a completely understressed blockchain, while Ethereum is a completely overstressed blockchain.

https://txhighway.cash

Ethereum is dying and BCash is trying to be exactly like it while ignoring all the warning signs we've been trying to bring to everyone's attention. They wanted bigger blocks and ICOs, they got it now. Both chains will become the same thing: Centrally controlled blockchains that will slowly die, but given temporary life support via gradual blocksize increases to continue supporting fraudulent utility tokens, until the entire system breaks down when no one can run a node.

My Suggestion: Stop using centralized blockchains.

This section has been extensively expanded on in the follow-up article. The diagrams have been completely redone. Reading that is a must after this.

The only one in that room that runs a fully validating node is the one that's simultaneously holding up the painting, and the Ethereum network. I've managed to make no mention of Vitalik this whole article so I could focus on the technicalities, but if this picture (or the original) doesn't represent the essence of the Ethereum space then I don't know what does. I applaud Vitalik for calling out scammers like Fake Satoshi, yet at the same time he also misrepresents the functionality claims of Ethereum.

Oh, and that golden goose you call sharding? It's hocus pocus. Fairy dust. It's the same node centralization issue with a veil thrown in front of it. It's effectively force-feeding you the Master & Slave network I just warned you about, under the disguise of "new scaling tech".

Forgetting the diagram Vitalik put out because it's meaningless, let's try to simplify Ethereum's current network first. The diagram below essentially shows all the light-clients in pink and the "good" full-nodes in purple. Your fast/warp sync node may be purple now, until it can't be, or you give up on upgrading/maintaining and just use the light client feature, and then it joins the pink group.

As time moves forward, the pink nodes increase while the purple decrease. This is inevitable because it's what everyone is already doing. Do you run a full-node or a light client? Do you run anything at all? Switching to the light client is consistently recommended "if syncing fails". That's not a fix.

Don't worry though, Vitalik is here to save the day. He's turning "nodes" into SPV clients that just sync the block headers:

But what does that mean? Well, fortunately I wasted a lot of time writing and drawing this up too, so I can explain it visually, but first let's start with words:

In Bitcoin you either fully validate, or you don't. You're either:

  • A Full-Node that does everything. You fully validate all transactions/blocks.
  • An SPV Client that does nothing, is simply tethered to a full-node, syncs just the block headers, and shares nothing. They are not part of the network. They shouldn't even be mentioned here, but I'm doing it to avoid confusion.

Again, there are 115,000 Bitcoin full-nodes that do everything.

https://twitter.com/StopAndDecrypt/status/1002666662590631942

You can either read about this in more depth in Part 2, or you can take a look at the standalone article below:

In Ethereum there are:

  • Full-Nodes that do everything. They fully validate all transactions/blocks.
  • Nodes that try to do everything but can't sync up because of peer issues, so they skip the line and use warp/fast sync, then "fully"-validate new transactions/blocks.
  • Light-"nodes" that are permanently syncing just the block headers, and I guess they are sharing the headers with other similar nodes, so let's call these "SPV Nodes". They don't exist in Bitcoin; again, SPV clients in Bitcoin don't propagate data around, they aren't nodes.

That Ethereum node count? I guarantee you those are mostly Light-Nodes doing absolutely zero validation work (checking headers isn't validation). Don't agree with that? Prove me wrong. Show me data. They are effectively operating a secondary network that only shares the block headers, but they're fraudulently being included in the network node count. They don't benefit the primary network at all and just leech.

In New-Ethereum (2.0) with Sharding, things change a bit. I've gone ahead and edited out this section because I wrote an entire second article on it that does a much better job of explaining this, and the differences between Bitcoin and old and new Ethereum (2.0):

This isn't scaling. When your node can't stay in sync it downgrades to a light client. Now with sharding it can downgrade to a "shard node". None of this matters. You're still losing a full-node every time one downgrades. What's even worse is they are calling all the pink dots nodes even though they are just syncing the headers and trusting the purple nodes to validate.

How would you even know how many fully validating nodes there are in this setup? You can't even tell now because the only sites tracking it count the light clients in the total. How would you ever know that the full-nodes centralized to, let's say, 10 datacenters? You'll never know. You. Will. Never. Know.

On the other hand, Bitcoin is built from the footing up to prevent this:

https://twitter.com/_Kevin_Pham/status/999152930698625024

So what are you going to do?

What should you do?

Are you a developer? Take everything you've learned and start developing applications on top of a good blockchain. One that isn't broken.

Are you a merchant? Start focusing on readying your services to support payment networks. Ones that are built on top of a good blockchain.

Are you an investor? Take everything you've invested and start investing in a good blockchain. One that isn't going to die in the coming years.

Are you a gambler? Buy EOS. It's newer, just as shitty for all the same reasons I mentioned above, but no one knows it yet.

Are you an idealist? This is definitely not the chain for you. Find one that is.

https://twitter.com/StopAndDecrypt/status/992766974022340608

Part 2

If you're interested in running a Bitcoin node that will never go out of sync or demand that you update your hardware, check out this tutorial I put together:


Source: https://medium.com/hackernoon/the-ethereum-blockchain-size-has-exceeded-1tb-and-yes-its-an-issue-2b650b5f4f62
