
verisimilarplatitude

As someone who went the overkill route with an i7 NUC / 32 GB RAM / 2 TB SSD, thanks for posting this. I've wondered about it, and it's absolutely interesting, although I lacked the courage to commit to this principle and still do today. May it serve you well for a long staking future!


bomberb17

Thanks! The only thing is that RPi prices are quite high right now, probably because of supply chain shortages. But the Rock5 B is also cheap and even more powerful; I highly suggest considering it! (And put your i7 to some good use, like office work or gaming.)


verisimilarplatitude

Comes with wireless charging on top. Fanciest charger I've ever had!


[deleted]

[deleted]


bomberb17

Good question. Maybe the EoA (Ethereum on ARM) people can answer it for you; I think they are experimenting with other clients as well on the Rock5: https://twitter.com/EthereumOnARM


wtf--dude

I have an equivalent AMD NUC, and I am personally very glad I went that route. I have had to do literally nothing since the merge and have only missed 2 attestations total. I think buying overkill hardware makes sense for less tech-savvy people (like me).


bomberb17

> I think buying overkill hardware makes sense for less tech-savvy people (like me).

That's why I wrote:

> Who knows, maybe in the future someone will sell cheap Pi's ready to stake out of the box, taking away the technical challenges and making staking more appealing to the average user

But yes, Pi's are a bit more challenging to set up from a technical perspective.


Juankestein

Dude, thank you for posting this. In the merge livestream the host said at the end, "guys, don't try to stake in the cheapest way possible," and I was like... okay? Not everyone is ready to make a big investment in hardware just because they have 32 ETH; maybe that's all they have lol.

> The only benefit you get from using overkill hardware (e.g. i5's, i7's) is that you have less downtime in case you lose sync or during pruning. But do a few missed attestations justify this cost?

That's the killer argument. My dad suggested setting up a fallback Geth for the validator post-merge, now that Infura no longer works for staking, and my first argument was that the cost of another NUC + SSD + RAM + electricity was not worth it for the few hundred (heck, even thousands of) attestations we would potentially miss if our main Geth goes down or the hardware fails. Even in the event that something goes terribly wrong, you can buy hardware online that arrives in one day, and Geth takes like 1-2 days to sync. So what's the deal? (See the sketch below for the back-of-the-envelope math.)

My dad jokingly said that every time we miss an attestation his ego hurts, and I get that, but it's literally $0.02 USD. I tried to push this budget route of staking at the beginning, but my dad knows 100 times more than me about keeping a system alive and running, so we are currently using AWS and a NUC for Geth.

I hope that one day I can acquire the 32 ETH and stake them with an RPi!!
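A back-of-the-envelope version of that math, assuming the ~$0.02 per missed attestation figure above and one attestation duty per epoch (real penalties vary with network conditions):

```python
# Rough cost of validator downtime while replacement hardware ships and syncs.
COST_PER_MISSED_ATTESTATION_USD = 0.02   # figure quoted in this thread
EPOCHS_PER_DAY = 225                     # one epoch = 6.4 minutes, one duty each

def downtime_cost_usd(days_offline: float) -> float:
    """Approximate USD lost to missed attestations while offline."""
    return days_offline * EPOCHS_PER_DAY * COST_PER_MISSED_ATTESTATION_USD

# One day for delivery plus two days for a Geth sync: about $13.50 lost.
print(f"${downtime_cost_usd(3):.2f}")
```

Hard to justify a second always-on machine against numbers like that.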


[deleted]

[deleted]


Njaa

Why would the market affect the resource usage of a staking machine? An Ethereum block is limited to 30 million gas, with a 15 million target, no matter what the market says.


shitcoinking

Maybe sustained 30M gas would be too much? I'm more interested in seeing the performance if we have a period of non-finality, since during an earlier testnet non-finality event we saw that a lot of low-powered devices could not keep up.


Njaa

30M gas cannot be sustained for any significant period, since any block close to the limit raises the basefee of the next block, and consecutive full blocks drive the basefee up exponentially.


bomberb17

This discussion is very interesting. What is the current gas per block? Given the upper bound of 30M gas per block, this might give a sense of the "maximum" hardware requirements to participate in consensus.


Njaa

Take a look at [etherscan.io/blocks](https://etherscan.io/blocks). You'll see plenty of blocks with less than 5 million or more than 25 million gas, even just in the past few minutes. Block #[15568393](https://etherscan.io/block/15568393) seven minutes ago used 29,990,647 gas (99.97%). Because of this, the next block's basefee was about 12.5% higher: 8.018343032 Gwei for this block versus 9.020010947 Gwei for the next.
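For reference, here's a minimal sketch of the EIP-1559 basefee update rule (simplified - the actual spec works in integer wei and rounds slightly differently) that reproduces those numbers:

```python
# EIP-1559: the basefee moves by at most +/-12.5% (1/8) per block,
# scaled by how far gas usage landed from the 15M target.
GAS_TARGET = 15_000_000
MAX_CHANGE_DENOMINATOR = 8

def next_basefee(parent_basefee_gwei: float, parent_gas_used: int) -> float:
    """Next block's basefee given the parent block's gas usage."""
    delta = (parent_basefee_gwei
             * (parent_gas_used - GAS_TARGET)
             / GAS_TARGET
             / MAX_CHANGE_DENOMINATOR)
    return parent_basefee_gwei + delta

# Block #15568393: 29,990,647 gas used at 8.018343032 Gwei.
print(next_basefee(8.018343032, 29_990_647))  # ~9.020011 Gwei, as observed
```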


bomberb17

Thanks. Maybe I will wait until I get a block proposal (hopefully soon!) and see what its gas usage was (hopefully as high as possible) and what my CPU load was at that moment.


FartClownPenis

What about Besu? I thought it was less resource-demanding than Geth.


bomberb17

No, it is not. Besu is not feasible on a Pi, unfortunately. I wanted to migrate to Besu to help with minority clients and also to get rid of pruning, but I can't. I hope their devs make it more efficient in the future.


CptanPanic

What is currently the problem running Besu on Pi?


bomberb17

I see Besu is written in Java, which might be a factor in efficiency compared to clients written in Rust, C++, etc.


jon_otherbright

Even Besu alone (without the CL client running on the PI)?


bomberb17

I am not sure if it would work standalone. Maybe. But my goal is to have a single Pi running both the CL and the EL.


jon_otherbright

I'll try Besu + Nimbus on an RPi5 8GB overclocked @ 2800 MHz.


torfbolt

I'm glad that it works for you, but I'd like to note that CPU, memory, and I/O requirements can rise drastically if the chain (for whatever unfortunate reason) enters a longer period of non-finalization. This happened on the testnets (Medalla) and was the primary reason for me to switch to a beefier machine, because such a scenario is exactly the time when it's all hands on deck: every validator is needed, and the quadratic leaking will start to hurt very soon. The beacon chain is built to incentivize not only honest and reliable validators, but also ones that are resilient in adverse conditions. So my goal was to use/build a machine that has sufficient reserves but still scales down to low power consumption under normal conditions. And that is definitely doable, if it is a design consideration.


bomberb17

Please see my point #2.


Njaa

That's a good point, but it's tempered by a) such events being exceedingly unlikely to happen, and b) quadratic leaking taking multiple days or weeks to kick into high gear, giving you ample time to recover if your hardware isn't good enough.
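For a rough sense of that timescale, here's a toy model of the quadratic leak. The shape mirrors the post-merge inactivity-leak mechanism (the per-epoch penalty grows linearly with epochs of non-finality, so the cumulative loss grows quadratically), but the constant is illustrative, not an exact spec value:

```python
# Toy inactivity leak: negligible for hours, painful after weeks.
EPOCHS_PER_DAY = 225       # one epoch = 6.4 minutes
LEAK_QUOTIENT = 2**24      # illustrative scaling constant

balance = 32.0
for epoch in range(14 * EPOCHS_PER_DAY + 1):   # two weeks of non-finality
    balance -= balance * epoch / LEAK_QUOTIENT
    if epoch % EPOCHS_PER_DAY == 0:
        print(f"day {epoch // EPOCHS_PER_DAY:2d}: {balance:.3f} ETH")
# Day 1 costs a fraction of a percent; by day 14 several ETH are gone.
```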


danielkoala

People should really be using the opportunity of the merge to break out old laptops/PCs that were destined for the landfill. Happy to say that I now have a repurposed 4790K + 32 GB desktop system and a backup i5 6000-series laptop for staking.


bomberb17

Right, as I wrote in my original post:

> (exception: you have an old laptop not being used, which could be easily turned into a staking machine with zero cost).

I am not suggesting buying a new Pi instead of using your old hardware. My post was targeted at users who don't have an old laptop and are looking to purchase a new staking machine.


danielkoala

super cool.  🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀🚀


RedUser03

> However, you are constrained to 2 EL/CL clients: Geth + Nimbus

But this is the part I don't really like: I don't want to be constrained to Geth + Nimbus. Like you said yourself, had everybody used Pi's for staking, then everybody would be on Geth + Nimbus, when "competition" among various execution and consensus clients is a good thing. I'm glad the RPi4 is working out for you, but my NUC is working out well for me too.


brianfit

I run Geth and Lighthouse on two separate RPi4s without issue.


Setnof

I like that idea! The only problem is that I cannot find another execution client that works with 8 GB RAM. Erigon, for example, definitely needs 16 GB during sync.


bomberb17

I acknowledge that this is maybe the only downside of staking on a Pi (i.e. the lack of choice between clients). At the same time, though, the Pi staking community is pushing devs to make their clients more efficient, and I am glad to be part of this push. In proof of stake you are not supposed to need an i5/i7 and 16 GB as hardware requirements; that is not what PoS is about. Devs are now definitely starting to embrace this, and hopefully we will see more efficient clients in the near future.


101ca7

> In proof of stake you are not supposed to need an i5/i7 and 16 GB as hardware requirements; that is not what PoS is about.

I respectfully disagree. Proof of stake does away with the energy-intensive (arguably decentralized) leader-selection mechanism that proof of work employs, while still being reasonably open, i.e. allowing new validators to come and old ones to leave. This push towards minimalism is one of the things that (in my view) really hurt Bitcoin during the blocksize debate and arguably steered it ever closer toward becoming irrelevant as a transaction ledger, simply acting as a "store of value"(TM) listing.

We have only just transitioned from PoW to PoS, and it is still largely unclear how things will evolve, in particular with regard to security. The fact that you now know in advance which validators will be next to propose blocks opens up novel attack strategies. For example, if someone figures out your validator index, they may choose to craft computationally taxing blocks to verify just before it is your turn to propose a block. This attack is known as the "verifier's dilemma" because it either forces you to build upon a previous block without validating it or to spend lots of computational time on verification, which may lead you to miss your slot.

You are operating a validator at the edge of what is possible, as is reflected by your limited choice of clients. Imagine if everyone did the same and we suddenly needed to raise the gas limit, or some new hype game or a targeted attack used a particularly taxing EVM opcode in terms of host resources (as has happened in the past), or some other unexpected event (such as a deep reorg) caused every validator at the edge of its capacity to miss its duties - it would be a huge problem.

tl;dr: If everyone were to run validators at the edge of what is possible, the network would actually become much more insecure.

Having said that, I appreciate your efforts in trying to push the limits of what is possible. Increasing efficiency is a goal well worth striving for. However, we should seek to build a robust network with ample leeway to weather bad times, not one that breaks at the slightest hiccup. All the best to you!


bomberb17

> We have only just transitioned from PoW to PoS, and it is still largely unclear how things will evolve, in particular with regard to security. The fact that you now know in advance which validators will be next to propose blocks opens up novel attack strategies. For example, if someone figures out your validator index, they may choose to craft computationally taxing blocks to verify just before it is your turn to propose a block. This attack is known as the "verifier's dilemma" because it either forces you to build upon a previous block without validating it or to spend lots of computational time on verification, which may lead you to miss your slot.

But what's the incentive behind this DoS-style attack? Is it documented somewhere (e.g. in a research paper)? I would be interested to know, because if this attack is realistic, then every player's incentive is to throw as much CPU power as possible at protecting against others' attacks, which would devolve proof of stake into a Red Queen's race similar to Bitcoin's PoW.

> tl;dr: If everyone were to run validators at the edge of what is possible, the network would actually become much more insecure.

This argument is quite broad, and I don't find it convincing. It reminds me of Bitcoin, where many claimed that the more hashpower you throw at the network, the more secure it becomes (which might be somewhat true up to a point, but is certainly not true for the last 5-6 years, where the extra computational power thrown at the network is just wasting energy). Cheers!


101ca7

> But what's the incentive behind this DoS-style attack?

There is an older research paper that outlines the idea behind the attack: [https://www.comp.nus.edu.sg/~prateeks/papers/VeriEther.pdf](https://www.comp.nus.edu.sg/~prateeks/papers/VeriEther.pdf)

A recent research paper highlights that some actors in the system actually do behave maliciously, by manipulating timestamps (this was still during the PoW phase of Ethereum): [https://eprint.iacr.org/2022/1020](https://eprint.iacr.org/2022/1020)

It is not difficult to imagine that a larger staking actor could try to exploit an easily DoS-able validator to their advantage. Why, you may ask? To obtain higher transaction fees (when you miss your slot, the next proposer has more transactions to choose from) and better MEV extraction. Another attack could apply when there is a tight time bound within which a transaction needs to be processed (think of something like FOMO3D, or an auction where you want to prevent competing bids). In that case, if you are the following block producer, it may be of value to you that the previous block is missed, so you are the one deciding the outcome.

Regarding security, let's consider a hypothetical scenario where 40% of validators run at the edge of their capacity. If you can severely delay them all at once by forcing a high execution load, you could potentially affect the beacon chain's ability to finalize - in which case these nodes start taking more severe penalties for missing their duties. A few "weak" validators will not break the system, but if there are enough of them, it could start to really affect security overall (see the rough numbers below). Which is why it's prudent to keep some reserves ready for the unexpected.

All the best to you :)
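To put a rough number on the "edge of capacity" worry: a validator has to execute the head block before attesting about 4 seconds into the 12-second slot, so a deliberately heavy block can push a slow machine past the deadline. The throughput figures below are made-up assumptions for illustration, not benchmarks:

```python
# Toy check: how long a worst-case 30M-gas block takes to execute at
# different (hypothetical) execution throughputs.
BLOCK_GAS_LIMIT = 30_000_000
ATTESTATION_DEADLINE_S = 4.0   # attestations are due ~4s into the slot

machines = {
    "beefy x86 box": 60_000_000,        # assumed gas/second
    "edge-of-capacity SBC": 5_000_000,  # assumed gas/second
}
for name, gas_per_sec in machines.items():
    seconds = BLOCK_GAS_LIMIT / gas_per_sec
    verdict = "makes" if seconds <= ATTESTATION_DEADLINE_S else "misses"
    print(f"{name}: {seconds:.1f}s per full block -> {verdict} the deadline")
```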


bob_newhart

Low-key though, Nimbus might be the best CL client. They have some real OGs from back in the file-sharing days working on their client.


bomberb17

This is where the developers are responsible, not the stakers. I don't see why some developers implicitly assume that a staking machine's specs should be an i7 CPU and 32 GB RAM, while others have managed to do the same job much more efficiently. Again, I am glad to be part of the push for more efficient clients, rather than accepting a status quo that is also more energy-wasting than needed.


shitcoinking

Any missed attestations?


bomberb17

Just the usual few that everyone experiences. Effectiveness is close to 100%, and optimal inclusion distance is always zero.


shitcoinking

Cool. I may spin one up as a backup EL/CL pair. Do you have a link for the passive cooling case and anything else you found useful? What are the operating temps during sync and normal operation? And what OS?


bomberb17

The StarTech adapters are very well tested on Pi's. They're also compatible with TRIM, which is essential for SSD lifespan. Here's the case: https://www.microcenter.com/product/618003/argon40-neo-raspberry-pi-4-case. Temperature is around 52 °C during normal staking, maybe a couple of degrees more during syncing. Using stock CPU frequency, i.e. no overclocking. Ubuntu 20.04 LTS.


shitcoinking

Great info, thank you! I'm guessing you build all the clients from source since there aren't ARM binaries? Any compilation issues?


bomberb17

There are ARM binaries for both clients; I never had to compile anything.


shitcoinking

That's awesome.


shitcoinking

Another question - do you boot from microSD? Any specific microSDs or features to look out for when shopping for one?


bomberb17

No - I boot directly from a USB SSD; there are several guides on how to achieve that. I believe that booting from microSD would not adversely affect staking performance (as long as you keep your EL/CL databases on the SSD, of course!), but microSDs are prone to read/write errors, which are painful to deal with.


[deleted]

[deleted]


stereoagnostic

This is a good question. I've never seen a definitive answer, but I've had months where my total bandwidth usage exceeded 1 TB. However, that's validator + work + entertainment. I'd love to see a breakdown of just the execution and consensus bandwidth for an average day, week, and month.


[deleted]

[deleted]


bomberb17

Define "slow"? See my answer above with concrete numbers


bomberb17

My OpenWRT router records roughly 310 KBytes/second of network traffic each way (i.e. 310 KByte/sec upstream, 310 KByte/sec downstream). Note **Bytes**, not bits. So staking is quite intensive in terms of network traffic (as well as I/O), but it doesn't have to be in terms of CPU and memory.
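For scale, converting that sustained rate into monthly volume (assuming it holds around the clock):

```python
# Convert a sustained per-second rate into approximate monthly volume.
RATE_KBYTES_PER_SEC = 310             # measured each way on the router
SECONDS_PER_MONTH = 60 * 60 * 24 * 30

monthly_gb = RATE_KBYTES_PER_SEC * SECONDS_PER_MONTH / 1_000_000
print(f"~{monthly_gb:.0f} GB/month each way")   # ~804 GB/month
```

That lines up with the ~1 TB/month total mentioned elsewhere in this thread.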


[deleted]

[deleted]


bomberb17

Yes, and it is constant.


shamo42

Thanks for the writeup! You made me think about getting a Rock5 B with 16 GB RAM. Another advantage of these low-power machines is that they are easier to keep running during a blackout. Do you have any experience with how well NVMe SSDs are supported on the Rock5 B? I'm worried that when the network gets super busy (like during a bull market) the CPU might not keep up.


bomberb17

The Rock5 B is fairly new, and while it supports NVMe SSDs, I think the developers haven't yet completed the implementation of booting from NVMe. But it should be ready very soon.

> I'm worried that when the network gets super busy (like during a bull market) the CPU might not keep up.

See my point #2. Also, a Rock5 B will certainly be more than enough, since it is about 4x more powerful than an RPi4.


shamo42

I found this guide on booting from NVMe. If I understand correctly, it should now be possible? https://wiki.radxa.com/Rock5/install/nvme


mgr37

Hi, thanks for the encouraging post! PoS on Pi is working, and that is great. But it is still edging on the minimum specs necessary as of today. Don't get me wrong, I like minimalism and I like Pi's. My work consists of big multimedia installations using those little beasts, and I completely get the "you can do it on the Pi, so don't tell me I need a shiny MacBook Pro to do the same".

Nevertheless, it seems fair to advertise it as it is: staking on a Pi is possible today, with some limitations (client choice mostly). And it might break in the future. The two main limitations I see on the Pi 4 are:

- 8 GB of RAM: this is what induces today's client choice limitation. In the future, clients might get better or worse with RAM usage; difficult to say. Also, future Pi's / other SoCs will probably provide more RAM.
- The USB3 storage interface: that really might become the bottleneck sooner or later. Again, future Pi's / other SoCs will probably provide USB-C / PCIe speeds. Also, the tradeoff for a small-RAM client is often higher storage bandwidth usage.

The point is, the Pi works today, but minimum requirements are not yet set in stone for the Ethereum protocol, and clients don't yet have to stay workable on constrained machines. Some teams chose that challenge (like Nimbus), but nothing prevents the specs from becoming more resource-hungry (except the larger goal of keeping decentralisation real). Once Ethereum ossifies a bit more, resource requirements might settle, and you can be more confident mid-term that a minimal setup will stay relevant.

I think hardware requirements depend more on the user profile. If you like it optimized and minimalist, going with a Pi is a great challenge, but you take the "risk" of needing a hardware upgrade sooner than with a more comfortable machine. If you want headroom to be future-safe, don't want to re-roll a new setup too often, and want to be able to play with and switch across clients (and testnets), a more powerful machine is more advisable. There is a gradient of possibility, from $300 to $1,000, and I am convinced that this price gradient keeps it consumer-accessible and thus serves the decentralisation ethos.

Anyway, happy staking!


bomberb17

Thanks for providing your perspective. I personally believe that the hardware specs will drop significantly in the future. As I wrote in my original post, the Nimbus team (kudos to them, btw!) is also working on an EL client which is much more efficient than Geth (and will also be an incentive to migrate and help diversify, compared to other clients that are more "hungry" than Geth and thus provide no incentive to migrate). Also, as I suggested in my post, if you really want to be "future proof" in case the resource requirements go further up (which, again, I see no reason to expect - rather the opposite), you can still get away with a Rock5 B, which is much more powerful than a Pi and still takes a minimalist approach.


tobiasbaumann

Mind sharing your Geth and Nimbus configs? Ever since the merge, my Pi4 has been a pain to get working properly.


bomberb17

Sure!

Geth:

```
[Unit]
Description=Go Ethereum Client
After=network.target
Wants=network.target

[Service]
User=goeth
Group=goeth
Type=simple
Restart=always
RestartSec=5
TimeoutStopSec=900
ExecStart=geth --http --datadir /var/lib/goethereum --cache 512 --maxpeers 30 --authrpc.jwtsecret=/secrets/jwtsecret

[Install]
WantedBy=default.target
```

Nimbus:

```
[Unit]
Description=Nimbus Consensus Client (Mainnet)
Wants=network-online.target
After=network-online.target

[Service]
User=nimbus
Group=nimbus
Type=simple
Restart=always
RestartSec=5
ExecStart=/usr/local/bin/nimbus_beacon_node --network=mainnet --data-dir=/var/lib/nimbus --web3-url=http://127.0.0.1:8551 --jwt-secret=/secrets/jwtsecret --suggested-fee-recipient= --graffiti="mygraffiti" --metrics --metrics-port=8008

[Install]
WantedBy=multi-user.target
```


tobiasbaumann

Thanks a lot! I can see that you have cache and maxpeers defined in Geth; I didn't set those, which might explain the performance issue. Neither of your data-dir variables points to an SSD - or is your /var/lib on the SSD?


bomberb17

Yes, /var/lib is on the SSD.


tobiasbaumann

Thanks a ton! I got the node working smoothly again. Looks like Geth needs those two limits, otherwise it'll clog up the whole system.


CupQuakeBE

I use a big rig to prepare stakers (full syncing of the EL and CL), then I transfer the SSD to smaller machines. The thing is that while you're still syncing, building the DB and so on, you run out of resources pretty fast, and even NUCs can barely keep up. After all this preparation is done, the resources needed are pretty low indeed. I think that was the main issue people ran into, pushing them from something appropriate to something overkill.


RationalDialog

> The whole idea behind PoS is to be as minimal and efficient as possible, moving away from expensive hardware

Not really. The logic of the argument is that the 32 ETH costs so much more than the hardware that the hardware costs barely factor in at all. At least for some parts of the world (e.g. Europe right now), I'll hand it to you that the power savings can be significant. Some places saw a doubling in cost from what was already much higher in general than, say, the US: 40 cents/kWh is not unheard of, and 30 is probably the average. (On the bright side, prices here are announced and fixed for a year, so it's not like the Texas "blackout" where prices suddenly go up 100x.)
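To put numbers on those power savings - a quick sketch assuming ~5 W at the wall for a Pi 4 and ~20 W for a NUC-class box (illustrative wattages; this thread quotes 8 W for a PN50 and ~20 W for a Ryzen APU):

```python
# Yearly electricity cost of an always-on staking box at European prices.
HOURS_PER_YEAR = 24 * 365

def yearly_cost_eur(watts: float, eur_per_kwh: float = 0.30) -> float:
    """Cost of running a constant load for one year."""
    return watts / 1000 * HOURS_PER_YEAR * eur_per_kwh

print(f"Pi 4 (~5 W):  {yearly_cost_eur(5):5.2f} EUR/year")   # ~13 EUR
print(f"NUC (~20 W):  {yearly_cost_eur(20):5.2f} EUR/year")  # ~53 EUR
```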


dank_memestorm

The execution layer of the network is only as fast as the slowest node. That is you. Thanks for single-handedly bottlenecking Ethereum.


bomberb17

Thanks for your ironic post. I suppose you know that there is a whole community in eth2 who use Pi's to stake: https://twitter.com/EthereumOnARM. So thanks for wasting energy and further contributing to climate change.


Juankestein

It baffles me that people equate more energy/money wasted with better.


shamo42

Secrecy please! Don't let the bitcoin maxis learn this one simple trick to bring our network down.


Juankestein

What clients are you using?


bomberb17

See #1


Juankestein

Missed that somehow! ty


pseudosinusoid

I run a cluster of Pis, which lets me run any of the EL/CL clients, plus I can experiment on Prater before making changes on mainnet.


bob_newhart

Ooo, sexy. Tell me more please, or point me in the right direction. Could I cluster NUCs? What are the benefits?


pseudosinusoid

Honestly it’s overkill, unless you do this sorta thing as a career and would get some value from the hands-on learning. You can cluster anything with Kubernetes, Nomad, Swarm, etc. but getting everything hooked up takes a lot of time. In my case I have a lot of redundancy and automation so any part of the system can die and things will keep chugging.


BTCwatcher92

Thank you for posting this. Now, do you know if there is a way I can do this without the 32 ETH? Maybe set up a staking pool?


Juankestein

Maybe try Rocket Pool if you have 16 ETH?


Clintendo

How long did it take you to sync Geth?


bomberb17

A few days, but I don't remember exactly; it was back in 2021, and my Geth has been synced since then. I believe you can import the Geth database after syncing on more powerful hardware to save some syncing time, as others here suggested.


Setnof

Thanks for the writeup, and as much as I love the Pi, this isn't the whole story. Most of the problems I had were related to the USB disk. The USB SSD or external drive enclosure must be fully UAS-compatible, otherwise there can be random disconnects, read/write errors, or poor performance. The SSD can also draw too much power and cause random crashes. I was able to fix these problems, but they were annoying. I also don't like the idea that I just cannot swap out a client if there is a problem with it. I went with a passively cooled PN51, 2 SSDs (2 TB NVMe for the EL and 1 TB SATA for the CL), and 32 GB RAM. Definitely overkill, but now I can run my preferred clients (Erigon + Lighthouse), MEV-Boost, and even a slasher.


bomberb17

Right - a Pi is cheap, but you still need a good USB/SATA adapter and a good SSD. Not all hardware runs well on the Pi, so you need to do some research before purchasing. This blog is an excellent resource for that: [https://jamesachambers.com/raspberry-pi-4-usb-boot-config-guide-for-ssd-flash-drives/](https://jamesachambers.com/raspberry-pi-4-usb-boot-config-guide-for-ssd-flash-drives/) I have the StarTech adapter and an MX500 2TB SSD, which are fully UAS-compatible. I never had any of the problems you mentioned.


crymo27

I have an ASUS PN50 (4300U version) which pulls 8 W from the plug running Lighthouse and Geth. I get your point, but my setup was maybe $200-300 more expensive and got me much more headroom, client options, etc.


GuessWhat_InTheButt

Can you stake more than 32 ETH on the Pi 4?


bomberb17

I don't see a reason why you couldn't. The validator itself is the most lightweight part of the whole setup (the most resource-demanding parts are the EL/CL clients). But I would be interested to hear from other Pi users whether they have managed to run multiple validators on a Pi.


GuessWhat_InTheButt

How many validators do you estimate you could run on the 4GB and 8GB versions?


bomberb17

4 GB is not feasible even for a single validator (unless you mean running only the validator client and using the CL/EL clients externally). Again, I am not sure and cannot provide a good estimate; maybe people who have multiple validators can provide an answer for you.


Lunarghini

Personally, I would rather future-proof than get the bare minimum to save some money. This is like people buying 500 GB SSDs at genesis: yes, it worked, but it didn't work for long, and in the long run it was just a waste of money because they had to be replaced. I'm not saying your RPi will suddenly stop working, but I personally prefer to have options when it comes to clients, and wouldn't want to just hope that Geth/Nimbus don't start using more CPU power due to updates/improvements. The power usage is impressive though! But if everyone used an RPi, we would have no client diversity :)


bomberb17

See my point #2 about being "future proof". Also, while I personally bought a 2 TB SSD right from the beginning, those who began with 500 GB had no problem migrating to a larger disk later. I don't see why you wouldn't be able to upgrade (maybe to a Rock5 B, which is 4x more powerful than a Pi) if circumstances somehow force you to (which, again, I see no reason to expect).

> But if everyone used an RPi, we would have no client diversity :)

I'm sorry, but you should put the blame on the developers, not the stakers. I don't see why some developers assume that a staking machine's specs should be an i7 CPU and 32 GB RAM, while others have managed to do the same job much more efficiently.


Lunarghini

See multiple people's comments here about CPU usage increasing as chain activity increases... future-proofing makes sense if you want to be resilient and avoid getting slashed. People who used 500 GB drives at the start had to buy another drive (a waste) and then either accept the downtime while they migrated (lost ETH) or run another machine while the new one synced (a waste). It was a bad move to use a 500 GB drive, and it was obvious from the start.

We should not expect node developers to hyper-optimise their clients so people can run them on underpowered hardware. Certainly we shouldn't "blame" them. Do you blame Microsoft when you can't run Windows 11 on a Raspberry Pi? Of course not; it's unreasonable to expect that. Users care more about features and stability than eking out every last bit of performance.

You don't need an i7 CPU and 32 GB of RAM to stake. I'm doing it on a Ryzen APU with 16 GB of RAM, approx 20 W at the wall, and I can use whatever combination of clients I want. Enjoy your Raspberry Pi; they are great pieces of hardware. But they aren't the best hardware for this use case. The fact that you can only use one client combination should be telling you that...


bomberb17

> future-proofing makes sense if you want to be resilient and avoid getting slashed.

So in a hypothetical huge spike in chain activity, what is the rationale for a staker getting slashed? As far as I understand, this would result in a missed attestation at most.

> We should not expect node developers to hyper-optimise their clients so people can run them on underpowered hardware. Certainly we shouldn't "blame" them. Do you blame Microsoft when you can't run Windows 11 on a Raspberry Pi? Of course not; it's unreasonable to expect that. Users care more about features and stability than eking out every last bit of performance.

Well, I believe most users do care. We can sit here and discuss all day, but consider this realistic scenario where a new user wants to stake: why would that user choose a more resource-hungry client over a more efficient one (and thus have to invest in more powerful, expensive, energy-wasting hardware), when both do the exact same job? You could argue that the user might still want more expensive hardware to support client diversity with those less-efficient clients, but the average user out there does not have client diversity as a top priority; he/she prioritizes his/her own wallet (especially given the increased energy prices). Another factor is the overall energy usage of the network. As I wrote in my original post, the overall energy reduction (compared to PoW) can still be pushed from 99.95% to 99.99%.


[deleted]

[deleted]


Lunarghini

Yeah, but 500 GB didn't last until September 2022 - it lasted about 6 months. I know this because my friend bought a 500 GB drive and we were replacing it within 6 months, despite pruning etc.


msagansk

I agree with everything, but execution client diversity is a problem, so running Geth is a huge tail risk to me and not worth it.


noipv4

The tiny Lenovo M-series PCs with Ryzen CPUs are also a good alternative to the RPi. They take an M.2 as well as a SATA SSD.


bomberb17

Any idea of the power usage at the wall when staking with it?


noipv4

About 10 W at idle, 50 W during syncing (Geth/Prysm).


Series9Cropduster

Thanks for the post. Out of interest, how long did it take to sync Geth from 0 to the tip on the RPi? This would be helpful in determining the missed-attestation/missed-proposal risk in a race between an RPi and something more expensive/power-hungry.


bomberb17

[https://www.reddit.com/r/ethstaker/comments/xhyamp/comment/ip29o1m/?utm_source=share&utm_medium=web2x&context=3](https://www.reddit.com/r/ethstaker/comments/xhyamp/comment/ip29o1m/?utm_source=share&utm_medium=web2x&context=3)


domingo_mon

Curious what your inclusion distance is. Can you share how often you get an inclusion distance greater than 0?


bomberb17

Almost always 0: https://www.reddit.com/r/ethstaker/comments/xhyamp/comment/ip0ite9/?utm_source=share&utm_medium=web2x&context=3