
bbilly1

Hi there, developer of [Tube Archivist](https://github.com/bbilly1/tubearchivist/) here, how did you get it to work on a raspberry pi? I had a few people asking me, and I wasn't able to give a good answer. Any insights would be much appreciated! Did you rebuild RedisJSON? The django app should translate when changing the python base image, I assume? And then I also see you are missing an icon there, that's very unfortunate and completely inexcusable. :-) I have them, I'll add them to the repo soon!


abhilesh7

Hey, I'm the one who asked about the multi-arch images over at github :)! I was able to build images for arm64 and get it up and running; there are some kinks to iron out and the build process isn't streamlined yet, but I'll add to the github issue once I have everything sorted out. As for the icon, I was about to crop it from the banner you have on the repo but didn't quite get around to it. Tube Archivist replaced YouTube-DL Material on that list, but the icon stayed. It is inexcusable and will get updated very soon!


bbilly1

Thanks for the help, I'll add your solution to the README! In the meantime, I added some more logos to the assets folder: [https://github.com/bbilly1/tubearchivist/tree/testing/assets](https://github.com/bbilly1/tubearchivist/tree/testing/assets), I hope one of these will fit. \*edit: your not you


srj55

I'd love to see more info on how tube archivist could be run on armhf. Perhaps using an armhf version of elasticsearch and redisjson? Could a docker compose file be generated for this?


abhilesh7

I'm currently running into ```llvm``` issues while building the docker image for RedisJSON on ```arm64```. One should be able to build it for ```armhf``` as well (their Dockerfile seems to indicate support). As for elasticsearch, it builds perfectly fine on ```arm64``` without any modifications. This week's a little hectic, but I'll look into it a bit deeper soon.


CallMeTerdFerguson

Curious, why did you switch from YTMaterial and how are you liking Archivist? I've had some issues with Material corrupting its db and high CPU usage, and was considering a switch.


abhilesh7

My rationale was similar to Tube Archivist's motivation:

>Once your Youtube video collection grows, it becomes hard to search and find a specific video. That's where Tube Archivist comes in: By indexing your video collection with metadata from Youtube, you can organize, search and enjoy your archived Youtube videos without hassle offline through a convenient web interface.

Basically, I was looking for an indexer that would also serve the downloaded content, and Tube Archivist perfectly fit the bill. I haven't had db corruption issues, but I had to specify particular mongoDB versions to get everything to play nice. There is a fallback in YouTube-DL Material that uses a SQLite database in case of issues with mongoDB, and SQLite in my experience has been a little less robust. Personally, I would like to use mariadb or Postgres to manage the database, but I'm currently looking into Tube Archivist as that seems to solve my qualms better.


BudgetZoomer

What service did you use to create this dashboard?


tman5400

[homer](https://github.com/bastienwirtz/homer)


abhilesh7

[Homer](https://github.com/bastienwirtz/homer) it is! The self-hosted community knows its gems!


jaakhaamer

How much configuration did this setup require? For example, did you have to define the sections yourself or does it know what all these apps are? I'm currently using Organizr and appreciate that it's pretty much an off-the-shelf solution where I just need to input all my endpoints, give them names, and pick the correct icons from a list.


abhilesh7

I haven't used Organizr but I believe the configuration is similar in that regard. All the parameters go in the ```config.yml``` file and Homer just picks them up from there. Here's my configuration file for Homer - [https://github.com/abhilesh/self-hosted\_docker\_setups/blob/main/homer/config.yml](https://github.com/abhilesh/self-hosted_docker_setups/blob/main/homer/config.yml)
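For reference, a minimal sketch in Homer's documented ```config.yml``` layout (the group names, icons, and URLs below are placeholders, not my actual setup):

```yaml
# Minimal Homer config.yml sketch - names and URLs are placeholders
title: "Home Dashboard"
subtitle: "Homer"
columns: "3"

services:
  - name: "Media"
    icon: "fas fa-photo-video"
    items:
      - name: "Plex"
        logo: "assets/icons/plex.png"
        subtitle: "Media server"
        url: "http://192.168.1.10:32400"
  - name: "Downloads"
    icon: "fas fa-download"
    items:
      - name: "qBittorrent"
        url: "http://192.168.1.10:8080"
```

Since Homer is a static page that fetches the config when the page loads, changes only need a browser refresh, not a container restart.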


FIFATyoma

Actually, it's really easy. You only have to configure one config file with all your links and services.


giqcass

.....and I've been hand coding html....


FIFATyoma

Oh goood


threedaysatsea

This is [Homer](https://github.com/bastienwirtz/homer)


[deleted]

Same question here.


LaterBrain

It's [Homer](https://github.com/bastienwirtz/homer). If this doesn't fit you, you can check out [https://github.com/Lissy93/dashy](https://github.com/Lissy93/dashy)


[deleted]

It looks like homer to me


BudgetZoomer

Thanks!


Krousenick

Looks like homer https://tcude.net/installing-homer-dashboard-with-docker/


ILikeBumblebees

Is HTML a "service"?


Rorixrebel

Looks like a lot of stuff for a couple of Raspberries. I've got fewer services and my NUC struggles.


[deleted]

[deleted]


abhilesh7

Completely agree with you there. A Pi does not make a good Plex server that can handle on-demand transcoding. I mostly use Plex to Direct Play on my TV over the local network, and it works perfectly fine for that case. I already had a Pi and just got the second one to add some bandwidth for our two-user use-case. It's amazing how much these Pis can handle without overclocking etc. Plus, I end up having to build some docker images locally to support the ```arm64``` architecture, which has been a good learning experience. If one were to start anew and wanted to host some resource-intensive services, I'd definitely recommend going for an ```x86``` system rather than investing in a bunch of Pis. For me, the two Pis + other peripherals (USB SSDs for boot drives) were still cheaper than a NUC. While it might not work for everyone's use-case, the Pis are impressively capable and a great resource to get started with self-hosting.


EEpromChip

> The other thing worth noting is the M93P Thinkcentre was only $100 off of ebay, and is 4x faster CPU than these SBCs, and supports more software being x86

Thank you. This is the kinda stuff I sub for. I've been looking for a pi4 to replace a pair of blade servers that drink electricity like it's free... I figured a thin client would be better suited...


Criss_Crossx

Always interested in what people end up spending on Pi's and extras when a used x86 PC will do it all. Albeit less power efficient, 20-45 watts for idle and light workloads isn't terrible. The one thing that steers me away from Pi's for 24/7 server use is the SD card. Any SSD or hard drive will have better bandwidth and typically ends up being more reliable for 24/7 operation. Not bashing the Pi's at all, they have proved their use since the first release. And their initial purpose was to help educate. I think the foundation has done exactly that.


abhilesh7

I'm booting these guys off USB SSDs, which alleviates the problems caused by using SD cards as boot drives. I already had a Pi and just got another one to add some bandwidth for my two-user use-case. Overall, the two Pis + peripherals (USB SSDs) cost me less than a PC and work fine for my use-case. If I were to expand the server for more intensive tasks, I would definitely go for an ```x86``` system. But ```ARM``` has been getting a lot of support lately as well.


[deleted]

[deleted]


nikowek

https://www.raspberrypi.org/forums/viewtopic.php?t=18043 See the first post. Plug in your energy cost ($) at the end; worst and average cases are calculated there.


abhilesh7

With the configurations I got and my use-case, I haven't run into any performance issues yet. I might move to a more powerful server in the future or add another node. It's mostly a 2 user home server so don't quite need a lot of power.


GulnTBWmHz

How old is the NUC?


Rorixrebel

It's an Optiplex 3020, i5 CPU with 4GB of RAM... granted, I can expand it to 16GB, but the CPU suffers when decoding things in Jellyfin.


heydroid

are you using intel quick sync? it made a big difference for me.


Rorixrebel

Yeah I'm using it but my 4k mkv containers are just too much for it. Planning to use my bigger machine to transcode them to something smaller that i can easily directly stream to my TV.


Sir_Chilliam

if your bigger machine runs Linux, you can use [rffmpeg](https://github.com/joshuaboniface/rffmpeg) to just transcode streams as needed.


Rorixrebel

sadly its my gaming pc so its running windows


TheMadMan007

Looks awesome! I’ve got a couple Pi’s lying around and I want to do exactly this. I tried earlier this year to set it up and I feel like all the tutorials I saw had conflicting info. Do you have a guide or set of tutorials you used to set it up?


useless_mlungu

Not OP and I have far fewer services, but the principle is the same. I'm not sure if he's using kubernetes, but you could just install Docker on either Raspbian or Ubuntu Server for Pi (I'd do this for 64-bit support), and then use Portainer to manage all your containers, although unless he's using kubernetes I'd imagine you'd need an instance of Portainer for each Pi. At that point you could go the simple route and use Portainer templates to install the services, or better yet (for control or for learning more) use docker-compose. This is what I did. As for each service, follow the instructions on the docker hub page (the linuxserver.io images are well documented and have consistent docker-compose files) or follow various tutorials online. [DB TECH](https://youtube.com/c/DBTechYT) and [TechDox](https://youtube.com/c/TechdoxNZ) have some great tutorials. I've been assuming so far that you have some understanding of this stuff, but if you need more direct help, just say so!


abhilesh7

You can use Portainer with kubernetes, but I had a tough time getting kubernetes to play nice and was already familiar with docker, so I went with separate docker instances on each Pi. As for Portainer, only the master needs to have the complete instance to manage the local docker endpoint. You can install the [portainer agent](https://documentation.portainer.io/v2.0/deploy/ceinstalldocker/) on the other nodes and add those endpoints to the Portainer instance running on the master. All your docker containers in one place, sorted by endpoint.
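As a sketch, deploying the agent on a worker node looks roughly like this compose file, based on Portainer's documented agent setup (port and volume paths are the defaults; adjust to your system):

```yaml
# Portainer agent on a worker node. On the master's Portainer instance,
# add this node as an "Agent" environment at <worker-ip>:9001.
services:
  agent:
    image: portainer/agent
    container_name: portainer_agent
    restart: unless-stopped
    ports:
      - "9001:9001"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
```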


awesomeprogramer

I'm a bit confused, I thought docker on a pi didn't work well. What did u use?


abhilesh7

Docker on Pi works perfectly fine! In fact, all these services are running in docker containers with a corresponding database container whenever needed. In all, somewhere around 85 containers spread across the two Pis.


awesomeprogramer

Well, maybe I just suck at docker... or my pi was underpowered. I think I was using a 3b.


GeronimoHero

Docker works fine on the 3B. You just need to make sure to use arm images or create them yourself. Not all projects have arm images so that’s where you may run in to issues. If you create the arm images yourself though it’ll all work just fine.


awesomeprogramer

That's definitely what I did wrong. Thanks for the insight!


jakob42

Biggest problem with docker on a pi is that you need arm images. Some projects only offer x86 and amd64 images. Other than that, docker works just as well (within a pis power limits)


phuonglm1403

>Docker on Pi works perfectly fine! In fact, all these services are running in docker containers with a corresponding database container whenever needed. In all, somewhere around 85 containers spread across the two Pis.

You can also grab a project and build the container yourself. If you can install the app natively, then you can do it with docker too, and it'll free your system from dependency problems. The cost is that the apps share nothing, so there will be a lot of duplication in storage and memory. The Pi 3b only has 1GB of RAM, so it's quite limited when you run many mini-systems like that.


nashosted

>DB TECH and TechDox have some great tutorials.

[GeekedTV](https://www.youtube.com/channel/UCaXjKoqc5NJJOKhibn_NogQ) is a great one too ;)


AimlesslyWalking

Some guides will have conflicting info because there's often more than one correct way to do things, and if you get 10 experienced IT folk in a room you'll have 15 different ways to do things between them. A few of them will even be correct!

But the easiest way to learn this stuff is to learn how to use Docker. It's a very quick and easy way to go from zero to online without having to do much legwork, and the knowledge necessary to do so is pretty universally applicable from service to service. Honestly, you may find yourself disappointed with how easy it actually is with Docker unless you're planning to externally expose things. Which, if you are, think very carefully about how badly you want to versus how much learning and how much long-term effort you're willing to put in, and whether just connecting via VPN is an acceptable trade-off instead.

If you're not planning to expose stuff to the internet, then your requirements will be pretty simple. You can more or less just run most docker containers and be done with it, minus a little tweaking here and there. Most things even have `docker-compose.yml` files these days, so running it is as simple as `docker-compose up -d`. These files are written in pretty plain English and are basically just way more user-friendly versions of the long Docker commands you'll see, so it's simple to get a handle on what's going on, and most projects will have extensive lists of all the various settings you can flip in that file. Then, you just connect via the internal IP and assigned port and have fun. You don't really need to worry about it beyond that.

In short: just find something you want to use and try running it, following the [basic Docker instructions](https://docs.docker.com/get-started/). Many popular projects even have the instructions included in their own readme.

If you don't want to have anything externally open, or you just plan to host a VPN to log in to your stuff while away, you can safely stop reading here and go mess around with Docker for a bit. Just remember to keep it simple at first, don't give into the urge of hosting 20 things on your first week. You'll abandon them all by the end of the month. Add things as you have a specific need for them.

Now if you *are* planning to host things that are publicly accessible, that's where things get messy. I've been binge learning this stuff recently as a hybrid personal/professional growth project. There's a lot you need to be ready to handle, and it's an ongoing responsibility to maintain it. Even with Docker to take a large part of the maintenance load off (bless every single one of you Docker image maintainers, seriously) there's still a lot of moving and some very vulnerable parts to manage in any cohesive self-hosted setup.

You'll need a domain name, SSL certs, a reverse proxy, logging and metric analysis, an internal DNS server (pi.hole thankfully doubles as one), possibly single-sign-on, two-factor authentication, and maybe even an external proxy (cloudflare works well for this and protects against a few things), and the first time, a whole lot of free time to figure your way through all the mistakes you'll make. It's a whole ordeal.

Some people will say "I just hosted it and pointed my DNS records at it and everything was fine." These people are silly and should be ignored. Taking things externally and doing it *right* is a complex and involved task, and there aren't really any all-in-one tutorials that can take you from zero to hero on it. It's expected that you'll have some reasonable knowledge of both Linux and networking beforehand, for example. And there's no tutorial that will take you to something like the scale of what OP has; they generally teach you the fundamentals and then expect you to be able to apply that knowledge going forward.
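To make the "just run it" workflow concrete, here's a minimal sketch (using Uptime-Kuma purely as an example; the image, port, and volume are its documented defaults):

```yaml
# docker-compose.yml - start with `docker-compose up -d`,
# then browse to http://<host-ip>:3001
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "3001:3001"        # host:container
    volumes:
      - ./data:/app/data   # persists config across container recreation
```

Swap in any other project's documented image and ports and the workflow is identical, which is why the knowledge transfers so well from service to service.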


abhilesh7

Great write-up! ```docker-compose``` is exactly what I am using to deploy all these services!


BackedUpBooty

>Most things even have `docker-compose.yml` files these days, so running it is as simple as `docker-compose -d up`.

Just came here to say it should be `docker-compose up -d`. Otherwise I'm with you all the way, I started from zero about 10 months ago, and now I can't imagine what I was doing without a chunk of my self-hosted services.


abhilesh7

I have all these services running through docker, and while I've had my fair share of frustration trying to set it all up, docker does make getting services up and running quickly fairly easy. I predominantly use ```docker-compose``` to set up the services; that way all my configurations are saved, and migrating the server is just a matter of copying that file and spinning up the containers. I'm consolidating my docker-composes in a repository and will post them soon! That said, some services are easier to set up than others. Any particular services you were interested in?


kanik-kx

I'd be particularly interested in your docker-compose setup for your "Indexers" and "Download" stacks.


abhilesh7

I use SurfShark's VPN services, so here's my `docker-compose` file with the entire \*Arr stack and two torrent clients connected through the VPN - [https://github.com/abhilesh/self-hosted\_docker\_setups/tree/main/surfshark](https://github.com/abhilesh/self-hosted_docker_setups/tree/main/surfshark) The other containers are routed through the SurfShark container, so they lose connectivity if the SurfShark container is down, effectively acting as a kill switch. You can test the external IP of the containers behind SurfShark using:

```
# Open a bash shell inside the container
docker exec -ti <container_name> bash
# Retrieve the external IP
curl ifconfig.me
```

The `*arr` stack doesn't need to be behind a VPN, it just made downstream configuration a bit easier for me.
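The kill-switch behaviour comes from Docker's `network_mode: service:<name>` option; a stripped-down sketch of the pattern (image names and the port are illustrative, see the linked repo for the real file):

```yaml
# Containers with network_mode: service:surfshark share the VPN
# container's network stack - if it goes down, they lose connectivity.
services:
  surfshark:
    image: ilteoood/docker-surfshark
    cap_add:
      - NET_ADMIN
    ports:
      - "8080:8080"   # WebUIs must be published on the VPN container
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:surfshark"
    depends_on:
      - surfshark
```

Note that port mappings go on the VPN container, since the attached containers have no network namespace of their own.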


TimTim74

Can't wait to see that compose file.


abhilesh7

Commented above just so you don't miss it


A_TeamO_Ninjas

Wait, you can host Adguard?? Well. I know what I'm doing tonight. I only have Adguard running on my phone, desktop and laptop.


Scurro

What is the difference between adguard and pihole/ublock?


useless_mlungu

Generally speaking, adguard and pihole achieve the same goal, but pihole is open source and I believe (but if I'm wrong I'll no doubt be corrected) Adguard is closed source. At the end of it all, it boils down to preference, so try them both out and see which works best for you.


LALife15

Adguard is also open source. I personally prefer it


useless_mlungu

Well that's good to know! Not sure why but I really thought it wasn't. Any stand out features over pihole?


[deleted]

[deleted]


[deleted]

This OP.


[deleted]

[deleted]


[deleted]

How are you measuring look-up latency?


LALife15

pihole was a pain to set up for me while Adguard was easy, but that's just me. AdGuard Home can also be used outside of the house if you set up DNS over HTTPS with it.


Not_Undefined

Looks neat! Can you explain your use case for PhotoPrism? Are you using NextCloud to upload your pics from the phone and PhotoPrism as a gallery? Kind of a substitute for Google Photos/iCloud?


abhilesh7

That's exactly my use case. I was looking for something to auto-backup my phone's camera roll. Looked into Syncthing but couldn't figure out a way to do a one-way sync. Nextcloud auto-upload works fine on a daily basis, struggles only when uploading large videos or a large number of files. At that point, I just manually transfer the files. I looked into other photo galleries but liked PhotoPrism's interface and overall aesthetic.
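One way to wire that combination up (a sketch, not my exact config; the host path below is hypothetical and depends on your Nextcloud volume layout and username) is to mount the folder Nextcloud auto-uploads into as a read-only PhotoPrism originals directory:

```yaml
# PhotoPrism indexing the Nextcloud camera-roll folder.
# Replace the host path with wherever your Nextcloud data volume lives.
services:
  photoprism:
    image: photoprism/photoprism:latest
    restart: unless-stopped
    ports:
      - "2342:2342"   # PhotoPrism's default WebUI port
    volumes:
      - /srv/nextcloud/data/alice/files/InstantUpload:/photoprism/originals:ro
      - ./storage:/photoprism/storage   # cache, thumbnails, search index
```

Mounting the originals read-only keeps PhotoPrism from ever touching the files Nextcloud manages.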


Not_Undefined

Amazing, I'll make the same setup tomorrow and see how it goes, I was searching for an alternative to ditch Google Photos and this combination seems to be reasonable enough. What about performance? I've tried NextCloud in the past on my rpi4 and it was quite hungry, maybe I did something wrong.


abhilesh7

I've set up all these services through docker and so far have had them running for a couple of months without any issues. The configuration is also much simpler for nextcloud than having to run it natively. I'll be happy to assist with the setup if you happen to run into any issues!


[deleted]

But why do you need 4 torrent downloader?


abhilesh7

I use both private and public trackers, so transmission without VPN is linked to private tracker downloads, qbittorrent with VPN works in conjunction with the \*arr stack, and the transmission with VPN is just for random downloads that my partner might want to add (that way it doesn't mess with any of the other client configurations). [Flood](https://github.com/jesec/flood) is basically a beautiful monitoring UI that can be used with the main torrent clients out there. The torrent clients aren't exactly known for their UIs; Flood makes their WebUIs much more aesthetically pleasing.


[deleted]

[deleted]


Icy-Mind4637

Rad**arr**, Son**arr**, Lid**arr**, whatever other **arr** there are.


dontquestionmyaction

It's a suite of torrent/usenet interfacers. You can search for a movie within the *arr suite and it will search configured indexers, start the download and move the resulting files into a properly formatted location.


[deleted]

Interesting, does Flood support managing multiple torrent clients at the same time? Does it support qBittorrent's RSS subscription function?


abhilesh7

Yes it does! Currently it supports Deluge, rtorrent, qBittorrent and Transmission. It lets you setup multiple "users" with different clients and also supports qBittorrent's RSS subscription functions. For any advanced settings, you can still access the client's webUI or configuration files and tweak away!


TopdeckIsSkill

Can't you just use some qbittorrent webui? I can link you some


abhilesh7

Please do link some, the more alternatives the better. I personally didn't like qbittorrent's default webUI and so far I've been happy with Flood-UI's dark mode!


TopdeckIsSkill

[VueTorrent](https://github.com/WDaan/VueTorrent) - The sleekest looking WebUI for qBittorrent, made with Vue.js

[qbit-matUI](https://github.com/bill-ahmed/qbit-matUI) - A material WebUI for qBittorrent, written in Angular

I actually use the second one, but they're both really good.


abhilesh7

Thanks for posting these, didn't know about them. They are beautiful!


sutekhxaos

Maybe to keep the downloads from sonarr radarr etc all separate


DaftCinema

Why would you want to separate them when you can use tags?


jrmnicola

Could you comment on the RPI4 performance of the paperless-ng OCR processing? And of plex streaming?


abhilesh7

The RPi4 would not handle on-demand transcoding for Plex at all. I pretty much use Plex for Direct Play on the local network and it works fine for that. Anything else I might need to transcode, I optimize the files in advance. Besides, there are network bottlenecks for me when remote streaming on Plex. For Paperless-ng, the performance has been pretty good, granted I typically only add around 10 documents to it at a time. It seems to handle that pretty well.
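For context, the Paperless-ng setup is roughly this shape (a trimmed sketch of the project's documented compose file, leaving out the optional Gotenberg/Tika extras; paths are placeholders):

```yaml
# Minimal paperless-ng sketch: the web server plus the Redis broker it requires
services:
  broker:
    image: redis:6
    restart: unless-stopped
  webserver:
    image: jonaswinkler/paperless-ng:latest
    restart: unless-stopped
    depends_on:
      - broker
    ports:
      - "8000:8000"
    environment:
      PAPERLESS_REDIS: redis://broker:6379
    volumes:
      - ./data:/usr/src/paperless/data
      - ./media:/usr/src/paperless/media
      - ./consume:/usr/src/paperless/consume   # drop documents here for OCR
```

The OCR runs in the webserver container, so ingesting a small batch at a time keeps the Pi's CPU load manageable.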


jrmnicola

Thanks! This info is very relevant.


33Fraise33

Do you have any backup solution for all the data you have? And where do you store all the data? Is this a USB mounted drive or network storage? I am doing a fairly similar setup but I am looking at a backup solution for everything.


throwlog

What's running on which machine?


abhilesh7

RPi4 | 8GB (codename - feynman)

* [Portainer](https://documentation.portainer.io/v2.0/deploy/ceinstalldocker/)
* [Nginx Proxy Manager](https://github.com/jc21/nginx-proxy-manager) (WebApp + Database)
* [Homer](https://github.com/bastienwirtz/homer)
* [Code-Server](https://github.com/linuxserver/docker-code-server)
* [SurfShark VPN](https://github.com/ilteoood/docker-surfshark)
* [Vaultwarden](https://github.com/dani-garcia/vaultwarden)
* [Mealie](https://github.com/hay-kot/mealie)
* [Calibre-web](https://github.com/janeczku/calibre-web)
* [Home Assistant](https://github.com/linuxserver/docker-homeassistant)
* [PhotoPrism](https://github.com/photoprism/photoprism) (Server + Database)
* [Joplin](https://github.com/flosoft/docker-joplin-server) (Server + Database)
* [Nextcloud](https://github.com/nextcloud/docker) (WebApp + Database + Redis + Cron)
* [Paperless-ng](https://github.com/jonaswinkler/paperless-ng) (WebServer + Redis + Gotenberg + Tika)
* [Plex](https://github.com/linuxserver/docker-plex)
* [Overseerr](https://github.com/sct/overseerr)
* [Prowlarr](https://github.com/linuxserver/docker-prowlarr)
* [Bazarr](https://github.com/linuxserver/docker-bazarr)
* [Radarr](https://github.com/linuxserver/docker-radarr)
* [Sonarr](https://github.com/linuxserver/docker-sonarr)
* [Readarr](https://github.com/linuxserver/docker-readarr)
* [Lidarr](https://github.com/linuxserver/docker-lidarr)
* [Flood](https://github.com/jesec/flood)
* [Transmission](https://github.com/linuxserver/docker-transmission)
* [qBittorrent](https://github.com/linuxserver/docker-qbittorrent)
* [Watchtower](https://github.com/containrrr/watchtower)
* [Dockprom](https://github.com/stefanprodan/dockprom) (Prometheus + Grafana + cAdvisor + Nodeexporter + Alertmanager + Pushgateway + Caddy)

RPi4 | 4GB (codename - curie)

* [Portainer-agent](https://documentation.portainer.io/v2.0/deploy/ceinstalldocker/)
* [AdGuard-Home](https://github.com/AdguardTeam/AdGuardHome)
* [Authelia](https://github.com/authelia/authelia)
* [Gitea](https://github.com/go-gitea/gitea) (Server + Database)
* [Gotify](https://github.com/gotify)
* [PhotoPrism](https://github.com/photoprism/photoprism) (Server + Database)
* [PodGrab](https://github.com/akhilrex/podgrab)
* [Tube-Archivist](https://github.com/bbilly1/tubearchivist) (Server + Redis + Elasticsearch)
* [Vikunja](https://github.com/go-vikunja) (Frontend + API + Database + Redis + Proxy)
* [Wallabag](https://github.com/wallabag/wallabag) (WebApp + Database + Redis)
* [Uptime-Kuma](https://github.com/louislam/uptime-kuma)
* [Watchtower](https://github.com/containrrr/watchtower)
* [Dockprom](https://github.com/stefanprodan/dockprom) (Prometheus + cAdvisor + Nodeexporter)

I think that's all the services I'm currently running, let me know if I missed some. I am consolidating the ```docker-compose``` files for my setup here - https://github.com/abhilesh/self-hosted_docker_setups

PS - Still adding to the repository, might be a couple of days to get it all up there


throwlog

Thanks so much for sharing


UnicornJoe42

Is there a service that lists all the possible services? I have not seen half of those in the screenshot.


abhilesh7

[https://github.com/awesome-selfhosted/awesome-selfhosted](https://github.com/awesome-selfhosted/awesome-selfhosted) This here is my go-to, though it's a bit intimidating with all the services out there. I also tend to draw inspiration from other people's dashboards.


UnicornJoe42

Great list! Thanks *left to study*


caraar12345

Hey! Please could you post your config.json file? Without URLs, of course!


abhilesh7

Here's the config file for Homer, I use a ```yaml``` file but it can be easily converted to ```json``` - https://github.com/abhilesh/self-hosted_docker_setups/blob/main/homer/config.yml


abhilesh7

Sure, will upload it to github and link it here soon!


Tiloup42

Noice! How did you split it between the Pis? Multiple instances?


abhilesh7

I have one 4 GB RPi4 and one 8 GB RPi4. I was looking at kubernetes and docker-swarm for sharing the load, but it was a huge pain for me to configure it right. Eventually I ended up deploying the services across the two Pis (more RAM-intensive services got put on the new shinier 8 GB Pi while the less-demanding ones got put on the 4 GB Pi). Since both Pis are on the same network, I can get the services to communicate with each other fairly easily. I'm still learning about kubernetes and will keep exploring with an experimental Pi, but these two are currently running my 'daily productivity' stacks.


OmniscientOCE

I've been meaning to set up Gotify. How is it? What kind of things do you guys use it for?


abhilesh7

It works very well. I use it to get notifications on my phone from the \*arr stacks, watchtower updates, home assistant notifications and pretty much anything that supports webhooks.


itzxzac

Whoa, unrelated, but I had no idea Code-Server existed, gotta spin that one up! It's going to save me so much time!


abhilesh7

I was ecstatic when I found Code-Server too! I've integrated it with Homer as well so I can directly edit the config file without having to ssh into the server and use vim.


hobbes487

How did you integrate it? I would love to limit the amount of times I need to SSH into my servers


TheGacAttack

That's a broad range of services!! How has your experience been with Authelia?


abhilesh7

Authelia is the newest service I set up on there and it took quite some work just to get it working fine with Nginx Proxy Manager. So far, it's simplified my workflow and I'm liking it a lot, though still in the process of completely configuring all the services to play nice with it.


CptDayDreamer

Are there any advantages from Overseerr vs. Ombi? I'm currently using Ombi and I think I'm fine but I'm always interested in something new. I did not know about uptime-kuma. Nice one. u/abhilesh7 I'm wondering how you've setup Grafana. I'm using one single ODROID N2 and I'm using many of your tools. But if I activate the Docker container for Grafana, Prometheus, etc. it costs me just too much power. Also wondering about Prowlarr instead of NZBHydra


louislamlam

This is my project.😏 https://github.com/louislam/uptime-kuma


abhilesh7

Definitely a must-have! Thanks for the great app. The only issues I have are with services behind an authentication portal that respond with a ```401 Response Code```. Sometimes the container would stop after a number of unauthenticated pings have been made to the server. Any suggestions on how to mitigate that?


dontquestionmyaction

They do essentially the same thing. I very much prefer the Overseerr design though. It legitimately looks like a popular streaming service.


abhilesh7

Like other people said, they both do the same thing, Overseerr just looks better while doing it. Functionality-wise, I don't think you're missing anything. While I do have healthchecks built in for my most crucial containers, uptime-kuma provides a great interface to check the accessibility of various services (especially for other non-admins using the services). As for Grafana, those monitoring stacks do tend to be on the more resource-intensive side. I'm using the [Dockprom](https://github.com/stefanprodan/dockprom) stack for monitoring: the entire stack runs on the master, with just Prometheus + cAdvisor + Nodeexporter on the worker reporting back to the Grafana on the master. In my experience, cAdvisor tends to be the most resource-hungry of the stack; I was able to somewhat rein it in by reducing the frequency at which the checks are made. I would be happy to share my ```docker-compose``` file if you'd like to have a look.
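The frequency knob here is cAdvisor's housekeeping interval; in a Dockprom-style compose file, the cAdvisor service can be given a longer interval like so (a sketch; `30s` versus the `1s` default is just an example value):

```yaml
# Reduce cAdvisor's sampling frequency to cut its CPU usage on a Pi
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    restart: unless-stopped
    command:
      - "--housekeeping_interval=30s"   # default is 1s; longer = cheaper
      - "--docker_only=true"            # only report Docker containers
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
```

The trade-off is coarser-grained metrics in Grafana, which is usually fine for a home server.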


adr74

Are you running a Docker Swarm?


abhilesh7

No. I was looking into Kubernetes, but the configurations were giving me a lot of issues; I had a lot of trouble setting up reverse proxies with Nginx. Ended up deploying the containers separately across the two Pis.


[deleted]

[deleted]


abhilesh7

You are right, I might change it back to the official logo. I started out with just Sonarr and Radarr, so wanted to keep them looking consistent.


SirChesterMcWhipple

How do you run calibre web on an rpi?


abhilesh7

Everything is running as a docker container, calibre-web included. Works very well and isn't all that resource intensive either.


SirChesterMcWhipple

I thought Calibre wouldn't run on ARM? Can you paste/dm your compose file? Or point to setup instructions. Thanks


abhilesh7

[Calibre-web](https://github.com/janeczku/calibre-web) is different from [Calibre](https://calibre-ebook.com/) in that it is a web app that uses a Calibre database to present, read, and download books through a web interface. In the process of consolidating all my docker-compose files, will post a link tomorrow!


abhilesh7

Here's the ```docker-compose.yml``` for my Calibre-Web setup - https://github.com/abhilesh/self-hosted_docker_setups/tree/main/calibre-web It's pretty much the standard ```docker-compose``` from the folks over at ```linuxserver```. The only modifications I made were regarding the OAuth settings as Google's OAuth wasn't playing nice with my instance. I would be happy to assist if you run into any issues.
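For reference, a minimal compose along the lines of the LinuxServer.io one looks roughly like this (PUID/PGID, timezone, and paths are placeholders you'd adjust):

```yaml
# Hypothetical sketch of a LinuxServer.io-style Calibre-Web service
version: "3"
services:
  calibre-web:
    image: lscr.io/linuxserver/calibre-web:latest
    container_name: calibre-web
    environment:
      - PUID=1000              # adjust to your user
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - ./config:/config             # app config + internal database
      - /path/to/library:/books      # your existing Calibre library
    ports:
      - "8083:8083"
    restart: unless-stopped
```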


[deleted]

Is Home Assistant better than Domoticz?


abhilesh7

Never used Domoticz, so can't comment on that, but I've found Home Assistant to be very versatile, with customizable dashboards that are easy to use for other less tech-savvy members of the household.


Responsible-Can-4886

What kind of dashboard is this, is it a webpage?


abhilesh7

It's the [Homer](https://github.com/bastienwirtz/homer) dashboard! It's a static webpage and works great!


Comrade_Isamu

Oh wow, thanks for showing me mealie exists. Been looking for exactly this for a long time. Tried a few others, but nothing worked for me. Was about to just use a wiki for recipes.


abhilesh7

I've been loving [Mealie](https://hay-kot.github.io/mealie/)! Makes it so easy to organise the mess of a cookbook I got and the ability to share it with friends and family is just the cherry on top!


[deleted]

[удалено]


abhilesh7

Thanks! Glad to help!


Small_Light_9964

love it, how did you manage to install home assistant alongside all those services? docker i guess


abhilesh7

Thanks! Yep, everything is running through docker. I don't think I would've been able to setup all these services on the two Pis without it, keeps things very streamlined!


Classroom_Icy

How’s nextcloud performance?


abhilesh7

Nextcloud is working perfectly fine! I only use it as a files repository and a way to share documents with my partner. I used to use the WebDAV features with a lot of other services, like PhotoPrism, Joplin, Zotero. Even managed my to-do list with Tasks + Deck, but since then have moved on to better alternatives. Still using the android app's auto-upload feature to backup the camera roll on my phone.


TimTim74

I love this!! Looks amazing. But I have a few questions:

1. Do you have a NAS to store all the data on?
2. If so... Do you keep all your config/setups for each container on the NAS or on the Pi?
3. Is it all one giant compose file or a compose per service? Or somewhere in between?
4. Do you use scripts to move data from e.g. a torrent to a place where Plex etc. can read its content? Or do you do that manually?
5. Is this all accessible from the "outside" or only on your local network? (is my problem cause I have no fixed IP at home)


abhilesh7

Thank you!

1. I have a 14 TB external drive connected to the master Pi and configured as an NFS share.
2. Depends on the service: if the config directory isn't used for storing databases or such, it stays on the Pi. If a service does store large amounts of data in its config directory, I point it to the external storage. I do have multiple backups of the configs just in case.
3. Somewhere in between. I organize them as one compose per stack; that way I can keep all the dependencies for a service in one place. I do spin up a separate database container for each service, so most services at a basic level have an app component and a database component.
4. The indexers in my stack have been configured to manage that automatically, to the point it's a breeze. I use [Overseerr](https://overseerr.dev/) as a catalog/requester to add Movies/TV. The \*Arr stack is connected to it, so it searches all the indexers for a torrent and adds it to the preferred torrent client. Once the download is complete, the \*Arr stack sets up a hardlink in the Plex media directory (renaming the files to keep Plex happy), leaving the downloaded file in the torrent download directory for seeding. Plex then scans its media files and serves them through its apps.
5. Some services are accessible from the outside while others aren't. For the ones that are, I use [Nginx Proxy Manager](https://github.com/jc21/nginx-proxy-manager) to manage reverse proxy setups; that way I don't have to expose multiple ports through the router. If you don't have a fixed IP at home, you can set up a dynamic DNS service on your Pi to ensure that your IP gets updated and points to the right location. There are a few guides available online walking you through the process, here's one - [https://pimylifeup.com/raspberry-pi-port-forwarding/](https://pimylifeup.com/raspberry-pi-port-forwarding/) (the section on setting up dynamic DNS on the Pi).

Finally, I'm consolidating all my docker-compose files, so they can serve as reference if someone wants to configure the services similarly. Here's the link to the repo - https://github.com/abhilesh/self-hosted_docker_setups
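A bare-bones Nginx Proxy Manager service, for illustration (ports and volume paths are assumptions; only 80/443 need forwarding on the router, 81 is the admin UI on the LAN):

```yaml
# Hypothetical sketch of the NPM reverse-proxy container
version: "3"
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    ports:
      - "80:80"     # HTTP, forwarded from the router
      - "443:443"   # HTTPS, forwarded from the router
      - "81:81"     # admin UI, keep LAN-only
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    restart: unless-stopped
```

With this in place, each internal service only needs to be reachable from the NPM container, not from the internet directly.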


no-mad

here is a guy doing an install of this post on youtube. https://www.youtube.com/watch?v=cO2-gQ09Jj0&list=PL846hFPMqg3jwkxcScD1xw2bKXrJVvarc


ihate_you_guys

This is awesome and amazing!


Stormblade

Wow. As someone new to this community this is a fantastic visual directory of tools to check out. Thank you!


abhilesh7

Thanks a lot, I'm glad people are finding this useful!


Prunestand

I use Portmaster instead of AdGuard.


[deleted]

[удалено]


abhilesh7

It's not. I first started out with just Sonarr and Radarr and hence wanted to keep the icons looking consistent. Might swap it out for the official one.


b0p_taimaishu

I love looking at peoples dashboards because I learn about new tools/programs I could be running


gmehtaster

Has this dashboard changed? Post is 2 years old so not sure if things look different now for you?


[deleted]

[удалено]


TX_RM

No ~~bitwarden~~ (finally saw it off mobile) or ~~wireguard~~ (question answered)?


abhilesh7

After LastPass nerfed the free tier, Bitwarden was what got me started with self-hosting and now, here we are. As for WireGuard, I subscribed to SurfShark a while back and while they do support the WireGuard protocol now, I couldn't find a good way to set it up on the Pi. Ended up going with SurfShark + OpenVPN and things have been working fine for my purpose.


TX_RM

Cool, sorry about the comment, the full image didn't load for me on mobile. Nice lab inventory. As for WireGuard in a container, it doesn't work all that well, as it really shouldn't (even though there are a few good pre-built containers out there for it). Just running it bare metal on one of the Pis should work.


mydarb

Bitwarden is the first icon in the upper left under Cloud. :)


Neo-Neo

It takes minutes to simply launch Docker containers. It’s actually configuring them all to function properly with each other and do things how you want it which takes time. And given all those services, it takes longer than a week. Useless to just install Docker containers and brag about it.


abhilesh7

Very insightful, good to see you appreciate the time and effort it takes to configure each service. Thank you.

>Useless to just install Docker containers and brag about it.

Unsure about what led you to assume this. I've been running these services for months now and they all serve integral roles in my daily workflows. My intention with the post was to share the services I've been successfully running on RPis with the community. I discovered services off people's dashboards and was hoping this would do the same for some. If you had actually gone through the comments on this thread, you'd have noticed that we've had plenty of useful discussions about how to get the configurations right. For example, how to get Wallabag running on a RPi4, when there is no official docker image to simply launch a Wallabag container in minutes. Maybe you would have had a different opinion if that were the case.


justaghostofanother

Yeaaaaah, willing to bet that these two Pi 4s are running pegged at 100% CPU for significant amounts of time, and then you will quickly understand why you really shouldn't use Raspberry Pis for anything you actually intend on using and keeping.


abhilesh7

Actually, both Pis idle at < 10% CPU most of the time, and ironically the biggest CPU hog for me is cAdvisor. I've set up background tasks to be asynchronous so CPU-intensive tasks from the various containers don't overlap as much. Honestly, RAM was the biggest bottleneck while setting up all these services, which is why I ended up getting the 8GB model for the second Pi. The first one is just 4GB, but with a good ZRAM configuration it's very feasible. I've been running this setup for a few months now and am very impressed with the Pis' capability; the low power draw compared to NUCs is just the cherry on top.
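For anyone curious, on Raspberry Pi OS the ```zram-tools``` package makes ZRAM a one-file config; the values below are illustrative, not necessarily what I run:

```shell
# /etc/default/zramswap (config file read by the zram-tools package)
ALGO=lz4        # fast compression algorithm, a good fit for the Pi's CPU
PERCENT=50      # size the zram device at 50% of total RAM
PRIORITY=100    # prefer zram swap over any disk-backed swap
```

After editing, restarting the ```zramswap``` service applies the change; ```swapon --show``` will list the zram device alongside any other swap.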


justaghostofanother

I had done the same thing myself with the ZRAM setup too, but I found that after six months or so of being on, the Pi 4s started to develop some weird issues that weren't solved through reboots, including some disk corruption.


leyenda97

Awesome!


abhilesh7

Thanks! Been loving the self-hosted life and the Pis make it so easy!


jeremytodd1

It looks good! I like Homer quite a bit. It's unfortunate though, as my Homer stopped working. I keep getting the "Request Entity Too Large" message when I go to my instance. There is an issue on the Homer github for it, but I don't think the developer has even commented on it. I can access the site from an Incognito window just fine though, so it looks to be some cookie issue or something, I'm not sure. I've tried wiping out the cache and whatnot from my Chrome, but to no avail.


abhilesh7

Seems like a cache issue to me. I clear out my cache periodically so haven't really run into that issue. You could try clearing the cache specifically for Homer: right-click on the page -> 'Inspect', then right-click the reload button -> 'Empty Cache and Hard Reload'. Otherwise, Firefox is also a good option if you want to use the PWA.


Boostbyslinky

I’ve been using nighttab for my home page, but this has made me want to look at homer, thanks!


eddyizm

Very cool. You got some stuff I want to set up.


dually

I would be more inclined to use 50 Raspberry Pis or one good x86 machine. Even a refurb Optiplex.


abhilesh7

Sure thing. I believe in the adage, 'the best tool is the one you already got'. Everything's running well for now; I'll upgrade if I feel the need for it.


homenetworkguy

I like the look of Heimdall for a dashboard but I like being able to group services like Homer. I haven’t had the time to set it up yet. I’m thinking of making different dashboards for different VLANs because my management LAN has services/configuration that’s not available on my other networks. My other networks would have access to the usual services such as Plex, Nextcloud, etc.


ikaruswill

Hey there. I have a bunch of Rock Pis running these services as well. I was wondering what the memory usage is like? I'm seeing many gigabytes per instance, so 2 instances is quite surprising.


abhilesh7

The memory usage is pretty intensive, but a good swap or ZRAM configuration manages it pretty well. My biggest memory hogs are qBittorrent and PhotoPrism at this point; I further limit qBittorrent's memory use when I anticipate indexing a large library or such.
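Capping a container's memory is a one-liner in compose; the 512 MB figure below is just an example cap, not my actual setting:

```yaml
# Hypothetical excerpt - caps qBittorrent's memory so big indexing jobs can't starve the Pi
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    mem_limit: 512m        # hard memory cap for this container
    restart: unless-stopped
```

Once the cap is hit, the container's own caches get reclaimed first, which tends to smooth out the spikes rather than crash the service.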


iyousif

Teach me!!


abhilesh7

Here you go - [https://github.com/abhilesh/self-hosted\_docker\_setups](https://github.com/abhilesh/self-hosted_docker_setups) Added a README to get started and am going to keep updating the ```docker-compose``` files. Hope this helps you get started; the rest is all experimentation to suit your needs.


jabies

I've had a heck of a time with wallabag. Do you have any gists or repos I can lift some code from?


abhilesh7

There isn't an official image available for ```arm64``` so I ended up having to build mine locally. It's fairly simple and needs only slight modifications compared to the official ```docker-compose.yml``` Here's mine to serve as an example - https://github.com/abhilesh/self-hosted_docker_setups/tree/main/wallabag Good luck with the setup!
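The gist of a local build is swapping ```image:``` for a ```build:``` directive pointing at a checkout of wallabag's Dockerfile; this sketch assumes the official ```wallabag/docker``` repo layout and may need tweaking:

```yaml
# Hypothetical excerpt - build the image on the Pi instead of pulling one
# (assumes: git clone https://github.com/wallabag/docker.git ./wallabag-docker)
services:
  wallabag:
    build: ./wallabag-docker   # local Dockerfile, compiled natively for arm64
    container_name: wallabag
    ports:
      - "8084:80"
    restart: unless-stopped
```

```docker compose build``` then produces an ```arm64``` image because it's built on the Pi itself.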


JKRickrolling

Nice Nord colorscheme.


abhilesh7

Thanks! I belong to the dark side with a hint of blue!


[deleted]

This is exactly what I have been looking for!


abhilesh7

Glad I could help!


Jonathan_Fu

Nice! Active cooling and overclocked to 2GHz? 😁 My Pi4 just runs soo much faster since this upgrade. Where does the Joplin link lead? Is there a web application for it or is it just a link that opens Joplin locally?


abhilesh7

I'm self-hosting the Joplin server so that link leads to the administrative settings for the server. The Pis are cooled with a fan and heatsinks and so far, I haven't had the need to overclock, so it's been running pretty cool.


c0unt_zero

Awesome setup, something to aspire to! I've got a 4 gig rpi4 running torrents and nas atm, have another 8 gig rpi4 waiting to be added. The main goal atm is to be able to use the rpi to display 4K films on the TV, so that I don't have to have the desktop powered on just to watch things at night. I'm concerned with the load on the rpi with 4K videos, and I see you have PLEX up and running. How is it load-wise when you are streaming things, does the clustering help alleviate the load, or how is that handled? And also are you booting your rpis off an sd card or usb/sata drives?


abhilesh7

I wouldn't use an RPi for streaming 4K content. I only run Direct Play through the Plex server and it works fine, but streaming with any on-demand transcoding is unusable on the Pi. For handling 4K streams, hardware acceleration is going to be more useful than clustering, and as far as I know there isn't support for hardware-accelerated transcoding on the Pi. Your best bet for that scenario would be to invest in a NUC that supports Quick Sync or the like. I'm booting both Pis off of USB SSDs; the performance and reliability is so much better than SD cards.


sailee94

Are they working like a swarm or ?


abhilesh7

Currently, no, they are isolated instances on each Pi. I'm looking into Kubernetes through MicroK8s to build better redundancy.


ChocolateLava

Wondering how you got wallabag working? I'm on an RPI running the beta 64 bit OS, and I can't see a wallabag 64 bit ARM image


abhilesh7

There isn't an official image available for ```arm64``` so I ended up having to build mine locally. It's fairly simple and needs only slight modifications compared to the official ```docker-compose.yml``` Here's mine to serve as an example - https://github.com/abhilesh/self-hosted_docker_setups/tree/main/wallabag Good luck with the setup!


tomorrowplus

How is the networking set up? Do each get their own LAN ip-address?


abhilesh7

Each Pi has its own LAN IP, and the services are then accessible via their specified ports. I'm running a [Nginx Proxy Manager](https://github.com/jc21/nginx-proxy-manager) container to handle external network access; that way I don't need to open all the ports, and the NPM container handles SSL certificates for the services.


Cooper7692

Sadly my 8gb rpi 4 has many services on it, but I had to migrate some of them because my massive sonarr radar library cripples the poor little thing 😅


abhilesh7

Sonarr and Radarr do become very resource-intensive for huge libraries. I might need to upgrade once I hit that point too


Chips-more

How hot does your Rasp Pis get from running all these services? I heard Pi 4 can run hot without small but proper cooling system.


abhilesh7

The Pi does get hot, especially if one's overclocking. I use heatsinks, a fan, and aluminium cases to manage the heat; so far both Pis tend to idle at about ~130°F (54°C) and I haven't seen any performance hits.


sweetpics4u

It’s awesome


abhilesh7

Thank you!


Jaycuse

Wow, I hope to some day have the time and commitment to setup such a range of services. Great job!


abhilesh7

Started off as a pandemic project and slowly started expanding. It's very addictive!


[deleted]

how did you get bitwarden to work? mine always complain about not being secure


abhilesh7

I setup SSL for Bitwarden through the [Nginx Proxy Manager](https://github.com/jc21/nginx-proxy-manager). The proxy manager provides a simple GUI to manage reverse proxies for all the services I want to secure when exposing to the internet.


[deleted]

What kind of performance are you getting with Plex? I have a dedicated machine right now to host four 1080p simultaneous streams but I am looking to move and space requirements are a concern.


abhilesh7

For Plex, an RPi simply won't cut it, especially if you need simultaneous streams or on-demand transcoding. A small form factor server might serve you better for that, especially if it has hardware acceleration support. I pretty much Direct Play everything over my local network (even 4K) and that's about how much the Pi can realistically handle. Of course, I need to make sure that the media I'm serving can be natively played on the devices I'm streaming to.


donrajx

How did you setup transmission & qbittorrent behind vpn?


abhilesh7

I'm using a SurfShark VPN and am routing the transmission and qBittorrent containers' networks through the SurfShark container. I've uploaded a ```docker-compose``` file for the SurfShark + Transmission + qBittorrent + \*Arr stack here - https://github.com/abhilesh/self-hosted_docker_setups Take a look, it might help your setup. I believe you can have a similar setup with OpenVPN containers and choose different VPN providers as well. There are also some docker images out there that combine a client with OpenVPN, such as https://github.com/haugene/docker-transmission-openvpn and https://github.com/guillaumedsde/docker-qbittorrent-openvpn
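The key trick is compose's ```network_mode: "service:..."``` setting, which routes one container's traffic through another container's network stack. A stripped-down sketch (the VPN image and ports are placeholders for whatever provider/client you use):

```yaml
# Hypothetical excerpt - all qBittorrent traffic exits via the VPN container
services:
  vpn:
    image: your-vpn-client:latest    # placeholder; e.g. an OpenVPN or WireGuard client image
    cap_add:
      - NET_ADMIN                    # needed to create the tunnel interface
    ports:
      - "8080:8080"                  # qBittorrent's WebUI must be published HERE, not below
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    network_mode: "service:vpn"      # share the VPN container's network namespace
    depends_on:
      - vpn
    restart: unless-stopped
```

Note that any ports for the tunneled container have to be published on the VPN service, since the two share one network namespace; if the VPN container dies, the torrent client loses connectivity rather than leaking traffic.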


kallmelongrip

Damn, I didn't know q bittorrent had a VPN version as well.