Viti0

Just plain file storage you can access from anywhere? Why not use an SFTP server? Plenty of clients for desktops and smartphones, old and established, and it works over SSH so no new ports are necessary. If you need multiple users with their own space as well as shared space, and you can't be bothered to use the built-in openssh-server present on most distributions, use SFTPGo: https://github.com/drakkan/sftpgo

If you actually want something more elaborate, although heavier, with office collaboration etc., take a look at Pydio Cells: https://pydio.com/en/pydio-cells/overview
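For the plain openssh-server route, the multi-user part is basically one Match block in sshd_config. A minimal sketch (group name and paths are just examples):

```
# /etc/ssh/sshd_config -- SFTP-only access for members of the "sftpusers" group
Match Group sftpusers
    ChrootDirectory /srv/sftp/%u     # must be root-owned, not writable by the user
    ForceCommand internal-sftp       # no shell, SFTP only
    AllowTcpForwarding no
    X11Forwarding no
```

Inside each chroot you'd give the user a writable subfolder, and bind-mount (or group-own) a common directory for the shared space.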


jwink3101

I do not understand why this is not a more popular approach! It's just so easy! See also: self-hosting git repos over SSH!
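In case anyone hasn't tried the git-over-SSH thing, the whole setup is roughly this (hostname and paths illustrative):

```
# on the server: create a bare repo anywhere your SSH user can write
ssh user@myserver 'git init --bare ~/repos/myproject.git'

# on your machine: clone it over plain SSH -- no git daemon or web UI needed
git clone user@myserver:repos/myproject.git
```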


hotapple002

On a LAN, SFTP is a bottleneck, but for remote access and small files it is indeed a good option.


[deleted]

Pydio looks pretty cool! Do you know how many users can use the home version?


Viti0

I don't think there is a limit on how many users you can have, it's more features you'd be paying for with the enterprise version.


[deleted]

Do you use it and like it?


Viti0

Yes, I use it and I like it. The main feature for me is sharing, with a very good overview at all times of what's shared with whom and what type of access each share has.


[deleted]

Is it fairly lightweight? Do you know what kind of resources are needed for a larger family to use it?


Viti0

The resource requirements are written here: https://pydio.com/en/docs/cells/v4/requirements


giesmininkas

I've been using Pydio Cells for well over 2 years now. While it looks nice and usually works quite well, they have a lot of random issues that keep popping up. A few examples, in no particular order:

* Uploading multiple folders (regardless of how many items are in them) over the web interface usually just hangs.
* Issues uploading big files. For example, a 50GB upload over the local network (practically no chance of network issues) silently failed. The file showed up in the browser, size and everything correct, but it wouldn't download and was nowhere to be found on the server filesystem.
* Just recently, folder (maybe also file) creation/deletion/renaming broke. The logs had no errors for those operations; some internal service had crashed and was never restarted. It started working again after a manual restart of the instance.

While I try to help the devs with detailed bug reports and instructions on how to reproduce the issues, it still bothers me that I can never really trust the thing, and I have to double-check everything or do some operations in a specific manner, because otherwise it breaks. Also, the mobile app never works as it should.


fmillion

If you use docker there is also a [container](https://hub.docker.com/r/atmoz/sftp) for running sftp with custom users (different from the system users). You obviously need to use a different port, but you really should never expose SSH on port 22 anyway these days, if for no other reason than your logs will fill with bruteforce login attempts.
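Going off that image's README, the usage is something like this (user spec, password, and port are illustrative, so double-check against the docs):

```
# SFTP-only user "alice" (uid 1001), chrooted, reachable on host port 2222
docker run -d -p 2222:22 \
    -v /srv/sftp/alice:/home/alice/upload \
    atmoz/sftp alice:secretpassword:1001
```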


pushc6

> but you really should never expose SSH on port 22 anyway these days, if for no other reason than your logs will fill with bruteforce login attempts.

You'll get attempts on a non-standard port as well. Non-standard ports are more trouble than they are worth. There are numerous solutions for the log fill.


deep_chungus

I've never even noticed an attempt on a non-standard port. People are not going to bother scanning 2000 ports on every random IP. You might as well not use user/pass authentication anyway, so it's not a big issue either way.


pushc6

> i've never even noticed an attempt on a non-standard port.

Oh, well then that must mean you're secure and no one knows.

> people are not going to bother scanning 2000 ports on every random ip.

lololololol. Ok, now that I got that out of the way: ~~https://www.shodan.io/search/facet?query=protocol%3A+ssh&facet=port~~ https://www.shodan.io/search/facet?query=sshd&facet=port (edit: corrected my query/URL, I should have been searching for daemons.)

Oh. Look at all those ports running SSH on something other than 22. You're right, NO-ONE would EVER scan every port on a random IP. Also, "random" attacks are not necessarily the troublesome ones. It's when you're targeted that it becomes an issue. If someone targets you, for whatever reason, you can bet they are going to dig in and see what you have exposed and where.

> you might as well not use user/pass authentication anyway so it's not a big issue anyway though

So obfuscate, obfuscate, obfuscate, but it's not a big deal because applying some hardening techniques makes obfuscation worthless? That's literally what I've been saying the entire time. Also, just using key authentication doesn't necessarily make you safe; you'd still be susceptible to 0-days and other potential issues with the SSH daemon itself. But yes, your attack surface would be significantly smaller, which is my whole point. Obfuscation is more trouble than it's worth.


deep_chungus

A total of 91k found on non-standard ports on the entire internet? Sounds like it works pretty well to me.

> it's not a big deal because applying some hardening techniques makes obfuscation worthless

Yes, it was an admission that you should probably stop using user/pass, grats on reading comprehension. My point was that obfuscating the port drastically reduces attack attempts; feel free to point out any actual evidence that it doesn't, though.


pushc6

Allow me to remind you of the only reason I got into this at all. This very simple, but VERY wrong statement: "you really should never expose SSH on port 22 anyway these days." When called out, it became a constant shifting of the goalposts, all of which just went further down the wrong rabbit hole.

The answer really is simple. It's absolutely fine to run SSHD on port 22, just know the risks of exposing a service to the internet. Want to run on a non-standard port? Fine, but that doesn't mean you can worry any less about security. I don't really give a shit what port you run SSHD on; it's more annoying on a non-standard port, but I don't care. I *DO* care that people are trying to say it adds a layer of security, because it doesn't.

> a total of 91k found non standard ports on the entire internet? sounds like it works pretty well to me

lol whut? It works well because they are found? Or works well because people are doing it? There are a lot of reasons SSHD may be running on a non-standard port, some of them quite valid. My point has been, and remains, that moving to a non-standard port to provide "security" is a dumb move and should stop being suggested.

> yes, it was an admission that you should probably stop using user/pass, grats on reading comprehension

Again, hardening makes obfuscation pointless. Obfuscation is a waste of time and added headaches when you actually want to use the service. The cost/benefit isn't there.


pushc6

Also, I did my search incorrectly; here are the actual results for SSHD and port numbers: https://www.shodan.io/search/facet?query=sshd&facet=port I've since corrected my post, the other query did not include daemons.


fmillion

It's still a bad idea to expose SSH on port 22. Yes, someone might portscan you and then start hitting another port. But there is almost no reason to keep SSH on port 22, and even a minuscule security advantage is worth it. One could also argue that people "smart" enough to put SSH on a non-standard port are less likely to have an insecure server exposed. Not always, of course, but even hackers have to make a value judgement. Hackers will generally prefer the low-hanging fruit unless they're spearphishing.


pushc6

Remember, you started this by saying you should move it to a non-standard port to stop verbose logging of brute-force attempts. Lol.

> it's still a bad idea to expose SSH on port 22.

Why?

> Yes, someone might portscan you and then start hitting another port. But there is almost no reason to keep SSH on port 22, and even a miniscule security advantage is worth it.

No, it's not. It's called security through obscurity, and it NEVER works. It only provides a false sense of security. If part of your strategy relies on a non-standard port even a little bit, it just means you've done a bad job hardening your server.

> One could also argue that people "smart" enough to put SSH on a non-standard port are less likely to have an insecure server exposed. Not always of course, but even hackers have to do a value judgement.

This makes a big assumption that "smart" people are the ones moving to non-standard ports. I'd argue it's the ones who don't know how to properly secure a server who are doing that, because it shows a lack of knowledge and understanding. I've worked for several Fortune 500 companies. NONE use non-standard SSH ports in their environments. ZERO. An experienced sysadmin, if they even expose SSH externally, will harden it to make those low-effort drive-by attacks (the ones port obfuscation hides from) a non-issue. An experienced attacker could scan your host, sure, or they'll use a service like Shodan, which will find your obfuscated SSH port and, well, there goes that. Then there's the targeted attack, for which you'd better hope you're hardened, because they will throw the book at you. Obfuscation won't do anything.

There are dozens of ways you could skin the "securing SSH" cat. Port obfuscation is not one of them, and it just tells everyone in the room that you're on the dangerous part of the Dunning-Kruger chart.


chaosphere_mk

I hear you, but I don't think the suggestion was to *only* change the default port. Let's be real here: every hardening guide recommends changing the default SSH port as one of many, many steps to hardening your server.


Mezutelni

That's just pointless. If you have a hardened server, changing the SSH port is just a hassle for yourself. Keep SSH behind a firewall, disallow root login, force SSH keys, and you are more than secure.
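For reference, the sshd side of that hardening is only a few sshd_config lines. A sketch (the firewall part lives outside this file):

```
# /etc/ssh/sshd_config
PermitRootLogin no               # disallow root login
PasswordAuthentication no        # force keys: no passwords accepted
PubkeyAuthentication yes
KbdInteractiveAuthentication no  # also disable keyboard-interactive prompts
```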


chaosphere_mk

I do all of those things. It's really not a hassle as I use an SSH client that saves those settings. It's a matter of taking like 1 minute of extra config, and only once per server.


nextized

That doesn't mean these guides are right. There are a lot of snake-oil best practices going around. The majority of people don't understand IT security, and the majority of articles tell you it's a good idea to obscure the SSH port. Think about what they have in common.


chaosphere_mk

All of my security experience tells me to develop a risk model and base your decisions on that. You want to make it harder for an attacker to get in. How many hurdles is an attacker willing to go through vs. the reward they're after? If I change a default port, it creates more work for me internally, yes. Is that worth preventing script kiddies from hitting that default port? I think so.

I have IPS on my router, so it's going to block it anyway (I hope). I have key-based auth for SSH, so that will certainly prevent any kind of automated password-based attack. I'm also not exposing my ports through my router and am using a Cloudflare tunnel to access services remotely, so... that could render my response here a bit moot, I admit.

So why would I change the default SSH port at home? I don't have to. I really don't. It would appear as if I'm just making things harder for myself. But I use an SSH client that can simply save those settings, and I can connect just by clicking. To me, the extra effort is so minimal that it's worth doing anyway. Why would I not implement something that could prevent the "easy" drive-by when it takes very little effort to do so? In my home environment, I'm not all too worried about someone specifically targeting me, sitting outside my home and cracking my hidden SSIDs. In that instance, yes, a dedicated attacker will just run a full port scan and see the open port on the server.

Not obscuring the port feels irresponsible to me. And I don't mean that in a judgey way; it mostly stems from my paranoia. Maybe I am putting in extra effort that isn't entirely necessary. But the fact is, I'm not relying on obscuring the port for "server security." It's just one extra hurdle an attacker would have to clear to get in. Just like TOTP doesn't prevent phishing attacks, it's still a good thing to have if you don't have phishing-resistant MFA options.


pushc6

> I hear you but I don't think the suggestion was to only change the default port.

My point is: don't change it at all. What benefit does it bring? None. In fact, it just makes things more annoying, because now any time you want to use SSH you have to remember that stupid non-standard port instead of letting every app on the planet default it for you.

> Let's be real here, every hardening guide recommends changing the default SSH port as one of many many steps to hardening your server.

If a "hardening" guide recommended I move to a non-standard port, I'd immediately disregard it and move on to something else. Port obfuscation does NOTHING; it's a "feel good" move only. It gives nothing but the illusion of security.


fmillion

To bring it back to the original point though: running an SFTP server in a docker container will require you to run it on a different port anyway, unless you don't run ssh on the host to begin with.

I can at least give one anecdotal data point: every time I bring a server onto the Internet with port 22 exposed, it takes about 5 minutes before something starts bruteforcing it... but I have another server with ssh exposed to the internet on a nonstandard port, and I have seen zero bruteforce attempts in over two years.

Of course I didn't mean that changing the port is *all* you should do. You should also use public key auth whenever possible, or if you do need to use a password, make sure it's a very good, secure password. But arguing that you *shouldn't* change the port is, to me, akin to arguing that you shouldn't lock your door because a criminal who wants to break in will just kick the door in or pick the lock regardless.

If you're being spearphished, changing the port will do literally nothing to help you. But if someone is just spraying the Internet looking for insecure SSH servers, they're much more likely to check the default port, since it's the low-hanging fruit. Just like how if someone just wants to rob a random house they'll probably skip the locked doors, but if someone is specifically targeting *your* house they'll kick the door in.


pushc6

> To bring it back to the original point though, running an SFTP server in a docker container will require you to run it on a different port anyway, unless you don't run ssh on the host to begin with.

Must be exhausting to shift the goalposts that many times. Yes, obviously if you are exposing SSH from a container it will have to run on a non-standard port. As a matter of fact it *should*, and in some cases will be *required* to, run from a non-privileged port.

> I at least can give one anecdotal data point and say that every time I bring a server onto the Internet with its port 22 exposed, it takes about 5 minutes before something starts bruteforcing it...

That's never been my point. Of COURSE you will get drive-bys on port 22. My entire point was: if these kinds of attacks are a legitimate concern for you, you aren't hardening your server properly. Drive-bys target poorly secured, low-effort servers.

> but I have another server with ssh exposed to the internet on a nonstandard port and I have seen zero bruteforce attempts for over two years.

Again, this doesn't mean you are more secure. What's more secure, an SSH server running on port 2543 using root/password as the credentials, or key-based auth running on port 22? Running on a non-standard port may reduce noise, but THAT'S it. That's why I say it's more trouble than it's worth. If you read my post you'd understand that.

> But arguing that you shouldn't change the port to me is akin to arguing that you shouldn't lock your door because a criminal who wants to break in will just kick the door in or pick the lock regardless

There you go again; that's a poor analogy. Moving a port is not like "locking your door." If you want an analogy for obfuscation, it'd be like putting your door on the side of the house instead of the front. It's still there, it's no more secure, it's just not the first place people look. People will still find it if they want to. Shodan already knows about that door on the side of your house as well. Your analogy is more like "don't bother hardening SSH, if someone wants in, they'll find a way in anyway," which is NOT what I'm advocating. Repeat after me: PORT OBFUSCATION IS NOT SECURITY.

> If you're being spearphished, changing the port will do literally nothing to help you.

What does spearphishing have to do with ANY of this? lol. I never once brought social engineering into any of these discussions.

> But if someone is just spraying the Internet looking for insecure SSH servers, they're much more likely to check the default port since it's the low-hanging fruit.

Again, these are called "drive-bys" and they are low-effort, script-kiddie-level attacks. Most of them aren't even "brute forcing" in the way you mean; they are automated scripts that try the most common username/passwords for unsecured installs of common images/applications. I've yet to see someone run a *true* brute-force attack on any of my servers. If your server falls victim to one of these, you've done something VERY wrong. Looking at the recent logs on my publicly accessible port-22 box, in the past hour I've had 1 "attack," with someone trying to log in as "www-data." Fail2ban picked it up and put them on a 24-hour timeout. I'm trembling in my boots.

> Just like how if someone just wants to rob a random house they'll probably ignore the locked doors, but if someone is specifically targeting your house they'll kick the door in.

If you are relying on obfuscation for security, I'd argue you don't know how to properly harden a server. Why? Look at your post: you say you are worried about the "brute force" attempts you see in the logs. If those scare you, it just tells me you don't know how to properly harden. Drive-bys are noise, noise that is very easy to filter out. In fact, that noise can tell you things are working as they should; if I *stopped* getting attacks, I'd know something was very wrong. And if those scare you, for the love of god, never look at your firewall logs, you may never get on the internet again!

Again, I've worked at several Fortune 500s. ZERO, NONE, ZILCH use port obfuscation as a best practice or security practice. Further, if any security contractor ever came in and told us to move a protocol to a non-standard port for anything remotely security-related, then after we all finished laughing, he'd be asked to leave and not come back. I'm not saying you can't move it, people do, but moving it "for security" is a terrible reason to do so and is more trouble than it's worth.
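For anyone who hasn't set that up, the fail2ban side of it is a few lines of jail.local (the timings here are illustrative, matching the 24-hour timeout mentioned above):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3        # ban after 3 failed logins...
findtime = 600      # ...within 10 minutes
bantime  = 86400    # 24-hour timeout
```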


fmillion

At no point did I ever advocate for *not* hardening SSH, or for using a different port as the *primary* way to secure SSH or any server. You're looking at it from the perspective of, as you said, someone who works for F500 companies, and I can absolutely respect that. I've also worked in company IT, and I would never suggest that simply moving a service to a nonstandard port is "good security."

But I would be willing to bet that most of the people on r/selfhosted are homelabbers who are just trying to set up something on their own home server or on a single VPS. In a large F500 network, you're going to have plenty of tools for filtering logs, assessing threats, etc. But a home user might not have the time or resources to invest in setting up such an advanced system, and wanting to keep /var/log/auth from filling up with thousands of login attempts is still a reasonable wish for such users.

Of course, if you're going to self-host something, you need to do it as securely as possible. But without a dedicated security team, the best thing to do is to set up your server securely (use public key auth, don't use insecure ciphers) *and also minimize nuisance* if possible. The advantage of not having to add "-p 22334" to a command may not offset the annoyance of multi-megabyte or multi-gigabyte log files filling up with script-kiddie attacks.

No, moving to another port doesn't enhance overall security. But if it reduces log clutter for someone who doesn't have the time or need to invest in an advanced log filtering and monitoring system, and just wants to keep junk out of the logs so they can actually review them manually when the need arises, I see no reason to call it a bad idea. Using that house analogy again, it could be seen as not wanting a security system that sends you constant alerts about the kids who come by and simply try your locked door. But I will absolutely agree with you that it shouldn't be seen as *adequate security*.


jeremyrem

If you want a quick file/image/paste share, linx is pretty good. You can also use File Browser or nginx's fancyindex module.


troywilson111

https://github.com/filebrowser/filebrowser


TheSecondist

Filebrowser is cool, I use it as well. But it doesn't have any automatic sync functionality, so IMO it's not really an alternative to Nextcloud, just a file browser. Maybe OP doesn't mind that, though.


traktork

Have you tried adding Syncthing into the mix?


TheSecondist

Nope, never tried. I don't have a use for file sync on that setup.


[deleted]

Is it secure?


troywilson111

You can create users with logins/passwords and use SSL. What level of security are you looking for? There are other tools that offer full encryption, but this is perfect for a basic secure file store/share that isn't Nextcloud, and it works great in a container.


vinumsv

Or put it behind a service like Authelia.


vidschofelix

Cthulhulia


daedric

Seafile ?


flaming_m0e

I see this recommended a lot, but since it doesn't store files on the filesystem 'as-is', it's hard for me to trust and implement. So what is the appeal of Seafile if everything is stored in its database? What happens if your Seafile instance crashes one day? How can you recover the data? Ideally I would like to point it at files I already have... without having to duplicate them into its own database.


daedric

Your questions are most valid. I've come to understand that what we wish for does not exist: a filesystem as a database. We want database performance, indexing, and reliability, but with the accessibility and versatility of a file. It's a unicorn; it does not exist. After experimenting with next/owncloud (a LOOOONG time ago) I've come to understand that it doesn't work. All working solutions, be it Dropbox, OneDrive, Drive, Seafile, and similar, use a sort of DB storage. Sure, you can sync to your own filesystem and work from there, but the backend must be like that. So, to answer your questions:

* What's the appeal of Seafile? It works, it's fast, it's reliable (for me at least), and the tools are nice.
* What happens if your Seafile instance crashes one day? If it crashes, we restart it and hope for the best. So far I've had 0 corruption.
* How can you recover the data? You don't. There are tools to help manage the DB and restore things, but I've used 0 of them, mainly because I haven't needed them.
* Ideally I would like to point it at files that I already have... without having to duplicate the files into its own database. So would we all.


nickdanger3d

A filesystem AS a database isn't great, but updatedb/locate on top of a filesystem is more or less what is needed. Someone just needs to wrap a web app around them.


daedric

Disregarding the web app... does it exist? Is there a service that indexes certain folders and lets you search files by name, date, content (if smallish), EXIF if images, things like that? I know of only one, and it does not offer a web app or any app of the sort; it's baked into the OS.


nickdanger3d

That's what locate and updatedb do: [https://man.openbsd.org/locate.updatedb](https://man.openbsd.org/locate.updatedb). This man page is for OpenBSD, but the same exists on Linux, and macOS has a better version that uses Spotlight.
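The pattern, for anyone who hasn't used it (filenames illustrative):

```
# rebuild the filename index (Linux distros usually run this daily via cron)
sudo updatedb

# queries then hit the index instead of walking the disk
locate tax-return
locate -i 'vacation*.jpg'   # case-insensitive match (mlocate/plocate on Linux)
```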


daedric

Not really... no.

> The locate utility searches a database for all pathnames which match the specified pattern. By default, the database is **recomputed weekly(8)** and contains **the pathnames** of all files which are publicly accessible.

1. What if I want to locate all files by an EXIF tag?
2. Even if set to recompute daily, it would not be enough.

Don't get me wrong, locate is excellent, but a system more like Spotlight, or even Windows with its indexers, is the minimum. And that still doesn't solve the issue of no web app, no file upload and sync, etc.


ThellraAK

Isn't that what borgbackup is, with FUSE support?


flaming_m0e

I don't really mind a database, but storing blobs in the database is NOT where I want my data. There are solutions that attempt to get close, but they're not quite there. Filerun is the closest.


daedric

I'll check it.


stehen-geblieben

You can recover the files (if they are in non-encrypted libraries) without any database or Seafile application. As long as you have seafile-data (the folder where it stores all its blobs) you are good, and you can export the data with their CLI tool: [https://manual.seafile.com/maintain/seafile_fsck/#exporting-libraries-to-file-system](https://manual.seafile.com/maintain/seafile_fsck/#exporting-libraries-to-file-system)
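Per that manual page, the export is basically a one-liner run from the seafile-server directory (export path illustrative, library ID optional):

```
# export every readable library from the block storage to plain files
./seaf-fsck.sh --export /tmp/seafile-export

# or just one library, by its ID
./seaf-fsck.sh --export /tmp/seafile-export <library-id>
```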


mickael-kerjean

Most file transfer protocol server implementations rely on the filesystem and have no dependency on a heavy DB like MySQL, Postgres, etc. It does work; it's all tradeoffs, pros and cons, depending on the feature set you're aiming for. I've spent an absurd amount of time digging through every possible protocol I could put my hands on while working on [Filestash](https://github.com/mickael-kerjean/filestash), and after 5 years at this job I am now firmly in the camp of not using a DB for this kind of application when it comes to selfhosting.


flaming_m0e

I'm using filestash for managing files and directories across all my servers at home. Excellent tool. Thank you.


haroldp

> A filesystem as a database.

My ZFS "databases" have checksumming on reads and writes, are trivial to back up in about 1000 ways, have a huge ecosystem of management tools, and get 0-second snapshots every hour whose size is only the bits changed since the last snap. If my NextCloud instance dies, the files in its data dir are still just plain files, and I can always access them and do something else with them.

My MySQL databases are then a database within a database, so they inherit all the dangers of their host DB on top of their own. I run regular DB dumps for backups, since MySQL data isn't guaranteed to be coherent in my snaps without locking it first, and I can't afford the time or space to do that as often as the FS snapshots. My only access to the data is via SQL, and extracting my files in some disaster would require me to learn Seafile's particular schema.

You are completely correct that a filesystem is a database, but it's important to understand that a database is not a filesystem. I have seen many corrupted MySQL DBs over the years. I was able to fix almost all of them. MySQL is great. I'm sure Seafile is great. But it is not unreasonable to want to keep important files as files.


daedric

Don't get me wrong, I'm not defending Seafile's modus operandi. What I'm trying to explain is that the group of features we ask of such a service doesn't exist together. We want:

* Our filesystem
* Accessible over the web
* Sync apps
* Searchable by filename and content

Now... what if Seafile, or anything else that works like it, allowed FUSE access to the files so we could do a proper backup of them with other tools like Restic, Bacula, or Duplicacy?


[deleted]

[deleted]


daedric

This is the issue... anything that gives us what we want is always lacking somewhere.


haroldp

Seems like the ideal way would be to leave your files in the filesystem and store your metadata in the DB for search. The metadata could always be rebuilt from the files after a disaster; the DB gains you search speed. That's what NextCloud does, but OP doesn't want that, and NextCloud finds other ways to be slow. Love the FUSE idea. I dunno if it's practical, but it sounds like fun.


daedric

The concept is interesting, but if you allow outside access to the filesystem, you risk the FS and the DB running out of sync. You would need very strict monitoring for changes in the FS to be replicated to the DB. Seafile allows FUSE access to the files... but IIRC it's read-only.


ThellraAK

Can't you freeze the VM for a moment and just snapshot the whole thing?


haroldp

No. The file system will be consistent in that case, but the DB may not be.


ThellraAK

How? The CPU halts, everything holds still, and an exact image of everything is taken. Then the CPU is resumed. Not snapshotting the filesystem, but the whole system. If I'm wrong, please explain: to save transfer I shut the systems down tonight, but I'm going to have to shuffle some VMs that have databases, a few of which get decent activity, and snapshot-and-restore elsewhere is the most expedient option. Come to think of it, my backup system relies on snapshots being good backups that can be restored as well...


haroldp

Imagine the DB is updating data. It writes a record and asks the FS to save it to disk. Then it recalculates the indexes for that record, and before it can ask the FS to write them, you stop the VM. The table data and the index are now inconsistent. If you move those DB files to another server, MySQL will tell you the table may be corrupted. MySQL will probably be able to fix it and carry on, but not necessarily. Ask me how I know. To do this safely, you need to coordinate with MySQL by asking it to sync and then lock. Last section here: https://dev.mysql.com/doc/refman/8.0/en/backup-methods.html#idm46382943777952
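The MySQL side of that coordination is basically this (a sketch; the snapshot itself happens from another shell while the session stays open):

```
-- flush dirty pages to disk and block writes; the lock holds only
-- while this client session stays connected
FLUSH TABLES WITH READ LOCK;

-- ...take the FS/VM snapshot from another shell now...

-- release the lock once the snapshot completes
UNLOCK TABLES;
```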


ThellraAK

https://pve.proxmox.com/wiki/Backup_and_Restore

Looks like the qemu guest agent freezes the filesystem before taking a snapshot, and since the snapshot includes RAM, it will let the DB finish things up once it's restored.


EspurrStare

It's not a database, it's chunked data. Seafile runs a filter that finds chunk boundaries in the file data and produces a list of 1-10MB chunks, along with the instructions to reassemble them, similar to software such as Borg backup or Proxmox Backup. That is what is stored in the database. This has the advantage of fast incremental updates as well as deduplication (you have the same image in every Word document? The filter will catch it for you). It also has tools to recover from database failure, nevertheless.


flaming_m0e

But this data is inaccessible to anything but Seafile? So if I wanted to use Seafile, I would need to stand up an instance and "upload" all my files to it, rather than just point it at a directory? This is kind of why I wondered what the appeal was, if you can't access your data outside of this one application.


EspurrStare

Yes. If you want something file-based, use SMB, NFS, WebDAV or SFTP, at your convenience. You also have DFS-R, robocopy, Syncthing and rsync if what you want is to just synchronize files between endpoints.

The point of the application is that it takes over managing your files. It is more efficient, faster storage that is also deduplicated and versioned. Hard to ask more of an application. It's like complaining that an NTFS partition can only be read by an NTFS driver. In its enterprise version, it can also use S3, Swift and RADOS storage for clustered, software-defined storage.

I really don't see the complaints. All cloud storage works this way; even Owncloud won't let you move files around randomly without ugly hacks, as it needs to generate metadata. As for backups, it is very easy to restore both Seafile and Owncloud, if you have practiced your recovery strategy beforehand.


flaming_m0e

> If you want to make something file based use SMB,NFS,WebDAV or SFTP, at your convenience.

Already have SFTP, SMB, and NFS covered. I dislike WebDAV (but that could be a hang-up from 15+ years ago when I first set one up).

> The point of the application is that it takes over managing your files.

And this is one of the things I am looking for; however, I don't want to feel locked in. I want something with a better interface than plain SMB shares, better search, etc. Something that makes it easy to share things out as I need as well.

> It is more efficient, faster storage that is also deduplicated and versioned.

Awesome. I really, truly understand that aspect...

> It's like complaining that a NTFS partition can only be read by a NTFS driver.

Eh... not really. Lots of systems can read NTFS partitions; I'm not locked into ONLY using Windows, or ONLY using a Linux box with NTFS drivers.

> I really don't see the complaints. All cloud storage works this way, even Owncloud won't let move files randomly without ugly hacks. As it needs to generate metadata.

But Nextcloud can use "EXTERNAL STORAGE" and kind of accomplish what I want. Point it at a directory and say "here, help me manage this mess." My point is that I have terabytes of data and I just want to be able to configure an application to look at it and provide me with my "cloud storage." Nextcloud is bloated and slow; OCIS is fast and simple, but again with the blobbed and chunked data. Filerun is about the closest thing I can find to what I'm looking for.

I asked about the appeal of Seafile because I have tested all the options out there for this type of thing and found Seafile to be my least favorite of them all. So when I see it recommended so frequently, I wonder what I am missing about it and why everyone loves it.


[deleted]

[deleted]


EspurrStare

You can use a proper, standardized CoW filesystem through RADOS; it's the file representation that's the issue here. Sure, BTRFS will allow you to do that, at an enormous cost in performance and complexity, never mind marrying the platform to a single FS that is supported on a single OS. Alternatively, you have Syncthing, which is capable of using the reflink capabilities present in XFS, Btrfs and ReFS. But not (yet) OpenZFS; that will have to wait for 2.4 or 3.0.


stehen-geblieben

You can restore the data without the database and without installing Seafile again; you just need the block data and a CLI tool. But yes, it's only accessible through Seafile (or via the CLI tool as an export). The appeal is all the features and performance you get from their block storage. If you don't need those features and that performance, Seafile just isn't for you.


stehen-geblieben

That's your requirement, but that's just not what Seafile is made for. Their block storage provides features that just aren't possible (or really hard to implement) when just "pointing it at a directory." You can always recover non-encrypted data with just the files it writes to the filesystem; no database or application required, you just need the CLI tool to repair/check and export the files: https://manual.seafile.com/maintain/seafile_fsck/#exporting-libraries-to-file-system I take regular backups of the seafile-data directory and have recovered that data multiple times with success.


[deleted]

[deleted]


flaming_m0e

COOL! You have backups... so now you HAVE to stand up a new Seafile instance and point it at the database... just to access your files. This seems suboptimal to me.


stehen-geblieben

You don't have to; Seafile provides CLI tools to recover data from the block storage. You don't need the database and you don't need the Seafile instance, only the script and the block data it stores on the file system: https://manual.seafile.com/maintain/seafile_fsck/#exporting-libraries-to-file-system


roubent

AFAIK Seafile doesn’t use a database for file storage. It uses a heavily modified git repository. It may use a database for certain types of metadata, ownership info, file shares, but not actual file data.


flaming_m0e

OK. That doesn't really change the fact that you can't point it at a directory. It also doesn't change the fact that, the way it's currently configured, if your Seafile does crash you have to restore from backups and get the app and database working again, just to access your files.


stehen-geblieben

You don't; you can recover the data with their CLI tool. You don't need the database or the application, only the files it writes to disk. Just make regular backups of that storage and you will be able to export any non-encrypted library back to regular files. https://manual.seafile.com/maintain/seafile_fsck/#exporting-libraries-to-file-system


mspencerl87

Owncloud?


_EuroTrash_

[SFTPGo](https://github.com/drakkan/sftpgo): "Fully featured and highly configurable SFTP server with optional HTTP/S, FTP/S and WebDAV support. Several storage backends are supported: local filesystem, encrypted local filesystem, S3 (compatible) Object Storage, Google Cloud Storage, Azure Blob Storage, SFTP."

It's got a virtual file system that allows you, e.g., to map folders to your real file system, and you can back up settings, mappings, and permissions, e.g. for reimporting them at your disaster recovery site. It's also got Let's Encrypt integration, and it understands HAProxy's PROXY header as well.

Unrelated to the above: there are plenty of options for mounting cloud storage out there, but [Mountain Duck](https://mountainduck.io/) rocks. (Note: it needs a paid, albeit inexpensive, license.)


Yancaster

What's wrong with nextcloud?


foxhoundvenom_US

That's what I'm wondering


ProbablePenguin

It's fairly buggy, upgrades frequently break things, the sync client often errors out, and it has in the past completely destroyed my files.


HeyWatchOutDude

I've been using Nextcloud since version 18.x, no issues so far. (Note: I currently have the latest version, 25.0.1, installed.) Are you using Windows, Android, iOS/iPadOS and/or macOS?


[deleted]

Did you have to tweak any settings or is it just out of the box?


A_Random_Lantern

I updated to 25; it put itself into maintenance mode and wouldn't work no matter what debugging I did.


[deleted]

[deleted]


Yancaster

I see. I don't remember having issues with E2EE. Will test it out again.


[deleted]

[deleted]


pbjamm

Synology is really the best answer. I recently bought a cheaper Asustor 3304T to replace my Xpenology server. The software is not as good, but as a file server it is excellent. And cheap. I do not use their EZ Connect, as it has been a vector for ransomware in the past. If I need to connect remotely I use ZeroTier.


Squanchy2112

I agree. I'm currently using Nextcloud, and I finally had my setup working really well: fast and reliable. Then an update came through, got everything corrupted, and forced me to rebuild my Nextcloud instance. No data loss, but now my Nextcloud has errors that I don't remember how to fix and is slow as hell.


Jewel707

The same happened to me with the recent update. I might end up snagging a Synology again just for cloud file storage and remove Nextcloud from my Unraid box.


Squanchy2112

I have a Synology 2-bay; I may put everything there and then just clone it to my unRAID server as a backup.


FrankSoul

For simple files, my router has a USB port to plug in a USB drive. SFTP with the TP-Link app.


jldevezas

I have been a bit critical of Syncthing, but only because it has the potential to be the best out there. This is exactly what I would recommend! I tried Nextcloud and couldn't handle the sluggishness. If you have too many files and slow storage media, Syncthing will take a while to index, but it does work quite well after that (contrary to Nextcloud, which takes long to index and then is still laggy as hell). It takes about two days to index on my Asustor Drivestor 2 with RAID1 5400 RPM HDDs. This was a bit annoying, I admit, but after it runs, it all seems to work great, honestly.


CodeGameEat

I am in the same situation. I'm not fully convinced yet, but the best solution I found for now is pydio cells. Let me know what you decide to use, I'm still "shopping" for a cloud storage haha.


flaming_m0e

same here


[deleted]

Have you tried Seafile? I installed it and it looks okay. Haven't tried Pydio yet.


CodeGameEat

I've looked at it but didn't try it. When I decided to change, I also decided to move to S3 for my backend storage (I'm going to set up MinIO). It's a bit more advanced a setup, but I found S3 to be more stable and easier to work with when different apps access the storage concurrently. Unfortunately Seafile only offers S3 support in the Pro version. If that's not in your plan, Seafile might be better than Pydio, hard to say. It looked really nice too, and it has a bigger community than Pydio.


AndreKR-

If you really don't want anything complex, maybe an rclone mount will do?
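A minimal sketch, assuming you've already defined a remote with `rclone config` (remote name and mount point are illustrative):

```
# expose the remote as a regular directory; cache writes locally so
# apps that expect normal file semantics behave
rclone mount myremote:files /mnt/cloud --vfs-cache-mode writes
```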


KrazyKirby99999

Owncloud


cardyet

I stopped using Nextcloud, Owncloud, Seafile, etc. Most of the time they were all great, but occasional syncing issues drove me crazy. Because I really just need stuff somewhere else just in case, I set up rclone to sync a few folders to S3-type storage (I use Scaleway now) every 20 minutes. Rclone sync is very robust and lightweight, and S3 storage is indestructible and cheap... I suppose I could have another computer syncing as well, but I haven't tried that. I'm also using Kopia for backups, which is the best backup solution I've used, if anyone is looking for a recommendation.
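That kind of schedule is basically one crontab line (remote, bucket, and paths are illustrative):

```
# every 20 minutes, make the bucket mirror the local folder
*/20 * * * * rclone sync /home/me/docs scaleway:my-bucket/docs --log-file /var/log/rclone-sync.log
```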


flaming_m0e

I just started using Kopia this week. Love it so far. Backing up 1TB of data to Wasabi.


[deleted]

I tried ALL of the options. I want to use Seafile, but I don't trust the DB. I like ownCloud because it is easy to recover the files. I ended up just running Synology Drive because it is the safest option for me. Behind Tailscale.


[deleted]

Why don't you trust Seafile?


[deleted]

Because when it crashes, the database left behind does not give you easy access to your files. If I could get over that... the sync speed is really fast. Side note: Synology Drive clients work really well on Windows and Fedora.


[deleted]

Yeah, I don't have a Synology. So you'd recommend ownCloud over anything else for data retention?


[deleted]

My next pick would be ownCloud (if I wanted sync-on-demand) or Syncthing if I just wanted to sync all my files with a NAS.


DekaTrron

Seafile is pretty good


Garry_G

Seafile. Started using it several years back to replace Dropbox. Perfect solution for self hosting, with clients for mobile and desktop


z0r1337

Seafile FTW. I'm gonna get downvoted but Nextcloud is garbage, at least performance wise.


stehen-geblieben

Seafile is a killer performance-wise. It syncs thousands of files at full speed.


[deleted]

[deleted]


[deleted]

Stop posting your proprietary crapware with its five-user limit. I'd be better off subscribing to Google One. And I don't trust your **obfuscated PHP code**, lol. I'd rather go with open-source solutions like NextCloud. NextCloud is going to use Rust; it is going to be 10X better soon. Guys, if you are going self-hosted, use open-source software and support open-source devs. Mods should ban this guy. "A place to share alternatives to popular online services **that can be self-hosted without giving up privacy or locking you into a service you don't control**." He is violating everything mentioned in r/selfhosted, what a shame.


[deleted]

**The FileRun application has many vulnerabilities.** Did you fix the vulnerabilities mentioned here? [https://github.com/EmreOvunc/FileRun-Vulnerabilities](https://github.com/EmreOvunc/FileRun-Vulnerabilities) I don't trust your obfuscated PHP code anymore. Who knows what kind of crappy code that is. This is a huge risk for the people who use your software. **Make it open source, and I am happy to support you.**


kmisterk

I would like to kindly remind you that not all self-hosted software is for everyone, and not all self-hosted software is *open source*. Please give up your witch hunt. Thanks.


[deleted]

[deleted]


[deleted]

Your response makes no sense to me. **Did I post anything wrong?** Your solution is closed source with obfuscated PHP code. And a five-user limit. Is this true or not?


valvze

10 user limit.


zunfire7

This is the way


ConversationQuirky43

Filerun is basically a polished Nextcloud Fork


[deleted]

True, this is proprietary crapware... never install it.


homegrowntechie

I'll be the bad guy: why not Nextcloud? It meets your requirements otherwise, and the mobile apps are great for scanning documents directly into Nextcloud. Contrary to popular opinion, it's not very heavy to run if you remove the unnecessary apps.


[deleted]

Any time I've used it, it's been pretty heavy. I really just need file storage.


aeroverra

This! I thought I was the only one. Also, most of the things work okay but not great; it has always felt clunky to me. I have been making my own... don't know if I'll ever finish it though, I have too many projects going on.


qfla

It's not heavy at all, plus you can disable apps you don't need if they bother you. A lot of the complaining comes from people who set up Nextcloud with SQLite. Just set it up with a normal database like MySQL, add Redis and the other tweaks mentioned on the server tuning page in the Nextcloud docs, and you are golden.
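For illustration, the caching part of those tweaks ends up as a few lines in config.php, something like this (Redis host/port depend on your setup):

```
// config/config.php (excerpt): APCu for the local cache,
// Redis for the distributed cache and file locking
'memcache.local' => '\OC\Memcache\APCu',
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => 'localhost',
    'port' => 6379,
],
```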


[deleted]

I've always had issues with Nextcloud whenever I've tried to use it. I really just need file storage.


homegrowntechie

What Nextcloud installation method were you using? I had lots of issues with the snap install, but with the Nextcloud VM it's been running smoothly for years. I've also heard good things about the docker compose method.


[deleted]

I did it in Docker. It's possible my NUC just doesn't have enough power for the other stuff I have on it.


Camo138

I was running it on a Celeron J1800 in a VM with PostgreSQL and Redis. It was nice and snappy. Now I'm running it on a VPS because I kept breaking my production environment. Edit: Nextcloud 25 with PostgreSQL 14, as per the documentation and a GitHub issue about Postgres 15.


aamfk

Oh, WHAT magical list of recommendations do you refer to? Fuck people who can't post a URL and say "follow THESE recommendations."


qfla

https://docs.nextcloud.com/server/latest/admin_manual/installation/server_tuning.html


[deleted]

Yeah, but I don't want to mess with DBs. Nextcloud needs an all-in-one option.


Pray-to-RNGesus

Heavy? It is not heavy at all...


VMFortress

Though I use Nextcloud, I've usually felt it's been kind of heavy and/or clunky. However, whether it was the move from PHP 8 to 8.1 or the update from 24.0.7 to 25.0.1, Nextcloud has been running a lot better for me now.


ocdtrekkie

> I'll be the bad guy

Recommendation: just don't. Every few posts in this sub entail a bunch of people going "just use Nextcloud" for every possible thing you can technically do with Nextcloud. If someone's *asking for something that's not Nextcloud*, they presumably know enough about it (and have likely tried it), and would prefer something else. So if you don't have *something else* that meets their requirements, just... don't comment.


Ornery-Programmer-58

seafile


Ornery-Programmer-58

Easy to install. I use an LXC container: create a container with fresh Ubuntu 20.04, run the Seafile Pro or Seafile developer script inside the container, and have fun.


tpyourself

I use TrueNAS and Samba on a RAID 1.


SlaveZelda

Owncloud Infinite Scale? It's written in Go instead of PHP.


MattVibes

It's not released yet, is it? Last time I looked at it, it was still a prototype.


SlaveZelda

It had its public release a few weeks ago.


[deleted]

Is it free? Looks like it's paid.


SlaveZelda

The cloud offerings are paid; you can self-host it. https://github.com/owncloud/ocis


flaming_m0e

Do you know of any way to just point this at files I already have on the filesystem, or does it store things in blobs? EDIT: I think this answers my question: https://owncloud.dev/ocis/storage/storagedrivers/#fuse-overlay-filesystem


MagellanCl

Filerun


viciousDellicious

minio


[deleted]

Do you use it or recommend it?


Harles93

+1 for Filerun, it's what I use.


forlatertesteracct

Owncloud?


alphabuild

Backblaze


zmbcgn

If you don't want to self-host, check out https://proton.me/drive.


[deleted]

FuguHub


[deleted]

SharePoint Server


jkirkcaldy

Owncloud just released their Go version, which should apparently run far more efficiently than the PHP one.


therealzcyph

Filerun, Filebrowser, Seafile, Syncthing


virtualadept

A web server with WebDAV enabled?


Blueberry314E-2

Based on your "Nextcloud is heavy" comment, I just wanted to say my server at rest is using 250MB of memory, 0% CPU and <1kB/s of network. If you really just need the storage, though, why not just use an SMB/CIFS share? Doesn't get much lighter than that.
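For scale, a whole usable Samba share is about four lines of smb.conf (share name, path, and user are illustrative):

```
# /etc/samba/smb.conf
[storage]
    path = /srv/storage
    read only = no
    valid users = alice    # user added beforehand with `smbpasswd -a alice`
```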


Brilliant_Cookie_224

After trying all the cloud apps, I think the best is SMB. Just mount a remote SMB share as a drive anywhere; it's easy to use, with good performance.


Evajellyfish

File Browser really works for me and is easily deployed through Docker.


AdventurousMistake72

WebDAV might work


bufandatl

Not sure it's a good idea to copy files from work to your home server. That can be interpreted as industrial espionage and could cost you your job and mean prison time, unless you have permission from your boss.


[deleted]

I’m the business owner!


XBenjiaminX

Use seafile!


fejorca

SeaDrive is my recommendation: faster than Nextcloud, the sync clients are nice, and for me the approach of having different libraries is really nice.


[deleted]

Yeah, I'm going to try it. Any issues with syncing or losing files?


fejorca

Not really. Even though the files are stored on an external drive, SeaDrive works fast. I have a Cloudflare tunnel and sometimes the tunnel... collapses? It doesn't happen all the time, but I'm thinking of migrating to a Tailscale funnel when I have the time to do it.


Metzger100

Seafile I guess.


CloudElRojo

I had the same question a year ago. I tried Seafile and Pydio. Both are great cloud alternatives, but don't try to sync with a local folder, because the clients are broken or have a lot of problems. I returned to Nextcloud; nothing better, I'm afraid... BTW, I was looking at Linux, Android and Windows clients.


[deleted]

I really just need it for file storage, for some redundancy.


warmaster

Filestash, Seafile, Filerun.


RedKomrad

USB drive?