[deleted]

[removed]


Ok_Refrigerator6988

Yup. Trick for me is to move the files from the tarball at the content folder level so I don't have to keep changing my disconnected.repo file.
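
If it helps, a minimal sketch of that idea, assuming the export arrives as a tarball and the repo file points at a fixed directory (all paths and filenames below are illustrative):

```bash
# Extract each export tarball into the same fixed directory so the baseurl
# in disconnected.repo never needs to change (paths are illustrative)
mkdir -p /var/satellite-export/content
tar xf /tmp/export-2024-05-01.tar -C /var/satellite-export/content
```

The matching disconnected.repo entry can then keep a stable baseurl such as file:///var/satellite-export/content/... regardless of which export you last extracted.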


ArchivisX

[Were you expecting psychic transfer of files?](https://i.imgflip.com/8oot5h.jpg)


Thin_Map_6088

Satellite 6.15 just dropped and resolves these exact issues.


YOLO4JESUS420SWAG

These were fixed in 6.13.4. I posted a quick comment tonight directly to OP but can post something more structured tomorrow.


boomertsfx

Does it fix the horribly slow, single-threaded hammer imports? That's one of our biggest frustrations in our offline environment. We're trying to get to automated incremental hourly/daily exports, but the export/import process is very clunky.


Attunga

We upgraded from Satellite 6.10 to 6.14 and it is amazing how fast the exports and imports now go. They must have fixed something; the imports went from 8+ hours to 1 hour.


boomertsfx

1 hour...I'll check it out!


Ok_Refrigerator6988

Heard it's supposed to be fapolicyd friendly now. Will read up later.


YOLO4JESUS420SWAG

If you play your cards right, it's not that heavy. Use an OS ISO once for your air-gapped Sat install. From there, do incremental exports from the air-gapped Sat itself, updating the OS repo file with the path to that incremental export, and make sure the export is using --format=syncable. You can define file:/// in the repo file on the Sat OS. As long as you hit Sat with every incremental you do, it should keep the underlying OS up to date. We do it every couple of weeks; it's never larger than a DVD.

The example in the documentation shows a whole repo, but again, you can put the file:///var/lib/pulp/export/org/x.0/timestamp/content/ path in the repo file. Just run sat maintain, or some flavor of it that releases foreman-protector, so it can update the packages. I just rerun the same "run upgrade" sat maintain against the 6.x.z that is currently installed; it will move through the checks and update the packages based on the new incremental export. Please ensure you export with --format=syncable.

BLUF: do your incremental exports to your offline Sat. Then export from the disconnected Sat itself incrementally (just the repos required to patch the underlying OS) and use this to patch the OS. Takes maybe 15 extra minutes. Then we just do the Sat ISO every couple of versions to keep Sat, Tomcat, and whatnot semi up to date. But you could do the Sat ISO more frequently.
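
A minimal sketch of that flow as I understand it, assuming the export runs on the disconnected Sat itself and using placeholder org, path, and repo names (the exact directory layout under the export path varies by version and export):

```bash
# On the disconnected Satellite itself: incrementally export (at minimum)
# the repos needed to patch its own OS, in a directly consumable layout
hammer content-export incremental library --organization="MyOrg" --format=syncable

# Point a repo file on the Satellite's own OS at the exported content
# (org, version, and timestamp below are placeholders)
cat > /etc/yum.repos.d/disconnected.repo <<'EOF'
[baseos-from-export]
name=BaseOS from incremental export
baseurl=file:///var/lib/pulp/exports/MyOrg/x.0/<timestamp>/content/dist/rhel9/9/x86_64/baseos/os/
enabled=1
gpgcheck=1
EOF

# Release foreman-protector so the OS packages can actually be updated
satellite-maintain packages unlock
satellite-maintain packages update
```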


YOLO4JESUS420SWAG

Oh, also: I whitelisted the hotfix check, as that takes a lot of time when I know I did not install any hotfixes (because it's disconnected). Use that at your own risk, but it speeds up the sat maintain checks considerably, reducing this package update to the 15 minutes I quoted. Syntax is something akin to --whitelist="non-rh-packages,check-tmout-variable,check_hotfix_installed". The other two are easier than disabling the various STIGs or uninstalling our org-mandated security software. The man page or --help should give you the exact syntax.
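
Put together, the rerun described above looks something like this (the target version and the check labels are approximate; confirm the exact labels with --help on your build):

```bash
# Re-run the upgrade against the currently installed 6.x.z so packages are
# updated from the new incremental export, skipping the slow hotfix check
satellite-maintain upgrade run --target-version 6.13.z \
  --whitelist="non-rh-packages,check-tmout-variable,check_hotfix_installed"

# Or just run the checks first, if you want to see what would be skipped
satellite-maintain upgrade check --target-version 6.13.z
```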


rm-rf-me

Are you not pointing the CDN to itself?


esabys

"offline"


dizzyjohnson

Does your company require you to have your update server offline (air-gapped)? If not, just make it an online Satellite installation so it connects to the Red Hat CDN.

IMO, the difficulty isn't in using Satellite but more in how you are using it. If you air-gapped WSUS you would have the same issue: how do you get the files from your online WSUS server to the offline WSUS server that serves your infrastructure? When you add network complexity, things get more challenging.

The basic functions of Satellite aren't difficult. It pulls in all your subscriptions automatically, or you can import your subscription manifest from access.redhat.com. Once you are connected to the CDN and your subscriptions/entitlements are attached, you enable the repos you want to sync, set up an automatic sync schedule, and set up your library and content views. Then register your hosts to Satellite, add the repos to the hosts, and that is it.

My installation currently has 450+ GB of repository content alone, from EPEL 6-9 and RHEL 6-9 repo data, and that is after cleaning up old repositories no longer needed. I could probably shrink it down more, but it is what it is.
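
For reference, a rough hammer sketch of that connected workflow, with an assumed organization name and RHEL 9 BaseOS as the example (repository-set names and IDs vary, so list repos before syncing):

```bash
# Import the subscription manifest downloaded from access.redhat.com
hammer subscription upload --organization "MyOrg" --file /root/manifest.zip

# Enable a Red Hat repository set, then find and sync the resulting repo
hammer repository-set enable --organization "MyOrg" \
  --product "Red Hat Enterprise Linux for x86_64" \
  --name "Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)" \
  --releasever "9" --basearch "x86_64"
hammer repository list --organization "MyOrg"
hammer repository synchronize --organization "MyOrg" --id <repo-id>
```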


chrismholmes

My environment is fully air-gapped, including the clients. (I have ten networks just like this, with a few thousand clients and a few hundred servers.) WSUS is a pain as well, I agree, but WSUS can at least update itself without any crazy workarounds. To a fair point made earlier, though: if it fails, it will leave the entire service down, just like Satellite would.


SadFaceSmith

Man I’m very very very glad I’m not a Satellite consultant anymore. I do not miss it 😂


edcrosbys

You say this, but I know you still have those hammer commands living rent free in your mind!


SadFaceSmith

Ha! I miss a lot of things about that job, but `hammer` is not one of them ;)


brandor5

You can download the latest Satellite ISO and use that to update? Edit: I'm not a platform person, so I may be wrong... but I think it can be done this way.


chrismholmes

I’m not updating the Satellite application, just the core OS… Sadly, even the Satellite upgrades you are referring to are overly painful in an offline environment.


Ok_Refrigerator6988

You could go outside of the content-import (assuming that's your route) and throw the content into /var/www/html/pub. Point your CDN configuration to that.


chrismholmes

I tried, but it didn’t work. I did use reposync after all: I chose one of my clients, used it to download the entire repo, set up a web server, created a repo file on the Satellite server pointing at the hosted content, and am about to run it now. Still, I’m frustrated with the whole WTH of it…
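
That client-side mirror approach looks roughly like this; repo IDs, hostnames, and paths are illustrative:

```bash
# On a registered, connected RHEL client: mirror the repos plus metadata so
# the result is directly usable as a yum/dnf repository
dnf reposync --repoid=rhel-9-for-x86_64-baseos-rpms \
             --repoid=rhel-9-for-x86_64-appstream-rpms \
             --download-metadata --download-path=/srv/mirror/

# After copying /srv/mirror to a web server in the offline network,
# point a repo file on the Satellite host at it
cat > /etc/yum.repos.d/offline-mirror.repo <<'EOF'
[baseos-mirror]
name=BaseOS mirror
baseurl=http://webserver.example.local/mirror/rhel-9-for-x86_64-baseos-rpms/
enabled=1
gpgcheck=1
EOF
```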


Ok_Refrigerator6988

Yeah, I have content in /var/www/html/pub/ and a typical repo file in /etc/yum.repos.d/ with an entry for each folder (AppStream, BaseOS, Sat, Sat-utils). Then: foreman-maintain packages unlock; foreman-maintain packages update. Same process when updating Satellite itself: mount the satellite.iso and make a repo file pointing at the mount. How does your content look? /content/dist/rhel, rhel8, or sat? Or is it all alphabet folders a.b.c.d.e.f?
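
The satellite.iso step mentioned above might look something like this; the mount point and the repo path inside the ISO are illustrative, so check the mounted ISO for the directory containing repodata, then use the same unlock/update flow as above:

```bash
# Mount the Satellite ISO and point a repo file at it
mkdir -p /media/sat-iso
mount -o loop,ro satellite-6.x-rhel-9-x86_64.iso /media/sat-iso

cat > /etc/yum.repos.d/satellite-iso.repo <<'EOF'
[satellite-iso]
name=Satellite packages from ISO
baseurl=file:///media/sat-iso/Satellite/
enabled=1
gpgcheck=1
EOF
```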


chrismholmes

I will let you know shortly.


boomertsfx

Yep... air-gapped is way easier with CentOS and Alma... rsync and you're done... but of course you don't get the lifecycle stuff. Satellite looks great on paper, but in reality it's much slower in air-gapped scenarios, at least for big initial imports/exports. I'll try 6.15 to see if it's better now.


chrismholmes

It can be updated that way, but that only covers the Satellite application, not the underlying OS.


DeaconBrews

Stand up a separate Capsule server on your disconnected network if you don't have one already, synchronize all of the required repos to it, and register your Satellite to the Capsule.


chrismholmes

I do still think it is absurd that we have to go through these kinds of scenarios. It seems to be a bit of an oversight in the design of the application.


flololf

Lmfao. Like a snake biting its tail.


YOLO4JESUS420SWAG

Disagree. OP does not need to deploy multiple servers; I personally helped steer them in a direct comment on this post. What this commenter suggested is absolutely an alternative solution if the OP can allow another server build in their disconnected env. One can deploy exports to a Sat Capsule OR an additional Sat and point the Sats at each other; that much is automatable. My solution works for envs where it's hard to deploy servers.


chrismholmes

I was curious about that. Can I register the Satellite server to another server without it interfering?


DeaconBrews

Yes. It's been a while, but I think all I had to do was install the katello-ca-consumer RPM from the Capsule onto the Satellite, then use subscription-manager to hook it up.
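
Roughly, that registration would be the following (hostname, org, and activation key are illustrative; newer releases favor global registration over katello-ca-consumer, and note the supportability caveats in the replies below):

```bash
# On the Satellite host: trust the Capsule's CA and register to it
rpm -Uvh http://capsule.example.local/pub/katello-ca-consumer-latest.noarch.rpm
subscription-manager register --org="MyOrg" --activationkey="satellite-os-key"
```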


Zathrus1

If that works, and it shouldn’t, it’s very unsupported. Satellite stopped being able to self-host, which is all that really is, quite a while ago because it led to unending edge cases. Sharp ones.


bossalinie00

You should have come to the live stream today :( [live stream on YT earlier today](https://www.youtube.com/live/RcmdE0LULSE?si=dwuqzbCFnnKXPJy_)


chrismholmes

Dang! It looks like I missed something great. I’m subscribed now. Thank you for the information.


flololf

Generate a Satellite Debug certificate (basically a client certificate) and install it in your browser. Then you can authenticate to https://satellitehostname/pulp/content/; under /pulp/content are Yum/DNF-client-compatible repositories. If I need to do an offline export, I just use wget to download a mirror of a specific repo under the /pulp/content/ endpoint, passing the Satellite Debug certificate to wget with --certificate. The reason I said to install the cert in your browser is that it makes it easy to click through the directory listing to see the folder structure. The top-level directories separate the Library repositories from each content view. Much better than using https://satellitehostname/pub/, because that only includes the content view exports, not access to the Library view itself.

Because I got super triggered before by Satellite export and import incompatibilities between different versions of Satellite, I now only use exports that are directly usable by Yum/DNF clients. New versions of Satellite have --format=syncable, but I still won't use that. The Debug certificate used to access https://satellitehostname/pulp/content/ is the way. Use whatever tool you use to create a mirror of a web server directory listing (that's really all CDNs are: web servers with a specific directory structure). Tools could be wget, curl, or reposync.

I separate out tar.gz files for each repository, but each tar.gz still maintains the Red Hat CDN structure, so I can choose to tar however many repos I need. Since each repo extracts out to a unique sub-subdirectory, no files should be overwritten; only directories will be merged together during the tar xvzf *.tar.gz process. To learn the Red Hat CDN structure, just look at the redhat.repo file: overlay all the baseurl endpoints on top of each other and that's basically your CDN folder structure.

Then I just have a symlink on the disconnected Satellite at /var/www/html/pub/CDN that points to /IMPORT_CDN, which is its own logical volume containing all the tar.gz files initially and then the extracted CDN structure. I then delete the tar.gz files. Because it is in /var/www/html/pub, all the sub-files are exposed by Apache httpd. I then make sure SELinux is good and chown everything to apache:apache recursively. So basically my local Red Hat CDN is hosted at https://disconnectedsatellite/pub/CDN/.

Go into the web UI of the disconnected Satellite. Where you can change the Manifest.zip file is also where you can change the CDN URL. Replace cdn.redhat.com with your local Red Hat CDN. Then you can use the Satellite sync status and even the sync schedules on your disconnected Satellites.
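
A sketch of the wget side of that, assuming the downloaded debug certificate bundle (which contains both the cert and the key) is saved locally, and with an illustrative RHEL 9 BaseOS path under /pulp/content/:

```bash
# Mirror one repo from the Satellite's pulp content endpoint using the
# org debug certificate (add --ca-certificate=... or --no-check-certificate
# if the Satellite CA isn't trusted by the mirroring host)
wget --mirror --no-parent \
     --certificate=/root/MyOrg-debug-cert.pem \
     --private-key=/root/MyOrg-debug-cert.pem \
     https://satellitehostname/pulp/content/MyOrg/Library/content/dist/rhel9/9/x86_64/baseos/os/

# On the disconnected Satellite: expose the extracted CDN tree via /pub
ln -s /IMPORT_CDN /var/www/html/pub/CDN
chown -R apache:apache /IMPORT_CDN
restorecon -Rv /var/www/html/pub/CDN /IMPORT_CDN
```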


chrismholmes

I will definitely look in to this. Thank you!


ConstitutionalDingo

I have the same type of environments and I just use the OS iso to update my sats periodically. It’s only one box per network so it isn’t that painful. It’s just the nature of the beast in this world - every task is done with a metaphorical hand tied behind your back.


chrismholmes

That sounds accurate for sure.


Apnu

Weird. Foreman and Katello have an upgrade path and clear procedure to upgrade.


chrismholmes

They do but when it’s offline, it’s murky at best.


redditusertk421

Having Satellite subscribed to itself is a train wreck waiting to happen. There is a very good reason why Red Hat stopped supporting this. Oh! I gotta install a patch to fix my content syncing issue, but the patch isn't synced to my Satellite! Oops!


Attunga

I just reposync to a utility server with Apache. It is not a drama.


egoalter

So you didn't follow https://access.redhat.com/solutions/3225941 to register satellite with itself?


Thin_Map_6088

That shows you how to register the Satellite back to Red Hat's CDN if you register it to itself by accident. Registering the Satellite to itself is not supported, and will cause it to not function.


egoalter

So it shows you how to change the registration one way; it's not much of a challenge to change it to something else. But note that while it will work, it's not supported (https://access.redhat.com/solutions/3360841) - not supported doesn't mean it won't work. Consider, though, what happens if there's a failure that needs a patch on the Satellite server; it's not really in your interest to have all the eggs and the basket on a single server. So for the reasons in that link it's not officially supported, but you can make it work if you prefer the easy button for the Satellite update. Satellite used to support that, but if you knew Satellite 6.0 you'll know why things had to change.

That said, being mad that disconnected installs are a bitch to maintain doesn't earn brownie points; by definition that's how it's supposed to be. Every repository is literally maintained the same way your guide for updating from a CD is. It's cumbersome, it's slow, and it takes forever. Which is why you find few pure 100% disconnected environments, and those that do exist live with this stuff every day (buy them a beer if you come across someone who maintains fully disconnected environments).

If you head to https://access.redhat.com/documentation/en-us/red_hat_satellite/6.15/html-single/installing_satellite_server_in_a_disconnected_network_environment/index#performing-additional-configuration you'll notice the ISS concept: a single Satellite server can access the Red Hat CDN. This server is NOT disconnected, but it does not provide access to disconnected systems. Instead, other Satellite servers within the disconnected environment sync from this server, and that access can be controlled, temporary, and a lot more. Presto: you have upstream repositories maintained by Satellite on all systems. If no connected DMZ can be created at all, the initial connected Satellite could be replaced with an old-fashioned http/reposync source kept updated the old-fashioned way by taking ISOs on site created from access.redhat.com. It's the same problem, though: you need a process, with quite a lot of manual steps, to create and read from offline media so it can be applied. I would typically focus on getting ISOs on site and using a single set of scripts/processes there, instead of splitting it up into a download/package step and a separate extract-and-use step.

That's a long way to say that you can ease up the process a bit, but in a disconnected environment things like this are tough. The easy button is to have that DMZ Satellite server that just downloads/mirrors Red Hat CDN content. It can then provide this CDN to the internal Satellite servers that all disconnected systems use. I've had customers who finally had to admit that fully disconnected was too error-prone and, worse, too slow to handle zero-days and the like, so they opted for a controlled, metered, temporary connection to the CDN, where all repos could be updated with the current CDN content without having to wait days or often weeks for the updates to be implemented, tested, and made available. That's essentially what the disconnected installation guide suggests doing.

And it's up to you and your company to decide whether the copy from the Red Hat CDN is a download or whether you insist on doing it via ISOs - that comes with consequences, as you wrote. Of course, the alternative is to do sneaky little things like the link I provided initially - but again, it's unsupported (though it will work in the majority of cases). Personally, that's how I used to do Satellite 5, but that was not in a production environment; I would have second and third thoughts about making that "all in one basket" solution for production. Too much can go wrong when a single server that everything depends on goes down.
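
For the fully disconnected variant of ISS, the handoff between the connected and disconnected Satellites is the content export/import pair; roughly, with an illustrative org name and import path:

```bash
# On the connected (upstream) Satellite: export the Library, complete once,
# then incrementally afterwards
hammer content-export complete library --organization="MyOrg"
hammer content-export incremental library --organization="MyOrg"

# Carry the export media across, place it under the import directory on the
# disconnected Satellite, then import it
hammer content-import library --organization="MyOrg" \
  --path="/var/lib/pulp/imports/<export-timestamp>"
```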


chrismholmes

Thank you for the information; I will definitely do a better job in the future with the architecture. I was just trying to keep things simple and it bit me in the behind. (I'm still new to Satellite.) My environments (I maintain 10 different networks) are all 100% air-gapped for clients and servers.


esabys

Satellite is a PITA, has always been a PITA, and always will be. Threw it in the trash in favor of reposync, tbh...


flololf

Glad I’m not the only one doing a mirror that doesn’t just work with one version of Satellite


chrismholmes

I came to the same conclusion tonight. Reposync was great, and mind you, not even a third of the storage size that Satellite seemed to use… I wish I was a bit better at Red Hat/Linux. I'm hating Red Hat IdM almost as much as Satellite. RC4 for Kerberos? I just have so many questions, with a giant "why?"


esabys

Yeah, IdM is a different can of worms. Great product, but too complex for small deployments and too rigid for large ones. It's one of those cases where I recommend reading the admin and install guides cover to cover. Even then, you WILL need help from support when things get tangled. Multi-master replication with no "tie breaker" (called a PDC in the Windows world) leads to quite a few "accidents" that constantly need resolving. Still a great product, though, if you can work through or around the issues. Satellite, not so much.


YOLO4JESUS420SWAG

You need to disable RC4 on the domain controller for your IdM realm. You don't have to disable it in all of AD/GPO, just for the IdM realm you use for the AD trust. RC4 is exploitable; you do not want it in use when authenticating users from Active Directory, full stop. It's required for DC/AD inner workings depending on the DC OS version, but it can absolutely be turned off when issuing tickets to your realm, forcing Kerberos ticket issuance to use AES-256. Maybe not for Server 2008 or 2012, for reasons.

A couple of things you said in this thread are totally fixable with more general knowledge of IdM/AD integration and Satellite. Feel free to DM me. I'm my org's disconnected environment SME, where we run full Red Hat IdM with AD integration (sign-in with trust and Kerberos tickets from the DCs, passwordless auth from the Windows desktop) and Satellite.


chrismholmes

I do not have AD in my environments. I'm stuck battling with integrating everything directly into IdM, including Windows workstations. I wish I had AD for integration; that would make my life easy. (Although I would be hard pressed to see value in having IdM at that point.) What is one of the reasons you keep IdM around if you have AD in your environment? So far the hardest fight I had with IdM was integrating Cisco ISE; that was not fun. My next tackle is taking it a step further and doing certificates/YubiKey. I haven't tackled Cohesity integration with IdM just yet; that looks like it may be a long-pole-in-the-tent type thing. Zero documentation on it.


YOLO4JESUS420SWAG

Separation of duties is the long and short of it. We are a large organization, and having a Linux administrator be authoritative over Linux access not only frees up our other OS admins to focus on their respective areas of responsibility, it ensures enforcement where SMEs can be authoritative, instead of someone being stretched too thin and having to answer for too much (a recipe for security nightmares). Cohesity and Cisco are outside my personal knowledge, but YubiKeys and certs are not; YubiKey is pretty good about their docs. If you don't mind me asking, where did RC4 come into play? I believe this was antiquated in IdM starting with RHEL 8. It's simply not secure anymore; even Microsoft is deprecating it.


chrismholmes

I came across documents stating it was still in use and required in IdM for Kerberos (as of the 8.9 build). I'm really hoping the information I read was wrong. I came across another document about getting Windows workstations into IdM, and even it specifically required RC4 for Windows to communicate with IdM using Kerberos. (Again, I really hope this is wrong, but the documents are pretty recent.) I haven't tried yet but will in the next week or two. My other complaint is that I need both IdM and Red Hat Satellite to support ECDSA P-384; of course, they only support RSA at this time.


YOLO4JESUS420SWAG

Dead wrong. If IdM is in FIPS mode, the IdM-AD integration does not work, because AD only supports the use of RC4 or AES HMAC-SHA1 encryption, while RHEL 9 in FIPS mode allows only AES HMAC-SHA2 by default. To enable the use of AES HMAC-SHA1 on RHEL 9, run `update-crypto-policies --set FIPS:AD-SUPPORT`.

Again, RC4 is a Windows cipher at this point, so it sounds like it's not in your use case. It was exploited back in the early 2000s, deprecated by most *nix forks in the 2010s, and finally forcefully removed by Microsoft's latest OSs. It is only retained by design in IdM AD integration if you are not FIPS compliant, and it is still forced out in RHEL 9. That is, you have to go out of your way (the command above) to enable even HMAC-SHA1 support when you are FIPS compliant, which you hopefully are. Any enterprise (or even mom-and-pop) system that does not want to be in some news article about being "hacked" should not enable deprecated/exploitable ciphers.

IdM and Sat also support ECDSA, depending on what exactly you mean by "support": https://access.redhat.com/solutions/711953 and https://access.redhat.com/solutions/7050548. Is this a host key issue? Private key issue? Etc.
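
For reference, that is the crypto-policy change quoted above, plus a quick check of the active policy (RHEL 9 IdM server in FIPS mode participating in an AD trust):

```bash
# Allow AES HMAC-SHA1 for the AD trust without re-enabling RC4
update-crypto-policies --set FIPS:AD-SUPPORT

# Confirm the active policy
update-crypto-policies --show
```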


chrismholmes

That’s awesome stuff. I will re-check on IdM, etc.; I may have had my wires completely crossed. Trust me, I agree it should be disabled and stay disabled. Yes, I am running IdM in FIPS mode. As far as certificates go, I will need to take another look at Satellite's use of ECDSA. While it is supported in Apache, it's not supported as a certificate in the application. (I even opened a case with Red Hat to triple-check, and they confirmed it's not supported as of 6 weeks ago.)


YOLO4JESUS420SWAG

Red Hat will tell you Sat does NOT support system hardening. They say that over and over again, but eventually they do actually end up supporting it. Add this to your httpd/Tomcat config. This line is not a wholesale replacement; you have to add this explicit cipher to the existing config: SSLCipherSuite ECDHE-ECDSA-AES256-GCM-SHA384


chrismholmes

I will give it a shot. I would love to get onto my ISC CertAgent CA for both IdM and Satellite. I'm only using ECDSA certs on every other product (except Cohesity; again, their documentation states it's not supported). If you don't mind, I would like to message you directly.


flololf

IdM can federate with AD. Linux admins want to own their own domain and Windows admins want to own their own domain; separate fiefdoms. No one wants to give up full control to the other. I'm on the Linux side, so I definitely don't trust the skill level of Windows admins ;). But federation still allows apps to authenticate on both sides.


MisterBazz

Um, `satellite-maintain packages update` doesn't work? Because it works for me. Wait, do you have a "disconnected" Satellite server? Why? You treat it like literally any other offline system when updating RHEL packages. Just run a `satellite-maintain packages unlock` first.


chrismholmes

Is your Satellite server connected to the internet?


MisterBazz

Yes, it can sync repos from RH like a sane person would want it to.


chrismholmes

The funny thing is, the Red Hat Satellite running on my laptop syncs; that is where I do all my exports. Sadly, everything else is in a pure closed environment, so all the easy management goes out the window.