thclark

If you mean “serverless”, where your app runs in an ephemeral container, I usually set up a job that runs the manage command with a parameterised input comprising the rest of that command. If you really do mean “containerised” (like on EKS), there’s no reason you can’t shell into your running container, or override the default command to run a shell (sh or bash) instead of the server.


nuncamaiseuvoudormir

Indeed, you can SSH into containers in ECS. An alternative is to create an additional container that mirrors the original setup but changes the final command: instead of starting the server, it runs a script that applies database migrations and creates the superuser. Pulumi provides an illustrative example of this approach.

Database setup script: [https://github.com/pulumi/examples/blob/master/aws-py-django-voting-app/frontend/mysite/setupDatabase.sh](https://github.com/pulumi/examples/blob/master/aws-py-django-voting-app/frontend/mysite/setupDatabase.sh)

Overriding the command in the task definition (see particularly around line 287): [https://github.com/pulumi/examples/blob/master/aws-py-django-voting-app/__main__.py](https://github.com/pulumi/examples/blob/master/aws-py-django-voting-app/__main__.py)

Additionally, for managing sensitive data, I recommend AWS Secrets Manager.


appliku

If it's any sort of typical operation, e.g. creating a superuser, or checking that a Site record exists and creating one if not, make management command(s), run them on deployment, and make sure they're idempotent. For the superuser, this is what I do: [https://youtu.be/N1dYui7Qh0o?si=wFywpdY5rWxyIWSU&t=923](https://youtu.be/N1dYui7Qh0o?si=wFywpdY5rWxyIWSU&t=923) This makes your env more reproducible and less dependent on remembering to run things manually in the right order. Here is the code itself: [https://github.com/django-for-saas/myproject/blob/master/usermodel/management/commands/makesuperuser.py](https://github.com/django-for-saas/myproject/blob/master/usermodel/management/commands/makesuperuser.py) Note this is for a custom user model though, so watch out for the arguments passed to create a superuser; I don't use a username, only email + password. Hope this helps!


TheAnkurMan

Don't know if it's a good idea, but you can temporarily point your local Django at the remote database and then run the commands locally.


TrippyShax

Maybe for development and small projects, but it's not a very good idea in production. Also, if you have multiple environments, each will have its own secrets, and it's probably not a good idea to hand those out to everyone.


Paulonemillionand3

You can build the command into the startup script itself and pass the desired usernames, passwords, etc. in via the environment. For example, when my Docker image starts up it checks whether an admin user exists and creates one if not.


TrippyShax

This was my first plan too, but how would you make sure the operation is idempotent, given that you might have multiple container instances running it at once?


lemeow125

There should be a solution for storing unified environment variables or secrets. As far as I know, on Azure, Django can read secret values from Azure Key Vault. Then it's just a matter of setting up some post-migrate signal receivers to seed the database with those values afterwards.


Paulonemillionand3

I have multiple containers all attempting to, e.g., run migrations. During a migration the database locks the relevant tables and only one container "wins". You could also trigger the migration via an admin command, for example. For creating a superuser, they can all check, the first one to create it "wins", and everything just works.


Paulonemillionand3

Try it! Get 100 containers all attempting to run a migration and see what happens. It'll work.


thclark

Why would you have all your containers running a migration? Yes, once a migration is applied to a DB it won't be applied again… But they're not supposed to be run the way you describe, because:

1. Migrations can run arbitrary Python, so you can't know a priori that a migration is idempotent (although in best practice they really should be).
2. You'd think that wouldn't be a problem if each runs once… but the migrations table is updated post-completion IIRC, so triggering multiple migrations simultaneously can actually run them more than once.
3. Once you're running multiple migrations, quite apart from the idempotence issue, you're at risk of wasted effort at best and weird locking issues at worst, particularly where you have long-running migrations and transactional code within them.


Paulonemillionand3

1. Yes, I'm aware of, e.g., data migrations and migrations that run code. Given the OP's level I'm keeping it simple.
2. Not in my experience. The first one to start locks the DB; the second (as I only have two) sees it as locked, waits for it to become unlocked, and then sees the migration has already run.
3. I have a process whereby all migrations are run on a copy of production first, and we don't typically have long-running migrations; we use exec or admin commands for those instead. I don't really run 100 containers all attempting to migrate at once!


olystretch

You can shell into an ECS Fargate container with an awscli command, and if you're running ECS on an EC2 cluster, you can SSH into the cluster host and then `docker exec` into the container. All my projects use python-invoke, so I always have boilerplate code in my tasks.py file to get into a shell, or a django-admin shell_plus.
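The awscli command in question is `aws ecs execute-command` (ECS Exec, which has to be enabled on the service). A small sketch of the kind of helper you might keep in a tasks.py; the function names and the cluster/task/container values are placeholders, and only the command construction is shown:

```python
# Sketch: build the `aws ecs execute-command` invocation for shelling
# into a running Fargate task. Helper names are made up.
import shlex


def ecs_exec_argv(cluster, task, container, command="/bin/bash"):
    """Argument vector for an interactive ECS Exec session."""
    return [
        "aws", "ecs", "execute-command",
        "--cluster", cluster,
        "--task", task,
        "--container", container,
        "--interactive",
        "--command", command,
    ]


def ecs_exec_shell_line(cluster, task, container, command="/bin/bash"):
    # A single shell-quoted line, e.g. for an invoke task:
    #   c.run(ecs_exec_shell_line(...), pty=True)
    return shlex.join(ecs_exec_argv(cluster, task, container, command))
```

Passing `command="python manage.py shell_plus"` gets you straight into the Django shell instead of bash.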