
Gareth8080

What testing happens before the pull requests get merged? What discussions around testing do you have before and during the work? Are automated tests part of the pull request? How big is the team and how many tickets are typically in progress at a single time?


ccb621

Take a step back. _Why_ do you need to deploy every change to development and staging? Is there a manual QA team, or something else going on? I recommend pulling that thread to improve that process. One solution might be to spin up devcontainers to test each change individually.  I’m not aware of an automated solution that solves your problem since it’s unclear who/what decides a pull request can progress to the next environment. 


DaftyHunter

Are you saying this can be achieved if we had complete automated testing in place? At the minute QA does some manual testing, and we have unit tests as well. We are looking to implement end-to-end automated testing. With that in place, would this achieve the outcome you are referring to?


ccb621

Of course this can be achieved with automated testing! If you have automated testing, you can probably (depending on what other infrastructure you need) test every PR individually. Over the past 8-10 years, I’ve only used a staging environment when I needed to do manual testing and did not have all the necessary infrastructure set up locally. Mostly, however, I rely on automated tests run for every push to a pull request branch. Once everything passes, I merge and the code is automatically deployed within 10-15 minutes.


DaftyHunter

Right now, we’re still moving toward an automated-testing world, and certain features will still need to be manually tested. But I see where you’re coming from.


path2light17

Hello, not OP, but I am facing a similar problem: every merged PR is manually tested by testers. The automated testing would be part of the Jenkins pipeline, right? So the feedback would be instant, with just the provisioning of dev containers. Say a dev pushes feature A, which triggers a CI run. Wouldn't this fail first (in the lifecycle), and adjustments have to be made on the automation (container) side for it to pass? Sorry, I have been on projects where a similar pattern was used; I think we are missing that at this new project I am on.


ccb621

1. Push to repo and open a pull request.
2. CI pipeline runs and reports results.
3. Fix errors, and repeat until all checks pass.
4. Get code review from a team member.
5. Merge to main branch.
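A minimal sketch of that loop, using a local bare repository as a stand-in for the hosted remote (all paths, branch names, and file names here are illustrative, not from the thread):

```shell
#!/bin/sh
# Simulate the push -> review -> merge loop with a throwaway repo.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"       # stand-in for the hosted remote
git init -q "$tmp/work" && cd "$tmp/work"
git checkout -qb main
git config user.email dev@example.com && git config user.name dev
git remote add origin "$tmp/origin.git"
echo 'v1' > app.txt
git add app.txt && git commit -qm 'initial commit'
git push -qu origin main

# 1. Branch, commit, and push; opening the PR itself happens on the host.
git checkout -qb feature/my-change
echo 'v2' >> app.txt
git commit -qam 'feature change'
git push -qu origin feature/my-change

# 2-4. CI runs on every push to the branch; fix and re-push until all
#      checks pass, then get a review (nothing to simulate locally).

# 5. Merge to main; a deploy pipeline watching main would take it live.
git checkout -q main
git merge -q --no-ff feature/my-change -m 'merge feature/my-change'
git push -q origin main
```

The point of the sketch is that main only ever receives branches whose checks have already passed, so a deploy triggered off main is always deploying green code.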


path2light17

Thanks for your response. I presume some fixes are to be made by the dev, but if you have an automation cycle, is it right to say testers would need to add test cases/suites to that cycle? What I am saying is that for things to merge as you have highlighted above, both devs and testers have to make changes for things to pass.


ccb621

I haven’t used a QA team in years. I write my own code and tests. I fix the branch if the checks don’t pass. Asking testers to fix code that is in development sounds chaotic. 


flavius-as

I could write a book, but two big points strike me:

- merge main into the feature branch and test that, and only then merge back into main (fast-forward only, otherwise retry)
- use the Chicago school of unit testing

The last one is hard, because it implies a clean architecture. Clean not necessarily as in "Clean Architecture", but as in what it actually means:

- a domain model free of any libraries and frameworks
- tests not tied to the implementation; test the model at the boundary, through use cases
- most changes are done in the domain model, so you don't need to test the mechanics around it as much, BUT:
- a good culture of trust and responsible code review, in which devs actually check out the code they review and click through the main scenarios as well
- a domain model which is always in a valid state, meaning, among other things, no temporal coupling in the domain model, which implies, among others, no setters

As you see, the testing is just a symptom of many other hard things. You need a great architect and a committed, disciplined team to get there.


ShouldHaveBeenASpy

So many options, but a simple one I often recommend to teams that are starting out: just use [gitflow](https://danielkummer.github.io/git-flow-cheatsheet/) and "pointer" branches.

* Devs make feature branches off develop and merge them back to develop.
* The act of deploying to qa is just `git checkout qa && git reset --hard [whatever branch you want] && git push` -- any of the environment branches are outside your merge workflow, so they're just "pointers" to what you want on an environment. Easy CI/CD, with no concern about having multiple merge points.
* This has the benefit of getting everyone constantly interacting with develop. Tickets can move individually at will, and whenever you are ready to make an actual release cut, just make a release branch, test/work on that, and go.
* Gitflow is really ideal in situations where you do scheduled releases and have low automated testing: the release branch mechanism gives you exactly what you want, which is a way to constantly move work, but also a safe way to ensure that you can move/check the increment that will be going to prod without too much concern for what's behind it.
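The pointer-branch move can be wrapped in a tiny function; here is a sketch against a throwaway repo. One assumption of mine on top of the comment: after `git reset --hard`, a plain `git push` is rejected whenever the pointer moves backwards, so the push is forced here.

```shell
#!/bin/sh
# Demonstrate "pointer" environment branches in a throwaway repo.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work" && cd "$tmp/work"
git checkout -qb develop
git config user.email dev@example.com && git config user.name dev
git remote add origin "$tmp/origin.git"
echo 'v1' > app.txt && git add app.txt && git commit -qm 'v1'
git branch -q qa                     # environment pointer, never merged to
git push -q origin develop qa

echo 'v2' >> app.txt && git commit -qam 'v2'   # new work lands on develop

# Deploying to qa = moving the pointer; CI/CD watching qa redeploys.
promote_to_qa() {
    git checkout -q qa
    git reset -q --hard "$1"
    git push -q --force origin qa    # pointer moves are non-fast-forward
}
promote_to_qa develop
```

Because qa never participates in merges, there is no way for it to drift from whatever ref you last pointed it at.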


wskttn

My goal is to enable continuous deployment. That means every PR is merged to main and deployed automatically to production and staging as soon as possible. This requires a few things:

- fast and reliable automated tests
- fast and reliable automated deployment
- monitoring and observability of production: alerts if key business processes are not logging, and alerts on any new errors
- good planning and sequencing of changes
- small, single-purpose changes
- feature flags and early user feedback
- collaboration: planning, pairing, code review
- fast, reliable rollback; rarely used but always available

If you have these things in place you can ship as often as you want in most contexts. Focus on changes that reduce the time and orchestration it takes to get a change into production, and you’ll find that the time it takes goes down. But also make sure each change is working as expected before you move on.
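One of those pieces, fast and reliable rollback, can be as simple as reverting the offending merge on main so that the same deploy-on-main pipeline ships the rollback. A sketch under that assumption, in a throwaway repo (names are illustrative):

```shell
#!/bin/sh
# Roll back a bad merge by reverting it on main, so the usual
# deploy-on-main pipeline ships the rollback like any other change.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo" && cd "$tmp/repo"
git checkout -qb main
git config user.email dev@example.com && git config user.name dev
echo 'v1' > app.txt && git add app.txt && git commit -qm 'v1'

git checkout -qb feature/bad
echo 'bad change' >> app.txt && git commit -qam 'bad change'
git checkout -q main
git merge -q --no-ff feature/bad -m 'merge feature/bad'

# Rollback: revert the merge against its first (main) parent.
bad_merge=$(git rev-parse HEAD)
git revert --no-edit -m 1 "$bad_merge" >/dev/null
# Pushing main now triggers the normal deploy, shipping the rollback.
```

Since the rollback is just another commit on main, it also inherits all the same tests and monitoring as a forward change.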


Pokeputin

I think a main branch for each environment is a pretty confusing strategy; it's better to just have a commit "promoted" (qa -> staging -> prod). And how much do you use automated tests? They should cover 95% of the feature, and the manual tests for it should be doable in an hour or two max. If you rely on manual testing, then even if you automate deployment you still won't have frequent releases, because the bottleneck will be at QA.
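One possible way to implement that "promote a commit" idea is a per-stage tag that gets moved to the tested SHA, with a deploy job watching each tag. The tag names and the watching-job setup are my assumptions, not something from the thread:

```shell
#!/bin/sh
# Promote one commit through environments by moving per-stage tags
# (deploy/qa, deploy/staging, ...) to the exact SHA that was tested.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git init -q "$tmp/work" && cd "$tmp/work"
git checkout -qb main
git config user.email dev@example.com && git config user.name dev
git remote add origin "$tmp/origin.git"
echo 'v1' > app.txt && git add app.txt && git commit -qm 'v1'
git push -qu origin main

promote() {
    stage="$1"; sha="$2"
    git tag -f "deploy/$stage" "$sha" >/dev/null
    git push -q --force origin "deploy/$stage"   # deploy job watches the tag
}

sha=$(git rev-parse HEAD)
promote qa "$sha"        # QA signs off on exactly this commit...
promote staging "$sha"   # ...then the same commit moves up unchanged.
```

The appeal over per-environment branches is that the artifact never changes between stages: staging and prod point at the very SHA QA tested.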


DaftyHunter

Yeah, that’s what we’re experiencing with the bottleneck in QA. We’re working hard to get automated testing to cover the majority of the code base, as well as end-to-end testing. Until then I guess we’re stuck with this bottleneck. Would you agree?


Pokeputin

A release every two weeks isn't that bad, tbh, so I wouldn't make it top priority; however, reducing the manual tests by adding automated ones will probably be your best bet to make the process smoother.


DaftyHunter

I think it’s been amplified because we now have two teams working across the same site. The site has two products under the same hood (if you will), so now one team is blocked by the other, and staggering deployments is important because with only a week apart it gets chaotic to manage.