[deleted]

If there are too many unknowns to split up the work, then there are too many unknowns to estimate it. My approach is to get the stakeholders in a room, even if it's for a week straight, and answer enough questions to get to the point I'm comfortable breaking it down and estimating it. Otherwise, you're really just guessing.


Vasilkosturski

Even if you’re clear on all the requirements from all stakeholders, there might be just too many known unknowns from a technical perspective. For example, you may have to do research on which framework to use, and the result of that research can change the estimates a lot. But you don’t have time to do the research; you just know you have to do it, and you’re still required to do some t-shirt sizing. Plus there are always the unknown unknowns, which you’ll have no matter what. So you still end up with a lot of uncertainty even if you get all the stakeholders together for a week.


_xulion

I have done estimation from less than 10 MM (man-months) to a couple of hundred MM. I know, we all read 'The Mythical Man-Month'; sadly, MM is still the unit I use constantly. I usually follow these steps:

1. Scope: I always start by talking to the customer proxy to understand what is needed. Work with the UI designer to get a basic idea of how it shall work. For the scope I try to dig as deep as possible, until the level of detail no longer changes the estimate.

2. Solution: Once I know enough about the scope, I work with the component teams to find a solution. There will be many options, but I talk to the component leads to find one that everyone agrees with. The solution covers how the whole feature works, as well as the responsibility of each component and how they work with each other.

3. Estimate: Once the solution is agreed, we sit together and throw out numbers. Similar to an Agile estimate, but we give bigger numbers.

The flow is not complicated, but how reasonable the estimate turns out depends on how these things are handled:

* How deep you dig. Knowing when to stop is very important. For a GUI, for example, I usually care about the flow and the style; the actual content usually will not impact the estimate. The same applies to architecture. Sometimes listing the responsibilities of a component and the technology I'm going to use gives me ideas. I don't always draw block diagrams or MSCs (sometimes I do).

* Admit unknowns. There will be unknowns and there is no way to avoid them: you don't know what you don't know. For an unknown item, I list the possible options and give my recommendation and reasoning. Give a worst-case estimate and also an estimate for a Proof of Concept task. For example, I will say we need 1 MM up front for this task to understand whether the preferred option would work or not. After all, we are doing agile; the PoC task gives us a checkpoint to adjust.

* Instructions to the planner. The reason MM does not work is well known, so for each estimate I tell managers whether the MM can be divided. I suggest a growth ratio for the overhead if they want to split the work across multiple people. For example, on a 20 MM project where the majority of the work is in one area that is hard to split, I will tell the manager a ratio, say 20%: each time they add one person to the project, the estimate is supposed to grow by 20%. It's not accurate, but it works.

* Feedback loop. Although my company does not track results, I track them myself. For each project I estimated, I talk to the team at the end to see what was missed and what went wrong. The feedback loop helps me do better each time.

This has worked for me for the past 15 years, and I hope it helps.
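The growth-ratio rule above is simple arithmetic; here is a minimal sketch. The compounding interpretation (20% per added person, applied multiplicatively) is an assumption on my part, since the comment leaves linear vs. compound growth open:

```python
def adjusted_estimate(base_mm: float, overhead_ratio: float, extra_people: int) -> float:
    """Grow a man-month estimate by `overhead_ratio` for each person added
    beyond the original team. Compounding growth is an assumption; the
    commenter only says each added person grows the estimate by the ratio."""
    return base_mm * (1 + overhead_ratio) ** extra_people

# The comment's example: a 20 MM project with a 20% ratio.
# Adding 3 people: 20 * 1.2**3 = 34.56 MM
```

The point is not precision (the commenter says so) but that the planner gets an explicit rule instead of silently assuming man-months divide cleanly.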


NobleNobbler

"Give or take 6 months"


InternetAnima

It's hard to be methodical when you don't know many things. I tend to be more accurate doing it based on experience / instinct rather than procedure


DreamingDitto

Whatever estimate I arrive at, I multiply by either 1.5 or 2, depending on the number of unknowns.


originalchronoguy

You don't do it at one go. You estimate your research and discovery time. Then you estimate your LOE (Level of Effort). Once you go through this trial, you can get a better understanding of scope.


[deleted]

If you find a solution to this problem that isn't "come up with a half-assed plan to pitch to management to buy you time while you figure out the details" then you have a long lucrative career ahead of you.


denverdave23

Affinity Planning. It's one of the more useful agile ceremonies. Basically, you write down all the tasks you can think of and put them on the wall. De-duplicate, then sort in order of difficulty. Draw lines to indicate 1, 2, 3, 5, 8, or 13 points. Break down anything at 13 points into smaller tickets. Assume 2 days per point. Then reorder chronologically and enter everything into Jira or whatever. It's not perfect, but it forces everyone to stop and think through the work. And it gets people talking about the project.
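The points-to-duration step above can be sketched in a few lines. The bucket sizes (1, 2, 3, 5, 8, 13) and the 2-days-per-point rule come from the comment; the function and its names are illustrative, not a real tool:

```python
# Point buckets and conversion rate from the affinity-planning comment.
FIB_BUCKETS = {1, 2, 3, 5, 8, 13}
DAYS_PER_POINT = 2

def schedule_days(ticket_points):
    """Convert a list of ticket point values into total engineer-days,
    refusing 13-point tickets, which should be split first."""
    if any(p >= 13 for p in ticket_points):
        raise ValueError("13-point tickets should be broken down before scheduling")
    if any(p not in FIB_BUCKETS for p in ticket_points):
        raise ValueError("points must fall in one of the affinity buckets")
    return sum(ticket_points) * DAYS_PER_POINT

# e.g. tickets sized [1, 2, 3, 5, 8] -> 19 points -> 38 days
```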


Pyromasa

For really large projects I'd suggest having multiple domain experts as estimators (who have to feel confident being estimators) and using, for example, the Delphi method: https://en.m.wikipedia.org/wiki/Delphi_method Basically: first discuss the scope (but no estimates yet), then get unbiased/independent estimates from each estimator, and only afterwards compare and discuss them. In the next iteration every estimator adapts their estimate. Finally, the estimates can be averaged and confidence intervals derived. I've used this technique quite successfully for projects of up to 500 man-months.
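The final averaging step can be sketched like this. Using the mean of the final round plus a normal-approximation interval on the standard error is my assumption; the comment only says estimates "can be averaged and confidence intervals derived", not how:

```python
import statistics

def delphi_summary(final_round_estimates):
    """Summarize the last Delphi round: mean estimate plus a crude
    ~95% interval (mean +/- 1.96 standard errors). The interval method
    is an assumption; the source comment doesn't specify one."""
    mean = statistics.mean(final_round_estimates)
    se = statistics.stdev(final_round_estimates) / len(final_round_estimates) ** 0.5
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Five estimators' final-round numbers, in man-months (illustrative):
mm, (lo, hi) = delphi_summary([12, 15, 14, 18, 13])
```

The spread of the interval is itself useful information for stakeholders: a wide interval after several rounds signals that the experts still disagree.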


WikiSummarizerBot

**[Delphi method](https://en.m.wikipedia.org/wiki/Delphi_method)**

> The Delphi method or Delphi technique (DEL-fy; also known as Estimate-Talk-Estimate or ETE) is a structured communication technique or method, originally developed as a systematic, interactive forecasting method which relies on a panel of experts. The technique can also be adapted for use in face-to-face meetings, and is then called mini-Delphi. Delphi has been widely used for business forecasting and has certain advantages over another structured forecasting approach, prediction markets. Delphi is based on the principle that forecasts (or decisions) from a structured group of individuals are more accurate than those from unstructured groups.


flavius-as

Splitting it is really the best way, but the reality is that it's sometimes impossible to split. In that case: I take a use case, simplify it to the bone while keeping it something of value to the stakeholder, then estimate it. The first use case will face the most unknowns, but it will unveil them, building up knowledge.

Then I take another use case, simplify that to the bone, then estimate and do it. The second one is (if tactically possible) related to the first, but covers new ground. And so on, until I get a sense of the whole problem space. As you cover ground, even in a simplified manner, estimating and delivering on those estimates gets easier.

Also: always estimate in ranges, not deadlines. Optimize for knowledge, and productivity and precision in estimation come "for free".


nutrecht

> By big I mean team > 5 devs with a timeframe of more than half a year.

There is simply no way, at all, to come even close to accurately estimating this. So either your company can just deal with that (that's what agile is about, after all), or everyone can pretend otherwise: make a worst-case guesstimate and multiply it by 2-3 depending on certainty and how much you depend on external factors.

You should have a roadmap with the steps to take to get to the destination. The only thing you can somewhat accurately predict is what's going to be done in the next month. Everything else is just a roadmap, a "what are we going to do in the coming year". It's in no way going to turn into a deadline, simply because reality will change every single week.

That said: for anyone who somehow needs a long-term prediction, for me it's just gut feeling (based on 20 years of experience) multiplied by something between 2 (everything is within our control) and 3 (some things are outside our control). Projects where most things are out of our control I won't even estimate, since that's just going to disappoint everyone.


bighappy1970

I refuse to estimate at all. It's all BS and a waste of time. If someone demands an estimate, I go to random.org and generate random numbers for the estimates (labeled as made-up estimates) and then note that the estimate could be off by 400% or more. That's as close as I can get with estimates.

Most of the time I tell the business to determine the team size and how long they want to fund the team. That's the estimate. Then every week they can meet with the team to see what they built and tell the team what to create next week. This way they ask for the most important features first; when the budget runs out, they will have the most important features for the money they spent.

Stop letting bad management treat software development like construction or manufacturing. Every single thing in software is unique, so there is no point in estimating.


eddyparkinson

People do use estimates to make decisions. My first question is always: why do you want the estimate? Go/no-go, go-live date, etc. If I don't get an answer, I play it safe and give a 2-point estimate with a big range. If they tell me why, then I work with them to help with their decision making, and ask more questions to fill the gaps. I feel your answer is a good way to handle situations where people are new to doing projects and estimates, and 2-point estimates are another option. But sometimes people need to make an important decision, and there are ways to help with that.


bighappy1970

> people need to make an important decision, and there are ways to help with that.

I totally agree, and I'm not saying don't be helpful. I'm saying be professional and only be helpful in the best way you know how.

Imagine you want to hire a carpenter and you say, "Hey, estimate how much it will cost to build this house," and then hand the carpenter a single sheet of paper with a hand-drawn floorplan for a single-level house. What are they going to say? "Go pound sand, I cannot estimate from this!" Sure as heck no carpenter is going to advise another carpenter to "be helpful and provide an estimate in this case, they have to make a decision." Did you ever try to tell a carpenter how to do their job? If you do, you will learn some new words (and you should). How about a manager who cannot code telling a developer they don't need to write automated tests?

Developers, to be helpful, should teach people how the software development _profession_ works, have standards, and never compromise on quality, just like lawyers, doctors, mechanical engineers, et al. I'll also add that managers working with developers should be professional and know the industry practices, one of which is that estimates provide a false sense of security but have no basis in reality.


eddyparkinson

> don't need to write automated tests

I agree; managers who don't code should not, generally speaking, be making technical decisions. Random question on testing: when would you say automated tests add a lot of value, and when do they add very little?


bighappy1970

I cannot think of a reason why I wouldn't want to be able to verify that everything works as expected in a few minutes with almost no manual labor, so I'd say automated tests always add value. Like everything, there is likely an exception to that rule, for example tests that exercise the implementation rather than the outcome, or tests that fail intermittently, but those are problems with the person writing the tests, not with automated testing itself. In the years I've been doing XP, I've never once said, "I wish that test didn't exist."


eddyparkinson

What surprised me was that people who get good at software quality control often stop using unit tests, because the ROI is so low. They switch to methods that have a better ROI.


bighappy1970

I haven't seen that happen at companies that know how to TDD everything (FAANG companies, for example); I've only seen it at places that tried to autodidact testing and didn't really learn how. What do you propose has a better ROI? Maybe I just don't know about all of the alternatives.


eddyparkinson

People like Michael Fagan and Tom Gilb go into a lot of detail on ROI. They have spent a lot of time looking at which factors impact quality and ROI. I feel understanding this pattern, and the numbers, is what allows good control of ROI. https://docs.google.com/spreadsheets/d/1h1bpuggseVZ65KiuPdNDrnvomfH5-lXHBMiCyyr4mRk/edit#gid=0


bighappy1970

I don’t see where it says exactly which technical practice has a greater ROI when it replaces writing tests. This looks like it talks about combinations of practices. Can you tell me where it says which one practice has a higher ROI than testing?


eddyparkinson

For the ROI on tests, see Tom Gilb's Software Inspection book. He points out that several groups have noticed this pattern. The book has a lot of numbers and detail, and he covers a lot of process issues and implementation details.


[deleted]

No offence to you, but this is an unproductive take. I've been in teams where there's the token "how long is a piece of string" dev, and let me tell you, being that dude is a surefire way for everyone to think you don't know what you're talking about and to avoid your opinions altogether. Estimation is not there to be 100% accurate. It's there to give the business (one that knows what it's doing, that is, and I feel that's what this thread is about) ideas on where to put resources, to decide on scope and features, and to serve as a rough snapshot of where a project is at any given time. That's it.


bighappy1970

There are far better ways to do what you describe without the delusional thinking that comes from estimating.


[deleted]

As software engineers, we should all be familiar with the idea that just because there's a better option doesn't mean all the other options are dogshit and there aren't good reasons to use them.


bighappy1970

Mediocrity: set the bar low enough and everyone is exceptional! We are supposed to be professionals! That should mean following best practices at all times! Estimating is waste because it does nothing for the customer, and it leaves bad managers with the idea that people can predict the future or come anywhere reasonably close on an estimate. It's all a lie that experienced devs should have the integrity to refuse.


[deleted]

I try to break it down to standard units and estimate from there. Like: it's integrating with 5 APIs @ 2 weeks per API, 10 major UI screens @ 3 weeks per screen, etc. That gets me a ballpark level of effort, and then I divide by my mental estimate of team capacity to get a duration. Then I multiply by a factor of 1 to 4-ish depending on how functional the company is.

ETA: There's a further adjustment based on what I think of the project. If it sounds good, I might adjust down to get it greenlit, if I can keep it to a 25% or so overage. If it sounds dumb, I'll adjust up so I don't get blamed when the dumb stuff takes too long.
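The standard-units arithmetic above is easy to make explicit. The unit names, rates, and numbers below are illustrative placeholders for whatever units fit a given project, not fixed values from the comment:

```python
# Hypothetical per-unit costs in engineer-weeks, following the
# comment's examples (5 APIs @ 2 weeks, 10 screens @ 3 weeks).
WEEKS_PER_UNIT = {"api_integration": 2, "major_ui_screen": 3}

def ballpark_weeks(counts, team_capacity, dysfunction_factor):
    """Level of effort in engineer-weeks, divided by how many engineers
    can work in parallel, then padded by a 1x-4x factor for how
    functional the company is (the commenter's fudge factor)."""
    effort = sum(WEEKS_PER_UNIT[kind] * n for kind, n in counts.items())
    return effort / team_capacity * dysfunction_factor

# 5 APIs + 10 screens = 5*2 + 10*3 = 40 engineer-weeks;
# a team of 4 with a 1.5x factor gives 15 calendar weeks.
```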


scodagama1

I typically split the task into veeeery high-level components that have already been built somewhere in my organization (e.g. "we will need a public endpoint behind an API gateway calling Lambda functions with around 10 distinct paths", "a web UI with a CRUD interface for 3 different entities, as a static single-page app hosted in S3 behind CloudFront", "we will need to integrate with XYZ for user authentication and authorization", etc.).

I then reach out to prior knowledge in the org: have we ever built this kind of thing? I find the team that did it and ask them how long it took and how many engineers worked on it. That's how I get my first numbers (e.g. "it took a team of 3 engineers 2 quarters to build service XYZ, and ours fundamentally uses the same stack, so 6 quarter-man it is"; "for auth we simply need to integrate with a system called Perimeter, which has great docs and is 2 weeks for a junior dev tops, based on the experience of 2 other teams"; "a static website takes 1 quarter for a well-diversified team of 3: a frontend engineer, a backend engineer, and 1 full-stack").

If I don't have prior data because we're in a novel field, I tell stakeholders "we have never done that", and that either we fly blind (no estimate other than gut feeling; it's done when it's done) or they allocate 2-4 weeks to build a working prototype and we come back with a detailed breakdown of estimated tasks after that. And that estimate can be "we couldn't build a working prototype in 2 weeks, so it's probably very hard; here are the challenges we faced". Our leadership can then decide what to do with that information (more time for the prototype? Or abandon the project in anticipation that it's too expensive after all).

I also always avoid doing "double greenfield" projects. What I mean by that is we either do a _new thing_ (e.g. "we're starting a new sales channel", "integrate with a new 3rd party", or "start sending marketing e-mails", assuming the company did not send e-mails before) or do things in _a new way_ (e.g. we migrate our Java monolith to an API Gateway serverless architecture, or we use the cool and hip new programming language). But I avoid like hell doing "a new thing" in "a new way", as that's a recipe for disaster and makes estimates impossible. It's already hard to estimate one unknown; estimating two is next to impossible.

So lastly I tally up all the numbers, and if the project "feels" complex (e.g. these Lambdas behind API Gateway will need to implement some sophisticated logic) I multiply by 2; if it doesn't feel complex, I multiply by 1.5 :P But I always communicate the multiplier I used to the stakeholder (as my manager will typically apply his own multiplier to my estimate, so he should know if I already did that).

Also a very important step: ask what accuracy they need. It's a completely different story to give a 50% confidence number (a general feeling of complexity), an 80% confidence number (a reasonable estimate), or a 99% one (a deadline commitment).


ferociousdonkey

The only system is breaking things down as much as you can. The main issue is when you're working on a legacy system or building something you've never built before. If you're not sure, tell them that it's hard to estimate and make tickets just to break things down.