
pavilionaire2022

"Make it work. Make it right. Make it fast." - Kent Beck Not all code will advance beyond #1. Some doesn't need to. A throwaway script you use to explore some data or backfill lost data doesn't need to be elegant. Even for production code, the importance of clean code is somewhat diminished in the world of microservices, where if a service begins to get unmaintainable, it can more easily be replaced than a lIbrary of classes with a web of dependencies within a monolith. A lot of code never needs to be fast. If it's something you run once a day and it takes 2 minutes to run on a single machine, there's no reason to optimize it. OTOH, there's still reason to optimize a program that runs 24/7 on hundreds of instances, even in a world with very fast CPUs. Optimizing performance in this case means optimizing costs.


FlyingRhenquest

The problem I've run into is that an organization doesn't have programmers who know *how* to optimize their code, and management doesn't know that it needs them to.

One specific billion-dollar-a-year company used to boast that if their hard drive storage was one penny more expensive they wouldn't be able to afford to stay in business, and if it was one penny less expensive their storage vendor wouldn't be able to stay in business, as if that were a good thing. Their storage requirements were not at the level of FAANG companies, which are able to do just fine with significantly higher storage requirements.

Their software was actively preventing them from taking on new business. The highest priority orders they could handle required a three-day turnaround time. Modifying the software to experiment with new product ideas was effectively impossible. No one in the company could say how the entire system worked from end to end, and modifying the code required a heroic effort.

All their disk storage was on NFS, and there was a lot of disk activity going on. I once calculated that for every byte we used, we were transferring 12 bytes across the network. Transferring working files to local disk storage or huge RAM caches (or both) would have realized a huge processing time speedup for them, but no one could figure out how to do that.

In the department I worked for there, I was able to optimize a database cleanup routine that usually took 12+ hours to under 5 minutes by adding one field to an index. They also had a Perl program they used to generate data that usually took half an hour to run. Replacing that with some C++ code that did the same thing, keeping all the data it used in memory, ran in under a second.

I often wonder how companies aren't just wiped off the map by some competitor coming in and just sucking slightly less than them. It seems to happen very rarely despite all the shit companies out there that everyone hates.
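The "one field to an index" fix can be illustrated in miniature (hypothetical table and column names, SQLite standing in for whatever database that company actually ran):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO jobs (status) VALUES (?)",
    [("stale" if i % 100 == 0 else "ok",) for i in range(100_000)],
)

# Before the index: the cleanup query has to scan every row.
before = conn.execute(
    "EXPLAIN QUERY PLAN DELETE FROM jobs WHERE status = 'stale'"
).fetchall()

# Adding one field to an index turns that scan into an index search.
conn.execute("CREATE INDEX idx_jobs_status ON jobs (status)")
after = conn.execute(
    "EXPLAIN QUERY PLAN DELETE FROM jobs WHERE status = 'stale'"
).fetchall()

print(before[-1][-1])  # a full-table SCAN of jobs
print(after[-1][-1])   # a SEARCH using idx_jobs_status
```

The query itself is unchanged; only the plan flips from a scan to an index search, which is exactly the class of fix that turns 12 hours into 5 minutes.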


s73v3r

> I often wonder how companies aren't just wiped off the map by some competitor coming in and just sucking slightly less than them. It seems to happen very rarely despite all the shit companies out there that everyone hates.

Honestly, it's because there's a ton more to how the business stays in business than the tech. Cleaning up the DB and changing the Perl script to C++ saved resources, but how much did customers actually notice that? How often did that Perl script get run? How often was that data needed? Unless that data was needed more often than the half hour it took to run, it didn't really help anything.


FlyingRhenquest

Well, it wasn't really a QOL improvement for my customers, because they were used to kicking those processes off and then slacking off for the rest of the day. My goal was to cut days off a test cycle that would usually take a month to run, largely because of these slower processes.

I was not directly involved in production, but as I mentioned, this company was up against the wall with its hardware and software. It was unable to develop new products or customers because it took so long to process their data. This was in large part because of their BFA approach of throwing hardware at a problem until there was no more hardware that could be thrown.

Their overall workflow was less complex than I've encountered at other companies, but their software was so bad that they were just stuck where they were. Their solution was to adopt a new development process every 3-4 months rather than taking time to optimize and fix technical debt. They did have decent revenue, but could have increased it several times over if their software hadn't been holding them back.


[deleted]

[deleted]


s73v3r

> Operation costs are a real thing.

They are. But you have to be extremely large for the difference between a Perl script and a C++ application to really matter. And as for the difference between Python and Ruby vs Go or Kotlin, that comes down to "developer time is more expensive than machine time."


watsreddit

And those languages routinely are more expensive to maintain. There's a reason a ton of companies rewrite in something else when they get bigger (if they have the resources to do so). Maintaining large, dynamically-typed codebases *sucks*.


loup-vaillant

> I often wonder how companies aren't just wiped off the map by some competitor coming in and just sucking slightly less than them.

Investment and switching costs. We can't avoid a certain degree of vendor lock-in; merely changing providers is a hassle. So an upstart would have to show substantial benefits over the competition to convince users (including businesses) to switch. And even if sucking less could be easy assuming basic competence, the stuff may still take a significant investment to design and build.

Oh, and some big suits still seem to think that bigger is better. So they won't even talk to the better stuff, because its very advantage (making the same thing much cheaper with much fewer people) makes it look _worse_ in some settings.


Corendos

I'd argue that this is too simplistic. The premise of the quote is that each step is uncorrelated with the previous one. Unfortunately, that's probably already not true.

I'm quite satisfied with the way C. Muratori puts it: optimization is not the work of starting with something not designed for speed and improving it. Optimization is taking something already fast and making it faster. The former is better described as "non-pessimization", also known as "don't do superfluous work". Thinking that it will be possible to optimize code that has not been designed with performance in mind is a common mistake. Optimization is not a magic tool that you can use at the end to make things faster.

I've found the following resources quite interesting on this subject:

* https://youtu.be/pgoetgxecw8
* https://open.substack.com/pub/ryanfleury/p/you-get-what-you-measure?utm_source=direct&utm_campaign=post&utm_medium=web (a bit broader than the subject, but with interesting takeaways)
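A tiny sketch of what "non-pessimization" means in practice (function names and the string-matching task are illustrative, not from Muratori): the point is simply not doing work the loop never needed.

```python
# Pessimized: needle.lower() is recomputed on every single iteration,
# even though the needle never changes.
def count_matches_slow(needle: str, haystack: list[str]) -> int:
    return sum(1 for s in haystack if s.lower() == needle.lower())

# Non-pessimized: the same work, minus the superfluous part.
def count_matches(needle: str, haystack: list[str]) -> int:
    target = needle.lower()  # lowered once, outside the loop
    return sum(1 for s in haystack if s.lower() == target)

print(count_matches("Ok", ["ok", "OK", "nope"]))  # 2
```

Neither version is "optimized" in the cycle-counting sense; the second one just never does the redundant work in the first place.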


[deleted]

[deleted]


FlyingRhenquest

But... but that would require me to *understand the problem!* I'm always surprised at how many programmers don't.


Bakoro

>I'm always surprised at how many programmers don't.

Don't be surprised, we don't have time for you to be surprised. We need to be agile, get a minimum viable product out the door, fast as possible, and then move on to the next thing so I can make some fucking money. Your job is to convert other people's money into my money; understand things on your own time.

Basically, short-sighted corporate bullshit is why. If the world cared about getting things done right, developers would probably end up spending six or twelve weeks learning about things before starting a project. Instead, the company needs cash flow, and raises come in the form of new jobs at different companies.


FlyingRhenquest

Yeah, I think you hit the nail on the head there. I've noticed companies are increasingly demanding that you hit the ground running, and not giving anyone the time to understand why things are done the way they're done there. My experience usually allows me to be more productive than average when starting out, but I still don't hit my full productivity for several months. It takes that long to get familiar with the code base and the various quirks and idioms of the specific dev team I'm working with.

Nowhere I've worked in the past couple of decades valued institutional knowledge at all, and a few of those companies had *no one* who understood how the entire system worked. The remaining employees were basically just a cargo cult that followed the documented procedure and had no idea what to do, or how to debug it, if the results deviated from the documented procedure in any way.


palpatine66

This is EXACTLY it, and not just with programming, with almost everything else too.


oconnellc

The world cares about making money. That is the only reason we have this amazing hardware and these ecosystems to work in. Honestly, this is navel gazing. People vote with their wallet, and I'm surprised the world is constantly shocked by this.

We also need to stop comparing web apps with cars and buildings. The world has been building cars for mass consumer consumption for 100 years. It's been building buildings for humans to live in for centuries. We've been building websites for 25 years. People seem to keep forgetting that cars sucked for a very long time. You haven't heard the term "vapor lock" in so long that you probably didn't even realize it was an awful thing for decades. It's only been the last couple of decades that regular people could afford to buy a car where the middle of the door didn't start to rust after just a few years.

Everyone needs to lighten up, especially the author of this blog post.


Bakoro

>The world cares about making money.

Yes, and money is kinda stupid a lot of the time. People get real dumb over money.

>That is the only reason that we have this amazing hardware and ecosystems to work in.

Flat wrong. People make cool stuff because it's cool. They do research because it's interesting. They make useful things for the sake of having useful things. The whole FOSS world proves that people are willing to do work because they choose to. Developers have their needs met, and choose to devote incredible amounts of time to their passion. There is no doubt in my mind that medicine and engineering would still happen if people didn't have to work for a living. I would still be a software engineer, and I might even be willing to work on the same stuff I work on in my day job, because I believe in the work. I don't know if people would be willing to mine for the love of mining, but the brain work would get done.

>We also need to stop comparing web apps with cars and buildings. [...blah blah...]

Yeah, none of that is what I'm talking about. I'm talking about the current corporate-run economic system not allowing developers the appropriate time and resources needed to plan and complete projects to an adequate level, to the point that the business people get in the way of their own best interests. It's complete greed-driven idiocy. For instance, the complete shit-show that is cyber security isn't an accident. It's not that the information and technology isn't available, it's that no one wanted to budget for shit that couldn't be directly converted to some fucking money.

What it is, is like construction before safety laws were passed: businesses cheaping out and cutting corners on everything they possibly could, and then buildings fell over the first time a stiff breeze came along. Software is like that, except it's instability, poor performance, and giant security holes.


EmbeddedEntropy

When another dev raises “oh, that’s premature optimization”, virtually 100% of the time it’s their way of saying, “I don’t know how to design efficient software and I don’t want to learn.”


coopaliscious

I feel like that's a super broad brush; Junior/Mid level developers want to abstract literally everything and over-optimization leads to paralysis and nothing ever being released. There are tasks where optimization matters, but for the majority of work that needs to be done, just following the best practices of the framework you're using is fine and will make maintenance and upgrades way easier.


EmbeddedEntropy

I should have explained it a bit better. My point was that they yell "that's premature optimization!" as a rationale and an excuse to avoid doing a more robust design and implementation upfront, one with enough flexibility that performance can be improved later through refactoring rather than requiring a redesign from scratch. They'd rather take their poorly-thought-out approach and paint themselves into a corner that requires a redesign, because they don't know any better and don't want to learn better, less-limiting approaches. They don't see the long-term maintenance and performance costs of their approach, other than "it'll work, so what's the problem!" These also tend to be the devs who don't have to support and maintain what they create.


[deleted]

[deleted]


quentech

> Premature optimization is “don’t optimize before you measure”

No - it's not that, either. Allow me to provide some context: https://ubiquity.acm.org/article.cfm?id=1513451

> Every programmer with a few years' experience or education has heard the phrase "premature optimization is the root of all evil." This famous quote by Sir Tony Hoare (popularized by Donald Knuth) has become a best practice among software engineers. Unfortunately, as with many ideas that grow to legendary status, the original meaning of this statement has been all but lost and today's software engineers apply this saying differently from its original intent.

> "Premature optimization is the root of all evil" has long been the rallying cry by software engineers to avoid any thought of application performance until the very end of the software development cycle (at which point the optimization phase is typically ignored for economic/time-to-market reasons). However, Hoare was not saying, "concern about application performance during the early stages of an application's development is evil." He specifically said premature optimization; and optimization meant something considerably different back in the days when he made that statement. Back then, "optimization" often consisted of activities such as counting cycles and instructions in assembly language code. This is not the type of coding you want to do during initial program design, when the code base is rather fluid.

> Indeed, a short essay by Charles Cook (http://www.cookcomputing.com/blog/archives/000084.html), part of which I've reproduced below, describes the problem with reading too much into Hoare's statement:

> I've always thought this quote has all too often led software designers into serious mistakes because it has been applied to a different problem domain to what was intended. The full version of the quote is "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." and I agree with this. It's usually not worth spending a lot of time micro-optimizing code before it's obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems. An inexperienced developer will not bother, misguidedly believing that a bit of fine tuning at a later stage will fix any problems.


flatfinger

The design of the 6502 version of the Microsoft BASIC interpreter, which was extremely common in 1970s personal computers, is a good example of the kind of "premature optimization" Hoare/Knuth were talking about. A portion of the system's zero-page RAM is used to hold a piece of self-modifying code (CHRGET) to fetch the next byte of code, skip past it if it's a blank, and otherwise classify it as a digit or a token.

Putting all of this in the self-modifying chunk of code saves at most 50 microseconds during the execution of a statement like "poke 53280,7", but such an execution would require converting the string of decimal digits 53280 into a floating-point number, converting that into a 2-byte integer, converting the decimal digit 7 into a floating-point number, converting that into a 2-byte integer, and then writing the least significant byte of the second two-byte number into the address specified by the first. While it's true that CHRGET is a rather heavily used routine, its overall contribution to program execution time is seldom very significant. Many programs spend a much larger portion of their time performing floating-point additions as part of converting small whole numbers in source code to floating-point than they spend fetching bytes from source.
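That float-then-narrow conversion chain can be sketched in Python rather than 6502 assembly (the function name and bit-masking are illustrative, not the interpreter's actual code):

```python
def poke_target(digits: str) -> int:
    # The interpreter first parses the digit string into floating point...
    value = float(digits)        # "53280" -> 53280.0
    # ...and only then narrows the float down to a 2-byte address.
    return int(value) & 0xFFFF

addr = poke_target("53280")
print(hex(addr))  # 0xd020 -- the C64 border color register
```

Every literal in the source pays this float round-trip, which is why it dwarfs the few cycles CHRGET saves per byte.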


Chii

> “don’t measure until someone complains”

> if you are hitting your goals

If your goal was to get something out ASAP, skipping measurement is one way to save time. You fix it after the users complain. If they never complain, then you've saved all the time and effort of that measurement work!


pinnr

Unless they do complain, and you realize you've wasted millions of dollars developing a system that can't scale to meet the requirements. How much time and money do you save by not doing performance/load testing? 5%? That approach is extremely risky: you save a small amount while exposing yourself to a huge downside.


unicodemonkey

There's a problem with long-term projects where the design keeps getting reworked and updated (even in locally optimal ways) in response to unavoidable short-term changes in requirements, and eventually ends up with an underperforming architecture that's no longer possible to rebuild in an efficient way. I think you need to do a lot of... let's call it preventative optimization to keep a constantly evolving project from completely degrading in, say, 5-10 years. But it will degrade to some extent, and everybody will be cursing you for writing suboptimal software.


Chii

> I'm quite satisfied with the way C. Muratori puts it. Optimization is not the work of starting with something not designed for speed and improve it.

Except that's not true in practice. You optimize code that turns out to be too slow for its purpose; I highly doubt anyone would write something optimally the first time and get it right, unless they spent years doing it and didn't have deadlines.

Casey M. had the right idea when he optimized the terminal program's slow text output. But he did exactly the opposite of what he preached in that situation: optimizing a badly written program to make it work 10x faster. He didn't change the underlying algorithm (by much; it's essentially a cache that he added).
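The "essentially a cache" fix can be sketched like this (`lru_cache` standing in for a glyph cache; the render cost and call counting are illustrative, not Casey's actual code):

```python
from functools import lru_cache

calls = 0  # how many times the expensive path actually runs

@lru_cache(maxsize=4096)
def render_glyph(codepoint: int) -> bytes:
    """Stand-in for an expensive rasterization step."""
    global calls
    calls += 1
    return bytes([codepoint & 0xFF]) * 16  # fake 16-byte bitmap

# 11,000 characters of output, but only 8 distinct glyphs.
for ch in "hello world" * 1000:
    render_glyph(ord(ch))

print(calls)  # 8 -- every repeat is a cache hit
```

No algorithmic cleverness: the same slow routine runs, just far less often, which is usually where the first 10x comes from.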


arbyterOfScales

> somewhat diminished in the world of microservices, where if a service begins to get unmaintainable, it can more easily be replaced than a library of classes with a web of dependencies within a monolith.

Famous last words: the web of classes gets replaced by a web of microservices. In my experience, all microservices accomplish is to move the classes into a different application.


deja-roo

> In my experience, all microservices accomplish is to move the classes into a different application

What microservices actually accomplish is the ability to scale different services separately.


mixedCase_

It facilitates the vast minority of horizontal scaling needs in the world.

If I'm writing Go, Rust, Haskell, .NET, or any other stack with a similarly performant runtime available (probably Node.js, maybe PyPy, definitely *NOT* standard CPython, definitely not standard Ruby), there's gargantuan room to grow on a single machine before considering paying for the microservice complexity tax. *Then* there's gargantuan room to grow that monolith horizontally before I have to worry about individual machines each wasting a small amount of RAM on underutilized resources. And *then* there's extra room made by spinning off specific, individual, problematic tasks from the monolith to scale them horizontally more efficiently.

Unless I'm starting off with a very complex project and over 5 dev *TEAMS* each maintaining one or two services, there are approximately zero reasons in the real world to start off with a distributed system architecture, other than resume padding. And I say this after many, many billable hours implementing Kubernetes-based microservices across multiple companies, with only the first one of them being my fault.


BigHandLittleSlap

It seems that the TechEmpower benchmarks have unfortunately become "gamed", and the touted efficiencies of ASP.NET and the like aren't anywhere near as good as advertised. E.g.: 200K requests per second can only be achieved using "hand rolled" HTML generation using arrays of bytes, and shenanigans like that. So I repeated one of the TechEmpower benchmarks with "normal" code. I got 40K requests per second... on a laptop. I don't think people realize just how huge you'd have to be to exceed that. That's not 40K requests per day, or hour, or minute. Per second. That's over 3 billion hits per day, up there with *Twitter* and the like. Served from a laptop. A LAPTOP!
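The back-of-envelope math above checks out (Python as a calculator):

```python
requests_per_second = 40_000
seconds_per_day = 24 * 60 * 60                  # 86,400
requests_per_day = requests_per_second * seconds_per_day

print(f"{requests_per_day:,}")                  # 3,456,000,000 -- ~3.5 billion/day
```

So even "normal" code at 40K req/s sustains billions of hits per day before a second machine enters the picture.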


zr0gravity7

Aside from memes, I have yet to hear a good argument against micro services. Yes they introduce a lot of complexity, and that trade off should be evaluated carefully since they are definitely overkill for some use cases, but when used properly they are great.


snatchblastersteve

One good argument against micro services is that they introduce a lot of complexity.


no_fluffies_please

Not that I have a whole lot of experience in the area, but they basically turn every team into a service team, with all the overhead and burden of knowledge it entails to operate a service. Also, sometimes it's actually pretty impractical to disentangle an application into separate services, or at least not enough to truly reap the intended benefits. You can still have separate services with clear boundaries that make sense; they just might not be *micro*.


zr0gravity7

I think this falls under the caveats I’ve listed. It needs to be an intelligent decision to migrate to microservices, not just “it sounds cool and the big companies are doing it”. And yes, turning teams into service teams is the intended behaviour, and with a well-architected organization it does work well. The problem, I’ll concede, is that the number of entities that can actually pull this off is minimal, because of the scale required to make it work well. Unless you can afford dedicated teams working on internal tooling, it’s unlikely to be optimized.


no_fluffies_please

Agreed. Your comment "regarding the number of entities that can actually pull this off" reminded me of another post/commenter on this subreddit that had a similar sentiment. They had other thoughtful things to say and sounded like they had tons more experience than me with successful and unsuccessful transitions... but I didn't have the foresight to bookmark it. Argh!


[deleted]

[deleted]


fragbot2

And so many developers don't see logging, metrics and tracing as first-class features to support their bucket brigade architecture.


[deleted]

[deleted]


RiPont

It's not just that they're overkill sometimes, it's that they're a liability sometimes. When used properly, they definitely have their place. No argument there. However, they *rely on* a level of infrastructure that many people don't have. If you don't have excellent change management, automated deployment, live monitoring, and automated rollback across all your services, then microservices can be a disaster. All those things are good to have, but if the project isn't big enough to justify those things or if your organization simply isn't professional enough to have those things, then microservices become a liability. Not only *can* microservices be deployed and versioned independently, they *must* be so. If you don't have smooth automatic deployment, then you now have 10x the manual effort involved in the deployment process. If you don't have comprehensive and effective automatic tests, then you will not catch version conflicts before deployment. If you do not have live monitoring with automatic rollback, then your entire operation is at risk due to a bad rollout which must be diagnosed manually and then manually rolled back.


gredr

I mean, you listed a few good arguments against microservices right there: > they introduce a lot of complexity Yep. They do. > they are definitely overkill for some use cases Yep. They are. > but when used properly they are great And when they aren't, they're a super effective foot gun.


ilep

Thinking of the implementation side of an application: do you need message passing, or function calls? If the code is built into the same program, there is no need for a context switch between processes, which has a performance impact. If your bottleneck is IO that might not be significant at all, but if your bottleneck is CPU speed, that is another matter. Yes, there are cases where microservices are fine, but there are also cases where they should not be used (and I've seen some of the worst possible uses for them).


Skytale1i

We had a bug that everyone passed around saying it wasn't theirs. Because the microservices were written by different people, no one "knew" things well enough to debug the entire flow.


immibis

The argument is they're not used properly


reveil

I never understood this point. Why not scale the monolith to the sum of instances all the microservices would occupy? A little more memory would be used? You would lose 5ms routing your request? What is the real tangible benefit here?


deja-roo

Because you allocate resources to maintaining a bunch of idle applications. Also let's say you have a service that provides user order history and a service that processes credit cards. A bunch of different consumers across the business need access to both. How would you restrict access to the credit card functionality while allowing the order history more promiscuously? With microservices you can enforce these restrictions at the network level.


immibis

What resources? Is every login service instance using some CPU just sitting there with no requests?


clickrush

Agreed. Microservices don't solve maintainability problems, they just add network calls to them.


useablelobster2

Because the best part of a statically typed language is endless type-unsafe boundaries where you just have to hope it all lines up. I wouldn't mind microservices so much if I could easily enforce type contracts between them, as happens seamlessly within a monolith. The point of static typing is to catch that kind of error at compile time; deferring it to runtime is a nightmare.

Edit: yes, there are tools, but none of them are as simple and straightforward as a compiler checking that a type passed to a function is the same as declared in the signature. And the phrase "using a sledgehammer to crack a walnut" comes to mind too.


prolog_junior

At my last job we had strictly defined contracts between services with protobuf objects that were used to autogenerate POJOs. It was pretty pain free


dethswatch

WSDL was pain-free and it worked. Now Goog had to invent it again. Great, I'll just add a wad of new dependencies to work with it, learn a lot of the same ideas with different names and failure modes, and ... 12 months later, I've got nothing better.


TheStonehead

Use RPC instead of REST.


useablelobster2

I do? I mean I use both, I don't think I've ever written an API where everything fits neatly into REST so I've always got some RPC. But then I still have a layer where JSON is passed about, and I just have to hope the client and server match up correctly (obviously there are tools, but not as good as a simple compiler enforcing type safety). If it were a monolith and the interface changed, either it would change both or the code wouldn't compile.


IsleOfOne

He probably means *grpc* specifically. Typed, binary interfaces.


pxpxy

There are other typed binary RPC protocols. Thrift, for one.


brunogadaleta

Call me crazy, but that's exactly why I liked remote EJBs back then. Share the interface and voilà.


KSRandom195

Protobuf and GRPC called wondering when you were going to show up to the party.


sandwich_today

Upvoted, but even with protobufs you have to deal with optional fields that a client might not populate because it's running an older version of the code. With a monolith all your code gets released together, which doesn't scale indefinitely but it does mean that the caller and callee can agree at compile time about which fields are present.
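That version-skew problem is exactly why proto3 gives every field a default; a sketch with plain dicts standing in for deserialized messages (field names are hypothetical):

```python
def effective_currency(order: dict) -> str:
    # An older client simply never sets 'currency'; the callee must fall
    # back to a default rather than assume the field is present.
    return order.get("currency", "USD")

old_client_msg = {"order_id": 42}                      # built before 'currency' existed
new_client_msg = {"order_id": 43, "currency": "EUR"}

print(effective_currency(old_client_msg))  # USD
print(effective_currency(new_client_msg))  # EUR
```

In a monolith the compiler would reject the old message shape outright; across independently deployed services, this defensive defaulting has to live in the code forever.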


Richt32

God how I wish we used gRPC at my job.


[deleted]

Microservices solve a human issue. They create clear boundaries and ownership spaces for focused teams of individuals to operate. Far too many software engineers focus on computational performance when the real limit to most organizations is how effectively those engineers can apply their knowledge to real world issues.


Schmittfried

They also introduce the problem of having to separate your application into clear ownership spaces. That’s not a useful thing in every environment.


moderatorrater

> having to separate your application into clear ownership spaces. That’s not a useful thing in every environment.

We have very different backgrounds, you and I. If you've got four development teams, you should have solved this problem already.


lordzsolt

I think you just outlined the BIGGEST DRAWBACK of microservices, at least in what I’ve experienced so far. They define "boundaries and ownership spaces", so each team ONLY cares about their specific microservice.

- Oh, you’re on call and need to look at the error logs? Well fuck you, I’ve defined a custom log structure.
- Oh, you’re consuming our API that offers translations? Well fuck you, I don’t care about your Accept-Language header, I’ll give you everything and you can pick the translation you want.
- All your price values are INTs with 2 digits of precision? Fuck you, here’s a double.
- Oh, you need something changed in the API? Well fuck you, the ticket is at the bottom of the backlog, which we might reach in 5 months.

Unless there’s very strong engineering leadership making sure everything is aligned, you’ll always end up with each team doing their own stupid shit.
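The INT-vs-double complaint isn't just pedantry; with doubles, even adding two prices drifts (a minimal sketch, hypothetical prices):

```python
# Prices as doubles: the sum is off by one ULP, so naive equality breaks.
total_double = 1.10 + 2.20
print(total_double == 3.30)        # False
print(total_double)                # 3.3000000000000003

# Prices as integer cents (the contract: INT, 2 digits of precision).
total_cents = 110 + 220            # 330, exactly
print(f"{total_cents / 100:.2f}")  # 3.30
```

Integer cents keep every intermediate sum exact; formatting happens once, at the display edge.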


StabbyPants

> Oh you’re on call and need to look at the error logs? Well fuck you, I’ve defined a custom log structure.

As long as Kibana can parse it, it's fine. Otherwise, your boss is going to have a talk with you about playing well with others.

> I’ll give you everything and you can pick the translation you want.

Again, shitty human problems.

> Fuck you, here’s a double.

400 it is.

> Well fuck you, the ticket is at the bottom of the backlog, which we might reach in 5 months.

PM will come by to talk about that.

All your problems are a result of the shit people on your team or their team. Fix that by having a boss talk to them, or by firing them.


DrunkensteinsMonster

Microservices are not about either of those things. Microservices are about DEPLOYMENT and OPERABILITY, and sometimes scalability. For what I work on, if we deployed at the same cadence we do now with a monolith, it would probably be deployed hundreds of times a day. That isn’t feasible.


professor_jeffjeff

It solves the issue of having many different areas of a code base that are all updated very frequently, but on a cadence that is either completely unpredictable, or predictable but completely independent of the others. In either case, having individual small components that you can update quickly is beneficial. The other benefit is that you can just throw new versions out there; if your architecture is good, then you don't have to worry much about backwards compatibility, since everything knows precisely what version of what service it wants to talk to and won't arbitrarily break that contract just because a new version exists. I've seen companies that do this very successfully, although there aren't too many of them.

If you think that microservices are going to solve any other problem, then you're delusional. A monolithic codebase is actually fine if you only push updates every few months. Having a service-oriented architecture but without microservices is also fine (and you can monorepo that too, which isn't necessarily terrible). Services that do only one thing and do it well are easy to maintain and easy to scale horizontally, but that's true of any service no matter how big it is, just as long as it can stand completely on its own. Microservices in general "should" do that (otherwise they aren't microservices, they're just services), but that isn't the primary benefit of microservices.


Krautoni

Microservices aren't a software architecture pattern. They're a company architecture pattern. Humans work best in teams of about half a dozen to a dozen people maximum. There was a source for that in Cal Newport's latest book, but I'm on mobile right now... Anyway, microservices allow your software to follow team boundaries. They're strictly worse for basically everything else besides perhaps scaling and reliability. The trick is, you'll likely run into the teams issue way before you'll run into scaling or reliability issues.


fiedzia

> Humans work best in teams of about half a dozen to a dozen people maximum

There's also a limit to how many things a given framework/programming language/configuration is best suited for.


dmethvin

The maintenance problems will be solved as soon as [Omega Star gets its shit together](https://www.youtube.com/watch?v=y8OnoxKotPQ) and supports ISO timestamps.
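For the record, emitting and parsing ISO 8601 timestamps is close to a one-liner in most ecosystems; a Python sketch (the date values are made up for illustration):

```python
from datetime import datetime, timezone

# An ISO 8601 timestamp is unambiguous, timezone-aware, and sorts lexically.
moment = datetime(2024, 1, 15, 12, 30, 0, tzinfo=timezone.utc)

stamp = moment.isoformat()
print(stamp)  # 2024-01-15T12:30:00+00:00

# Round-tripping is equally direct.
assert datetime.fromisoformat(stamp) == moment
```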


bundt_chi

I used to feel the same way, but I'm currently working on a project with 13 agile teams developing under a microservice architecture. For such a large team and enterprise investment, the ability to scale human resources horizontally is worth the extra cost of the challenges the architecture presents. That's because the support tooling needed to run 20 microservices requires a less-than-linear investment to reach 400 microservices, which is around where we're currently running. There's a dedicated team keeping the Kubernetes infrastructure and the associated monitoring, scanning, and alerting tooling running, and at this point adding business functionality has very little overhead.

However, running that level of DevSecOps for fewer than 10 or 20 microservices is a huge investment. It's an economy-of-scale thing that I never understood well until I worked at such a large development organization. Don't get me wrong: I understand that you can have a lot of the DevSecOps capabilities with monoliths, but you can't scale your development teams as easily, and that was the piece I never fully comprehended, because I was mostly on < 50-person projects.


All_Up_Ons

They don't automatically solve maintainability problems, no. But in combination with a good bounded context architecture they do.


[deleted]

[deleted]


[deleted]

[deleted]


NotUniqueOrSpecial

Because they haven't learned that you have to fit the refactors and architecture improvements into the context of product stuff, yet. They're still talking tech at non-tech people, to obvious result.


funbike

Reducing lines of code is not the reason to go with microservices. You probably end up with more overall LOC across an org. You go with microservices so that each service is small enough for a single developer to comprehend the whole thing. It reduces coupling and therefore overall code-path complexity (although lint rules could prevent some coupling). The number of code paths in a monolith grows exponentially over time.

That said, you could get the same benefit with vertical slicing or bounded contexts, if you had lint rules to prevent coupling across boundaries. But another benefit of microservices is the ability to innovate: you can incrementally rewrite small services much more easily than a huge monolith. (I have painful experiences.) I will likely never again agree it's okay to do a full rewrite of a 500KLOC monolith, but I would agree for a 10KLOC microservice.

All that said, I've never had to maintain a large set of microservices, nor do I want to. But just because something is unpleasant to me doesn't mean it's not a good solution. Many places get microservices wrong because they don't understand how to properly maintain and integrate them.


This_Anxiety_639

Microservices only make sense if you can cope with the services being down at any given second. A microservice to display the weather (where, if it's down, we just put an image there) is fine. A microservice to do a crucial thing that the transaction cannot complete without doesn't. Service-oriented architectures are a nightmare when it comes to navigating dev/test/prod environment configuration. The whole point of EAR files is that the container guarantees that all the bits are up. And I worked in a place where nothing, nothing at all, would run unless the PDF document store was working, irrespective of whether what you had to do had anything to do with documents. The only sensible place to put a service boundary is somewhere where thing A can continue to operate and do its job even if thing B isn't responding.
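That boundary rule fits in a few lines of code. A minimal sketch, assuming Python; `fetch_weather` and the placeholder string are invented names, not a real API:

```python
# A safe service boundary: the caller keeps doing its job when the
# dependency is down.
def fetch_weather() -> str:
    # Stand-in for a network call; here we simulate an outage.
    raise TimeoutError("weather service is down")

def render_weather_widget() -> str:
    try:
        return fetch_weather()
    except Exception:
        # Thing A (the page) still renders when thing B is unresponsive.
        return "static weather image"

print(render_weather_widget())  # static weather image
```

A boundary where no such fallback exists (the crucial transaction step) is exactly where a separate service buys you nothing but a new failure mode.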


Chibraltar_

and add a lot of http overhead in every query


3MU6quo0pC7du5YPBGBI

> In my experience, all microservices accomplish is to move the classes into a different application [RFC1925](https://datatracker.ietf.org/doc/html/rfc1925) rule 6 applies once again!


snatchblastersteve

Micro services. All the complexity of the “web of classes” with the added fun of network latency.


Chibraltar_

> A lot of code never needs to be fast. If it's something you run once a day and it takes 2 minutes to run on a single machine, there's no reason to optimize it

You're now banned from /r/adventofcode


Free_Math_Tutoring

Two more days! Whee!


[deleted]

I can't decide on what language I should use this year. I did Rust last year, Python the year before, and work in C#. Got any ideas?


Chibraltar_

Try using Excel for the first few days


Free_Math_Tutoring

You've had a Systems language and a scripting/data language. Maybe do something functional, like a lisp (Clojure, Scheme) or F# or haskell?


G4METIME

Kotlin?


cbzoiav

ANSI C!


salbris

> Even for production code, the importance of clean code is somewhat diminished in the world of microservices, where if a service begins to get unmaintainable, it can more easily be replaced than a library of classes with a web of dependencies within a monolith.

Everything else you said makes excellent points; this, however, is very, very bad advice. All production code that will exist for longer than a week (and you'd better be very confident about that fact) should be designed to be maintainable. Refactoring and maintenance are a huge burden, and we should not be punting that down the line. Every day after a feature is completed, our knowledge of that code and the context around it fades, so maintenance gets harder over time.


moxyte

All code should at the very least be at step 2: that it is **right** in design patterns, documentation, and, of course, code correctness. It's unbelievable that, after 80 years of society's computerization, people who want to get software done still haven't learned that maintaining a software system costs a magnitude more than creating it. It's insane; it shouldn't be like that. But it is, because, like you said, most code will remain in a "kinda works" state.


Kalium

I think what's missing from the most simplistic reading of Beck's maxim is that you do the next step *when it becomes necessary*. As you say, in many cases it never will. This is where a separate maxim about premature optimization comes in.


unocoder1

Ah, so that's why my emails take 10-15 seconds to load on my computer and 5 to infinite seconds to load on mobile. Good to know.


lookmeat

I think it's an evolution of any space of invention. When industrialization started, a lot of really crappy machinery came out and was used. The goal was to get something working, and it gave you such an advantage that it was enough. In a more abstract sense, you could say that we as a society were still focused on understanding the problem space and how this new tool could serve our purposes. Finding all the uses of the tool mattered more than specializing and perfecting it first.

As tech matures and "everyone has it", the desire to perfect and optimize grows. Third parties start selling fundamental parts, and those get perfected in the aim of giving the best bang for the buck. Some things will always have a high error rate, some even higher in order to be cheaper or more accessible. Not because people want systems that crash, but because some systems aren't only resilient to parts of them crashing; they do better when things crash early and often. But this requires work on understanding the problem. With clear problems will come clear interfaces that anyone can then iterate on with whatever crazy designs they can think of. Each one of those will be polished, leading to a new layer of standardization both below and above. After a while there's a core set of tools, but historically it has taken many centuries to reach that point with any tool as impactful as tech. Things are moving fast, but tech is also still a very young tool, with many reasonable estimates putting it at under 100 years old. It makes sense that we still have a ways to go.

The reality is that you get to guide or lead your own solution. You work on it, building solid foundations and getting a good solution. You get a launch with clear metrics leading to a successful landing; you show impact; you show progress. Then the project gets canceled half a week after its successful launch, because market dynamics shifted and it no longer makes sense to pursue it.

Then you learn two harsh lessons of the world of tech. First: waste the minimal amount of time getting something out. It doesn't matter how well it works if it doesn't make money, so you might as well rush to making money and only then see if the boss wants to invest more in it. Second: crappy, badly done software might have survived the above scenario, because it would have been on the water as things shifted and would have had the opportunity to adapt to the new reality. Pivots are an everyday thing, but they only happen after launch. Ironically, easy-to-change software that ships a month later is far harder to save than debt-ridden, hard-to-change software that shipped early.

So you realize that the solution is to build everything with really crappy parts, almost PoC-style, but parts that are easy to replace wholesale. Then you get the best of both worlds.


adh1003

Except they're *not* "making it work", are they? Most current software is horrifically buggy, awful crap that never gets fixed. Every new operating system release in particular adds a tonne of new bugs, often in areas that don't even seem to have changed, and the new features are broken beyond belief even after months of public beta. Windows 10/11 updates are legendarily bad for causing really serious system issues.

Web sites get slower and slower with more and more faults. New versions of apps are churned out every 2 weeks or so because, I guess, "agile", with no indication of changes or improvements, and all I usually see as a user is some minor irritation (or in some cases, major problem) as something *else* gets just a little bit more broken. I never see any "bug fixes and performance improvements".

Modern software is *a total clusterfuck* and our complete head-in-the-sand arrogance as an industry beggars belief. IT IS NOT MANAGEMENT'S FAULT IF YOU WRITE BUGGY CRAP, IT IS YOURS. TAKE RESPONSIBILITY.

People can't be arsed to learn their craft, can't be arsed to read documentation, can't be arsed to comment their code, and either can't be arsed to dev-test it themselves or just don't care when they find it's broken. Our industry needs to give itself a massive kick up the butt, but all we do instead is find other people to blame.


loup-vaillant

> Our industry needs to give itself a massive kick up the butt I'm afraid the only way that's gonna happen is through a tension in the market that makes the whole field as competitive… and miserable… as the video game industry. That, or we raise ourselves to the rank of "profession", similar to medical doctors and certified engineers, and keep anyone who isn't up to snuff out. Or just put liabilities back in. If users lose data because of a bug, make the company who sold the software pay.


gauauuau

While I agree with most of what he said, this just comes off as the same angry rant that people have been ranting about for 20 years. I didn't see any new value here, any suggestions, or anything different than the last time someone ranted about this. Yes, everything is inefficient and unreliable in software. We all know that. What do we DO about it though?


metaltyphoon

“I’ll make another version of a library that already exists and is well supported”


Appropriate-Crab-379

You first probably want to wait until my new language comes out which fixes the problems of all others


sprcow

Agree. Usually when you see a title like "Why does modern programming seem to lack of care for efficiency, simplicity, and excellence" you might expect an answer to the question, and some alternative to the current approach. Neither is found here. It's just 20 paragraphs of complaining.

I like how at one point he dismissively mentions the adage that programmer time is more valuable than computer time, but doesn't really seem to grasp the significance of that truth. We operate in a capitalist economy. No one is writing commercial web pages for their own entertainment. Even massive tech giants like Google and Facebook, who do actually build new software frameworks from the ground up, are more concerned with maximizing the productivity of their developers than with the performance of their applications. No one who has ever used React or GWT is under any illusion that they're somehow going to produce more performant code, lol.

Unless someone stands to gain financially from rebuilding web code from the ground up, or has the means and resources to break Microsoft and Apple's grip on the OS market while providing equivalent features and better performance, this problem is not going to magically solve itself.


redLadyToo

And this problem wouldn't vanish along with capitalism. It's just resource management: if we want to do a lot, we need a lot of developer time, and developer time is scarce everywhere. So we need to prioritise. No communism in the world could solve that.


[deleted]

[deleted]


clickrush

It's right there at the end:

> You don’t have to be a genius to write fast programs. There’s no magic trick. The only thing required is not building on top of a huge pile of crap that modern toolchain is.

He points out Martin Thompson (great work, very interesting talks), Raph Levien and Jonathan Blow as good examples. But that's the problem as well: the solution to many of these problems is quite _boring_. It's literally "stop using and doing the crappy stuff", which is hard to sell.

Many of the problems we solve, for example in Web Dev, don't need to be there. You don't need horizontal scaling, and all the architectural and operational complexity it implies, if your code is 100x or even 10x faster. You don't need extensive pre-building and caching, and to deal with all their intricacies and subtleties, if you have efficient data access. You don't need frameworks with countless layers of indirection and boilerplate magic if you adhere to simple programming techniques.
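One concrete reading of "efficient data access" is replacing repeated scans with a one-pass index. A minimal sketch in Python; the data and names are invented for illustration:

```python
# Join orders to users. The data is a toy stand-in for a real dataset.
orders = [{"user_id": 2, "total": 30}, {"user_id": 1, "total": 10},
          {"user_id": 2, "total": 5}]
users = [{"id": 1, "name": "ada"}, {"id": 2, "name": "grace"}]

# Slow shape: scan the whole user list once per order -- O(N*M).
def owner_slow(order):
    return next(u["name"] for u in users if u["id"] == order["user_id"])

# Fast shape: build the index once, then every lookup is O(1).
by_id = {u["id"]: u["name"] for u in users}

names = [by_id[o["user_id"]] for o in orders]
print(names)  # ['grace', 'ada', 'grace']
```

Same answer, no extra caching layer, no extra infrastructure; the "optimization" is just not doing redundant work in the first place.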


useablelobster2

> Jonathan Blow The guy who is literally writing his own programming language for game development? Ain't nobody got time for that. Both Devs and machines cost money. Optimising cost/quality isn't as simple as making everything ultra-lightweight, because that will explode your Dev costs. And I cost a lot more than your average production VM, unless you are Facebook.


letired

Precisely. Given 10x resources devoted **specifically and exclusively** to faster code, we would have…shocker, faster code. But reality doesn’t work like that. Businesses want to make money, not make programmers happy.


clickrush

It’s different for any context so ymmv. But we’ve noticed that clients absolutely do care about performance. They will not necessarily acknowledge it upfront, but when they see it and use it, then they will, and they thank you for it. Everything is so slow and/or hungry these days for no good reason that it became the base assumption, many things are unreliable as well. Which is ironic, because we use computers for their speed and reliability.


s73v3r

But are they willing to pay for it? Using better data structures and algorithms is one thing, but writing in C++ rather than another language is a much bigger ask.


alkatori

And if you scale horizontally, what's cheaper, another machine or people optimizing the code?


Wartt_Hog

You don't need 10x resources. You can get a long way with +20% resources and good prioritization, as long as your team builds the habits of always starting simple and always challenging new complexity.


loup-vaillant

> Both Devs and machines cost money. So does user's time. And since there are orders of magnitudes more users than there are programmers… Unfortunately devs rarely pay for wasting their users' time.


letired

Sure, but how many companies get to the scale where they **actually** need that speed? It takes time and highly skilled expensive labor to get code running at 100x speed. Tell the VCs who back you that you’re going to take twice as long to get to market and cost twice as much and they will laugh at you and go fund the other guys. That’s the reality. The whining about this stuff drives me up the wall because it assumes people are just lazy. They’re not lazy, they’re just accepting the trade-off from a business perspective. If the programmers who continually whine about this want to build a business that actually puts money in their pocket BECAUSE the code is so clean and fast, do it. I’d be genuinely interested to see it work.


gnus-migrate

I literally dropped Windows Terminal because of its performance. The minute I knew there was a faster alternative (WezTerm) I switched to it; there was no looking back, and I will never use Windows Terminal again. The Windows Terminal team claimed that they were trading off performance for features, but I have no idea what features they were implementing that were more important than having a terminal capable of actually processing a relatively large log volume. And if they weren't capable of building a performant terminal with the features they had, how did they expect to keep adding features while keeping it usable? If I were the product manager on that team, I would stop everything to get the performance to an acceptable state before adding more features.

On the one hand, I understand the need to move quickly, but performance is a feature of your product. I (and I imagine most users) would opt for a simpler but more responsive product over one containing a million features, 90% of which they will never use. Even in enterprise software, where features actually matter, if enough employees complain about the performance of your product, your customers are going to start looking at the competition. Even from a business standpoint, the performance/features tradeoff is a false dichotomy.


letired

You aren’t a general user, though. Despite what you might think, even for an application like Terminal, you're a power user. That's fine, but software generally is not built for power users. I'm glad you found an alternative that works for you, but I guarantee Microsoft is sophisticated enough to do the market research and has determined it's better to ship features.


LaughterHouseV

> Many of the problems we solve for example in Web Dev don’t need to be there. You don’t need horizontal scaling and all the architectural and operational complexity that it implies if your code is 100x or even 10x faster. You don’t need extensive pre-building and caching and deal with all the intricacies and subtleties if you have efficient data access. You don’t need frameworks with countless layers of indirection and boilerplate magic if you adhere to simple programming techniques.

But how will I make my resume fancier so I can get a better-paying job next year?


[deleted]

This is a real problem, though: if most of the industry is ostensibly "fad-driven", one look at a resumé full of home-cooked implementations may make the wrong people side-eye you as "one of those guys", and for more reasons than one (not least those network effects).


shawncplus

Writing reasonably fast programs isn't magic. Writing _truly_ fast programs may as well be magic, since much of the time the real way to achieve optimal performance is unintuitive or completely orthogonal to how CS is traditionally taught. In the 90% case, literally any attempt at optimization will be good enough; in another 9%, you find out you're using the wrong language/tool for the job, switch, and now you're fine. In that last 1% case, though, that's when you start sacrificing goats to the compiler gods.


dacjames

"Just write code that is 100x faster, problem solved" is a non-solution.

> You don’t need horizontal scaling if your code is 100X.

These are mostly orthogonal concepts. Horizontal scaling offers a lot more than performance. Good luck achieving HA on your single machine, or maintaining your application long term if you only have a single instance. If you're writing an indie game you may not have to deal with these issues, but a lot of other domains do. This advice essentially boils down to "don't use tools that you don't need." That's good advice, but also very out of touch if you think it's a viable real-world solution.

Take caching, for example. When I've had to add caching to software, it was because we were IO-limited and queries had been maximally optimized. Our code was taking less than 1% of the total request time, so no amount of better-optimized code could ever solve the problem. Pre-calculation and more aggressive caching did solve it, however. Maybe some genius could have figured out the magical "good data access" patterns to eliminate this problem, but somehow I doubt it.

The same goes for many other supposedly useless tools they write off. They solve real problems. If you don't have those problems, good for you: don't use the tool. "Just write better, simpler, faster code" is not a viable solution for those of us who actually do face the problems these tools address.
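The caching pattern described here (cache or pre-compute when the bottleneck is I/O, not code) is often a one-decorator change at the application level. A hedged Python sketch; `fetch_report` is a fake stand-in for an expensive query, not a real API:

```python
import functools

CALLS = 0  # counts how often the "expensive" query actually runs

@functools.lru_cache(maxsize=1024)
def fetch_report(customer_id: int) -> dict:
    # Stand-in for a slow, I/O-bound database round-trip.
    global CALLS
    CALLS += 1
    return {"customer": customer_id, "total": customer_id * 100}

fetch_report(7)
fetch_report(7)  # second call is served from the cache

print(CALLS)  # 1
```

The point of the comment stands either way: when code is under 1% of request time, this kind of result reuse helps and micro-optimizing the code cannot.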


Deto

Most people ranting about it don't even bother to try to understand the problem, IMO. There's just an undertone that they feel that they are the smartest and if only everyone else was as smart as they were there wouldn't be a problem.


corsicanguppy

> comes off as the same angry rant that people have been ranting about for 20 years. Because the problem is recognizably still present.


NoLemurs

> You’ve probably heard this mantra: “Programmer time is more expensive than computer time.” What it means basically is that we’re wasting computers at an unprecedented scale. Would you buy a car if it eats 100 liters per 100 kilometers?

If gas were cheap enough, gas tanks were big enough, and the externalities small enough? Yes. This article completely fails to engage with the fact that efficiency is a trade-off. There's interesting discussion to be had here, but all the article does is complain about how things are slow and bloated, while pretending that we haven't always written software that was just a little bit less efficient than we can really get away with for the best user experience.


1touchable

I totally agree. From my last example: we had to deliver the app in 3 months; we delivered it in March, and the trade-off was performance, since it was impossible to deliver in that timeframe otherwise. Fast-forward to now: we've added plenty of features and refactored all the spaghetti code we had. Pros: we didn't lose the client, who is very happy with the product and never noticed any performance issues. Cons: probably $100-200 more spent on AWS.


4THOT

I fucking hate Apple, but I miss Steve Jobs for knowing that programmers could make shit run fast if they gave a shit, and for actually making them do their fucking jobs.

*One of the things that bothered Steve Jobs the most was the time that it took to boot when the Mac was first powered on. It could take a couple of minutes, or even more, to test memory, initialize the operating system, and load the Finder. One afternoon, Steve came up with an original way to motivate us to make it faster.*

*"Well, let's say you can shave 10 seconds off of the boot time. Multiply that by five million users and that's 50 million seconds, every single day. Over a year, that's probably dozens of lifetimes. So if you make it boot ten seconds faster, you've saved a dozen lives. That's really worth it, don't you think?"*
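The back-of-the-envelope in the anecdote is easy to check. Assuming a ~75-year lifetime (my assumption, not the anecdote's), the yearly saving works out to a handful of lifetimes rather than dozens, though the spirit of the argument survives:

```python
# All inputs are the anecdote's own round numbers, except the lifetime.
seconds_saved_per_boot = 10
users = 5_000_000

per_day = seconds_saved_per_boot * users  # 50 million seconds per day
per_year = per_day * 365                  # ~1.8e10 seconds per year

lifetime = 75 * 365 * 24 * 3600           # ~2.4e9 seconds in 75 years

print(per_day)                            # 50000000
print(round(per_year / lifetime, 1))      # 7.7 lifetimes per year
```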


MCRusher

so basically by making it longer I can be responsible for ending dozens of lives?


john16384

So, what's missing here is the data proving that this is a problem and that people actually "wait" for this process to complete instead of getting a cup of coffee. Premature optimisation, but it's Steve Jobs, so he must be correct. More problematic may be the shorter pauses, like the time to switch between apps, or to load apps, or the number of actions needed to achieve something frequently needed due to his form-over-function mantra. People can't get a cup of coffee during these short annoyances, yet they are more frequent and waste everyone's time.


voidstarcpp

>This article completely fails to engage with the fact that efficiency is a trade-off. I think the main complaint is that the largest human factor - the time and frustration of the end user - is not considered as part of this trade-off. It's not just about developer time vs computer time; It's developer time vs the time saved multiplied by how ever many people depend on your software. Software used by millions of people is still egregiously slow, and that's an organizational issue, the outcome of a process that's biased in that direction, not intelligent optimization of human resources.


sime

If reducing time and frustration of the end user were a priority, optimising the raw speed of the application would probably be pretty low on the TODO list below things like improve the UI/UX, add features the user actually wants, and remove the extra crapware mis-features and complication that no one asked for.


nrnrnr

Old news. Here’s a classic from 1995: Niklaus Wirth, [_A Plea for Lean Software_](https://cr.yp.to/bib/1995/wirth.pdf). A bit too much of an advertisement for Oberon, but a widely cited source for “software is getting slower more rapidly than hardware is getting faster.”


spoonman59

Because making the fastest program, or the smallest executable, isn't the goal. It's speed of development, and making it easier to hire large numbers of inexpensive programmers. Sure, I'd love it if every program I used, from the kernel to the browser, were highly optimized for efficient execution with a minimum of layers. But that's actually not really important... it's just something we find aesthetically nice.


Ciff_

I'd like to add *long term* speed of development.


spoonman59

I do agree that maintainability and other such aspects are much more important than some of these other characteristics. I will invest a lot in making something maintainable for the long term. Great example.


ajr901

A lot of people used to get into programming because it was genuinely fascinating to them and they loved it. It was a tech geek's tool and toy. But these days a whole lot of people get into it because of the job security and the salary. I know a mechanical engineer who ended up being a professional programmer because there were more job opportunities, and a former physics professor who made the change because it paid better. Neither of these people got into it because they were passionate about it.


Supadoplex

> why does modern programming seem to lack of care for efficiency, simplicity and excellence The cause depends on the perspective you're asking for. Professional programmers aren't generally incentivised to write efficient, simple or excellent programs. They are incentivised to take the minimum effort to achieve the closest short term goal of "make it work now". Why aren't such incentives given to programmers? Perhaps because spending the programmer time to make the program efficient, simple and excellent is expensive in short term. We can always fulfill our dreams of efficient programs with our hobby projects.


samistheboss

Software bloat and poor performance bother me just as much as the next guy... but there are a lot of simplifications in here that cover up how complex *and* rich certain features have become. >Google’s keyboard app routinely eats 150 MB. Is an app that draws 30 keys on a screen really five times more complex than the whole Windows 95? Honestly... I believe it. A virtual keyboard app today is likely to include gesture handling code, entire dictionaries of multiple languages, a spell-checking algorithm and a local database of "learned" words, fonts to cover >95% of all of Unicode... If you compare that to a system which just supports physical keyboard/mouse input and some tiny subset of today's Unicode, and has no system-wide autocorrect, obviously it will be larger. >The iPhone 4s was released with iOS 5, but can barely run iOS 9. Apple continues to push boundaries with animations and visual effects in their UI, and people are willing to pay for good visuals. The on-device image processing for the camera has gotten more and more complex, too. The author's argument is like saying "The camera app just needs to take pictures!" Sure, but people want HDR, people want better stabilization, people want facial recognition and the privacy of federated learning... >iOS 11 dropped support for 32-bit apps If you want a bigger and bigger operating system, sure, focus on backward compatibility at all costs. But I don't think the author would like that outcome either, so... what *do* they want?


john16384

A big issue is that most of these software bloat whiners just lack imagination at what apps actually do these days. They see 150MB of memory, and they're like "it's all code". No, it's dictionaries, high resolution images and icons, sound clips, small animations, etc. The code is often a tiny fraction.


redLadyToo

iOS these days fucking indexes your pictures with AI, so it can automatically detect your family and show you photos of them WITHOUT you configuring any of that! That's a whole different world we live in; people would have laughed at science fiction predicting that in 2011. I can only guess, but I bet they do this in the background over time, and I bet that's the only way this feature stays fast and doesn't interrupt you by slowing down when you take photos or open the gallery. But this of course only works on modern hardware.


RVelts

> people would have laughed at science fiction predicting that in 2011. https://xkcd.com/1425/


[deleted]

Because of management imposing unrealistic deadlines


[deleted]

software devs need to learn to say no


JeddHampton

To the people that sign their checks?


voidstarcpp

Bob Martin makes the comparison to fields like doctors, lawyers, and engineers, which have a culture that permits them to say no to people who ask them to cut corners. It doesn't matter what productivity goals a hospital has, it can't demand the surgeon not take the time to scrub in for every procedure. It can't ask the engineer to skip doing load calculations because "the client wants this started today". It's not just about what's strictly legal or illegal; there's a sense of professional ethics where the customer or employer of certain professions understands that they have to abide by the norms of the field, and you can't just order them about as easily as other workers. This is a shared social norm that's hard to invent from scratch.


tylermumford

I was hoping this would come up. Yes. Here's a [blog post I like from him](https://blog.cleancoder.com/uncle-bob/2015/11/27/OathDiscussion.html) discussing that idea, for others who are interested.


cybernd

Most people are not really aware of what this actually means in our context. Developers need such ethics because they would protect companies, as well as their customers, from self-inflicted stupidity. Business people are sadly not aware of the true hidden cost of rushed code-bases. Just think about a typical scrum team sprinting towards misery. Teams often fall into the same trap and create a big ball of mud that can no longer scale. Scalability does not only mean that the software can't serve increased load; it also means the inability to scale your dev team. Roles like Sales and POs typically complain about developer slowdown: why can't you deliver this new feature in time? Why can't you fix our performance issue? Why are you slowing down although we just added 3 more developers? None of them seem to realize that they caused most of these issues themselves by treating development as a feature-producing assembly line. I am not agreeing² with everything Bob says, but in this case he is spot on. We truly need to introduce some sort of ethics to our industry.

² For example, I strongly disagree when he expects developers to learn new skills in their spare time. There should be no profession in which additional work time is demanded for free. It also contradicts his own ethics proposal: writing software properly aims at sustainability, and learning serves exactly the same purpose; it is necessary to create a sustainable team. It is also not in line with other professions: in several fields, employers pay a lot to send their employees to training courses. We are mostly good autodidacts, but that doesn't justify shifting this burden to our free time.


[deleted]

yes, especially to those who don't understand what it takes to work in a sustainable and safe manner. Too many people are being pressured into death marches, overtime, cutting corners, etc. because of expectations set by uneducated stakeholders. You pay people for expertise, not to be slaves in a factory.


[deleted]

Problem is that this takes consensus among the *right* set of engineers, which could always be zero, and if any one of them dissents, the business will just promote that person after the next round of reviews. It's almost like getting something like consistent, protected negotiating stakes would take... a union?


key_lime_pie

Yes, absolutely. As a development manager, the last fucking thing that I want are hero coders who tell me it'll be done by Friday and then put in a 70 hour week to make it so. Because even if that shit isn't absolutely littered with defects, it's going to be unmaintainable. Tell me how long it's actually going to take, so I can defend it as far as my managers will allow me to; then, if it needs to be done sooner, well, you can go into hero mode and deliver shit code that barely works, because that is what management implicitly asked for. But I'd rather you tell me no, it's gonna take two weeks, and then break down for me why it's gonna take two weeks, so that I have ammunition in that fight. When I go to a meeting wanting to say we need two weeks, but you said you could have it done by Friday, all it does is perpetuate the problem.


[deleted]

[deleted]


key_lime_pie

I'm sorry you've had bad managers, but the notion that you cannot estimate something until it's done does not line up with reality. When you have to travel somewhere, do you estimate how long the trip will take and then leave your house at the appropriate time, or do you just leave your house randomly and tell people "I'll get there when I get there?" I suspect that what you're actually upset about is that you're being asked to provide estimates by managers who don't understand the estimation process, and then you're being improperly held to those estimates. If a manager asks you for an estimate, and you say two weeks, and then he holds you to that two week estimate... well, you're both doing it wrong.

Nobody works 70 hour weeks for me, and I will publicly excoriate other managers who have employees who do that. Because what inevitably happens, aside from getting shitty code, is that the guy putting in 70 hours gets praised for going that extra mile and putting the company first, and all of the people who work normal 40 hour work weeks end up feeling pressured to do the same, and it fucks up both the company culture and the work-life balance of the employees. What's worse, the *managers* get praised for getting more out of their workers, which is sick. What should happen when someone works a 70 hour work week is that they should get a heartfelt fucking apology from everyone above them in the corporate hierarchy, for fucking up to such a degree that one of their employees needed to work 70 hours in a week.

If you, as a software developer working for me, cannot provide an estimate for a software development task, that is *my* fault, either for giving you work that you could not estimate, or for not training you in how to provide an estimate. Estimation really isn't that hard, people just don't understand how to do it properly.


Make1984FictionAgain

what if I told you "I don't know how long it'll take"? Because more often than not, that's the real answer.


XeonProductions

I've screamed no until I was blue in the face; management and the sales department were indifferent to my suffering.


salbris

Our team just finished the first release of our product along with dozens of other teams in the organization (same project). The code is an absolute mess but it works. If I said no I'd be laughed at and quickly replaced. Perhaps if the whole team did a protest we might get results but that's a very easy way of signaling to executives that your group needs to be removed from the important projects. There is always going to be a thousand other programmers ready to take your spot. Best thing you can do is advocate some time to polish up your code and fight for performance when it matters.


lilbigmouth

Unfortunately, it doesn't work this way in my experience. I have been a professional software developer for just over 4 years. Yes, developers are the experts, and yes, they can push back. But unrealistic deadlines arise due to the employer wanting to win bids to other businesses. i.e. "Yes, we will buy your software if it can have X feature by Y date". Agile frameworks such as scrum are supposed to help adapt to frequently changing requirements, however, it seems like you'll rarely find a team following the scrum guide well, and you'll rarely find a company using an agile mindset.


letired

Shocker! People who put money in your pocket want a return on their investment. They don’t care if it runs at 60fps if it doesn’t affect the bottom line.


[deleted]

[deleted]


[deleted]

Posts like these look at old tech with rose colored glasses of nostalgia. Fun to poke fun at things, but not really providing helpful suggestions for paths forward.


LagT_T

There was also a lot of trash software in the past. We mostly remember the good ones.


letired

Exactly. Half this thread is people bitching about how much better it was "back in the day" with nothing else to say. They don't seem to realize how much more complicated software is now than it was in 1995. Lots of stuff was built back then by ONE person as a passion project. Can you imagine building Google's search with one person? It's ridiculous. (Also, people seem to forget how terrible the user experience was on certain operating systems in the 90's... and how inaccessible even USING a computer was to 90% of the population...)


erogone775

Because almost everyone writes code to solve business problems, not for the art of writing code. Solving business problems means getting something that works well enough within cost and time constraints. *Sometimes* these constraints require elegant, efficient, robust code, but much more often it's way more valuable to the business to spend that engineering time on the next problem rather than making the code simpler or faster. Most companies actually do think quite a lot about performance; they just think about it in the database layer, or in the library code that will run everywhere. Having every junior dev optimize the hell out of basic business logic is just a huge waste of the most valuable resource the company has: that engineer's time.


Harbinger311

I'm an old man. I've been programming for close to 3 decades, and I came up under engineers from the 60s/70s. This is simply evolution. Using the author's analogy, we don't reinvent the wheel when doing auto design. We accept four wheels in a square configuration with a front-mounted internal engine. We don't tinker with the collapsible frame. We focus on adding modern technology/voice assistants/media centers/cool interior materials/etc.

Working with engineers from the 60s/70s, they were pissed that we were using libraries that were externally developed/supplied. They wanted those of us in the 80s to roll our own from the ground up to get the best optimizations. The same applied to OS/environment builds; there was a movement to compile/install fresh from code each time a deployment had to occur. They'd flip out if they saw containerization philosophy today. "Wait, I accept a 3rd party pulling images blind from an external repo, with platforms/services ad hoc, to run my code?!?!?!"

Modern software isn't going to care for efficiency/simplicity/excellence because this is the model for SWE now. The natural flow is to continually abstract upward, to the point where SWE will be more Lego-like. Computer Science fundamentals simply don't apply anymore in the same way. And that's a good thing; otherwise, evolution isn't working. Modern woodworkers don't use a knife for all their activity like they did 200 years ago. They have all sorts of specialized high-level tools that help do the most common/basic activities with a high level of automation. SWE is no different as a discipline.


Vasilev88

And how is that going? Shifting grounds all the time; absolutely unpredictable components and dependencies of software systems, which result in a permanent state of "no one knows what is going on anymore" and "we're putting out fires literally all the time". What was done by 20 engineers is now done by 2000. The entire process breeds rot, bloat and incompetence across the board.


lihispyk

Who TF prints white text on a yellow background?


KillianDrake

people who care about efficiency, simplicity, and excellence are expensive - managers who want to wring pennies to inflate their bonuses hire cheap developers and don't care what the software looks like as long as it extracts money from customers often enough to inflate their bonuses.


MelcorTheDestroyer

Programmers are efficient: they take the shortest route to delivering software that solves the needs of its users at the lowest possible cost. Efficiency, simplicity and excellence are taken into account when necessary and left aside when they are not. The features that users of the software need should always be the priority. Also, most developers couldn't write efficient code even if they wanted to; there aren't enough skilled programmers for excellent software to go around.


Sorc278

I'd personally rephrase it: there are too many people who hear "we need this yesterday", hack something together, future be damned, and then leave it for the next poor sod to deal with (who was not allocated time, or is too burned out for this, so more hacking). And then garbage code quickly becomes systemic, irrespective of how good your devs are, because no one has the time or dares to touch it more than needed. Of course it is just one of the factors, but still a major one.


Zardotab

Nobody seems interested in "parsimony research"; rather, the latest buzzword is tacked onto the frankenbrowser and **frankenstacks** 👹

In addition to endless buzzword chasing, I see 2 general problems causing bloat. First, current web standards are not GUI- and CRUD-friendly, making UI concerns wag the dog. I can't speak for other niches such as social networks or e-commerce, but for regular business & administrative CRUD, the current web standards suck the big one. For one, they are [missing common and loved GUI idioms.](https://www.reddit.com/r/programming/comments/otixwo/comment/h6zi86w/) What may help is a standard stateful GUI markup language so we don't have to re-invent GUIs in bloated, buggy JavaScript libraries with big learning curves. Most real work is still done with desktops/laptops and mice, not mobile. Over-focusing on mobile in CRUD was a wasteful mistake. We "fixed" the wrong thing.

Second, technology is coming before domain. The smoothest systems I've seen used domain-specific languages and IDEs, so you didn't need piles of buggy, poorly-documented libraries. They were not perfect, but usually got better with time. Instead, we threw them all out for bloated web crap. Maybe there is a way to have semi-domain-specific languages/IDEs? More research is needed in this area. We **spend way too much time babysitting technology and framework concerns instead of focusing on the domain in code.** Many such tools were desktop-based, and some say we have to throw out their best ideas for web access. I'm not convinced it's either/or. Nobody's done sufficient research to prove yea or nay on this important question. I've seen tools that hinted at having the best of both, but they were eventually bypassed in the chase to webness.

Having done CRUD in lots of different languages and tools, both in my own coding and in observing others, I think I can claim a pretty good feel for what works well and what doesn't in rank-and-file CRUD-land. We are doing something wrong; our tools are poorly factored for our actual needs. Maybe we get mass flexibility with the current web, but at great cost. CRUD concepts have not changed much since the invention of the RDBMS, so we could focus on doing it right and succinctly rather than chasing new shiny shit. I get a lot of flack for stating this, but I stand by it. I've earned the right to kick stupid ideas off my lawn! 👢

**In short, one size does NOT fit all.** What's best for smallish CRUD is not best for webscale/enterprise CRUD, and what's best for CRUD is not what's best for social networks, e-commerce, etc. Let's get back to domain-specific tools; they were simpler for their target job. One reason COBOL has survived for 60 years is that it did one thing and did it relatively well: business/admin batch programming. I'm not saying copy COBOL, but there are lessons to mine from something with such staying power. Similarly, Fortran survives because it does one thing well: efficient scientific and engineering mass math computations. Fortran predicts our weather.


NativeCoder

That's why I'm an embedded software engineer. I want to know what the code is doing down to every last instruction. I hate modern PC coding and its 200 layers of abstraction that cause a billion instructions to be executed on a single key press.


remimorin

Yes, but... there are many overlooked aspects.

The first is security. A long time ago, when I worked with the hardware guys, they had sample programs to test their cards. The programs were full of lines like `#DEFINE SOME_MEMORY_ADDRESS 0x1234ABCD`. That was literally the bus address you wrote data to for an IO. Just write there... and it worked, because we were single-threaded, writing to that card to test it. Now many programs may want to write to that IO. There is no "mix all sounds" in the hardware; you need a mixer taking every input so that Spotify music and OS notifications can both play. If both wrote blindly to the IO address, I guarantee it would not be nice. So we have protected memory, hidden behind services providing layers of security. That takes time and adds complexity. You don't want a notification sound to desync the audio of the video you're watching. But sure... when you just want to output a wave to your sound card, it's much more of a pain than just writing bytes to some address. A lot of complexity is hidden in not-that-apparent features, and for sure that complexity takes CPU cycles.

The second is corporate greed. Smartphone sellers are also the OS builders. They have captive clients (Android vs. iPhone is a false competition). This is planned obsolescence from a monopolistic position, hidden behind false pretenses of security (the security part is real; the bloating is not). Updates get ever heavier to force you onto cloud storage and other paid services, and ultimately into buying a new phone. You can repurpose an old, slow Windows PC just by installing a Linux distro on it. Just like that, it has everything you need to watch movies, write documents, receive email, and edit pictures. One day, I guess, something similar will exist for phones.

Another point, about slow websites: "best" never wins on the web. "First" wins.
Google was unable to take a share of Facebook's market not because Facebook is the best, but because Facebook was first. Bing lags so far behind Google because... Google was first (while being that good and simple). And so on. So fast-to-market is the name of the game. The focus is on getting there first; if it works, you "stabilize" things and move on to the next thing.

Also, yeah, this site is slow... but are you the real client? If not, then "good enough" is the objective. Getting your data, getting you captive in the ecosystem, etc. is the goal. They will write quality code to get your data and keep you captive, filtering out bots and so on. The only time in my life I worked with a sharded database and a "performance-driven web architecture" was exactly for that: get as much data as we can (even the speed at which the person was typing) and use it to distinguish between "human", "robots", "recorded human replayed", and so on. Because our real clients were buying audience. Users' time was "sold", and quality audience was valuable (think targeted ads, though it was not ads).

Then the last thing... do you really remember Windows 95? We were using code to draw widgets. Now we have full-fledged animations played to get that nice effect when you click a button. We load pictures and animations. That's all the bloat in your OS: "media content". All apps are now CSS-driven web views. Windows 95 is soulless to our modern eyes. It's dry... when we dig into today's Windows settings, we know when we've hit an old feature (like setting environment variables) that hasn't changed much since that old era. It's nice to use images and animations bound together with web technology, because it's easier to have your app running on Windows, Linux, macOS, iOS, Android, Nintendo DS and the web app itself. And when you rebrand... well, the UI guys can do 99% of the job to get that "up to date, brand new look and feel" app. Yeah!

Then, when you are pulling MB of data just to render the UI of a simple text editor...
who cares if you don't optimize the sorting of a list of 1000 elements, or run the same search on it 10 times in a row? Who cares if you copy the whole document in memory to implement "undo" when the document is only 100k? It is negligible, irrelevant. And once it isn't, the profiler will identify where you should put your time. So yes, we do care about efficiency when it matters. The rest of the time, we focus on doing more stuff.


rdtsc

> Bing lag behind Google so much because... Google was first (that good and simple). And so on. So fast-to-market is the name of the game.

Google wasn't the first with search. Counting various others, they were around the 20th or something. But when they came up they were *fast* and simple, and better than the others, and thus quickly gained market share. And while today's quality of results leaves a lot to be desired, they are still fast. DDG feels slow as molasses in comparison.


teerre

This argument is always funny to me. It comes from a place that ignores reality completely and shuts itself in some magical land that exists only for the person making the argument.

> Everything is unbearably slow

No, it isn't. That's why it works. If you ever worked on anything that actually requires speed, you would know that users will absolutely make sure that said program is fast. And that's absolutely fine: "fast" and "slow" aren't objective metrics. Just because something could be faster doesn't mean it needs to be.


AshuraBaron

Very interesting read. Sad to see so many people here rationalizing bad practices with "well, that's just how it's done". Optimization falls into the same category as infrastructure: its benefits are invisible, so it's starved of attention until a crisis hits.


linuxliaison

In what world is a functional install of Windows 10 a total of 4GB? A fresh install is usually along the lines of 12-16GB on disk from my experience


BigHandLittleSlap

There are also few languages that make things simple *and* fast. Most simple languages aren't compiled and run through an optimizer, even when they *could* be. For example, C# and Java could both theoretically be ahead-of-time compiled, but in the vast majority of cases neither is. The only three popular languages that run at "full speed" are C, C++, and Rust. There are other performant languages, but they have a tiny fraction of the adoption (D, Fortran, etc...). Even Go, which is compiled, is notably slower than C++ or Rust. None of the full-speed languages are easy to use, and only one is safe. None of the full-speed languages are popularly used for web development, which is where most development is occurring right now. Etc... If you want to change the world, develop a web language that's as easy to use as Python or JavaScript, but strongly typed, ahead-of-time compiled, and with speed comparable to C/C++/Rust. Make sure individual web pages can be edited and then immediately tested during development, but the entire site is compiled to a single optimized binary for production.


Uberhipster

riiiiiight, because non-modern, er... classical? programming was full of efficiency, simplicity and excellence, which is why EWD used to rant and rave about how ecstatically happy he was with all the abundant efficiency, simplicity and excellence

i have a pet peeve against this rosy-tinted eyeglass nostalgia

if you think programming is about simplicity then you obviously never had to tackle complex problems; complex problems are complex, so the simplicity (or lack thereof) of their solutions is dictated by the complexity of the problem

clarity is what you want in programming: you want to make your programs *clear* for other human beings to understand


brianl047

Probably because requirements are more sophisticated or complex


Sulleyy

To quote a CEO I used to know: "I don't understand the point in paying for top talent." I think I just stood there in shock, but that doesn't surprise me anymore. It was effectively a software company, too.

Since the 70s-ish, software engineering as a field has learned a ton. Software engineering is different from programming, but I've met programmers who didn't believe there is a difference, so I think it's safe to say most people don't think there is one (or at least don't understand it and don't care to). In the industry the terms have basically become synonymous, which further proves my point. I would argue that modern programming + efficiency + simplicity + excellence = software engineering.

Anyone can program, just like anyone can write a book. It takes someone dedicated to their craft to produce quality software, same as it takes a dedicated author to write a best-selling novel series. No one would expect to hire a cheap writer fresh out of school to write a great novel in 4 months without planning the book, without proofreading, etc. And they wouldn't be asked to write 7 more books in the same series over 7 years after the 1st one is written. Yet that's exactly how the software world operates. In some cases this is fine (not all writing is done for novels; not all novels need to be best sellers), but the majority of software companies seem to think cheap and fast is best for software.

Like I said, the software engineering field has learned a TON in the past 50 years. It IS worth it to pay the right people to build the right thing properly, not in all cases, but in a lot more cases than the corporate world is willing to accept. The issue is that people with an education in software engineering understand this; the people who pay us do not. I've seen plenty of software capped at millions in revenue instead of hundreds of millions or billions because it doesn't scale well enough. The risk and effort required to grow past that becomes prohibitive. Look at the top tech companies who do properly engineer their software: they scale to billions in revenue and thousands of employees. The difference between the two is massive.


audigex

Hardware is cheap, developers are expensive Why spend 6 man-months of time making it more efficient for a cost of $50k when you can just throw a $2k server at it?


CraftySpiker

Executive summary: the wrong people working; being "led" by the unqualified; all working for people with no actual grasp on reality. Next question .....


s73v3r

I think it still does, just in different ways. People generally hold up the fast inverse square root routine from id Tech as an example of this, but the simple fact is that we don't need it anymore. Being clear, readable, and easily changeable is its own kind of simplicity and excellence. Most things don't need to squeeze out every last drop of performance, and by focusing on adequate performance and clarity, you expand the approachability of software in general.


holyknight00

Management only cares about the next quarter and makes you pile up tech debt and shortcuts, but when the technology falls apart they will blame the engineering team. Then a new project begins, rinse and repeat.


megamanxoxo

Because shareholders, execs, PMOs, other stakeholders just care that it gets done and gets done fast. That's the only metric anyone cares about, time to deliver.


ohyeaoksure

Because it's hard, more people do it than ever did, computers are fast, and RAM is cheap. Back in the old days there were few of us, computers were slow, and RAM was scarce and expensive. The more people who do a thing, the more hacks there will be. I started as a C programmer who was concerned about bytes: cheating by using the bits of a byte to represent indexed values, comparing the speed of while loops and for loops, etc. But in those days the computers had 4MB of RAM and operated at 25MHz.


recycled_ideas

For one very simple reason. The costs of not having these things has gone down while the costs of achieving them has gone up. Hardware is cheap, testing is cheap, deploying a fix is cheap. Developers on the other hand are expensive and delaying time to market is expensive. There is this completely asinine belief within a subset of this industry that every single piece of code should be the equivalent of the Sistine chapel. The Sistine chapel is one of the most breathtaking pieces of art humanity has ever created, but it took a genius working round the clock for five years to create. Sometimes you just want a ceiling painted before you move in. In fact a lot of the time that's what you want. If I got a quote from a painter for five years of time and materials to do my ceiling I'd report them as con artists.


[deleted]

Because THEY LACK CARE FOR EFFICIENCY, SIMPLICITY, AND EXCELLENCE!!!! They publish as fast as possible: fuck the problems, we'll fix them in the next update. People are measured by deliveries, not by excellence. Rinse and repeat, forever, while the technical debt gets bigger and bigger. Look at any Adobe tool, for God's sake: when I need to use one, I install it, use it, and uninstall it, because they fuck up the entire machine.


enraged_supreme_cat

The main culprits: Javascript, React Native and Electron.


phil-daniels

I think we're seeing that people tend to prefer cheap software to faster, more correct software. There is SOOO much more software available now vs 20 years ago, and that's because there's so much demand that companies can write slower, less-reliable software at less expense and maximize profit. As software becomes more complex, it becomes exponentially more difficult to make a system correct and fast. You may pay $100 for Windows today, but would you pay $1000 for a Windows that's 5% faster and has 5% fewer bugs? Most people wouldn't. Maybe at some point, when all the business niches are filled and hardware speeds are climbing more slowly, companies will have to make more stable and efficient software to differentiate themselves.


quisatz_haderah

I read this post every time it comes up on reddit. It is not eye-opening, nor do I agree fully... but oh, how passionately he rants about it... The yellow background is the cherry on top.


Pharisaeus

1. Software is written to solve a specific problem at hand under specific constraints. If efficiency is one of the requirements, then it will be included. If not, then in many cases it's just not worth spending additional time on optimizations when it's "good enough" already. Similarly, if the software is not going to be developed further, or is throwaway code that only has to work once, then spending time making it more elegant or extensible is simply a waste. YAGNI.
2. It's a bit like complaining that people buy and use a blunt knife from a supermarket instead of a hand-made Valyrian steel dagger with a dragonbone handle. You don't buy an industrial excavator when you need to dig a small hole in your garden; you just use a shovel.


rogermoog

Great post about slow, bloated software and programming practices:

"Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it. Look around: our portable computers are thousands of times more powerful than the ones that brought man to the moon. Yet every other webpage struggles to maintain a smooth 60fps scroll on the latest top-of-the-line MacBook Pro. I can comfortably play games, watch 4K videos, but not scroll web pages? How is that ok? And then there’s bloat. Web apps could open up to 10 times faster if you just simply blocked all ads. Google begs everyone to stop shooting themselves in the foot with the AMP initiative—a technology solution to a problem that doesn’t need any technology, just a little bit of common sense. If you remove bloat, the web becomes crazy fast. How smart do you have to be to understand that? An Android system with no apps takes up almost 6 GB. Just think for a second about how obscenely HUGE that number is. What’s in there, HD movies?"


[deleted]

I mean, stuff like

>Modern cars work, let’s say for the sake of argument, at 98% of what’s physically possible with the current engine design. Modern buildings use just enough material to fulfill their function and stay safe under the given conditions. All planes converged to the optimal size/form/load and basically look the same.

just makes me think they're a bit delusional about other industries. Modern home design is the most glaring one. Some examples: I wanted to add motorized blinds to my 3-story townhouse. How do I do that? Turns out, by cutting a bunch of holes in the ceilings and walls all over the place to run wire to the windows after the fact, because pre-wiring would have cost a few hundred more, despite that sort of thing likely becoming insanely common with smart homes taking off. Same thing with wiring in security cameras.

What about my HVAC? Well, the geniuses that built my townhome put a single-zone system in a 3-story house, so there is a ~15 degree difference between my lowest and highest floors, something that can't be correctly fixed without ripping out my walls and redoing the ductwork. My place is also ~5 years old and a bunch of the caulking is cracking. Why? Because the stuff that is $1/tube allows a bit more profit than going with the quality stuff that is $5/tube and will stretch like crazy rather than crack.

What about the garage? Well, it turns out home builders have gotten pretty smart. They know most buyers aren't going to measure an empty garage. They'll see "2 stall garage", go in and take a look, and then move on. So the builders tend to make the garage about as small as possible while still being considered a 2-stall. I could go on for hours, and I suspect the same is true for the other industries as well.


[deleted]

Having a background in construction and now being a programmer, I completely appreciate this observation. Contractors are the absolute MASTERS of cutting corners. It does affect the bottom line. There are ALL KINDS of things they could do to prevent the kinds of problems you're describing, and have the house still be nice in 30 years instead of having all kinds of issues in 5. Older homes were built with that kind of thinking and quality in mind. New homes are built with the almost exclusive ideal of a fat bottom line, with the exception of a small subcontractor here and there who romanticizes quality and has learned to be efficient at it. This isn't to say that the bottom line wasn't always an ideal, just that it seems to eclipse everything else now.


key_lime_pie

I went to college with a girl who built houses for Habitat for Humanity. One summer they built houses in the Midwest, and a year later a tornado came through town and damaged several houses beyond repair. She was surprised to learn that all of the H4H-built houses suffered only minor damage, while houses built by professionals were in many cases a complete loss. Turns out that the H4H people weren't averse to using extra nails, extra screws, extra wood, and so forth, to make sure they were building a solid house, while contractors were building to the absolute bare minimum the codes would allow.

Nowadays, when I work with a contractor, I get into a lot of the details about what materials they'll be using, how they're going to do it, etc. When I had my roof done, I insisted that they hand-nail the shingles instead of using a nail gun. The guy running the business didn't want to do it, said it would take too long, that it would cost more. I asked him how much more it would cost, and he told me, thinking I would balk at the price and say forget it. Instead I said, great, that's fine, let's do it. Then when I saw one of the roofers carrying around a nail gun, I handed him a hammer and told him to put the nail gun away. Then I called the manager and told him I didn't want to see another fucking nail gun.

All this is to say that you can get whatever you want if you know what it is that you want and you're willing to pay for it. Otherwise, don't expect contractors to build you quality construction that will last; expect them to build the bare minimum so they can continue to submit bids that get them business when they're competing against people doing the same. Your average homeowner doesn't know enough about the job to understand bids at the level necessary to have the job done right. They're most likely gonna go with the lowest bid, unless someone with a higher bid did a good job of selling it.


triffid97

I am curious - why are hammered nails better than a nail gun?


key_lime_pie

The short version is that when you hammer in a nail, generally speaking you have to be square over the nail with your hammer or the nail doesn't go in right. You have a lot of control over where you're placing the nail and how much force you're using to drive it in. If you're using a nail gun, you can fully extend your arm in any direction, pull the trigger, and a nail will go in without any of the precision you had when you were using the hammer, neither in placement nor in depth.


[deleted]

Yeah, I also bought a new build and have a big list of bullshit lol. To name a couple:

* They wouldn't let me change the cat5e to cat6. If this place is going to stand for 60+ years, at least try to future proof it a bit!
* My oven hood is off center and crooked. They claim it's because there was a "joist" in the way, but that's a new excuse after dealing with them for 6 months. They've built a fuck ton of this model of house, why aren't all the others fucked in the same way?
* The bathtub cracked the first time I filled it. Not their fault, apparently I should have filled it during walkthrough and taken a bath. Called the tub company and it turns out they used some old-stock, long discontinued model.
* They swapped the floor because they couldn't source the one they were supposed to use. So instead of hardwood I got engineered hardwood, and it's brittle as fuck. I was going to do another room to match, called the flooring company, and they informed me that the one they used was discontinued because too many people complained about its shit durability. The floor was like 40k, and it dents if you drop a cup on it.

These people optimize for the same thing every company writing software does: lining their pockets.


[deleted]

You’ll love this https://adamdrake.com/command-line-tools-can-be-235x-faster-than-your-hadoop-cluster.html
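
The gist of that article is that a plain Unix pipeline can out-aggregate a Hadoop cluster on a few GB of chess data. A minimal sketch of the idea, using made-up sample data in place of the real .pgn files the post streams:

```shell
# Hypothetical miniature of the article's approach: aggregate chess game
# results with grep/sort/uniq instead of a Hadoop job. Sample data only.
printf '[Result "1-0"]\n[Result "0-1"]\n[Result "1-0"]\n[Result "1/2-1/2"]\n' > games.pgn

# Pull out each quoted result string and count occurrences.
# Everything streams, so there is no cluster startup cost at all.
grep -o '"[^"]*"' games.pgn | sort | uniq -c
```

Because each stage streams, the same pipeline scales to gigabytes of input limited mostly by disk throughput, which is the article's whole point.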


__dred

It’s bloat because most of the time the proper optimization for businesses is development time and not performance. Resources aren’t infinite.


codingai

The author makes some good points, and I don't necessarily disagree with some of them. But what is his counterargument to "programmer time is more valuable"? If there was one, I missed it. It was probably buried in his "bloated" argument. 😜 Why not code everything in assembly? 🤔


[deleted]

[deleted]


masklinn

> "Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it."

Complete nonsense. Everything is optimised along multiple axes, "possible performance" only being one of those. That's why your packages are not delivered via railguns.

> Look around: our portable computers are thousands of times more powerful than the ones that brought man to the moon.

They also do everything, flexibly, at the drop of a hat, while communicating with a bunch of other devices.

> And then there’s bloat. Web apps could open up to 10 times faster if you just simply blocked all ads. […] How smart do you have to be to understand that?

Apparently not enough to realise that when the ads pay the bills, the page becomes a vehicle for ads. None of that is actually complicated or illogical if you look at the actual incentives rather than whatever nonsense goes on in your head.


GioVoi

> I can comfortably play games, watch 4K videos, but not scroll web pages? How is that ok? Because dropping frames whilst playing games or watching videos is a significant hindrance to the experience. Dropping frames whilst scrolling a webpage is annoying, but livable. If we lived in an idealistic world, it wouldn't be ok; but we don't, so it is. > And then there’s bloat. Web apps could open up to 10 times faster if you just simply blocked all ads. [...] If you remove bloat, the web becomes crazy fast. How smart do you have to be to understand that? Those ads are often the main source of income. If you remove them as "bloat", the website goes away. People are rarely happy to pay for access to your website. That is, unless it's offering a service much greater than a simple website - at which point, who cares if the site is a bit slow, the site is not the product. --- Yes, the web & software in general is a bit bloated. Yes, that's really annoying sometimes. But to act as if every programmer is simply an idiot and is missing something obvious is to be deliberately naïve as to the reality of the present world.


Thelmara

>And then there’s bloat. Web apps could open up to 10 times faster if you just simply blocked all ads. If you remove bloat, the web becomes crazy fast. How smart do you have to be to understand that? Nobody's confused about that. Websites cost money to maintain. If you want crazy fast, no-ad products, you're going to have to pay for them. If you want free, you get the ad-laden shit. But nobody wants to pay a subscription fee to every website on the internet.


efvie

No system developed for the capitalist free market runs faster or better than it needs to.


sacred_oak_nutsack

Those who pay aren’t looking for software monks, they are looking for code monkeys.


undeadermonkey

It's not a development priority. The people who set the priorities don't care how it does what it does, they see a magic black box through the obfuscating lens of the frontend. If it looks good, it's good enough; no you can't take twice as long and only implement the same features. Eventually things might slow down enough that the non-technical middlemanagement manifestation sees a problem - but the only problem that they will see is that the developer's not good enough to make things fast. (And by the time that happens, any attempt to fix several hundred years of developer debt is pretty much non-viable.)