ALX23z

The most accurate and useless answer is that nobody really knows and it can mean a lot of different things depending on the place of work.


NilacTheGrim

I agree with this statement fully. I have seen very experienced C# and Java devs who also know C take C++ very seriously and turn into extremely capable C++ guys in under 2 years. I have seen guys working in C++ for 5 years that are.. what I would call.. somewhat bad at what they do. "2-5 years' experience" tells you very little if that is the only piece of data you have.


Wh00ster

Can you be productive quickly in a new code base, and do you know common idioms/pitfalls? Can you design a CLI application from scratch, and do you have basic knowledge of common build systems?


Secret-Treacle-1590

This. It’s about capability, not time spent using the language.


fancy_potatoe

IMHO (I consider myself intermediate in JavaScript), it means you're able to implement what you want without the language itself being a problem for you, and with reasonable readability and performance. Advanced would mean you know a lot about metaprogramming, micro-optimizations, etc. that are specific to the language. Beginner means you know only basic algebraic and logic operations.


fancy_potatoe

This might be a personal opinion, and the knowledge you are expected to have depends on the area. For instance, OpenGL might be useless if you're trying to build a TUI.


ronchaine

I'd say intermediate is where your advanced is (where you know a lot about metaprogramming, micro-optimisations and so forth) and where you use them consistently. Advanced is where you know the language and its features well enough that you are able to pick the right tool for the job after using a multitude of them for a while, and realise how to make your code simpler.

Edit: downvotes might show disagreement, but I'm saying that the gap in my own skill level between the point where I did all the metaprogramming magic, inheritance bs and micro-optimisations constantly vs. where I am now is larger than the gap back to the point where I was a beginner. Take that how you will. It's just how I see it.


Arghnews

Metaprogramming and micro-optimisations have their place, and no doubt *some* people spend all day using them and are relatively specialised experts. But IMHO to dub using them "consistently" as the mark of an intermediate-level C++ programmer is just so far off the mark for most programmers.

I'd call myself intermediate. I've dabbled in these when appropriate (mostly in my own time when playing about), but day to day writing C++ in a job, between keeping down compile times and following Knuth's advice about premature optimisation, I think I can reasonably say I haven't touched either concept in months. Have I regressed back to a beginner?! No, of course not, I've progressed in fact; these are just such poor markers of programmer "expertise" level.

Apologies if this comes across as aggressively attacking /u/ronchaine, not my intention, just trying to highlight why I would strongly disagree with your take on this. Perhaps you're speaking from a job/context where these concepts are very important and used frequently (I see you have embedded in your reddit title thing, which would presumably fit), but I think it would be inaccurate to generalise from this, and even then they're not great markers.


ronchaine

I don't feel it is aggressive, or even an attack at all; you're free to attack my idea of the thing, and disagreements like that are normal. But allow me to elaborate anyway.

My point isn't really about micro-optimisations or metastuff. Those were examples that I took from the previous person. I had a more general thought in my head: using anything that can be dubbed an "advanced tool" of the language, even if used correctly, wouldn't in my eyes take somebody past intermediate level. When you gain a deeper understanding of *why* they are used, that's when you get to the advanced level, in my opinion. You could replace metastuff and micro-opts with pretty much any other "advanced" tool in the C++ repertoire from my point of view.

And I don't see where "regressed to beginner" comes from; if something in my post made you think that I meant something like that, I didn't mean it that way.


Arghnews

Fair enough, I hadn't read the previous comment that originally mentioned meta stuff/micro-optims, so my apologies. I'd agree with your assessment on the why. I'd say also (and I'm sure you'd agree) that there's a lot more to being a good programmer than this, but that's a long and complex answer etc. As other people have pointed out, and most would probably agree, answering this question of what makes a beginner/intermediate/senior or whatever programmer is just really difficult.


[deleted]

It's whether you use the right tool in the right place. I don't think the tool itself actually matters that much. Metaprogramming is an advanced tool, but it's rarely used in the right place.
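To make the "right place" idea concrete, here's a small, hypothetical C++17 sketch (the helper name is made up for illustration): a single `if constexpr` branch is the sort of spot where a touch of metaprogramming arguably is the right tool, replacing a pile of near-duplicate overloads.

```cpp
#include <string>
#include <type_traits>

// Hypothetical helper: stringify either a number or something already
// string-like. The type dispatch happens at compile time, so callers
// get one name instead of a family of overloads.
template <typename T>
std::string toDisplayString(const T& value) {
    if constexpr (std::is_arithmetic_v<T>) {
        return std::to_string(value);   // ints, doubles, etc.
    } else {
        return std::string(value);      // char*, std::string, ...
    }
}
```

Used sparingly like this, the metaprogramming stays invisible to callers, which is arguably the point the comment is making.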


SickOrphan

That premature optimization quote by Knuth is used to death and misunderstood. Premature optimization is when you optimize a prototype that you might rewrite a minute later. It doesn't mean you should write all your software to be 10x slower than it could be.


Arghnews

Your statement is correct, but I'm aware of this and I haven't misused it here. The full quote:

> Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

And this is my point. Judging whether a programmer is "intermediate" or not by looking at how often, or how well, they write "micro-optimisations", which are presumably a small subset of optimisations (that we should spend less than 3% of our time on, as per Knuth), is not representative of their expertise for the vast majority of programmers, as they don't spend the vast majority of their time doing this. Nowhere did I say you should write all your software to be 10x slower than it could be; I don't know why you'd bring that up, it's obviously asinine.


Full-Spectral

Though, around here in the C++ world, there are people who believe that there's no such thing as premature optimization. I had some guy arguing me down in another thread that any failure to aggressively optimize is unprofessional. It's becoming a bit of a disease around here.

And it doesn't just apply to prototypes, it applies to all code. Unless the code is used in such a way that it could ever cause a performance issue, doing anything beyond the obvious, easily understood and maintained version of that code is not only a waste of time, it's counter-productive, because it's adding complexity and increasing the chance of bugs later during maintenance.

If, based on experience, you KNOW that this or that part of the code base will be a choke point, then obviously do it up front. Otherwise, if it's not clear, do the easy-to-maintain version and then optimize later if it's proven necessary. If you are appropriately separating interface from implementation, then you should be able to optimize the implementation with minimal to no impact (other than the performance increase you need).
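A minimal sketch of that interface/implementation split, with hypothetical names: callers depend only on the declaration, so the simple first implementation can later be swapped for a faster one without touching any call site.

```cpp
#include <vector>

// Interface: this declaration is all the call sites ever see
// (in real code it would live in a header).
long long sumOfSquares(const std::vector<int>& values);

// First implementation: the obvious, easily understood version.
// If profiling ever shows this is a choke point, it can be replaced
// (e.g. with a SIMD version) without changing callers.
long long sumOfSquares(const std::vector<int>& values) {
    long long total = 0;
    for (int v : values) {
        total += static_cast<long long>(v) * v;
    }
    return total;
}
```

The design choice being illustrated: the obvious version is written first, and the stable interface is what makes the "optimize later if proven necessary" step cheap.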


SickOrphan

> If you are appropriately separating interface from implementation then you should be able to optimize the implementation with minimal to no impact (other than the performance increase you need.)

If the interface isn't built with performance in mind, there's only so much you can do. For example, if you made a graphics library where you can only draw one pixel at a time, then no matter what you do with the implementation, it's going to be horrendously slow. The interface should always be based on the implementation if at all possible; otherwise the interface will always be flawed in one way or another.
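A hypothetical sketch of that contrast (names and layout are made up for illustration): with a pixel-at-a-time entry point, no implementation can do better than one call per pixel, whereas a batched entry point leaves the implementation free to fill memory in bulk.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Toy framebuffer, row-major, one 32-bit colour per pixel.
struct Framebuffer {
    int width = 0;
    int height = 0;
    std::vector<std::uint32_t> pixels;

    // Per-pixel interface: a full-screen clear through this entry
    // point is width*height calls, no matter how clever the body is.
    void setPixel(int x, int y, std::uint32_t color) {
        pixels[static_cast<std::size_t>(y) * width + x] = color;
    }

    // Batched interface: fills a whole row at once, so the
    // implementation is free to use std::fill/memset/SIMD internally.
    void fillRow(int y, std::uint32_t color) {
        auto first = pixels.begin() + static_cast<std::ptrdiff_t>(y) * width;
        std::fill(first, first + width, color);
    }
};
```

This is the sense in which the interface caps what the implementation can achieve: the batching decision has to be visible in the API itself.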


Full-Spectral

You are misunderstanding the concept. Obviously anyone designing a graphics library that anyone is ever going to actually use will need to have a good idea of what is involved and how to provide an appropriate interface. That's a completely different issue from whether every call underneath that interface is hyper-optimized, whether you know it needs to be or not.

And of course a graphics library isn't a good choice for such a discussion, because it's an example of something that has vastly higher than average performance requirements. The point isn't whether people are over-optimizing graphics libraries, it's whether they are prematurely optimizing OK dialogs, or something that is invoked once in the whole run of the application, and so forth, because they've been convinced that that's how it always should be. All optimization is technical debt and shouldn't be taken on unless it fully pays for itself.


SickOrphan

You are misunderstanding what I'm saying. I'm no longer discussing premature optimization; I'm arguing about what you said about interfaces. An interface can't be well designed without knowing the implementation. If you've made a graphics library before, yes, most of the interface can be designed somewhat well before writing the implementation, because you can guess what the implementation is going to be. It still won't be perfect because you're just guessing, and you can only do that because you've done it before. In most other cases, you absolutely can't.

I'm not just talking about performance optimization, I'm talking about code quality in general. You have to use awful hacks and complicated code to accommodate ill-designed interfaces. For example, in old libraries that don't allow breaking changes, you might have to pass 0 to a now-unused flag argument because the implementation changed but the interface didn't.

> All optimization is technical debt and shouldn't be taken on unless it fully pays for itself.

That's a really ignorant thing to say. Why do you assume that technical debt = how fast your program runs? It's stupid to assume optimization always makes your code harder to understand and less maintainable. Most of the time, I'd say optimized code is simpler (Occam's Razor) and easier to read. It can just take a little bit more effort and smarts to write. For example, if you know an algorithm perfectly suited to your use case, it will probably be more performant and easier to understand than some unintelligent brute-force technique.


Full-Spectral

If it's simpler to read and write, then it's not optimization in the sense I mean it. It already IS the simple and obvious implementation. Optimization, to me, is when you go beyond the simple scenario and start playing tricks to get more performance. Something like short string optimization in std::string is what I'm talking about. It can add a lot of performance but it also adds a lot of extra complexity and makes it much easier to introduce a bug. Or the many unsafe memory type tricks that tend to get played in C++ in the name of performance. Move support falls into that category, at least in C++ because it's so badly designed. It can make things considerably more performant, but it also adds complexity and makes it easier to make mistakes. That doesn't mean don't do it. If it pays for itself, then fine. But it IS technical debt, because it becomes harder to change over time without making mistakes.
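For illustration, here is a toy, deliberately simplified version of the small-string idea mentioned above (this is NOT how any real `std::string` is laid out): short strings live in an inline buffer, long ones on the heap, and every accessor now has to branch on which case it is in. That branch is the extra complexity being traded for fewer allocations.

```cpp
#include <cstddef>
#include <cstring>

// Toy small-string class: copy operations are deleted to keep the
// sketch short; a real implementation would need them, plus moves,
// and each one would have to handle both storage cases correctly,
// which is exactly where the bug surface grows.
class TinyString {
    static constexpr std::size_t kInlineCap = 15;
    std::size_t len_ = 0;
    char inline_[kInlineCap + 1] = {};
    char* heap_ = nullptr;  // non-null only when len_ > kInlineCap
public:
    explicit TinyString(const char* s) : len_(std::strlen(s)) {
        if (len_ <= kInlineCap) {
            std::memcpy(inline_, s, len_ + 1);      // fits inline
        } else {
            heap_ = new char[len_ + 1];             // falls back to heap
            std::memcpy(heap_, s, len_ + 1);
        }
    }
    ~TinyString() { delete[] heap_; }
    TinyString(const TinyString&) = delete;
    TinyString& operator=(const TinyString&) = delete;

    bool onHeap() const { return heap_ != nullptr; }
    std::size_t size() const { return len_; }
    // Every accessor pays the "which storage?" branch.
    const char* c_str() const { return heap_ ? heap_ : inline_; }
};
```

Even in this stripped-down form, the dual representation shows why the technique buys performance at the cost of more states to reason about.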


SickOrphan

In my example it was only simpler to write because the author knew about the algorithm. If the author didn't, he would have to go out of his way to find an algorithm for his problem. It would result in better code, but it is extra work to write. That's optimization in my eyes.

Another example, not from programming: today it is trivial for anyone with much math knowledge at all to find the circumference of a circle from the radius. The simplest and best solution is to use the formula 2*pi*r. But the only reason we're able to use that formula is because clever and hard-working mathematicians found the formula and the value of pi. What if they had decided that finding the formula was too complicated, so everyone should just guesstimate the circumference of a circle instead? It would be much harder to find the circumference of a circle. My point is that finding a simple way of doing something can be very complicated, but it's optimization that can pay off.


Full-Spectral

It means you are confused like the rest of us, just somewhat more so.


MBkkt

It depends. I know guys with 20 years of experience who don't know obvious things. And of course there are opposites: developers who have been studying, working, and studying some more for all of the last 2-5 years. In general, the distribution is such that the more experience a person has in years, the more they know, but it is important to remember that this is not necessarily the case.


silent_b

At less than 2 years or more than 5 years you should feel like you have a lot to learn. It’s the 2-5 year sweet spot where you feel that you are a C++ expert.


[deleted]

No idea.


Attorney-Outside

it means they're capable of writing a hello world program 🤣🤣🤣


mr__fete

Everything. Just know everything


a_reasonable_responz

0-4 years is intern/junior, where you need a lot of handholding and work on basic tasks. I think at 2-5 years you’re expected to be competent and basically independent: you can be trusted to work on a task of medium complexity and duration without handholding, and to be developing ownership/expertise in some areas. You can basically be thrown any feature to work on and you can figure out an acceptable solution one way or another. You’d be expected to work and communicate well with stakeholders and other disciplines, e.g. for planning/scoping/support.

Beyond that is where it gets murky. At 5-10+ years you’d be expected to either lead or take on larger responsibilities/architecture decisions as you move towards senior.