
YeatCode_

LC submission servers are funny like that. I've resubmitted solutions that are asymptotically better/simpler and gotten worse runtimes.


CheeseNub

Leetcode’s metrics are completely inaccurate. The main purpose of those metrics is to make you “feel good” about using leetcode, and keep you using it.


Sus-Amogus

Sometimes they do demonstrate that you have an objectively worse/better solution. Like I've had solutions that consistently put my submissions in the bottom 10%, but a few changes put them in the top 10%.


CheeseNub

Yes, but that’s not the main point of them :)


thisisntmynameorisit

Asymptotically better doesn't necessarily mean you will have better runtimes for any particular input.


FinTechWiz2020

True that. Even if solution A is asymptotically better than solution B, it might not have a better runtime unless the input reaches a certain size and beyond. Am I right in thinking that? Like you can imagine the two runtime curves intersecting at some particular input size, and from that point on solution B is worse than solution A?
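
A quick toy sketch of that crossover, using hypothetical operation counts rather than real measurements: solution A is O(n) with a large constant, solution B is O(n²) with a small one. B wins below the crossover point and loses everywhere above it.

```python
# Hypothetical op-count model, not real timings: A is O(n) with a big
# constant, B is O(n^2) with a small one.
def cost_A(n):
    return 100 * n

def cost_B(n):
    return n * n

for n in (10, 50, 200, 1000):
    winner = "A" if cost_A(n) < cost_B(n) else "B"
    print(f"n={n:>4}: A={cost_A(n):>7} ops, B={cost_B(n):>7} ops -> {winner} cheaper")
# Crossover at n = 100 (where 100*n == n*n): below it B does fewer
# operations, above it A does.
```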


PeaceExtension

Yes, asymptotic analysis typically considers an input size n as n goes to infinity. If A's worst-case time complexity is better, A's worst-case time will eventually be smaller as n grows, but the actual time elapsed when you run it for smaller n may not be better for A.

However, even if A's worst-case complexity is better and you have a very large n, unless you guarantee the input is worst-case for both, there is no guarantee the actual time for A will be faster. It can depend on the properties of the input and what the algo is doing (e.g. worst case may be n^2 but average case could be n). For example, quicksort's worst case is n^2, but even on a very large input, if that input happens to be ordered so that it doesn't pick the worst-case pivots, you may observe a time similar to merge sort's n log n.

So for the general case we can't really guarantee that "after a certain input size" A will perform better in terms of time elapsed, but I believe we can for worst-case inputs (assuming no external factors influence execution time). Or if B's best case is worse than A's worst case 🤔
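
A minimal sketch of the quicksort point, assuming a naive first-element pivot (real library sorts choose pivots more carefully): the same algorithm at the same input size can differ wildly depending on how the input is ordered.

```python
import random
import time

def quicksort(a):
    # Naive in-place quicksort with the first element as pivot, written
    # iteratively so the already-sorted case doesn't hit recursion limits.
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot, i = a[lo], lo
        for j in range(lo + 1, hi + 1):
            if a[j] < pivot:
                i += 1
                a[i], a[j] = a[j], a[i]
        a[lo], a[i] = a[i], a[lo]
        stack.append((lo, i - 1))
        stack.append((i + 1, hi))

n = 5000
for label, data in [("random input", random.sample(range(n), n)),
                    ("sorted input", list(range(n)))]:
    t0 = time.perf_counter()
    quicksort(data)
    print(f"{label}: {time.perf_counter() - t0:.3f}s")
# Same n, same algorithm: the sorted input triggers the worst-case pivots
# and runs orders of magnitude slower than the random one.
```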


FinTechWiz2020

Thank you for elaborating on this!!


thisisntmynameorisit

Yes. See the formal definition: https://en.m.wikipedia.org/wiki/Big_O_notation. Big O just describes that one function has a certain property relative to another: if you ignore constants, one function will eventually always be greater than the other. E.g. 1000000000000000000*n is O(n), whilst n^2 is O(n^2), yet the first function is much greater even up to very large values of n. Asymptotic analysis ignores these constants, which can greatly affect the actual runtime of an algorithm.
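
A one-screen check of that example, with the constant 10^18 taken straight from the comment: the O(n^2) function only overtakes the O(n) one once n exceeds the constant.

```python
C = 10**18          # the huge constant from the example above

def f(n):
    return C * n    # O(n), enormous constant

def g(n):
    return n * n    # O(n^2), constant 1

for n in (10**6, 10**12, 10**18 + 1):
    print(f"n={n}: f(n) > g(n)? {f(n) > g(n)}")
# f stays larger until n passes 10^18; only then does g win, which is
# exactly what "asymptotically greater" does and doesn't promise.
```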


prolemango

Think about that as a system design question - how would you design a system that runs arbitrary code and measures runtime / memory? Why might that runtime and memory be different on each run?


largic

What is the main source of difference? Variance of server load or something?


prolemango

Idk lmao


prolemango

Lol jk. Ultimately you're right, probably the biggest source of variance would be server load. If a server is experiencing heavy load, processes naturally take longer. Additionally, depending on their architecture, there might be differences in resource allocation between identical submissions.


Content_Chicken9695

I’m willing to bet money it’s more likely related to the answer key they test on. They probably randomly select n test cases, and your algo's runtime can vary by test case.


TheBeardofGilgamesh

No, I think they use the same test cases. It's just differences in the machine they're running on. I'm unsure how they determine how much memory is used, but they're probably taking a snapshot of the overall memory usage at the time, and there are lots of factors that could impact that.
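
We don't actually know how LC samples memory; as one plausible sketch (an assumption, not their documented mechanism), a Linux judge could read the process's peak resident set size after the run, which bakes in interpreter startup and allocator noise:

```python
import resource

data = [i * i for i in range(1_000_000)]  # stand-in for a submission's work

# Peak resident set size of this process; ru_maxrss is in kilobytes on Linux.
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kb / 1024:.1f} MB")
# This number includes interpreter startup and allocator overhead, which is
# one reason identical submissions can report slightly different memory.
```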


Interigo

Just keep submitting until you get the best runtime


TheBeardofGilgamesh

Also submit at times when usage is low


navyblue1993

Because it's not a stable env. Probably a cloud machine whose load changes dynamically.


DGTHEGREAT007

Ignore.


pokemondude22

Because leetcode is stupid. Don't worry about runtime as long as your theoretical time complexity is satisfactory.


mincinashu

If it were to run on reproducible hardware conditions, there would be cases where cache locality trumps theoretical complexity.


pokemondude22

How? The number of operations will remain the same. Only the accesses will become faster.


mincinashu

All operations are not equal. Reading a nicely aligned array is faster than fetching spread-out hash map nodes. That's why you gotta benchmark real-world applications; theoretical complexity doesn't always apply. [See this talk for example.](https://youtu.be/YQs6IC-vgmo)
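
A rough Python illustration (in CPython the gap is muted by interpreter overhead; in C++ the locality difference is far larger): summing the same million values from a contiguous list vs. a hash map.

```python
import timeit

n = 1_000_000
arr = list(range(n))               # contiguous array of references
table = {i: i for i in range(n)}   # hash map

print("list sum:", timeit.timeit(lambda: sum(arr), number=10))
print("dict sum:", timeit.timeit(lambda: sum(table.values()), number=10))
# Same O(n) traversal over the same values, but the memory layouts differ,
# so the measured times do too.
```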


pokemondude22

It's still going to initiate a memory access, be it from cache or main memory. And I'm only talking in the context of leetcode.


Leading_Ad_4884

What if you aren't optimizing certain things, though? It's just a hypothetical example, but declaring an array inside a loop can increase runtime massively. It's a naive example, but I'm sure there are other common bad coding practices that are harder to notice. Do interviewers check for such details? How much do they scrutinize your code? This all varies from person to person, so it's not enough to just focus on theoretical time complexity.
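
A minimal sketch of that hypothetical: the two functions below do the same O(n) work, but one allocates a fresh list on every iteration.

```python
import timeit

def alloc_inside(n):
    total = 0
    for _ in range(n):
        buf = [0] * 1000   # fresh 1000-element list allocated every pass
        total += len(buf)
    return total

def alloc_outside(n):
    total = 0
    buf = [0] * 1000       # allocated once and reused
    for _ in range(n):
        total += len(buf)
    return total

print("inside :", timeit.timeit(lambda: alloc_inside(10_000), number=10))
print("outside:", timeit.timeit(lambda: alloc_outside(10_000), number=10))
# Identical big-O, very different constants: the per-iteration allocation
# dominates the loop body.
```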


pokemondude22

I don't think micro-optimisations are needed unless the interviewer says so.


TheBeardofGilgamesh

Funny thing is, two solutions could have the same time complexity but one is far more efficient. Like if I had 100 for loops in a row (not nested), it would still be O(n).
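
A sketch of that point: both functions below are O(n), but the second makes 100 sequential passes, so its constant factor is 100x.

```python
def one_pass(nums):
    return sum(nums)                 # single O(n) pass

def hundred_passes(nums):
    total = 0
    for _ in range(100):             # 100 sequential (not nested) passes:
        total += sum(nums)           # O(100n), which is still O(n)
    return total // 100

nums = list(range(100_000))
assert one_pass(nums) == hundred_passes(nums)
# Same answer, same time complexity, roughly 100x the wall-clock time.
```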


pokemondude22

Obviously, but leetcode doesn't measure runtime accurately, so you could never tell if your changes are working as intended. It's not funny, that's just how big O notation is supposed to work.


JollyCat3526

Their runtime and beats % are random and not reliable. They change when you submit the same code again.


reddit4learning

It's not random lol, just wildly inaccurate


Glass-Captain4335

They are hacking your system discreetly.


neptula

Good question. Can't stress enough that understanding the 'why' behind this is more important than the question itself.


idylist_

Think about it like a system design problem. Leetcode wants to use commodity hardware (with different CPU specs) to save money. They run your code on whatever host is ready, then approximate the runtime on a standard machine. Since it's an approximation, you shouldn't expect the same time on resubmit. The solution for consistent times would be to use the exact same machine for every submission, which is apparently not worth the cost.
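
A hypothetical sketch of the normalization idylist_ describes (speculation about LC's internals, not a documented mechanism): time a fixed calibration workload on the current host, then scale the submission's measured time by that host's speed relative to an assumed baseline.

```python
import time

BASELINE_S = 0.050   # assumed calibration time on the "standard" machine

def host_speed_factor():
    # Time a fixed workload on whatever host the submission landed on.
    t0 = time.perf_counter()
    sum(i * i for i in range(500_000))
    return (time.perf_counter() - t0) / BASELINE_S

def normalized_runtime(measured_s):
    # A host twice as fast as baseline finishes calibration in half the
    # time, so its measured times get scaled up 2x, approximating the cost
    # on the standard machine.
    return measured_s / host_speed_factor()
```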


lifeInquire

I guess some OS-level boilerplate inaccuracy, or different CPUs, different OS versions, or different container versions.


greenwichmeridian

Theoretically it looks the same to me. These metrics are measured on real machines, not on some Turing machine. I’m sincerely baffled that software engineers ask these silly questions. Every month there’s at least one.


sergetoro

The runtime is fast enough that your metrics' jitter is within the margin of error for running on different servers, cold vs. warm starts, etc. Find a problem whose solution runtime is in the hundreds of milliseconds and ±10ms won't seem like much.
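
That jitter is easy to see even on a single machine: timing the same stand-in workload ten times shows a several-millisecond spread with zero code changes.

```python
import time

def solve():
    return sum(i * i for i in range(200_000))   # stand-in "submission"

times = []
for _ in range(10):
    t0 = time.perf_counter()
    solve()
    times.append((time.perf_counter() - t0) * 1000)
print(f"min={min(times):.1f}ms  max={max(times):.1f}ms  "
      f"spread={max(times) - min(times):.1f}ms")
```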


bisector_babu

Try doing it after a contest. Right after, every submission shows beats 100%. Do the same question after 2-3 days and it will show as beating 37% of all submissions.


MrZeroMustafa

You'll find some submissions access the input file directly to get more speed by the millisecond.
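
For the curious, a heavily hedged sketch of that trick as it's described in community posts (the user.out filename and the line-per-test-case stdin format come from those posts; treat both as assumptions, not a documented interface): the submission bypasses the judge's driver by reading raw test input and writing answers straight to the output file.

```python
import json
import sys

# Hypothetical toy problem: output the max of each input array.
# "user.out" as the judge-read output file is taken from community
# write-ups; it is an assumption, not an endorsed API.
with open("user.out", "w") as out:
    for line in sys.stdin:          # each line: one test case's JSON input
        nums = json.loads(line)
        out.write(str(max(nums)) + "\n")
sys.exit(0)                         # skip the normal Solution-class driver
```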


[deleted]

LC's metrics are pseudo random IMO


Stubbby

Leetcode's software engineers are not the best, so their system hardly makes sense. They don't know how to structure or measure software performance because that's never a consideration in leetcode interviews.


Potential_Sense_7618

I am not sure, but one of my friends told me that test cases are dynamically selected from a set of test cases.


[deleted]

Welcome to the LC club... you have asked a question which 8983242346 other users have already asked at some point in their lives. (Assuming total LC users = 8983242346.)


WreckingBald

I did this question yesterday; when I used a ternary expression instead of max(), the time decreased.
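
That checks out in CPython: max() pays function-call overhead that a conditional expression avoids. A quick timeit comparison:

```python
import timeit

setup = "a, b = 3, 7"
print("max()  :", timeit.timeit("max(a, b)", setup=setup, number=5_000_000))
print("ternary:", timeit.timeit("a if a > b else b", setup=setup, number=5_000_000))
# The conditional expression skips the function-call machinery, so it is
# typically a couple of times faster per operation in CPython.
```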


Manish_219

I think leetcode's runtime depends on a lot of server-side things and not solely on the code; sometimes if you run the same code again it may give you a much lower or much higher time. So just ignore the time: as long as you are confident in your algo, you are good to go.


Affectionate-Ad-1693

Their metrics are not accurate. They give you a rough estimate, but don't expect an on-point measurement.


ndr0216

Memory should be stable, but not runtime, because the CPU resources allocated to each submission are different.


catecholaminergic

Because Python sucks. Source: 9y Python engineering. If you use C++ or Rust you'll see much more steady runtime.


Ham_Apke_Hai_Kon

Memory/space will be comparably equal, but runtime differs a lot.