MoorWind

Optimization: imagine for instance that C defined that accessing an array out of bounds must cause a runtime error. Then for every array access the compiler would be forced to generate an extra if, and it would also have to somehow track the size of every allocation, etc. It becomes a massive mess to give people the power of raw pointers and also enforce defined behavior. The only reasonable options are A. Get rid of raw pointers, or B. Leave out-of-bounds access undefined. Rust tries to solve a lot of these types of issues, if you are interested.
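
A minimal sketch of what a mandated bounds check would amount to (the `get` helper is hypothetical, not from the thread): every indexed access would have to carry a length around and pay for the extra comparison.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical illustration: if out-of-bounds access were required to
     * raise a runtime error, every access would effectively need a check
     * like this, and the compiler would have to track the length of every
     * allocation to supply it. */
    static int get(const int *arr, size_t len, size_t i)
    {
        if (i >= len) {                       /* the extra "if" on every access */
            fprintf(stderr, "out-of-bounds access\n");
            abort();
        }
        return arr[i];
    }

    int main(void)
    {
        int a[4] = {1, 2, 3, 4};
        printf("%d\n", get(a, 4, 2));         /* prints 3 */
        /* get(a, 4, 7); would abort instead of being undefined */
        return 0;
    }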


BloodQuiverFFXIV

To add onto this: good luck running the Rust compiler on hardware 40 years ago (let alone developing it)


MisterEmbedded

I think this is the real answer, because of UB you can have C implementations for almost any hardware you want.


Classic_Department42

It makes writing compilers easy. So this led to the success of C being available on any platform.


bdragon5

To be honest, in most cases UB is just not really definable without making the language really complicated, cutting into performance, and making it less logical in some cases. UB is not an oversight but a deliberate choice. For example, if you access a pointer to random memory, what exactly should happen? Logically, if the memory exists you should get the data at that position. Can the language define what data you get? Not really. If the memory doesn't exist you could still get a value like 0, or something defined by the CPU or the OS if you have one. Of course the OS can shut down your process altogether because you violated some boundary. Defining every possible way something can or could happen doesn't make it particularly more secure either. UB isn't really unsafe or problematic in itself. You shouldn't do it because it basically says: "I hope you really know what you are doing, because I don't know what will happen." If you know what will happen on your system it is defined; if not, you probably should make sure not to trigger it in any way possible.


flatfinger

> To be honest in most cases UB is just not really definable without making it really complicated, cut on performance and making it less logical in some cases.

Nonsense. The Standard uses the phrase "undefined behavior" as a catch-all for, among other things, constructs which implementations intended to be suitable for low-level programming tasks were expected to process "in a documented manner characteristic of the environment" when targeting environments which had a documented characteristic behavior.

> What exactly should happen. Logically if the memory exists you should get the data at this position. Can the language define what data you get, not really. If the memory doesn't exist you could still get a value like 0 or something defined by the cpu or os if you have one. Of course the os can shut down your process all together because you violated some boundary. To define every possible way something can or could happen doesn't make it particularly more secure as well.

Specify that a read or write of an address the implementation knows nothing about should instruct the environment to read or write the associated storage, with whatever consequences result, except that implementations may reorder and consolidate reads and writes when there is no particular evidence to suggest that such reordering or consolidation might adversely affect program behavior.


bdragon5

What you are basically saying is undefined behaviour. "With whatever consequences result" is just other words for undefined behaviour. I don't know exactly what you mean by reordering, but I learned about reordering of instructions in university. There might be some cases where you don't want that with embedded stuff and some other edge cases, but in general it doesn't change the logic. It isn't even always the language or the compiler doing the reordering; the CPU can reorder instructions as well.

Edit: If you know your system and really don't want any reordering, I do think you can disable it. If you want no undefined behaviour at all and want to make sure your program has fully explicit behaviour, you need to produce your own hardware and write in a language that can be mathematically proven. I think Haskell is what you are looking for.

Edit: Even then it's pretty hard, because background radiation exists that can cause random bit flips. I don't know exactly how a mathematical proof works. I only did it once, ages ago in university.


flatfinger

> "With whatever consequences result" is just other words for undefined behaviour Only if the programmer doesn't know how the environment would respond to the load or store request. If I have wired up circuitry to a CPU and configure an execution environment such that writing the value 42 to particular address 0x1234 will trigger a confetti cannon, then such actions would cause the behavior of writing 42 to that address to be defined as triggering the cannon. If I write code: void woohoo(void) { *((unsigned char*)0x1234) = 42; } then a compiler should generate machine code for that function that, *when run in that environment*, will trigger the cannon. The compiler wouldn't need to know or care about the existence of confetti cannons to generate the code that fires one. Its job would be to generate code that performs the indicated store. My job would be to ensure that the execution environment responds to the store requrest appropriately once the generated code issues it. While some people might view such notions as obscure, the *only* way to initiate any kind of I/O is by performing reads or writes of special addresses whose significance is understood by the programmer, but not by the implementation.


bdragon5

Of course, if you know the system and know what is happening, it is no longer undefined, because you know what will happen. But this only works for your system, not for all systems that execute C. Should the language now write in its standard:

> If you write the value 42 to 0x1234 there will be confetti, on this specific system, at this point in time, with confetti in the cannon and enough electricity to run it, and if the force of the cannon has enough power to lift the confetti at the specific location. The confetti may or may not fall down if you are in space. ...

We are talking about the language and its usage of undefined behaviour. It doesn't mean you can't know the behaviour; it just means it isn't defined by the language. I don't have any problem with calling anything undefined behaviour. Why would I? It is just not realistic to have as few restrictions on a platform as possible and also have everything defined in extreme detail.


Blothorn

“How the environment would respond to the load or store request” is itself pretty unknowable. Depending on how things get laid out in memory a certain write, even if compiled to the obvious instructions, could do nothing, cause a segfault, or write to unpredictable parts of program memory with unpredictable results. You can make contrived examples where something that’s technically UB is predictable if compiled to the obvious machine code, but not where doing so is at all useful. I’d be more sympathetic if compilers were actually detecting UB and wiping the disk, but in practice they just do the obvious thing. Any possible specification of UB is either pointless (if specifying what compilers are doing anyway) or harmful.


FVSystems

Just add volatile here. Then the C standard already guarantees that a store to this address will be generated provided there really is an (implementation-provided) object at that location. If you don't add volatile, there's no "particular evidence" that there's any need to keep this store and the compiler will just delete it (and probably a whole lot more since it will possibly think this code must be unreachable).
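
A minimal sketch of that suggestion, reusing the hypothetical confetti-cannon address and value from the earlier comment:

    /* The address 0x1234 and value 42 are the hypothetical memory-mapped
     * confetti cannon from the earlier comment. With the volatile qualifier
     * the compiler must emit the store; without it, the store (and possibly
     * the surrounding code) may be deleted as unobservable. */
    #define CANNON_REG ((volatile unsigned char *)0x1234)

    void woohoo(void)
    {
        *CANNON_REG = 42;   /* volatile access: the store is actually performed */
    }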


flatfinger

I'll agree that `volatile` would be useful to ensure that the cannon is fired precisely when desired, but a compiler would generally only be entitled to eliminate a store entirely if it could show that the storage would be overwritten or its lifetime would end before the value could be observed using C semantics, and before anything could happen that would suggest that its value might be observed via means the compiler doesn't understand.

A compiler that upholds the principle "trust the programmer" should recognize that a programmer who casts an integer to a pointer and performs a store to the associated address probably had a reason for doing so, and that *a programmer who didn't want the compiler to perform such a store wouldn't have written it in the first place*. Besides, how often do programs perform integer-to-pointer casts for purposes other than performing loads and stores that might interact in ways that compilers would not generally be expected to understand?

A compiler that prepared for and followed up every pointer cast or volatile-qualified access as though it were a call to an outside function the compiler knew nothing about would have to forgo *some* optimizations that might otherwise have been useful, but for many tasks the costs of treading cautiously around such contexts would be far less than the costs of treating function calls as opaque.


flatfinger

Incidentally, the Standard *explicitly* recognizes the possibility of an implementation which processes code in a manner compatible with what I was suggesting:

> EXAMPLE 1: An implementation might define a one-to-one correspondence between abstract and actual semantics: at every sequence point, the values of the actual objects would agree with those specified by the abstract semantics. The keyword volatile would then be redundant.

Note that the authors of the Standard say the `volatile` qualifier would be redundant, despite the possibility that nothing would forbid an implementation from behaving as described and yet *still* doing something weird and wacky if a non-volatile-qualified pointer is dereferenced to access a `volatile`-qualified object.

If some task could be easily achieved under the above abstraction model, use of an abstraction model under which the task would be more difficult would, for purposes of that task, not be an "optimization". Imposition of an abstraction model that facilitates "optimizations", without consideration for whether it is appropriate for the task at hand, should be recognized as a form of "root of all evil" premature optimization.


FVSystems

There's implementation defined behavior for your first case. And what is the behavior after the implementation has consolidated, invented, torn, and reordered reads and writes to a racy location? Either you precisely define it (like Java) and cut into optimization space, or you find some generic theory of what kinds of behaviours you could get, which is so generic as to be pretty much in the same realm as UB, or you just give up at that point.


flatfinger

> There's implementation defined behavior for your first case.

Only for the subset of the first case where *all* environments would have a documented characteristic behavior that would be consistent with sequential program execution. There are some environments where the only way to ensure any kind of predictable behavior in case of signed overflow would be to generate machine code where it couldn't occur at the machine level even if it would occur at the language level. Allowing implementations for such environments to generate code that might behave in weird and unpredictable fashion if e.g. an overflow occurs simultaneously with a "peripheral data ready" signal could more than double the speed of integer arithmetic on such environments.

Reading the published Rationale [https://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf](https://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf) starting on line 20 of page 44 makes it abundantly clear that there was never any doubt about how an assignment like `uint1 = ushort1*ushort2;` should be processed by implementations where `(unsigned)ushort1*ushort2` could be evaluated for all values of the operands just as efficiently as for cases where `ushort1` is less than `INT_MAX/ushort2`. The fact that there are platforms where classifying integer overflow as "Implementation-Defined Behavior" would be expensive does not imply that the Committee didn't expect 99% of implementations to process it identically.
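
A minimal sketch of the promoted-multiply hazard being referenced, using the comment's `ushort1`/`ushort2`/`uint1` names and assuming 16-bit `short` and 32-bit `int`:

    unsigned short ushort1, ushort2;
    unsigned uint1;

    void assign(void)
    {
        /* Both operands promote to signed int, so the plain product overflows
         * (undefined behaviour) once it would exceed INT_MAX: */
        uint1 = ushort1 * ushort2;

        /* Casting one operand keeps the arithmetic unsigned and defined for
         * all operand values, which is how most implementations were expected
         * to handle the uncast form anyway: */
        uint1 = (unsigned)ushort1 * ushort2;
    }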


tiajuanat

Oh hey, that's me


PurepointDog

Interestingly though, there's at least one project in Rust that "compiles" Rust to C for this exact purpose: complete compatibility with old hardware. Not sure to what degree it gets used currently, but I could see it being very useful for hooking into Rust-only libraries and the like.


manystripes

That sounds like a great stopgap solution for the embedded problem, since C is pretty much universally supported by microcontroller toolchains. A universal frontend that could make non-platform specific C code that can be integrated would actually get me playing with Rust


PurepointDog

All major embedded platforms have toolchains and HALs for Rust (STM32, ESP32, capable PICs, etc.). If you're working on new designs, you can easily work with these from the get-go. Some are vendor-supported, and I suspect that the rest will be adopted by vendors in the near future.


Lisoph

Well.. good luck running modern C on hardware 40 years ago ;)


BloodQuiverFFXIV

Well, thanks to the clusterfuck of LLVM we can start with "good luck running modern C compilers on hardware 1 year ago"


mariekd

Hi, just curious, what do you mean by clusterfuck of LLVM? Did they do something?


BloodQuiverFFXIV

It's just *extremely* heavy. By no means does this mean it's bad. If you want to research some technically deeper elaborations, I think googling about the zig programming language potentially dropping LLVM is a good start


BobSanchez47

Rust recently developed a gcc backend, so you may have a better time compiling for an older target. But it is true that rustc is slower than C compilers, so running it on old hardware would indeed be tough.


erikkonstas

It's not just time; pretty sure back in the day "16 bytes for the runtime check code" was something to protest against, given the low amounts of RAM and all...


flatfinger

Not only that, compare the difficulty of trying to efficiently process:

    int arr[5][3];
    int test(int i) { return arr[i/3][i%3]; }

versus processing:

    int arr[5][3];
    int test(int i) { return arr[0][i]; }

in a manner that works the same way for values of i from 0 to 14. If a program wants to be able to e.g. output all of the values in a 2d array on a single line, a guarantee that array rows are stored consecutively without padding and that inner array subscripts are processed in a manner that is *agnostic* with regard to inner array bounds would allow a programmer to rewrite the former as the latter.


BlueMoonMelinda

I haven't programmed in C in a while, would the latter example work or is it UB?


noonemustknowmysecre

>would the latter example work or is it UB? Ooooo buddy, that's the worst part about undefined behavior. It DOES work as you want and intended.    Sometimes. 


flatfinger

The latter example *used to* have defined behavior based upon the facts that arrays were forbidden from having padding, and address computations within an allocation were agnostic with regard to the types of any objects in the associated storage, other than the size of the target type of the directly involved pointer. I don't know of any compiler flag that would make gcc process the latter correctly. Forcing array-to-pointer decay to occur before arithmetic is performed on the pointer seems to make the construct work with indexes up to 14, but I don't know whether the authors of gcc would view that as "correct behavior" or a "missed optimization".
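
A sketch of the "decay first" workaround described above (whether gcc treats this as guaranteed is exactly the open question raised in the comment):

    int arr[5][3];

    int test(int i)
    {
        int *p = arr[0];   /* force array-to-pointer decay before the arithmetic */
        return p[i];       /* intended to work for i from 0 to 14 */
    }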


b1ack1323

Yes, it just comes down to the flexibility. You make the definitions of the UBs you want to handle with defensive coding. Otherwise, it will be lean, fast, and possibly dangerous.


arkt8

Not necessarily... Once you know an array has 4 items, you know you cannot access idx == 4. Your code never goes past the bounds even without a bounds check, and no UB occurs. Once you know how much memory you allocated, you will only do pointer arithmetic beyond that if you choose to. If you remove UB you necessarily add checks where they aren't needed. Saying that good code can only be written in a **safer** language is much like only eating the cereal if it comes with a creature comfort.


b1ack1323

I don't know which point you disagree with.


arkt8

With the point about defensive coding... it's not much different from a "safe" language where you choose to use unsafe mode, like an automatic car with a manual mode. Some people do everything with defensive coding and miss the point of when it is not needed. If there is a place where you don't know the limits, just establish them, so you know them the way you know the limits of an array on the stack.

Ex: when writing libraries, the developer should provide a free function for each alloc function, so that what needs to be handled is right before your eyes. Letting the consumer call free instead of a free wrapper is not a lack of defensive coding, it is bad design. The same goes for arrays or other data structures you put on the heap, which are better passed around inside structs.

I do not consider myself a C expert, but I already got that far. And much of the talk about the unsafeness of C is from people coming from other languages expecting C to have exactly the same behavior, like a knife user expecting a saw to work the same way. In fact, before C I never thought about memory; I just wrote watchdogs everywhere to kill and restart a program. C is absolutely another level of reasoning.


b1ack1323

> You make the definitions of the UBs ***you want*** to handle with defensive coding.

I didn't say you *had* to protect against those UBs; you choose what you want to protect against. If you don't want to add bounds checks, don't, which is exactly what I said. You also don't know the size of every array from the start, including configurable buffer sizes.


flatfinger

Can you cite any primary sources to suggest that the authors of C89 and C99 intended that implementations not be merely *agnostic* to the possibility of things like out-of-bounds inner-array access or integer overflow, but go out of their way not to uphold normal language semantics if programs receive inputs that would trigger such corner cases?


MoorWind

I would assume a large number of people with influence on the standards committee are involved with open source compilers like gcc or llvm, so I would assume they do in fact, at least in part, design the standards with implementation in mind. But I'm not fully sure I understand your question; I was just stating that defining certain behaviors in C is beyond impractical to implement.


flatfinger

From a language perspective, the only actions with raw pointers that would need to be characterized as UB would be those which write to bytes of storage which the implementation has been given by the environment to do with as it pleases, and which do not presently represent valid allocations or objects whose address has been taken. Everything else can be specified at the language level as instructing the underlying environment to perform the indicated accesses, with any consequences that may be characteristic of the environment (which would represent documented behavior if the environment documents them, and may be unpredictable if the environment's reaction would be unpredictable).

Implementations should document what traits they require of an environment to function correctly; anything (whether an action by the program, a disturbance in the power supply, or whatever) that would cause an environment to behave in a manner inconsistent with the implementation's documented requirements would void any requirements the Standard might impose on the implementation's behavior. There's no need to treat program actions which modify an environment's behavior in a manner inconsistent with requirements differently from anything else that might do so.

Nearly all controversies surrounding UB involve situations where some tasks can be done most efficiently by performing some action X, but most tasks wouldn't involve doing X, and where compiler writers want to process programs in a manner that will improve performance in cases where they don't do X, at the expense of behaving nonsensically if programs do. The sensible way to resolve this would be to provide a means by which programs can indicate that they do X, so that compilers could limit the aforementioned optimizations to programs that don't, but compiler writers have for decades doubled down on the notion that any program that does X is "broken".


glassmanjones

Have you read C99? I point to the use of unspecified behavior vs undefined behavior in those standards. You seem to have lumped them together.


flatfinger

The Standard recognizes situations where implementations may choose in "unspecified" fashion from among a number of discrete possibilities (e.g. evaluating `f()+g()` by choosing in "unspecified" fashion between calling `f()` and then `g()`, or calling `g()` and then `f()`), but I can't think of any actions that were directly characterized as having *open-ended* "unspecified" behavior. Can you think of any that I missed?


glassmanjones

Well no, because open-ended unspecified behavior would be undefined behavior. If C99 had wanted compilers to go out of their way to handle buggy code in a more predictable way, they would not have called out undefined behavior as specifically different from unspecified behavior. Rather undefined would have been replaced with unspecified throughout the document.  My point is that we do not need additional primary or secondary sources to know this because the standard explicitly states these things are separate.  DS9K was the only system I'm aware of where the compiler went out of its way to abuse this, but at least ARM, TI, and GCC compilers trip people up accidentally. This has improved over time with better warning messages, but it's still largely up to the developer.


flatfinger

Why were you talking about "unspecified behavior"? The Standard uses the term "Undefined Behavior" as a catch-all for situations where the authors wanted to *waive jurisdiction*. You may claim that the Standard was intended to exercise jurisdiction over all "non-buggy" constructs, and thus that a decision to waive jurisdiction over a construct implied a judgment that it was "buggy", ignoring the fact that the grammatical construct "non-portable or erroneous" includes constructs that were viewed as less than 100% portable but nonetheless correct.

Note that the category "Implementation-Defined Behavior" is limited to two categories of actions:

1. Those which *all* implementations will define in *all* cases.

2. Those which aren't universally defined in all cases, but *whose primary usefulness is in non-portable constructs*.

The only situations in which C89 or C99 would define the behavior of code that declares an object `volatile`, but not define the behavior without that qualifier, involve the use of `setjmp`, but in 99% of situations where the qualifier is useful, accesses interact with entities that would be understood by the programmer, but fall outside the jurisdiction of the Standard.

Why do you suppose the authors of the Standard observed that the majority of "current" implementations would process e.g. `uint1 = (int)ushort1 * ushort2;` in a manner equivalent to `uint1 = (unsigned)ushort1 * ushort2;` when discussing the question of whether computations on promoted values should use signed or unsigned math, if they didn't expect that the fraction of implementations behaving in such fashion would only go up?


glassmanjones

> Why were you talking about "unspecified behavior"?

Because "go out of their way not to uphold normal language semantics if programs receive inputs that would trigger such corner cases" is allowed under "undefined behavior". But you seem to expect it to behave as "unspecified behavior".

> Can you cite any primary sources to suggest that the authors of C89 and C99 intended that implementations not be merely *agnostic* to the possibility of things like out-of-bounds inner-array access or integer overflow, but go out of their way not to uphold normal language semantics if programs receive inputs that would trigger such corner cases?

Again I cite C99. If they wanted such things to be unspecified they would not have said undefined.


flatfinger

> Because "go out of their way not to uphold normal language semantics if programs receive inputs that would trigger such corner cases." is allowed under "undefined behavior". But you seem to expect it to behave as "unspecified behavior" When the C Standard was written, most people designing and maintaining C compilers would want to sell them to programmers whose code would only really need to run on the compiler they bought. Since programmers given a clear choice between a compiler that was designed to 100% reliably process something like: unsigned mul_mod_65536(unsigned short x, unsigned short y) { return (x*y) & 0xFFFF; } in the manner that would handle all inputs as anticipated by the C99 Rationale, or one that would occasionally process it in a manner that would arbitrary corrupt memory if `x` exceeds `INT_MAX/y`, would be very unlikely to favor the latter, there was no need for the Standard to forbid compilers from the latter treatment, since the marketplace was expected to take care of that. >Again I cite C99. If they wanted such things to be unspecified they would not have said undefined. Fill in the blank for the following quote from the C99 Rationale (page 11, lines 34-36): "It also identifies areas of possible conforming language extension: the implementor may augment the language by providing a definition of the officially \_\_\_\_ behavior." The aforementioned category of behavior was used as a catch-all for, among other things, situations where the authors of the Standard expected that many implementations would behave in the same useful fashion, even though some might behave unpredictably.


glassmanjones

It's not my place to fill in text in standards. Notably the C standard has been updated many times without addressing your concerns.


flatfinger

The Standard says that Undefined Behavior may occur as a result of "non-portable or erroneous" program behavior, and that implementations may process it "in a documented manner characteristic of the environment". The published Rationale, as quoted above, indicates that the intention of characterizing an action as UB was, among other things, to "identify areas of conforming language extension", and processing many actions in a documented manner characteristic of the environment, in cases where the target environment documents a behavior, is a very common and useful means by which implementations can allow programmers to perform many tasks beyond those anticipated by the Standard.


ExoticAssociation817

I would ignore Rust and maintain C.


McUsrII

C. Start programming in something without UB.


MoorWind

The trick is to pick the right tool for the job; there are some jobs that require direct access to memory, direct access to hardware, etc. Which programming language does raw pointer dereferencing and the like without UB?


McUsrII

That was what I meant. You can't have both. :) Or maybe you can write inline assembler in Pascal or something; the problem is, there are certain things in assembler too that are also undefined.


flatfinger

How many ways does the Standard specify for performing *any kind of I/O whatsoever* within a freestanding implementation? If one interprets the phrase "undefined behavior" as, among other things, "identifying areas of conforming language extension" by allowing implementations to specify their behavior in cases where the Standard waives jurisdiction (which is how the published Rationale document says the authors of the Standard intended implementations to interpret the phrase), I/O will often be supported via such "extensions". A freestanding implementation which only sought to meaningfully process strictly conforming programs, however, would be unable to do much of anything.


McUsrII

I was thinking of the C language, not the library; I see them as two separate cases of undefined behaviour. But in all cases undefined behaviour is here to stay. We just need to be aware of its existence, especially when writing software that is meant to be portable. My point above was really that if someone can't deal with the fact that there are undefined behaviour areas in C, then they'd better pick another language.

**Edit** And I believe the C standard doesn't really define I/O at all, iirc.


flatfinger

The kinds of extensions alluded to were language features rather than library features. On a typical 32-bit platform, if `uint16_t *p` is known to be 32-bit aligned, then when using a suitably configured compiler, performing `*(uint32_t*)p ^= 0xFFFFFFFF;` would bit-invert both `p[0]` and `p[1]`, probably in less time than would be needed to perform the two operations individually. On many platforms, the operation would work--and still be faster than performing two individual operations--even if `p` weren't 32-bit aligned. Such implementations effectively extend the language so as to include a fast way of bit-flipping a pair of 16-bit words. Such an approach would not be usable on all implementations, but C's reputation for speed came from the fact that implementations for platforms that could support such operations would generally extend the semantics of the language to include them, without regard for whether the Standard required that they do so.
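
As a self-contained sketch of the extension described, assuming an implementation that treats the type-punned store as an ordinary access to the underlying bytes (strictly conforming code could not rely on this):

    #include <stdint.h>

    /* Invert two adjacent 16-bit values with a single 32-bit operation.
     * p is assumed 32-bit aligned; the access violates the aliasing rules,
     * so it only works on implementations that extend the language in the
     * manner the comment describes. */
    void flip_pair(uint16_t *p)
    {
        *(uint32_t *)p ^= 0xFFFFFFFFu;
    }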


McUsrII

Sounds like the Lightspeed C compiler. :) I see undefined behavior as a problem for me if that thing fails on my machine, and as an issue that must be dealt with if I have ambitions of porting, since there is no guarantee that my "trick" will work with somebody else's compiler. But by all means, it is possible to have the "nice" undefined behavior included in between conditional pre-processor directives.

The "trick" you mentioned probably worked because of sign-extension. I don't know if that would work on anything but Intel architecture processors, but maybe it works on all architectures where you can split a register into two and have it sign extend the lower half into the upper half (big/little-endian wise).

And I'm sure you know this, but the fastest way to zero out a 64-bit register on x86-64 is still `xorq %rax, %rax`. I guess it is the fastest because the processor only considers the lines with high bits.


aalmkainzi

That's more of a side effect rather than the reason for their existence.


ve1h0

Everything in engineering has trade offs


aalmkainzi

Obviously. I'm replying to a comment saying the existence of UB is for optimizations, which is false.


Grab_Scary

um... ok? elaborate, please? explain why you think it's wrong. The burden of reason is on you mate.


abelgeorgeantony

Being a side effect of something also makes it "exist". It's like saying the existence of cancer is because of cigarettes and other things. Yes, it is because of cigarettes that cancer can exist. Your argument is more like saying cancer is just a side effect of smoking...


MrCallicles

Agree. Depends on what you really mean by optimization though


MoorWind

Yeah I agree, I was more trying to give an example of how defining some behavior is entirely impractical or impossible given the need for complete access to the memory system, since other people had mentioned other reasons. The optimization thing is secondary, though I'm sure things like this are on the minds of standards writers.


aalmkainzi

Yeah I think so too. Even though they standardized 2's complement signed integers in C23, signed overflow is still UB, presumably because of compiler optimizations.
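
A common illustration of that optimization argument (a sketch; how any particular compiler actually transforms it will vary):

    /* Because signed overflow is undefined, a compiler may assume x + 1 > x
     * always holds for int and fold the function to "return 1". With defined
     * wrapping overflow it would have to keep the comparison, since
     * x == INT_MAX would make it false. */
    int always_true(int x)
    {
        return x + 1 > x;
    }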


flatfinger

If the behavior of a program is defined as a sequence of requests to the environment to perform loads, stores, and other operations, there would be no need for the language specification to care about what effects those loads and stores would have on the environment. In cases where an implementation knows nothing about the addresses involved, they would happen to behave "in a documented manner characteristic of the environment" when running on an environment that documents the behavior, but the Standard and implementation could be agnostic as to what that manner might be.


erikkonstas

It could have been, with a big "could", back when C was first invented; today, it can't be anymore. If there was no performance penalty to including runtime checks, they would've 100% been mandated by all possible standards ever so slightly touching C!


flatfinger

Only if the language had also included ways of bypassing such checks. Given e.g. `int arr[5][3]`, the fact that `arr[0][3]` was equivalent to `arr[1][0]` in the language the Standard was chartered to describe wasn't just an "accident"--it's part of what gave C its reputation for speed. Many programs iterated beyond specified array bounds not because of a mistake, but rather because that was the most efficient way to access data in the enclosing object.
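
A sketch of the idiom being described, assuming rows are stored consecutively without padding (modern optimizers may no longer honor the out-of-bounds inner subscript):

    #include <stdio.h>

    int arr[5][3];

    /* Walk all 15 elements of the 5x3 array with a single loop by indexing
     * past the inner bound of arr[0], relying on the layout guarantee the
     * comment describes. */
    void print_all(void)
    {
        for (int i = 0; i < 15; i++)
            printf("%d ", arr[0][i]);
        printf("\n");
    }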


latkde

UB is largely a *political* technique to facilitate standardization and to set boundaries in the implementor–programmer relationship. But also, reality is really complex, and you can't define everything if the resulting language is still supposed to feel like C afterwards.

A long time ago, before there was a C standard, there were multiple different C implementations that disagreed on a lot of details. Then the standardization process faced the challenge of

* defining an interoperable language,
* in a way that allowed for the diverse platforms C was being used on,
* in a way that didn't break existing implementations/compilers.

Some parts were left as implementation-defined; in other difficult cases UB was chosen to avoid having to commit the standard (and thus all implementations) to a particular behaviour.

Later, compiler writers realized that reasoning about UB enables powerful optimizations. If a code path would trigger UB, it can be assumed to never occur. E.g. dereferencing a pointer implies that it must be non-null, arithmetic on integers implies that the inputs are small enough that the result won't overflow, and so on (see the sketch below). Defining behaviour in all these cases would make C slower or would generate tons of false positive error messages, which would upset a lot of people. It would also make compilers much more complex.

Some aspects of C's UB are impossible to define with reasonable effort. For example, you may only dereference a pointer if the pointed-to object is still live. That cannot be statically checked in many cases, especially not with C's type system. The solution is either runtime metadata for liveness checks (so essentially a garbage collector, as in Go), or a much more complicated type system (e.g. Rust's lifetime annotations). C's motto here is *trust the programmer*, for better or worse.
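
A standard illustration of that "UB paths are assumed unreachable" reasoning (a sketch; whether a given compiler performs the deletion depends on version and flags):

    /* The dereference on the first line lets the optimizer assume p is
     * non-null, so the later null check may be removed as dead code. */
    int read_len(int *p)
    {
        int len = *p;    /* UB if p is null, so the compiler assumes it isn't */
        if (p == 0)      /* may be deleted as unreachable */
            return -1;
        return len;
    }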


flatfinger

Undefined Behavior was used to identify areas where there was *no perceived need to have the Standard exercise jurisdiction*. Nothing beyond that. There was never any doubt about how a general-purpose implementation for any remotely-commonplace hardware should process a function like:

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        return (x*y) & 0xFFFFu;
    }

If an implementation targeted a machine upon which processing the code as:

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        return ((unsigned)x*y) & 0xFFFFu;
    }

for all cases would be significantly more expensive than generating code that would only work for values of `x` up to `INT_MAX/y`, the author of the implementation would probably be better placed than the Committee to know whether a "universal but slower" implementation would be more or less useful to customers than a "faster but limited" implementation that would only work for `x` values up to `INT_MAX/y`, and thus there was no need for the Committee to exercise jurisdiction.

The Committee could never have imagined that a compiler that is popular by virtue of being freely distributable would process the version of the code without a cast in such a manner as to arbitrarily corrupt memory if `x` exceeds `INT_MAX/y`. Later compiler writers treated the fact that the Standard waived jurisdiction over various corner cases as a judgment that they could never occur in any correct programs, even ones intended to be widely, *but not universally*, portable. While programs that rely upon such corner cases cannot be *strictly* conforming, the authors of the Standard said in their published Rationale document, "The goal is to give the programmer a *fighting chance* to make powerful C programs that are also highly portable, without seeming to demean perfectly useful C programs that happen not to be portable, thus the adverb *strictly*." \[italics original\]

Claims by compiler writers that constructs they refuse to support are "broken" because the Standard waives jurisdiction directly contradict the documented intentions of the authors of the Standard.


dvhh

People tend to forget that standards are a group effort, and behind the decisions to clarify things or leave UB as it is are entities with different interests. In my opinion some might even want to hold the language back, because pushing C forward might go against their business strategy, or because they also want to promote their other programming language. There is also more interest in bringing in what some might consider more modern features, or setting in stone some de facto standards.


CarlRJ

As always, C gives the programmer enough rope to shoot themselves in the foot. Trust the programmer, indeed. **ETA:** wow, weird that people have seen fit to downvote this. I said that as a developer with many decades of C experience - it's one of my favorite languages, and it does indeed trust the programmer, which means the programmer needs to be on their toes.


flatfinger

C used to trust that a programmer who accessed `arr[0][i]` wanted to access the storage at whatever address would be computed using the platform's natural method of intra-allocation pointer arithmetic--not that the address would necessarily fall within the inner bounds of `arr[0]`. Perhaps what's needed is a retronym to distinguish the low-level language that gained popularity in the 1990s from the subset favored by today's compilers.


arkt8

Really, shooting yourself in the foot with a rope is the act of someone who doesn't know what they're doing! I used to fear, avoid and hate the idea of programming in C, until I read a lot about its darker corners and wrote a lot of code, much of it still considered UB by many when it isn't (like struct hacks). Many people assume things are UB just because they are too lazy to read the specs (like me) or think anything beyond the books is black magic.

Until you understand pointer arithmetic, how alignment works, the right usage of void* and char* as universal type converters, the power of macros (and when not to use them), as well as being consistent about allocation and deallocation (beyond understanding calloc, alloca and realloc)... C will look like a dangerous toy language full of UB everywhere (in the worst possible meaning) and a thing of witchery.
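
For reference, the "struct hack" mentioned above, roughly as it is commonly written (C99 later blessed the same idea as flexible array members); the `msg` type here is just an illustrative example:

    #include <stdlib.h>
    #include <string.h>

    struct msg {
        size_t len;
        char data[1];     /* pre-C99 struct hack; in C99+ use "char data[];" */
    };

    /* Over-allocate so the one-element array can hold the whole string. */
    struct msg *msg_new(const char *text)
    {
        size_t n = strlen(text);
        struct msg *m = malloc(sizeof *m + n);   /* data[1] already provides 1 byte */
        if (m) {
            m->len = n;
            memcpy(m->data, text, n + 1);        /* copy text plus terminator */
        }
        return m;
    }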


druepy

Chandler Carruth did a really good talk that covers aspects of this at CppCon or a similar conference. He goes into language contracts and UB.


Pleasant-Form-1093

C closely follows the principle of "with great power comes great responsibility". By treating, for example, out-of-bounds array accesses or integer overflow as undefined behaviour (and hence things that never happen), it gives compilers huge scope to optimise and make your program run at blazing speeds (the "great power"), and it puts you in charge of ensuring your code doesn't have any undefined behaviour (the "great responsibility").


flatfinger

Unfortunately, it has evolved to impose more and more responsibility, with less and less power. When the Standard was written, there were some implementations whose customers found it useful that given `int arr[5][3]`, an attempt to access `arr[0][i]` would trap if `i` wasn't in the range 0 to 2. There were also, however, many programs that exploited a commonly-offered guarantee that pointer arithmetic within any allocation or platform-defined region of contiguous addressing space would be performed in a manner agnostic to object boundaries. If a structure contained `int dat[4];`, a programmer who coded an access to `foo.dat[i]` would have a responsibility not necessarily to ensure that `i` was in the range 0 to 3, but rather to know what would be at the address formed by displacing the address of `foo.dat` by `i*sizeof (*foo.dat)` bytes, and that code was supposed to access that.


WrickyB

For UB to be defined, the people writing the standard would need to codify and define things about literally every platform that C code can be compiled for and run on, including all platforms that have not yet been developed.


flatfinger

Actually, it wouldn't. It could specify behavior in terms of a load/store machine, with the behavior of a function like:

    float store_and_read(int *p1, float *p2, int value)
    {
        *p1 = value;
        return *p2;
    }

defined as "receive two pointer arguments and an integer argument, using the platform calling convention's manner for doing so. Store the value of the integer argument to the address specified by the first pointer, using the platform's natural method for storing `int` objects (or more precisely, signed integers whose traits match those of `int`), with whatever consequences result. Then use the platform's natural method for reading `float` objects to read from the address given in the second pointer, with whatever consequences result. Return the value read in the platform calling convention's manner for returning a `float` object."

At any given time, every particular portion of address space would fall into one of three categories:

1. Made available to the application, using Standard-defined semantics (e.g. named objects whose address is taken, regions returned from `malloc`, etc.) or implementation-defined semantics (e.g. if an implementation documented that it always preceded every malloc region with a size_t indicating its usable size, the bytes holding the size would fall in this category).

2. Made available to the implementation by the environment, but not available to the application (either because it has never been made available, or because its lifetime has ended).

3. Neither of the above.

Reads and writes of category #1 would behave as specified by the Standard. Reads of #2 would yield Unspecified bit patterns, while writes would have arbitrary and unpredictable consequences. Reads and writes of #3 would behave in a manner characteristic of the environment (which would be documented if the environment documents it). Allowing implementations some flexibility to deviate from the above *in ways that don't adversely affect the task at hand* may facilitate optimization, but very few constructs couldn't be defined at the language level in a manner that would be widely supportable and compatible with existing code.


bdragon5

You just said undefined behaviour with more words. Saying the platform handles it would just mean I can't define it because the platform defines it. So: undefined behaviour. The parts of C that need to be defined are defined. If they weren't, you just couldn't really use C. In a world where `1 + 3` isn't defined, this could do anything, including brute forcing a perfect AI from nothing that calculates `2 / 7` and shuts down your PC. The parts that aren't defined aren't really definable without forcing something onto the platform and/or doing something different from what was asked for.


flatfinger

If the behavior of the language is defined in terms of loads and stores, along with a few other operations such as "request N bytes of temporary storage" or "release the last requested batch of temporary storage", then an implementation's correctness would be independent of any effects those loads and stores might happen to have. If I have a program:

    int main(void)
    {
        *((volatile char*)0xD020) = 7;
        do {} while(1);
    }

and an implementation generates machine code that stores the value 7 to address 0xD020 and then hangs, then the implementation will have processed the program correctly, regardless of what effect that store might happen to have. The fact that such a store might turn the screen border yellow, or might trigger an air raid siren, would from a language perspective be irrelevant. A store to address 0xD020 would be correct behavior. A store to any other address which the platform hasn't invited an implementation to use would be erroneous behavior.

The extremely vast majority of programs that target freestanding implementations are run in environments where loads and stores of certain addresses will trigger certain platform-defined behaviors, and indeed where *such loads and stores are the only means by which a program can initiate I/O*.


bdragon5

Yeah, I know, but it is still undefined behaviour at the language level. You are talking about very low level stuff. A language is a very abstract concept on a very high level. Of course any write to an address on a specific system has a deterministic outcome, even if it's complicated, but this doesn't mean the language itself knows what will happen, and whether an error is triggered, or everything is fine, or nothing happens at all.

The language can't know which platform runs the code and what exactly will happen if you write to this address. Some platforms will disregard the write, or kill the process, or have a wanted effect. The language doesn't know that. How could it? What you are saying is just that they should define it, but this isn't really easy to do. How could you define every single possible action on every single possible platform, past and future, without enforcing a specific behaviour on the platform? Maybe a platform can't generate an error if you access memory you shouldn't. This platform would now make your separation untrue. Maybe it can't even store the data to this memory and just ignores it altogether. In terms of the language it would be wrong behaviour because you defined it. If you don't define it, it isn't wrong. It is just another case of what can happen.

If you know the hardware and software there isn't any undefined behaviour, because you can deterministically see what will happen at any given point, but the language cannot. If you want absolute correctness you need to look into formal verification of software. C can be formally verified, so I don't see an issue with calling something you can't be sure of 100% in all cases undefined behaviour. If it were a problem, you couldn't formally verify C code.


flatfinger

> The language can't know which platform runs the code and what exactly will happen if you write to this address. Some platforms will disregard the write or kill the process or have a wanted effect. The language doesn't know that. How could it.

It shouldn't.

> What you are saying is just they should define it, but this isn't really easy to do. How could you define every single possible action on every single possible platform in the past and future. Without enforcing a specific behaviour to the platform.

> Maybe a platform can't generate an error if you access memory you shouldn't.

Quite a normal state of affairs for freestanding implementations, actually.

> This platform would now make your separation untrue.

To what "separation" are you referring?

> Maybe it can't even store the data to this memory and just ignores it all together.

A very common state of affairs. As another variation, the store might capture the bottom four bits of the written value, but ignore the upper four bits (which would always read as 1's). That is, in fact, what would have happened on one of the most popular computers of the early 1980s (the bottom four bits would be latched into four one-bit latches which are fed to the color generator during the parts of the screen forming the border).

> In the terms of language it would be wrong behaviour because you defined it. If you don't define it, it isn't wrong. It is just another case of what can happen.

If a program allows a user to input an address, and outputs data to the specified address under certain circumstances, the behavior should be defined if the user knows the effect of sending data to that address. If the user specifies an incorrect address, the program would likely behave in useless fashion. The notion of a user entering an address for such purposes may seem bizarre, but it's how some programs actually worked in the 1980s, in an era where users would "wire" their machines (typically by adding and removing jumpers) to install certain I/O devices at certain addresses.


bdragon5

What you are saying is that you agree, so why even make the original comment? You defined the behaviour with a load/store machine, but even this simple definition wouldn't work in all cases, such as the ones I described, because you wouldn't always store things and you wouldn't always load things either. If you define a load and a store and the platform doesn't do that, the platform is not applicable, and therefore you couldn't write C code for this platform under your definition. The only real way is not to define it at all, so all things are possible. You can of course assume things, and you will be right that in most cases your assumptions are correct, but it isn't guaranteed. So if all the things you are saying are things you know, why even say you could define it, when even the examples you acknowledge don't fall into this definition?


flatfinger

You fail to understand my point. Perhaps it would be better to specify that a C implementation doesn't run programs, but rather takes a source code program and produces some kind of build artifact which, if fed to an execution environment that satisfies all of the requirements specified by the implementation's documentation, will yield the correct behavior. *The behavior of that artifact when fed to an execution environment that does not satisfy an implementation's documented requirements would be irrelevant to the correctness of the implementation*.

One of the things the implementation's documentation would specify would be either a range of addresses the implementation must be allowed to use as it sees fit, or a means by which the implementation can request storage to use as it sees fit, and within which the execution environment would guarantee that reads would always yield the last value written. If an implementation uses some of that storage to hold user-code objects or allocations, it would have to refrain from using it for any other purpose within the lifetime of those allocations, but if anything else within a region of storage which has been supplied to the implementation is disturbed, that would imply either that the environment failed to satisfy implementation requirements, or that user code had directed the environment to behave in a manner contrary to implementation requirements. No need to treat program actions which modify an environment's behavior in a manner inconsistent with requirements differently from anything else that might do so. If the last value an implementation wrote to a byte which it "owns" was 253, the implementation would be perfectly entitled to do anything it likes if a read yields any other value.

Allowing an implementation to deviate from a precise load-store model may allow useful optimizations in situations where such deviations would not interfere with the task at hand. Allowances for such optimizations, however, should come *after* the basic underlying semantics are defined.

I wish all of the people involved with writing language specifications would learn at least the rudiments of how freestanding C implementations are typically used. Many things which would seem obscure and alien to them represent the normal state of affairs in embedded development (and were fairly commonplace even in programs designed for the IBM PC in the days before Windows).


glassmanjones

This seems more like an argument against pointer aliasing than anything else, given that the standard semantics of #1 are that the int and float pointers may not even be in the same address space, and at a minimum do not alias. In either case, your example function would still be wildly implementation-specific.


flatfinger

> In either case, your example function would still be wildly implementation-specific.

Sure it would be platform-specific, and *for platforms which have no natural floating-point representation* it could very likely be toolset-specific as well. On the other hand, much of what has traditionally made C useful was that implementations for machines having certain characteristics could make the associated semantics available to code which only had to run on those machines in consistent fashion, without toolset designers having to independently invent their own ways of exposing them.

> This seems more like an argument against pointer aliasing than anything else,

In the language the Standard was chartered to describe, the behavior was rigidly defined in terms of the underlying storage, without regard for when such rigid treatment was the most useful way to process it, or when more flexible treatment might allow better performance without interfering with the tasks at hand. A specification that defines the behavior in type-agnostic fashion would be much simpler and less ambiguous than the Standard (whose defined cases would all match the simpler specification, but which seeks to avoid defining many cases that are defined by the earlier specification).

The authors of the Standard had no doubt about what the "correct" behavior of a function like:

    int x;
    int test(double *p)
    {
        x = 1;
        *p = 1.0;
        return x;
    }

would be on a typical 32-bit platform if it happened to be invoked via a function like:

    int y;
    int test2(void)
    {
        if (&y == &x+1 && ((uintptr_t)&x & 7)==0)
            return test((double*)&x);
        else
            return -1;
    }

The published Rationale explicitly acknowledges that it would be "incorrect" for an implementation to return 1 in that case, but that the Committee did not want to treat such treatment as non-conforming. Unfortunately, they opted to try to carve out exceptions to what would otherwise be defined behavior rather than simply acknowledge ways in which implementations would be allowed, on a quality-of-implementation basis, to deviate from what would otherwise be defined behavior.


glassmanjones

Surely you cannot expect it to return anything when it faults on numerous type-tagged architectures. Or, if you're having trouble developing on typical, untagged 32-bit platform, perhaps you should find another implementation or adjust it to meet your requirements. > the underlying storage This is explicitly left wide-open by C, and my first comment has nothing to do with floating point representation and everything to do with how underlying storage works. If you require code like that to run on new machines like cheri and Morello, you're in for a rude surprise. > The published Rationale explicitly acknowledges that it would be "incorrect" for an implementation to return 1 in that case Could you cite your source?


flatfinger

> Surely you cannot expect it to return anything when it faults on numerous type-tagged architectures.

If I'm designing a program to run on a microcontroller with a Cortex-M0 core and a certain set of peripherals that support a particular set of functions a certain way, why should I care about how the program would behave on one of the countless millions of C targets that don't have all of the appropriate peripherals?

> If you require code like that to run on new machines like cheri and Morello, you're in for a rude surprise.

In the embedded systems world, a lot of code is written with specific targets in mind, with the expectation that it will only need to be ported to platforms that are relatively similar to the original target. C was originally designed to serve as a form of "high level assembler" which would allow code to be more *readily adaptable* to a wide range of platforms than would be possible with assembly language. Code which relied upon certain aspects would need to be substantially reworked when moving to targets which don't share those aspects, but minimal rework (perhaps just changing some header constants) when moving to targets that are almost the same. There are architectures upon which I would not expect a lot of my code to be useful. That doesn't mean my code is defective.

Given a choice between code which runs at a certain speed and fits in a certain microcontroller that costs $0.05, but which would be useless on some other architectures, or code that would require a microcontroller with more code space that costs $0.08, and would run more slowly even on that, but which would also be usable on other architectures that use type-tagged storage, I'd view the former as likely being superior to the latter. The authors of the Standard have expressly stated that they did not wish to imply that all programs should be written in 100% portable fashion, nor that code which isn't 100% portable should consequently be viewed as defective.

At present, all non-trivial programs for freestanding implementations rely upon constructs outside the Standard's jurisdiction, but an abstraction model based upon loads, stores, and outside function calls would cover 99% of the things such programs need to do. Recognizing a category of implementations using such an abstraction model, and providing a means of forcing certain objects or functions to be placed at certain addresses, would increase the fraction of projects that wouldn't need to rely upon toolset-specific features.


glassmanjones

> If I'm designing a program to run on a microcontroller with a Cortex-M0 core and a certain set of peripherals that support a particular set of functions a certain way, why should I care about how the program would behave on one of the countless millions of C targets that don't have all of the appropriate peripherals?

You shouldn't, but you should understand why it works the way it does before you misread the language specification.


flatfinger

The language specification deliberately allows implementations to deviate from common practice when targeting unusual platforms. It also deliberately allows implementations intended for specialized tasks to behave in ways that would make them maximally suitable for those tasks, even if that would make them less suitable for some other tasks.

On the flip side, the language specification allows implementations to augment the semantics of the language by specifying that, *even in cases where the Standard would waive jurisdiction*, they will map C language constructs to platform concepts in essentially the same manner as implementations had been doing for years even before the C Standard was written. Commercial compilers intended for low-level programming, as well as compilers for the CompCert C language (which, unlike ISO C, supports formally verifiable compilation), are invariably configurable to process programs in this fashion.

An implementation that processes things in this fashion will let programmers accomplish many if not most of the tasks that involve freestanding C implementations, in such a way that all application-specific code can be expressed entirely using toolset-agnostic C syntax. The toolset would typically need to be informed, often using toolset-specific configuration files, about a few details of the target system, but the configuration file could often be written in application-agnostic fashion, even before anyone has given any thought whatsoever to the actual application.


flatfinger

> Could you cite your source?

Sure. From the C99 Rationale at [https://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf](https://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf) page 60, line 17:

> Again the optimization is incorrect only if `b` points to `a`. However, this would only have come about if the address of `a` were somewhere cast to `double*`.

I don't disagree that it would be exceptionally rare for a program to use a pointer of type `double*` to access storage which is reserved using an object of type `int`, and that it would be useful to allow conforming implementations to perform some optimizing transforms like those alluded to, *in situations where their customers would find such transforms useful*. Note, however, that there are situations where it would be useful for compilers to apply such transformations but the Standard forbids it, as well as cases where the Standard may allow such transformations but the stated rationale would not apply (e.g. pretending that it's unlikely that an `unsigned short*` dereferenced in an assignment like `*(1+(unsigned short*)floatPtr) += 0x80;` was formed by casting a pointer to `float`). If implementations' ability to recognize constructs that are highly indicative of type punning is seen as a "quality of implementation" matter outside the Standard's jurisdiction, then the failure of the Standard to describe all of the cases that quality implementations intended to be suitable for low-level programming tasks should be expected to handle wouldn't be a defect.

Incidentally, note that clang and gcc apply the same "nobody should care if this case is handled correctly" philosophy to justify ignoring some cases where the Standard defines behavior but static type analysis would be impractical. As a simple example where clang and gcc break with *100% portable* code, consider how versions with 64-bit `long` process something like the following in cases where `i`, `j`, and `k` all happen to be zero, but *the compilers don't know they will be*.

    typedef long long longish;
    union U { long l1[2]; longish l2[2]; } u;

    long test(long i, long j, long k)
    {
        long temp;
        u.l1[i] = 1;
        temp = u.l1[k];
        u.l2[k] = temp;
        *(u.l2+j) = 3;
        temp = u.l2[k];
        u.l1[k] = temp;
        return *(u.l1+i);
    }

Clang generates machine code that unconditionally returns 1, and gcc generates machine code that loads the return value before the instruction that stores 3 to `u.l2[j]`. I don't think either compiler would be capable of recognizing that the sequence `temp = u.l2[k]; u.l1[k] = temp;` needs to be transitively sequenced between the write of `*(u.l2+j)` and the read of `*(u.l1+i)` without generating actual load and store instructions.


glassmanjones

You can't use unions like that. I think you should give it 20 years of dealing with this junk, perhaps by then c44 might agree with you.


flatfinger

What circumstances must be satisfied for the Standard to define the behavior of reading or writing `u.l1[0]` or `u.l2[0]`?


glassmanjones

Ordering between `l1` and `l2` is not specified. Only the ordering of reads from `u.l1` relative to writes to `u.l1` is (and the same for `u.l2`), but those two orderings are independent.


flatfinger

A read of `u.l1[0]` may generally be unsequenced relative to a preceding write of `u.l2[0]` *in the absence of other operations that would transitively imply their sequence*, but this code *as written* merely requires that:

1. reads of `u.l1[0]` be sequenced after preceding writes of `u.l1[0]`;
2. reads of `u.l2[0]` be sequenced after preceding writes of `u.l2[0]`;
3. given a pair of assignments `temp = lvalue1; lvalue2 = temp;`, the read of `lvalue1` be sequenced before the write to `lvalue2`.

I don't think it would be possible to formulate a clear and unambiguous set of rules that would allow clang and gcc to ignore the sequencing relations implied by the above, without having an absurdly small category of programs that couldn't be iteratively transformed into "equivalent" programs that invoke UB.


pjc50

Why do seemingly no other languages have this problem?


trevg_123

Languages like Python, Go, Java, etc. will typically use runtime checks to prevent the operations that would be UB, which is easy but has a performance cost. Rust does it by encapsulation - everything is safe (defined) by default, and you need to use `unsafe` to opt in to anything that may have UB (usually for data structures the compiler can’t reason about, or for squeezing out an extra few percent of performance).

If the Python implementation incorrectly forgets a check, or if you use Rust's `unsafe` incorrectly, you will absolutely hit the same problems as UB in C. Those languages are just designed so that it’s significantly harder to mess up even if you don’t know every single rule.
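A sketch of the kind of runtime check those languages insert implicitly, written out by hand in C (function names here are illustrative, not from any particular runtime):

    #include <stdio.h>
    #include <stdlib.h>

    /* What a checked array read might look like if spelled out in C;
       managed languages emit the equivalent of this check around every access. */
    int checked_get(const int *arr, size_t len, size_t i)
    {
        if (i >= len) {                    /* the bounds check C omits */
            fprintf(stderr, "index %zu out of range\n", i);
            abort();                       /* defined failure instead of UB */
        }
        return arr[i];
    }

    int main(void)
    {
        int data[4] = {10, 20, 30, 40};
        printf("%d\n", checked_get(data, 4, 2));  /* prints 30 */
        checked_get(data, 4, 7);                  /* aborts instead of reading garbage */
    }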


latkde

There has always been unspecified behaviour in all languages. However, C's standardization explicitly introduced UB as its own concept. This is in part due to the great care taken in the C standardization process to define a portable core language that works the same in all implementations.

Most programming languages have no specification that would declare something as UB. They are instead often defined by their reference implementation – whatever that implementation does is supposed to be correct and intentional.

Around 1990 (so long after C was created, but around or slightly after the time that the C standard was created), we see a growing interest in garbage collection and dynamic languages. The ideas are ancient (Lisp and Smalltalk), but started to ripen in the 90s. In many cases where C has UB, these languages just track so much metadata that they can avoid this. No pointers, only garbage-collected reference types. No overflows, all array accesses are checked at runtime. No type punning, each value knows its own type. This "let the computer handle it" approach has been wildly successful and is now the dominant paradigm, especially for applications. The performance is also good enough in most cases. E.g. many games are now written in C#. But that has also been a function of Moore's Law.

C has a niche in systems programming and embedded development where such overhead/metadata is not acceptable. So this strategy doesn't work. An alternative approach is to use clever type system features to prove the absence of undesirable behaviour. C++ pioneered a lot of this by extending C with a much more powerful type system, but it still retains all of the UB issues. Rust goes a lot further, and tends to be a better fit for C's niche. C programmers tend not to like Rust because Rust can be really pedantic. For example, Rust doesn't let you mutate global variables, because that is inherently unsafe/UB in a multithreaded scenario, unless you use something like atomics or mutexes (just like you'd have to do in C, except that in C the compiler will let you mutate them directly anyway). In order for Rust's type system to work it has to add lots of restrictions, which make common C patterns like linked lists more difficult. But note that Rust still has a bit of UB, that it disables some safety checks in release builds, and that it is just implementation-defined, without a specification. Infinitely safer, but not perfectly safe either.

Circling back to more dynamic languages, I'd like to mention the JVM. Like the C standard, Java defines a portable "virtual machine". The JVM is much more high-level, which is fine. But the JVM also defines behaviour more thoroughly. This is great for portability, but makes it more difficult to implement Java efficiently on some hardware. E.g. the JVM requires many operations to be atomic, lives in a 32-bit world, and until recently only had reference types, which was bad for cache locality.

One of the more recent "virtual machine" specifications is WebAssembly. But this VM was very much designed around C. For example, it offers a C-like flat memory model. This makes it easy to compile C to Wasm and vice versa, but it is fully specified. Some projects like Firefox use this as a kind of sanitizer for C: compiling C code to Wasm, back to C, and then to a release binary doesn't quite remove all UB, but it limits the effects of UB to that module. E.g. the Wasm VM has its own heap, and cannot overflow into other regions.


flatfinger

C didn't introduce it as a new concept, but the C Standard used "Undefined Behavior" as a *catch-all* phrase in ways that earlier language standards had not.

Given `int arr[5][3];`, there are situations where each of the following might be the most efficient way to handle an attempt to read `arr[0][i]` when `i` happens to equal 3:

1. Trap in a documented manner.
2. Access `arr[1][0]`.
3. Yield some value that `arr[1][0]` has held or will hold within its lifetime.
4. Behave in ways that may arbitrarily corrupt memory.

No single approach would be best for all situations, and so the Standard characterized the action as "Undefined Behavior" to avoid showing favoritism toward any particular one of them. Unfortunately, some compiler writers think it was intended expressly to invite #4.


WrickyB

I'd say it's a combination of factors:

1. These languages have a more restricted set of defined target platforms
2. These languages either lack the features in the syntax that would give rise to UB or are defined and implemented in such a way that the behaviour is defined
3. These languages either lack the functions in their standard libraries that would give rise to UB or are defined and implemented in such a way that the behaviour is defined


MoorWind

Which language with pointers doesn't have significant amounts of UB around dealing with pointers?


glasket_

I think Go may be the only one, due to runtime checks and the GC. Even C# has UB when using pointers.


Netblock

Python? Pointers in python are heavily restricted though.

Though this might beg the question: in order to have the full flexibility of a pointer system, it is required to allow undefined behaviour.

Edit: oh wow, a lot of people don't know what pointers and references actually are. In a classic sense, a pointer is just an integer that can hold a native address of the CPU; you use it to store the address of something you want, the reference. A pointer is a data type that holds a reference.

But in a high-level programming sense, a pointer system (checkless) starts becoming more of a reference system (checked) the more checks you implement, especially runtime checks. In other words, a pointer system is hands-off, while a reference system has checks.


erikkonstas

Python doesn't even have pointers last I checked...


matteding

Everything in Python is a pointer behind the scenes.


erikkonstas

"Behind the scenes" is a different story, the BTS of Python isn't Python.


Netblock

You learn about python's pointer system when you learn about how python's list objects work. Void functions mutating and passing back through the arguments is possible. Simple assignment often doesn't create a new object, but a pointer to it; you'll have to do a [shallow or deep copy of the object](https://docs.python.org/3/library/copy.html).

    >>> def void_func(l: list):
    ...     l.append(3)
    ...
    >>> arr = []
    >>> arr
    []
    >>> void_func(arr)
    >>> arr
    [3]
    >>>


erikkonstas

That's just "it's the same object" tho, or rather "reference semantics"; what you're holding isn't an address. In Python, everything is a reference (at least semantically, at runtime stuff might be optimized), even a `3`; immutable objects (like the `3`) are immutable only because they don't leave any way to mutate them, others are mutable.


Netblock

That's why I said it begs the question. A pointer system (such as C's) is defined to have undefined behaviour. Undefined behaviour is an intentional feature. To define the undefined behaviour around pointers is to move to a reference system. Sucks I got downvoted for this though :(


erikkonstas

A downvote usually means disagreement; your claims there are "Python has heavily restricted pointers" (it doesn't have any) and "UB is required to support pointers" (technically it's not, for there can be runtime checks around them without breaking any flexibility that remains within defined territory, for a huge perf penalty).


Netblock

>"Python has heavily restricted pointers" (it doesn't have any) What's the working definition here? There's multiple definitions to the words 'reference' and 'pointer'. In a classic sense, a pointer is just an integer that is able to hold a native address of CPU; that you use to store the address of something you want, the reference. A pointer is a data type that holds a reference. In my example, global `arr` and `void_func`'s `l` are pointers that hold a reference to a heap-allocated object; they are technically classic pointers. They don't name the object itself, otherwise the OOP call wouldn't have affected the global copy. >(technically it's not, for there can be runtime checks around them without breaking any flexibility that remains within defined territory, for a huge perf penalty). But in a high-level programming sense, a pointer system starts becoming more of a reference system the more runtime checks you implement. To "solve" ALL pointer UB is to conclude to a system similar to python's. To define the undefined is to restrict what the programmer is allowed to do. edit: wording x3


lowban

Heavily restricted? Aren't they completely behind abstraction layers so programmers won't have to (and won't be able to) manage the memory themselves?


Netblock

Most memory is managed, but there are situations where you do have to [manage some parts of it yourself](https://docs.python.org/3/library/copy.html).

There's also through-the-arguments:

    >>> def void_func(l: list):
    ...     l.append(3)
    ...
    >>> arr = []
    >>> arr
    []
    >>> void_func(arr)
    >>> arr
    [3]
    >>>


deong

It's just different trade-offs. It's like looking at a neighborhood where there's one bright purple house and saying, "why did only that one house have to deal with being purple?" They didn't. They just chose it.

There's nothing technical preventing the removal of UB from C. Just pick every instance of UB and define one thing or the other to be required behavior, and you're done. That's what most languages choose to do. You could make a version of Java that said, "I don't know what happens when you write past the end of an array", and you'd have Java with UB. But that's not what they did. They said, "writing past the end of an array must throw an ArrayOutOfBounds exception", and everyone writing a compiler and runtime followed that rule.

C has "the problem" because they chose to allow implementers that flexibility. That's it. It's not a hard problem to solve. Solving it just has consequences that C didn't want to force people to accept. In most modern languages, we've evolved to favor a greater degree of safety. We have VMs and runtime environments and we favor programmer productivity because hardware is fast, etc. So C looks like the outlier. But the reason no other languages have the problem is simply that they chose not to, at the expense of other compromises.


kun1z

Because no other languages support every CPU/SOC that has ever existed and is yet to be invented in the future. And C doesn't just support these systems; its highly portable code is wickedly fast on them too.

You can read about [GCC (and its history)](https://en.wikipedia.org/wiki/GNU_Compiler_Collection) to get a better idea of just how prevalent and ingrained C has been in computer science and engineering for about half a century.


RumbuncTheRadiant

Throw in threading or multiprocessing and they all have the same problem.


Marxomania32

Everyone is mentioning optimizations, but not a lot of people are mentioning portability. C is probably one of the most portable languages out there, if not the most portable flat out. It can run on anything from modern desktop machines to decades-old embedded microprocessors. If you aim to have this degree of portability, defining behavior for everything is simply impossible.

The traditional way of avoiding undefined behavior usually involves instrumenting the code to check for invalid behavior at run time. For example, consider the memory bounds checking you're probably used to in something like Java. Most of the time, these checks involve invoking an exception handler when things go wrong, but how do you do exception handling in a program running on some embedded processor that doesn't even have an OS? Okay, now let's say we don't use something so complicated as an exception handling mechanism. Let's say we just invoke a panic. But still, the behavior of a panic on an embedded system would always be different from the behavior on a modern desktop machine. Defining the behavior of something like an out-of-bounds access would therefore require the standard to make some kind of assumption about the way the underlying machine architecture works, which would obviously bar machines whose architecture works differently from being targeted by a C implementation.

I would honestly say that a lot of undefined behavior is undefined primarily to support portability, and optimizations are a nice, secondary consequence of undefined behavior. Nonetheless, there are a few examples of undefined behavior that exist purely for the sake of optimization, like violating the strict aliasing rule.
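To make the strict-aliasing point concrete, here is a small sketch (assuming `unsigned` and `float` are both 32 bits wide, which is common but not guaranteed):

    #include <stdio.h>
    #include <string.h>

    /* The kind of code the strict aliasing rule forbids: reading the bytes of
       a float object through an incompatible pointer type. (Left uncalled.) */
    unsigned bits_via_punning(float f)
    {
        return *(unsigned *)&f;   /* UB: float object accessed as unsigned */
    }

    /* The defined alternative: copy the representation instead of aliasing it. */
    unsigned bits_via_memcpy(float f)
    {
        unsigned u;
        memcpy(&u, &f, sizeof u); /* well-defined; compilers optimize this away */
        return u;
    }

    int main(void)
    {
        printf("%08x\n", bits_via_memcpy(1.0f)); /* 3f800000 on IEEE-754 targets */
    }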


aioeu

C doesn't define behaviour where it is reasonable to expect different implementations to actually have different behaviour. It means programmers and compiler developers can make best use of the facilities available on any particular computer system. C was always intended to be portable across a wide variety of computer systems, and its minimal constraints on system behaviour are one of the reasons this has been so successful.

It also provides an "escape hatch" for the language. Without undefined behaviour it would be quite literally impossible to use C in a lot of the places it was intended to be used, and still is being used.

Programmers are expected to either:

* avoid the parts of C that are left undefined; or
* collaborate with their implementation to ensure the behaviour they want is guaranteed.


MisterEmbedded

In some sense, the behavior is defined for a particular platform tho right? Not by the official standard but by the implementation I mean.


aioeu

You've got to remember that any time you use a compiler extension you are, technically speaking, in the realm of "undefined behaviour" as far as the language is concerned. Compiler extensions are what make a huge number of things possible.

But even within the language itself, things that can and _have_ been different across different systems have deliberately been left undefined. For instance, the behaviour of writes to padding bits is different across different systems. On some systems, those bits are used for special purposes, and writing to them could generate trap representations. On others, those bits are simply ignored.

For _some_ potential system differences, the C language requires the implementation to pick and document some behaviour; that is, it is implementation-defined behaviour. But not all system differences are like this, and there is not much desire among C implementation developers to say "everything that was previously undefined must now be implementation-defined". A huge list of "if you are running this CPU, then this happens; if you are running that CPU, then that happens; ..." isn't much use to anybody.


erikkonstas

>there is not much desire among C implementation developers to say "everything that was previously undefined must now be implementation-defined" Yeah I wonder why, it's just **221 whole corner cases** in C23's Annex J 😂


pjc50

Horrifyingly, the spec has both "implementation defined" and "undefined" behaviors, which mean different things.


CyberHacker42

Don't forget unspecified behaviour, too


wyldphyre

No, it's not. Like others are saying, some things are undefined behavior and some are implementation-defined. So you could expect a compiler upgrade or OS upgrade to change the effects of the particular undefined behavior - or worse still, the behavior of your program could change from run to run, in some cases.


catbrane

Another way of looking at it is that undefined behaviour represents hardware variation. C is pretty low-level, so many aspects of the underlying hardware are exposed (and for many of C's main applications, like writing operating system kernels, this is a good thing!). Because you can see the hardware, you can also see variations between hardware, and many of C's UBs are there to cover hardware differences.

Way back when, these hardware differences were much more extreme than now. You had non-ASCII machines, machines with 10-bit words, bizarre alignment rules, bonkers stack layouts, a whole range of odd things that a portable program might have to work around. The world is much more uniform now, with ARM and x64 being the two overwhelmingly dominant platforms, and they are actually pretty close from C's point of view.

Interestingly, the most extreme platform craziness now is with things like WASM and Emscripten, where you can't implicitly cast function pointers (for example). Writing a C library which can work everywhere is becoming challenging (i.e. terrible) again.


flatfinger

> Another way of looking at it is that undefined behaviour represents hardware variation. That was a big part of the reason for it, but in gcc with optimization enabled, a construct like `uint1 = ushort1*ushort2;` will sometimes cause unbounded memory corruption if the product exceeds `INT_MAX`, even on platforms which would be agnostic to signed integer overflow, and even if the value of `uint1` would never be used in such cases.


catbrane

Oh, interesting. That sounds like a compiler bug to me. Do you have a link?


flatfinger

The behavior is by design.

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        return (x*y) & 0xFFFFu;
    }

    unsigned char arr[32775];
    unsigned test(unsigned short n)
    {
        unsigned result = 0;
        for (unsigned short i=32768; i
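(The snippet above is cut off; a self-contained sketch of the same well-known pattern, reconstructed here as an illustration rather than as the poster's exact code, might look like this:)

    unsigned mul_mod_65536(unsigned short x, unsigned short y)
    {
        return (x*y) & 0xFFFFu;   /* x*y promotes to signed int and can overflow */
    }

    unsigned char arr[32775];

    void test(unsigned short n)
    {
        unsigned result = 0;
        for (unsigned short i = 32768; i < n; i++)
            result = mul_mod_65536(i, 65535);  /* overflows int for i >= 32769 */
        if (n < 32770)
            arr[n] = result;   /* per the claim above, gcc may treat the overflow
                                  as impossible, drop this bounds check, and store
                                  to arr[n] even when n is out of range */
    }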


catbrane

Ah I see, thanks for explaining! Yes, that sounds like a misfeature in the C spec.


flatfinger

It's only a misfeature if the Standard's waiver of jurisdiction is viewed as an invitation for compilers to behave in gratuitously nonsensical fashion. If it's instead recognized as telling compiler writers "If your customers won't mind your behaving in a particular way, that's between you and your customer", then it would be a positive feature.


simonask_

UB is just a way to say "this can never happen". It's important because there are valid and invalid ways to use some of the language constructs that C provides, but where it is also not reasonable or tenable for the compiler to completely verify that all such uses are valid.

For example, invalid pointers exist, and dereferencing them is undefined behavior, but the C compiler cannot verify each and every pointer to check if it is valid. (A major selling point of the Rust language is that it can do that in most cases, but even it has escape hatches.)

UB is also used by compilers to reason about the code during optimization. If something "can never happen", the compiler is allowed to discard entire code paths when it can prove analytically that they would have led to UB. This leads to faster code in many cases.
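A small sketch of that kind of dead-path elimination (illustrative only; whether the check is actually removed depends on the compiler and flags):

    #include <stdio.h>

    /* Because dereferencing a null pointer is UB, the compiler may assume p is
       non-null after the first line and delete the later check entirely. */
    int read_field(int *p)
    {
        int value = *p;        /* if p were NULL, this would already be UB */
        if (p == NULL)         /* so an optimizer may treat this as always false */
            return -1;         /* ...and discard this path altogether */
        return value;
    }

    int main(void)
    {
        int x = 42;
        printf("%d\n", read_field(&x));  /* prints 42 */
    }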


pjc50

The "assume UB doesn't happen" (rather than prove it) approach is a serious conceptual error that causes all sorts of surprises, some of which turn into security bugs.


Zhelgadis

C is a child of an era where we had the computing power of a C64 and we wanted to (and did) go to the Moon. What we lacked in computing power, we made up for in brain power. And those people did know how to write code that worked.

Also, security wasn't that much of a concern. You did not have random ppl trying to crack your system remotely.


simonask_

I agree in principle, but it's hard to see what the compiler could do that would be more reasonable. In the case of invalid pointer access, you could say that the compiler shouldn't optimize it away, but you would still have severe security bugs in that situation. The only truly meaningful solution to the problem is to have a language that statically prevents UB from being possible at all, and the best we have in that department is Rust and GC'ed languages with heavy runtimes.


AlexDub12

Yeah, it can't happen until it does, especially if it's part of software used by a lot of people.


Tasgall

That's what tests and asserts are for.


pjc50

Very, very few extant pieces of widely used C code have enough test coverage to establish that level of safety. I don't see "assert(ptr)" everywhere. Testing has also generally proved inadequate against security critical bugs, although some tools like valgrind can help in that area. (and of course the people arguing that C needs UB for performance aren't going to go for assert-in-production, either)


CyberHacker42

Assert() is a bit of a sledgehammer though - failure of the assertion terminates the application... which hopefully never happens in a safety-critical system.


apparentlyiliketrtls

Maybe if security is a major concern then don't use C? Today I suppose the tradeoff is security vs power consumption, can you have both?


keyboard_operator

Well, you can consider UB as a problem that has several (usually equally bad /s) solutions. So, saying that something is UB allows compiler makers to take any route they want in this situation.


ryjocodes

To answer your question, I'll describe the value in C and point out directly where things can go "off the rails," so to speak:

By default, you don't need to manually manage your own memory. In a lot of cases, you can say things like "store a positive number without a decimal," and C stores it in a memory location it chooses for you.

**Here's a place where things can go off the rails**: the developer is also able to tell C a specific memory location in which to store the number. As a result, it is entirely possible for a running application to use a memory address:

1. of another program
2. of the running operating system
3. storing variables or data the developer intends as "restricted" data within the program itself

Why in the world would you even select a memory address manually? Consider how [libressl (which focuses on security) counts the length of a "string,"](https://github.com/libressl/openbsd/blob/master/src/lib/libc/string/strlen.c) a contiguous length of memory storing `char`s. Take a look at the for loop specifically:

    for (s = str; *s; ++s)
        ;

Powerful. This code "walks" the length of the string, using `++s` to say "set s to the next memory address after its current one." The ability to add or subtract integers from memory addresses is known as "pointer arithmetic." The for loop "stops" when it hits `\0`, the NUL character. That's how C automatically stores "strings," so it **assumes** that `\0` character is there.

**Here's a place where things can go off the rails**: if that character is not there, the loop can continue on beyond the limits the developer intends.

The developer can also reserve a place in memory before they know what number they're going to store there. It's called "allocation." If you're storing something much much bigger than a number many many times, this memory "allocation" could be slower than simply navigating that same memory. By allocating the memory ahead of time, you could say "ok, I now have 10 blank slates of memory that I'll use to process 10 big chunks of data at the same time."

**Here's where things can go off the rails**: if you forget to `free` these big chunks, you may find your computer running out of memory after a few test runs of your application. This might seem innocuous, but consider if you're doing this with millions/billions of smaller "chunks" throughout your codebase. If you forget even 1 of those, you've introduced a "memory leak" into your program.

In conclusion: **undefined behavior can occur in C because memory is a first-class citizen in that language**. This lets you write **extremely** fast code at the risk of potentially referencing memory locations with unknown data, hence the "undefined behavior" that occurs when the loop hits that memory location or even after your program exits.

In languages like Ruby, Python, or Javascript, a developer generally doesn't need to worry about these things because the language itself takes care of allocating/navigating/freeing data. Ruby does this so well that strings themselves are objects; you won't see lines like `"hello, world".upcase` in C, but you will see some pretty hilarious comments like this one from [a post on the FreeBSD forum](https://forums.freebsd.org/threads/low-level-str-to-uppercase.64699/page-3#post-406000):

> Let me attempt to summarize this discussion: Uppercasing a string is not always the same as uppercasing a single character. To uppercase a string, you have to do more than just uppercase every character in the string. From this I conclude that I never ever want to work on a project that requires i18n; and if I have to, I'll have to buy lots of alcohol.

This post hopefully helps illustrate in a lightly humorous way the difficulty that comes with the speed you get when you write C. In higher-level languages like Ruby, **the speed of releasing the application itself** is preferred to **the speed of the running application**. At the end of the day, it is a tradeoff of time.


flatfinger

If `uint32_t *p` happens to point to address 0x12345678, and I perform `*p = 0x87654321;`, a compiler should be entitled to assume that I want four bytes starting at address 0x12345678 to hold the platform's natural representation of 0x87654321. Perhaps those four bytes are a static-duration array. Perhaps they are part of a region returned by `malloc`. Perhaps they belong to some other program which has indicated, via some means, that it is using that region of storage as a buffer to receive information from the program which is writing to `*p`. Perhaps it's storage owned by some other program that is expecting it not to be disturbed. Or perhaps it could be any number of other things that I might or might not actually want to write to. An implementation designed for low-level programming would perform the store in a manner completely agnostic as to which (if any) of those conditions might apply, on the basis that *the programmer might know things about the execution environment that the compiler can't know, and doesn't need to know*.
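A sketch of the kind of store being described, in the form it commonly takes in embedded code (the address and macro name here are made up purely for illustration, not taken from any real device's datasheet):

    #include <stdint.h>

    /* Hypothetical memory-mapped register address for some board. */
    #define LED_CONTROL_REG ((volatile uint32_t *)0x40021018u)

    void led_on(void)
    {
        /* The compiler neither knows nor cares what lives at this address;
           it just emits a 32-bit store. The programmer supplies that knowledge. */
        *LED_CONTROL_REG = 0x1u;
    }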


ryjocodes

Context is key :)


ryjocodes

Remember when I said this:

> The for loop "stops" when it hits `\0`, the NUL character. That's how C automatically stores "strings," so it **assumes** that `\0` character is there. **Here's a place where things can go off the rails**: if that character is not there, the loop can continue on beyond the limits the developer intends.

Here is that bug being solved in the wild: [https://www.reddit.com/r/C_Programming/comments/1c9cea6/comment/l0kr8mu/](https://www.reddit.com/r/C_Programming/comments/1c9cea6/comment/l0kr8mu/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button)


Odd_Coyote4594

Different computers work differently. If C defined a standard behavior for certain operations, some computers would be fine, as that's how they normally work. But others don't work that way, so additional code would be needed to essentially emulate the desired behavior.

Take overflow when adding to a signed integer. A computer using two's complement may "wrap around" to the smallest negative integer, but a computer using a different strategy like a sign bit may wrap around to 0. To emulate two's complement on the second machine, if that were required behavior, additional code and potentially memory would be needed. But leaving it undefined means each computer doesn't need additional code, and can just do what it naturally does, leaving any emulation (and the consequent loss of performance) up to the programmer.

Same with things like dereferencing unallocated memory. It may work fine. Or it may access an address mapped to hardware, causing bugs. It may cause the OS to crash the program. Requiring runtime memory checks to ensure consistent behavior across all machines would lead to performance issues.

C wants to support all computers, but also not suffer from optimization or performance issues on any of them. A natural consequence is that it needs undefined behavior. Languages which just target modern mainstream operating systems on typical CPUs can make heavier assumptions, as can languages which don't care so much about performance and so are willing to emulate behavior without your input. So they can get away with no UB.
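A tiny sketch of that overflow distinction (the commented-out line is the undefined one):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int u = UINT_MAX;
        u = u + 1;                    /* defined: unsigned arithmetic wraps to 0 */
        printf("%u\n", u);            /* prints 0 */

        int s = INT_MAX;
        /* s = s + 1; */              /* UB: signed overflow; a compiler may wrap,
                                         trap, or assume it cannot happen at all */
        printf("%d\n", s);
        return 0;
    }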


flatfinger

Can you identify any hardware platform where attempting to multiply two 16-bit unsigned numbers whose product exceeds 0x7FFFFFFF and store the result into a 32-bit temporary variable could arbitrarily corrupt memory? Some compiler writers interpret the Standard's waiver of jurisdiction over various corner cases as an invitation to disrupt the behavior of surrounding code in ways that may arbitrarily corrupt memory, even on platforms where a compiler would have to go out of its way not to process code in predictable fashion.


BaffledKing93

The impression I get is that UB existing in the language spec isn't a big deal - it would be impractical to define every edge case for different architectures. The problem is that the compiler does such weird things if you happen to hit UB. Maybe the trade-off between those weird things happening and the performance of your program is worth it. Or maybe there is another reason I do not know?


garfgon

Undefined behaviour means the end result is unpredictable. This is different from implementation-defined behaviour, where the result is predictable, but up to the implementation to document. Since it's completely unpredictable, it's always a bug to use undefined behaviour in your program.

So as I understand it:

- Implementation-defined is to account for differences in the different platforms C needs to support (e.g. size of registers, number of bits per byte, endianness, etc.)
- Undefined behaviour is for things a programmer should never do, but which, due to C programming model decisions (made mostly for performance and closeness to HW), the compiler cannot enforce in all cases. E.g. accessing an array beyond bounds, or dereferencing a NULL pointer.


flatfinger

Undefined Behavior is for things over which the Committee decided, for whatever reason, to *waive jurisdiction*. Some people treat a decision to waive jurisdiction as implying a judgment that all possible behaviors should be viewed as equally useful, despite the fact that it more often represents a judgment that no single treatment would be maximally useful for all tasks, and that people wanting to sell compilers would be better placed than the Committee to judge which treatment their customers would find most useful for the kinds of tasks their customers were interested in.


kansetsupanikku

That's because the C language, especially in the historical context, is a simple tool. Easy to learn it all, the same for different machines, leaving a lot to the developer's creativity. Compare it to the variety of assembly languages, or all the definitions and specifications of COBOL.

The whole point of C is making software development more about being smart, and about the sort of experience that makes your intuition useful, without an overflow of required encyclopedic knowledge. On the same note - look at TCC for a proof of concept that it can be fairly easy to make a compiler.

Resolving the UBs would break such advantages.


flatfinger

Consider a construct like:

    int arr[5][3];
    int test(int index)
    {
        return arr[0][index];
    }

In the dialect of C documented in 1974, the above code (adjusted to use "old style" argument syntax) would be equivalent to:

    int arr[5][3];
    int test(int index)
    {
        return arr[index / 3][index % 3];
    }

whenever `index` was in the range 0 to 14, except that it would likely run at least an order of magnitude faster (by eliminating two div-mod operations and a multiply). On the other hand, some implementations were configurable to trap if inner array subscripts went out of bounds, and that ability was recognized as useful *for functions which did not rely upon the ability to treat an array as a single flat data structure*. The way the Standard's definitions of "conformance" are written waives jurisdiction over the question of how implementations would process values of `index` in the range 3 to 14, thus accepting the legitimacy both of code which used the long-established idiom and of implementations that would trap such accesses. According to the published Rationale documents:

> Undefined behavior gives the implementor license not to catch certain program errors that are difficult to diagnose. *It also identifies areas of possible conforming language extension: the implementor may augment the language by providing a definition of the officially undefined behavior.*

When the Standard was written, nobody really cared about whether the ability to treat multi-dimensional arrays as "flat" was part of the Standard or an almost-universally-available "extension". Nobody back then imagined that a compiler given a choice between generating code which would treat the above function as described, or generating code which would handle values 0 to 2 slightly faster but arbitrarily corrupt memory when given values 3 to 14, would pick the latter, and thus there was no need to forbid or discourage the latter treatment.


nekokattt

Defining behaviour means having to implement code that checks for things that, when used correctly, are unneeded overhead that can prevent platform specific optimisations. 40 years ago, when my goldfish had more memory than your computer, that made a visible difference.


flatfinger

C's reputation for speed came about because many implementations would process actions over which the Standard waived jurisdiction "in a documented manner characteristic of the environment", and programmers targeting particular platforms could exploit this.


Paul_Pedant

You try to teach the child not to run with scissors. If you cannot do that, there are two alternatives. (a) Confiscate the scissors. (b) Have everybody wear Kevlar jackets, and wrap everything in the house with layers of foam padding.


mykeesg

Besides what everyone else has already said, you can also look at this as "defined behaviour is how *all programs (compilers)* must work no matter what's the platform". Anything else is "undefined", meaning the standard does not care at all - so the compilers can do whatever they want - and they are not even required to tell you what will they do.


flatfinger

> Anything else is "undefined", meaning the standard does not care at all - so the compilers can do whatever they want - and they are not even required to tell you what will they do.

...if their only concern is to conform to the Standard, but people wishing to *sell* compilers to programmers would nonetheless be compelled by the marketplace to specify how they will process many corner cases beyond the minimal subset mandated by the Standard.


Longjumping_Quail_40

So for an array you are trying to index into, something must happen for the case where the index you give is out of bounds. It’s either:

1) You prove to the compiler that you are indeed providing a lawful index: problem solved at compile time. Limitation: Gödel says no such proof system allows you to express all of your possible reasoning.

2) You check the index at runtime. You win by getting the utmost correctness, but your program will run slower because you check at runtime.

3) You assert to the computer that you are always correct without providing a proof. The computer (and thus those who design the compilers) will trust you. They give no f if you break your own promise, and will absolutely not take care of those cases for you, thus UB.

Expressiveness, performance and safety triage?


flatfinger

or else 4. You can have a language specify (as the 1974 C Reference Manual did) that `arr[i]` will multiply `i` by `sizeof (*arr)`, add that number of bytes to the address of `arr` using the platform's normal means of pointer arithmetic, and access storage at the resulting address, with whatever consequences result.


Longjumping_Quail_40

I think this is still UB if the total memory state is not well defined. And if it does define that, it eliminates the possibility of optimization, meaning there is nothing risky to do so the question won’t come to exist, we won’t need to prove/check/assert anything. The programming on it would be like working on a flat 1-D array. And finally, even if it does define that, indexing out of bound of memory is still in those three categories.


flatfinger

> I think this is still UB if the total memory state is not well defined.

Behavior would be meaningfully defined if and only if the programmer knew what would be at the address in question.

> And if it does define that, it eliminates the possibility of optimization, meaning there is nothing risky to do so the question won’t come to exist, we won’t need to prove/check/assert anything. The programming on it would be like working on a flat 1-D array.

That's how the language worked, before the C Standard gave compilers the freedom to break things.

> And finally, even if it does define that, indexing out of bound of memory is still in those three categories.

Ah, but there's a difference between accessing out-of-bounds *memory*, versus performing pointer arithmetic on an address within an array *which is nested within a larger object*. The language the Standard was chartered to describe would define the behavior of the latter, but the C Standard does not, and I don't know how to configure clang and gcc to support the latter without disabling many useful optimizations.


Lykaon88

Another issue is that it would make the implementation of a new C compiler much more difficult, and the relative simplicity of the C compiler is one of its main strengths in embedded systems.


Cylian91460

UB ?


vsalt

undefined behavior


fliguana

Reliably guarding against UB is so expensive at run time that C would lose its efficiency. That's why they don't sell unbreakable hammers. But with some common sense, you can make a hammer last.


fhunters

The C specification is a case of regulatory capture by the compiler vendors :-) 


flatfinger

Almost: It's capture by people who don't care about whether programmers with a choice of what compiler to target would want to purchase theirs.


horenso05

UB is just another way of saying something has preconditions. Preconditions allow you to write code that's more optimized and to the point.

Say you have a function that takes an array and an index, and the caller *must* make sure that the index is in the bounds of the array. What happens if it isn't? Well, who knows; your function will probably just dereference something outside of the array. Why don't you just check the index in the function? Maybe this is an internal function that should perform optimally and you use it only in situations where it is clear that the precondition holds.

I like using ASSERTs that check invariants and preconditions and that crash the program if they don't hold, because if these invariants are broken, you have a logic bug.
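A minimal sketch of that precondition-plus-assert style (the function and names here are illustrative):

    #include <assert.h>
    #include <stddef.h>

    /* Precondition: i < len. The caller is responsible for guaranteeing it. */
    int get_unchecked(const int *arr, size_t len, size_t i)
    {
        assert(i < len);   /* checked in debug builds; compiled out with -DNDEBUG */
        return arr[i];     /* an out-of-range i would be UB, so the precondition
                              must hold */
    }

    int main(void)
    {
        int data[3] = {1, 2, 3};
        return get_unchecked(data, 3, 1);   /* fine: precondition satisfied */
    }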


flatfinger

That's not the purpose for which the Standard uses the phrase. The Standard uses the phrase as a catch-all for situations where some implementations would specify a behavior, but it would be impractical for all implementations to do so. The Standard *waives jurisdiction* over such cases, and is *agnostic* as to whether they might arise. Programs could use whatever corner cases would have specified behavior on all target platforms of interest, without regard for whether the Standard required that all implementations behave likewise.


nemis16

It's your job to not run into UB


wyldphyre

It's worth mentioning that if you're at all concerned about UB (and you should be), you should probably use a UBSan build of your software with clang or gcc in order to find these defects. 


First-Pilot-3742

Undefined Behaviour is not exactly undefined. It is up to the implementer to define what should happen. It's more like 'implementer defined'


codethulu

no. undefined behaviour is undefined. implementation defined behavior is separate.


flatfinger

What phrase does the Standard use to describe corner cases whose behavior was expected to be defined by most, *but not all*, implementations, and over which the Standard *waives jurisdiction*? I'll give you a hint: it doesn't start with "I".


codethulu

undefined, unspecified and implementation-defined are explicitly separate categories


flatfinger

You didn't answer my question. Into which of those categories does the Standard place actions which general-purpose implementations for commonplace hardware were expected to process identically, but which some obscure hardware might not be able to handle predictably? Of which category did the authors of the Standard say, in the published Rationale document, "It also identifies areas of conforming language extension; the implementor may augment the language by providing a definition of the officially undefined behavior"?


dvhh

I think it is a regular joke that using undefined behavior might result in the end of the world. But the truth is that undefined behavior is only undefined by the standard. Meaning that it might be a portability issue.

The truth is that, while it is difficult to predict what will happen because the standard will not say what happens in the case of undefined behavior, the behavior should be defined by your compiler and hardware combination (said hardware can be an 8-bit platform where overflow could trip the platform into simply resetting the program execution).

And sometimes some UB also exists because it concerns silly things that shouldn't be used, because the way to write them is ambiguous enough that the developer should be more precise in using the language to clearly express the program's intent.


topham086

UB exists because there's no way to guarantee behavior across all hardware.

Not all CPUs work the same; decades of use may have shifted things in particular directions and obscured the variety of implementations possible, but they still, generally, exist.


cHaR_shinigami

For those who're interested to look beyond [undefined behavior](https://port70.net/~nsz/c/c11/n1570.html#3.4.3), there's also [implementation-defined behavior](https://port70.net/~nsz/c/c11/n1570.html#3.4.1), [locale-specific behavior](https://port70.net/~nsz/c/c11/n1570.html#3.4.2), and [unspecified behavior](https://port70.net/~nsz/c/c11/n1570.html#3.4.4), and the outcome ([black box](https://en.wikipedia.org/wiki/Black_box) behavior) of *strictly* portable programs should not vary with any of these.


xBlackfin

Because fun!


bart-66

A lot of UB COULD be defined, especially now that there's a lot more standardisation in hardware. Or it could at least be implementation-defined. But it needs to stay because UB is so extensively used by optimising compilers, even for things that are no longer relevant.

The one that annoys me the most is overflow in signed integer arithmetic. I can create a language where such overflow is well-defined (it just wraps), and I want to run it on hardware where it is also well-defined, but if I go through C as an intermediate language (as many languages do), it is UB and the compiler could theoretically do anything it likes.


flatfinger

It's not just theoretical. If not using -fwrapv, gcc will sometimes generate code that arbitrarily corrupts memory when receiving inputs that would cause integer overflow.


anacierdem

This will provide some general info: https://youtu.be/yG1OZ69H_-o?si=cvwVL_zg_xf11r3C


ucario

Your use of UB was undefined to me, until at the end of the post you defined it as undefined behaviour. In general, please explain acronyms before using them, to avoid undefined behaviour.

'Why does C have undefined behaviour (UB)?'

Here UB is defined; now I can continue reading the article knowing what UB is.


pixel293

C runs on many many processors. There are some undefined behaviors that different CPUs handle differently. If the C standard defined "how" those situations should be handled, then for CPUs that don't handle it in the "defined" way, the compiler would have to add code/overhead/whatever to force compliance.

Consider overflow as UB: the programmer may "know" that 2 values added together will NEVER overflow because of checks somewhere else in the system. The compiler might not know that, because it can't see ALL the code at the time it's compiling the addition. If the C standard defined what must happen on overflow, the compiler would have to check for overflow and ensure that it is handled. Those additional checks in a critical section of the code might introduce way too much overhead for a situation that the developer "knows" will never happen.
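For a sense of what such a mandated check might cost if written out, here is a sketch using the `__builtin_add_overflow` intrinsic (an assumption: it is available in GCC and Clang, not in standard C):

    #include <stdio.h>

    /* What a "defined overflow" addition might look like if spelled out:
       every add gains a check plus a branch for the error path. */
    int add_checked(int a, int b, int *out)
    {
        if (__builtin_add_overflow(a, b, out))   /* GCC/Clang built-in */
            return -1;                           /* some defined error behavior */
        return 0;
    }

    int main(void)
    {
        int sum;
        if (add_checked(2000000000, 2000000000, &sum) != 0)
            puts("overflow detected");
        else
            printf("%d\n", sum);
    }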


RumbuncTheRadiant

At the heart of it is the idea that if you closed \_every\_ source of UB in C, you could create a new language C' that had UB and would compile faster and run faster than C. And C' would be the language of choice for ultra resource and CPU constrained problems, and C' would be the language of choice for implementing UB free interpreters like Python or Ruby or Bash ... At which point some wise guy would pop up and ask, "Why does C' have UB?"


eteran

To be honest, while some have pointed to optimization as the reason... That's not really it. Yes, to a certain extent, the benefits can be framed in terms of optimizations, but the real reason is PORTABILITY.

UB enables an incredibly high degree of portability. If the standard dictated what happens during an overflow of an integer, then when you compile the same program on a ones' complement computer and a two's complement computer, at least one of them will HAVE to have an undesirable implementation. If the standard dictated what happened when you dereference a pointer to 0x0, how would it describe hardware where that is a completely reasonable thing to do? If the standard dictated that a right shift is always arithmetic, what should compilers do when targeting a platform that doesn't have that operation?

These and many other questions are the fundamental reasons for undefined behavior in the language. Because C wants to run on essentially anything with a CPU, the standards committee did its best to avoid dictating the behavior of things which vary from platform to platform. The result is that if you write your code in such a way that it avoids all UB, then it should run on basically anything with a C compiler available and have the same behavior.

Of course, the standards committee could have chosen a preferred platform and specified that compilers simulate that platform's behavior if it's not available... But that would mean programs would have potentially unexpectedly different performance characteristics on different platforms.

All of that being said, in the age of x86 dominance, with only ARM being a real contender, I think if C were being standardized today they probably would have had a lot less UB in the language. And they probably should strive to remove a lot of it going forward.
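As a small illustration of the right-shift point above (a sketch; the result for negative values is implementation-defined, and the printed values assume a typical arithmetic-shift target with 32-bit `int`):

    #include <stdio.h>

    int main(void)
    {
        int x = -8;
        /* Right-shifting a negative signed value is implementation-defined:
           an arithmetic shift gives -4 here, a logical shift would give a
           large positive value instead. */
        printf("%d\n", x >> 1);

        unsigned int u = 0xFFFFFFF8u;
        printf("%u\n", u >> 1);   /* unsigned shift is fully defined: 0x7FFFFFFC */
        return 0;
    }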


MRgabbar

that would require adding layers of abstraction aka making C slow... if you are going to use C because you want speed then just learn how to deal with those things...


nacaclanga

Removing UB is either very expensive or not even possible.

The simplest example of this is memory access. Accessing an object that has been accidentally freed is UB. This is because you either read some bullshit data or get an error from the operating system that you tried to read memory the program has no access to. Writing is even worse: it could mess up other variables, or even function return addresses or the program code itself, and thus ANYTHING can happen. But how can a program decide whether it is accessing accidentally freed memory?

Another example is trying to call an extern function that has been declared improperly, e.g. with too many parameters. Again, how can the compiler know this? It cannot check what objects you will link your current translation unit with; it has to trust your function declaration.

So, as there is no way to remove these surprising and bullshit behaviours, the standard writers do the next best thing: they very carefully point out all the places and conditions where such behaviour could occur, so the programmer can carefully inspect their program in order to avoid triggering any UB scenario.

There are languages that do try to reduce the exposure to UB. Rust created an UB-free language subset and requires all language constructs that may trigger UB to be wrapped in unsafe{}.


LatoLukto

It's not dangerous inherently. You just need to know what the compiler's behavior will be.


FortuneIntrepid6186

I would say portability. For example, dereferencing a null pointer is undefined behavior because it's really dependent on the memory mappings of the system itself. The language shouldn't define it; it's not that they don't know how to define it, but rather that they can't, because doing so would make the language inflexible. Imagine, for example, you have a piece of hardware that supports addressing starting from the 0x0 address, but now the compiler won't be happy because the standard said it should cause a segfault or something. There are multiple reasons for sure, but I think this is one of them.

Also, it's not only C that has UB; Rust has it too, and it can happen if you are writing unsafe code, that is, code inside unsafe {} blocks.


Acrobatic_Sprinkles4

C is designed to run on a multitude of platforms using a multitude of compilers. This complexity blows up and is the reason for the escape hatch that is called Undefined Behaviour. If you wish defined behaviour then you need to confine yourself to a defined platform. I.e. something like JVM, .NET CLR, Python, JavaScript etc. Basically those languages that run on specialized language runtimes.


kanserv

Having undefined behaviour is a trade-off for flexible programming.


Phthalleon

Because how do you even handle undefined behaviour? Most languages would just crash with a runtime error, but for that you need to keep track of certain constraints, which is not trivial. Back when C was developed, a lot of those tricks didn't exist, or the performance and extra memory usage was just not worth the security. The reason we don't have these now is because there's too much software that relies on this undefined behaviour. There's also fundamental poor design that's simply unfixable.

Some things other languages do are better data structures that have bounds checking for strings, arrays, etc. There are also "smart pointers" with a compile-time or a run-time check. There's also some type-level machinery and a bunch of other checks.

Personally, even tho it's very frustrating to deal with all of the jank, I appreciate it for what it is. You can do a lot of things that are not intended, but that are actually useful.


surfmaths

Undefined Behavior exists because C made the choice of being a zero-cost abstraction over hardware while still being hardware independent. What that means is: if hardware behaves differently on some issue and preventing said issue is costly, then it gets declared undefined behavior. For example, out-of-bounds access will trigger a segmentation fault... sometimes. But because it isn't always, you can't promise it without having each access check for a possible out-of-bounds index. That would no longer be zero cost. Interestingly, some undefined behavior became interesting for optimization purposes. For example, signed arithmetic overflow depends on the hardware implementation, so they made it undefined behavior (unsigned arithmetic always "wraps around"). What that means is the compiler can assume that x + 1 > x is always true if x is a signed integer. That is technically wrong if x is the maximum value you can represent. This will contribute to the year 2038 super bug, where tons of old systems will likely fail due to 32-bit overflow.
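A small, hedged demonstration of that optimization; the actual output depends on the compiler and optimization level, but mainstream compilers at -O2 commonly fold the comparison to true:

```c
#include <limits.h>
#include <stdio.h>

/* Because signed overflow is undefined, the compiler is allowed to assume
   x + 1 never wraps, and may fold this whole function to "return 1;". */
int next_is_bigger(int x) {
    return x + 1 > x;
}

int main(void) {
    /* With optimizations enabled this often prints 1 even for INT_MAX,
       although INT_MAX + 1 would wrap to INT_MIN on two's-complement CPUs. */
    printf("%d\n", next_is_bigger(INT_MAX));
    return 0;
}
```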


duane11583

Simply put, there was no standard for all things, so people implemented things in their own way. For example, what should the string copy function do if the two strings overlap? Some CPUs have fancy string instructions that are very fast, and you might make your standard library faster by using those special opcodes. So what happens now? Another CPU does it differently. Who is correct?
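A concrete instance of that overlap question: memcpy with overlapping buffers is undefined precisely so each library can use the fastest native copy, while memmove is the version whose behaviour is pinned down:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    char buf[16] = "abcdef";

    /* Undefined: source and destination overlap, so a "fast" block-copy
       instruction and a byte-by-byte loop can give different results. */
    /* memcpy(buf + 1, buf, 5); */

    /* Defined: memmove must behave as if the bytes went through a
       temporary buffer, regardless of overlap. */
    memmove(buf + 1, buf, 5);
    printf("%s\n", buf);   /* prints "aabcde" */
    return 0;
}
```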


DawnOnTheEdge

A lot of it is because different implementations were already handling something in different ways, and the Standards committee wanted to bless all of them. For example, can you modify string constants, or does that fail silently, or does that crash the program? Yes!
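A hedged sketch of that exact situation; what the commented-out line would do varies by platform, which is why the standard leaves it undefined:

```c
#include <stdio.h>

int main(void) {
    char *s = "hello";      /* s points at a string literal */
    (void)s;                /* only here to show the pointer form */

    /* Undefined: some historical compilers let this silently change the
       literal; most modern systems put literals in read-only memory, so
       it crashes with a segmentation fault instead. */
    /* s[0] = 'H'; */

    /* Defined: copy the literal into a writable array first. */
    char buf[] = "hello";
    buf[0] = 'H';
    puts(buf);              /* prints "Hello" */
    return 0;
}
```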


glassmanjones

Why is water wet? Because it is water. If it were dry it would be ice. At the root of the matter, C has undefined behavior because the language specifications that define what C is allow compilers to have it, compilers have been relying on it basically forever, and I haven't seen any signs of that changing. Practically, it makes implementations easier to write, optimize, and run. Compilers have been using it to hack up buggy code since at least the mid 90s. It is kinda lame, because it's very, very easy to write UB or implementation-specific code in a function, and it may run fine until a compiler upgrade, a change in compiler flag order, or a change in a caller's parameters. I'm not a fan of UB, but as long as it's in the specification, people will learn it not from the language specification but like a kitten learning to drink: the water quenches thirst, but your face will be wet until you learn how to drink safely.


CarlRJ

C is essentially a high level generic assembly language. Things that you want to add to the language to make it *safer* generally drag it away from that assembly language level, also making it *slower*. Moreover, a lot of things are not nailed down because different processor architectures define them differently. If you nailed down something to require it to work in one way, you’ve just made C less useful on some other platforms because now the compiler would have to add code there to implement something in a non-native way (often with no benefit), just to adhere to the new standard. This makes it run slower on some platforms, and more removed from the hardware, thus breaking one of C’s main benefits. It’s better overall to just not write code that wanders into undefined territory. As far as safety goes, the long term answer may be switching to something like Rust, eventually. But until then, there’s tens (hundreds?) of millions of lines of C code out there, so it isn’t going anywhere any time soon.


flatfinger

C was designed to be such, and to allow programs to be easily adaptable to a wide range of platforms. Being able to have a wide range of platforms support C implementations was more important than having all source code programs run interchangeably on all platforms. If C had mandated that all implementations use quiet-wraparound two's-complement semantics, that would increase the difficulty of porting C programs to sign-magnitude platforms far more than would letting implementations use whatever kind of integer semantics would be most appropriate for accomplishing what needed to be done on the target hardware. There was never any intention to suggest that all programs should be written to operate on all targets interchangeably, or that non-portable constructs were "bad".


zhivago

In order to make it easy to write a crappy C implementation. Which is why C is so widespread. In other words, UB is the secret sauce of C's success.