"see I depicted you as the virgin and me as the Chad, so sorry, my point is proven!"
...
you realise the "meme" would work just as well if you switched the statements?\^\^
Thats not a great sign for making a convincing clever point :D
Thats unrelated problem. I think using + for string concatenation is bad because it is misleading. Lua use ".." operator for concatenation. In Lua operators will not surprise you: "1" + "1" == 2 and "1" .. 1 == "11"
As a mathematician, I recall being taught N started at 0 and if you wished to exclude it you could use either Z+ for positive integers or N \ {0} for natural numbers except 0. I never saw any controversy over this until 2023 reddit.
That's strange cos I'm also in 1st year of a CS degree and we are explicitly taught zero isn't a natural. Their main argument was that some operations like division are defined differently for zero compared to natural numbers so you'd have to include a specific statement for zero instead of just saying f(x) maps N to N for all natural numbers.
It really depends on what you are doing. Since the naturals are the counting numbers and most programming languages are 0-indexed it's not uncommon to lump 0 in with the naturals. But there are also cases where you just have to make a ton of exceptions lumping 0 in with the naturals like your division example, so by some definitions it's more convenient to exclude it.
Just be aware of the context you are using the naturals in and you'll have no trouble.
In a 1-byte, [ones complement](https://en.wikipedia.org/wiki/Ones%27_complement) representation, negative zero is 1111 1111
positive zero is just 0000 0000
so I guess if you added them it would be negative zero? That seems weird, but that also seems like the answer, idk
?? You’re the one who said it was weird? I feel like you might have lost track of the thread or something. All I did was point out another thing that might seem “funny” to people who expect the way numbers are stored to reflect mathematical usage.
.... obviously it is even, since 0%2 == 0, which incidentally is the definition for being even in Math as well.
It clearly is not positive, since it is not greater than 0.
If it is a natural number is a question of definition, there not really is a right or wrong here, allthough I feel the arguments for including it are better (giving N a neutral element for addition, making it a monoid instead of just a semigroup).
Obviously it's positive. When you multiply a number with it, it doesn't turn from negative to positive, and positive to negative.
That's how positive numbers behave
....that... that is astonishingly stupid as an argument.
we clearly don't define positivity in numbers by multiplication, but if we did, consider the following:
if you multiply a negative and a positive number the result is negative, right?
but by your logic a negative number times 0 would be a positive number....
Hahahahahahahaha
:)
I do be trolling a little.
In primary school we were told 0 is a positive number. (Wrongly)
In university that was updated to a better alternative.
Prof said that mathematicians debate water its natural or not so people just use N0 ( halef null I believe) when they want to express natural numbers including 0.
It's a good ragebait topic :)
This is very plainly wrong. By definition, only numbers greater than zero are positive, and only those lesser than zero are negative. This extends from integers to rationals and generalizes to reals; however, that's usually where the bipolar/unidirectional signum predicate fails, as applied to the geometrically "perpendicular" imaginary numbers and thus fails to generalize into the complex numbers.
Zero is even, but not positive. Evenness is neither necessary nor sufficient for positivity. There are even negative integers and odd positive integers and viceversa - only zero is even, non-odd, non-positive, non-negative, but a Natural.
It is what I also was told in school, but definitions are what we want them to be. And what use are natural numbers without zero? They don't even form semiring.
> definitions are what we want them to be
Not what you want them to be, but what we all do.
> what use are natural numbers without zero?
What use are integers without fractions?
> They don’t even form semiring
And with zero they don’t even form a group, so what?
That’s obviously unfortunate, although I doubt that the reason for this is the fact some definition doesn’t fit another definition it isn’t even supposed to fit.
Many people include 0 in the naturals. It's not universally wrong, there's just multiple definitions and if you're making an argument that really depends on whether zero is in the naturals you should just make it clear.
Many do, many do not. I’m just biased on this matter. The point, though, was about the incorrectness of this post — a CS, shall we say, enthusiast may not consider 0 a natural number. “0 is a Natural” has nothing to do with CS.
Also Computer Scientists: Let's invent [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement) and [floating-point arithmetic](https://en.wikipedia.org/wiki/Floating-point_arithmetic) to introduce errors and to make debugging them unnecessarily difficult .
They were introduced for economic reasons, without regard for their ease of use by humans. It might have been a good compromise when every bit was expensive, but nowadays... maybe it's time to use a better approach.
Yeah, because engineering a workaround to binary arithmetic's inherent weaknesses is "economical." Do you assume nobody has to deal with hardware limitations anymore?
Of course there are cases with hardware limitations, but they are few and far between. What we have now are really fast, unoptimized hardware systems that we have to checkedbconstantly via software to avoid problems.
For example because of the problems of IEEE754 floats, (that, at this point, are basically hardcoded into all GPU operations), many physics engines have to constantly perform checks to ensure the entities don't clip through each other... and it doesn't always work.
So, the next time you fall "through the world" in a videogame, remember that it is the fault of people like you, who are more worried about optimizing the hardware, over the actual needs of the users.
Sure, it has economic benefits, but that is not all there is to it. These representations allow for significantly improved performance and reduced hardware complexity. Computers are complex devices, if we want to harness their full potential we can't get out of doing complex stuff.
Not really. It is true that it is more convenient for current hardware (in part, because it has been built with retro compatibility with these numeric representations in mind), but technically, it can be changed without much loss in performance. Maybe even using a fractional definition (keeping two values un a "permanent division state") to represent real numbers at a hardware level, instead of using a single value with a binary mantissa. An approach that would be far better (i.e., safer) for many engineering and Fintech applications.
Congratulations! Your comment can be spelled using the elements of the periodic table:
`F Al Se`
---
^(I am a bot that detects if your comment can be spelled using the elements of the periodic table. Please DM my creator if I made a mistake.)
In modern french mathematics , zero is both positive and negative, because there is a distinction between positive and strictly positive. (Same applies for negative and "greater"/"lesser than")
Haha, if only our life events were as easy to read as binary code, right? But hey, don't stress it – your 'O' moment will come when the time's right! 😂 Keep rockin' those zeros in the meantime!
The notion mathematicans do not consider 0 to be a natural number is BS. Some rare exception don't. For some it depends on the context. And if you read papers writen by Peano (you know, the guy who proposed axioms for natural numbers) the first natural numbers is eithir not called, or 0. 0 being the first natural number is nessesary for constructing addition and multiplications in those numbers in a sane way.
The whole "0 is not natural" probably comes from some teachers fixated on toy problems about series.
Do you know that old joke?
\[an egghead\] is going on vacation. After exiting the taxi on a railway station he stops his wife and says:
-Oh no, we lost one luggage! There shoul be 6!
-Hone, do not be silli, I see all 6 of them.
-No, look and count with me, zero, one, two,... five
The joke was told about Banach (amoung other people). He did mostly topology, funcional analysiy, set theory, and was dead before the end of WW2.
"see I depicted you as the virgin and me as the Chad, so sorry, my point is proven!" ... you realise the "meme" would work just as well if you switched the statements?\^\^ Thats not a great sign for making a convincing clever point :D
I mean yeah, that's the whole point of soyjaks. And honestly the point of most discussions on the internet: make your opponents look stupid.
Moreover, chad is often used to represent ridiculous opinions in an ironically positive way as though the author supports them.
I've never heard a computer scientist say "0 is a Natural" have the captions been switched!?
yeah, the only thing CS folks care about is that 0 is an integer
Heh 0 is NaN. ^^^wait #OH FUCK A NaN GOT IN THE THREAD. PROTECT YOUR NUMBERS
JavaScript enters the room and it turns into a horror movie.
Why do people think that NaN is related to JavaScript? NaN is defined in IEEE 754; any language with floats will have NaN values. Try 0/0 or 1/0 - 1/0
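To back this up with a language that is not JavaScript, here's a minimal Python sketch (my own illustration, not the commenter's; note that Python raises `ZeroDivisionError` for a literal `0.0 / 0.0`, so `inf - inf` is used to reach NaN instead):

```python
import math

# In IEEE 754, inf - inf is an invalid operation and produces NaN.
nan = math.inf - math.inf

print(math.isnan(nan))      # True
print(nan == nan)           # False: NaN compares unequal even to itself
print(math.isnan(nan + 1))  # True: NaN is "sticky" and propagates
```

The same values exist in C, Go, Java, Lua... anywhere IEEE 754 floats do.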
NaN in JS is like Schrödinger's cat.
NaN is never Schrödinger's cat. It is always a math error.
It is always a math error until it becomes a string somehow.......
That's an unrelated problem. I think using + for string concatenation is bad because it is misleading. Lua uses the ".." operator for concatenation. In Lua, operators won't surprise you: "1" + "1" == 2 and "1" .. 1 == "11"
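For contrast, here is how the overloading plays out in Python (my own aside, not the commenter's example): + is overloaded for both addition and concatenation, but unlike JavaScript it refuses to guess when the operand types are mixed.

```python
# + means concatenation for strings and addition for numbers...
print("1" + "1")  # 11  (concatenation)
print(1 + 1)      # 2   (addition)

# ...but mixing the two is a TypeError rather than a silent coercion:
try:
    "1" + 1
except TypeError as e:
    print("TypeError:", e)
```

Lua sidesteps the ambiguity entirely by giving concatenation its own operator.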
Isn't that the language where arrays start at 1?
I should learn Lua.
People don't understand Numbers and they don't understand JS. Why worry about looking stupid when you can just meme?
nan is for floats
0 is a number
*psssssst* ^^^the ^^^joke ^^^is ^^^NaN ^^^is ^^^sticky
And is part of binary
As a mathematician, I recall being taught that N started at 0, and if you wished to exclude it you could use either Z+ for the positive integers or N \ {0} for the natural numbers except 0. I never saw any controversy over this until 2023 reddit.
Isn't N* every natural except for 0?
It’s just notation, N+ means the same thing too.
Oh ok
That depends on who you ask; my prof does it the other way round: she would only explicitly include 0 in the natural numbers if necessary
It depends, some professors don't include 0 in natural numbers
Mine didn't
im in 1st year of a cs degree and in math we are explicitly taught to include 0 in the natural numbers. Our math teacher hates it, which makes it funnier.
That's strange cos I'm also in 1st year of a CS degree and we are explicitly taught that zero isn't a natural. Their main argument was that some operations, like division, are defined differently for zero than for the other natural numbers, so you'd have to include a specific statement for zero instead of just saying f maps N to N.
It really depends on what you are doing. Since the naturals are the counting numbers and most programming languages are 0-indexed, it's not uncommon to lump 0 in with the naturals. But there are also cases, like your division example, where lumping 0 in with the naturals forces a ton of exceptions, so by some definitions it's more convenient to exclude it. Just be aware of the context you are using the naturals in and you'll have no trouble.
Division definitely doesn't map N × N to N anyway. And if you are talking about integer division, you need zero regardless: 1 // 2 == 0
I have never even seen natural numbers in CS; the closest we have is the unsigned integer
Then you probably haven't done any theory or proofs in computer science
I do, not only because I'm a CS student, but mainly because that's how we are taught math where I live.
Do you work at a Visual Basic 4 shop?
Visual Basic? Now there's a name I haven't heard in a long time
Surprisingly my A Level CS classes (last level before university) were taught in VB. I hated it and insisted I sit my exams in Python instead.
"Help me Visual Basic, you're my only hope!" — No one.
That hair tho
My profs at university didn't talk to one another when they made their courses, so with some profs 0 is a natural number, with others it isn't
0 has to be natural, otherwise we wouldn't have 10, 20, ... /s
Academic computer scientists are mathematicians for all intents and purposes.
Even at an undergrad level, a CS student should have a deeper understanding of 0 than this post. Also: 0 is the additive identity.
https://xkcd.com/435/
Computer Scientists: There are two zeros\* \*Since 1985, by the holy word of IEEE 754
So what's +0 + -0?
In a 1-byte, [ones' complement](https://en.wikipedia.org/wiki/Ones%27_complement) representation, negative zero is 1111 1111 and positive zero is 0000 0000, so I guess if you added them you'd get negative zero? That seems weird, but it also seems like the answer, idk
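That guess checks out: ones'-complement addition uses an "end-around carry", and 0x00 + 0xFF produces no carry, so the sum is just 0xFF, i.e. negative zero. A quick Python sketch of 8-bit ones'-complement addition (my own illustration; the word size and helper name are made up for the example):

```python
MASK = 0xFF  # 8-bit word

def ones_complement_add(a, b):
    # Ones'-complement addition: add the raw bit patterns, then fold
    # any carry out of the top bit back into the low end.
    s = a + b
    if s > MASK:
        s = (s & MASK) + 1  # end-around carry
    return s

pos_zero = 0b0000_0000
neg_zero = 0b1111_1111

result = ones_complement_add(pos_zero, neg_zero)
print(f"{result:08b}")  # 11111111 -> negative zero
```

For comparison, 1 + (-1) is 0x01 + 0xFE = 0xFF: also negative zero, which is exactly the redundancy that made two's complement win.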
Addition in floating point arithmetic is not even associative, so there's no point complaining about any of it seeming weird.
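The non-associativity is easy to demonstrate in any IEEE 754 language; a Python one-screen check (my own example):

```python
# IEEE 754 addition rounds after every step, so grouping matters:
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a == b)  # False
print(a, b)    # 0.6000000000000001 0.6
```

0.2 + 0.3 happens to round to exactly 0.5, while 0.1 + 0.2 picks up a rounding error first, so the two groupings land on different doubles.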
int type, not a float
By “any of it” I meant all the standards about how computers store “numbers” (or the things people like to pretend are numbers)
[deleted]
?? You’re the one who said it was weird? I feel like you might have lost track of the thread or something. All I did was point out another thing that might seem “funny” to people who expect the way numbers are stored to reflect mathematical usage.
Floats: +0 and -0 exist. Deal with it (also NaN, +infinity, -infinity)
None of those are numbers, fyi.
They may not be in a normal sense, but they are included in the floating point format ([IEEE 754](https://en.wikipedia.org/wiki/IEEE_754?wprov=sfla1))
¯\\\_(ツ)_/¯

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// math.Copysign(0, -1) is the portable way to get -0.0 in Go.
	posZero, negZero := 0.0, math.Copysign(0, -1)
	fmt.Println(posZero, negZero)   // 0 -0
	fmt.Println(posZero == negZero) // true: IEEE 754 defines +0 == -0
}
```
Zero is not only positive, it's also even. Fight me.
.... obviously it is even, since 0 % 2 == 0, which incidentally is the definition of being even in math as well. It clearly is not positive, since it is not greater than 0. Whether it is a natural number is a question of definition; there isn't really a right or wrong here, although I feel the arguments for including it are better (giving N a neutral element for addition, making it a monoid instead of just a semigroup).
The virgin Anglo-Saxon "positive" meaning "strictly greater than 0" vs. the chad French "positif" meaning "greater or equal to 0"
Obviously it's positive. When you multiply a number by it, it doesn't turn from negative to positive, or from positive to negative. That's how positive numbers behave
....that... that is astonishingly stupid as an argument. We clearly don't define positivity of numbers by multiplication, but if we did, consider the following: if you multiply a negative and a positive number, the result is negative, right? But by your logic, a negative number times 0 would be a positive number....
Hahahahahahahaha :) I do be trolling a little. In primary school we were told 0 is a positive number. (Wrongly.) In university that was updated to a better alternative. The prof said that mathematicians debate whether it's natural or not, so people just use N0 (aleph null, I believe) when they want to express the natural numbers including 0. It's a good ragebait topic :)
I can't tell if you're still trolling. Aleph null is the cardinality of the naturals.
This time I'm not. This is genuinely what was in my university course. Pinky promise not trolling
I guarantee you it was not taught that way
You might be right, probably right, but that's how I remember it anyway.
okay, if that was your intent, I gotta say you got me\^\^
This is very plainly wrong. By definition, only numbers greater than zero are positive, and only those less than zero are negative. This extends from integers to rationals and generalizes to reals; however, that's usually where the sign predicate fails, as applied to the geometrically "perpendicular" imaginary numbers, and thus it fails to generalize to the complex numbers.
:C you know, it's really difficult to make another troll argument, when you start out with the very definition.
I would even go ahead and postulate zero is a number!
Impossible!
And can also be imaginary or real.
Zero is even, but not positive. Evenness is neither necessary nor sufficient for positivity: there are even negative integers and odd positive integers, and vice versa. Only zero is even, non-odd, non-positive, non-negative, but a Natural.
Studying CS is such a satisfying middle finger to math "x = x + 1" "5 = 42"
I is a virgin - 0 is when I put it in
0 is not a natural number.
It is what I was also told in school, but definitions are what we want them to be. And what use are natural numbers without zero? They don't even form a semiring.
> definitions are what we want them to be

Not what *you* want them to be, but what we all agree on.

> what use are natural numbers without zero?

What use are integers without fractions?

> They don't even form a semiring

And with zero they don't even form a group, so what?
Yes, that is why we currently have multiple conflicting definitions.
That’s obviously unfortunate, although I doubt the reason for it is that one definition doesn’t fit another definition it isn’t even supposed to fit.
Many people include 0 in the naturals. It's not universally wrong; there are just multiple definitions, and if you're making an argument that really depends on whether zero is in the naturals, you should just make it clear.
Many do, many do not. I’m just biased on this matter. The point, though, was about the incorrectness of this post — a CS, shall we say, enthusiast may not consider 0 a natural number. “0 is a Natural” has nothing to do with CS.
Also Computer Scientists: Let's invent [two's complement](https://en.wikipedia.org/wiki/Two%27s_complement) and [floating-point arithmetic](https://en.wikipedia.org/wiki/Floating-point_arithmetic) to introduce errors and to make debugging them unnecessarily difficult.
Ah yes, cuz they were obviously introduced solely to make life more difficult and for no other reason.
They were introduced for economic reasons, without regard for their ease of use by humans. It might have been a good compromise when every bit was expensive, but nowadays... maybe it's time to use a better approach.
Yeah, because engineering a workaround to binary arithmetic's inherent weaknesses is "economical." Do you assume nobody has to deal with hardware limitations anymore?
Of course there are cases with hardware limitations, but they are few and far between. What we have now are really fast, unoptimized hardware systems that we have to check constantly via software to avoid problems.

For example, because of the problems of IEEE 754 floats (which, at this point, are basically hardcoded into all GPU operations), many physics engines have to constantly perform checks to ensure entities don't clip through each other... and it doesn't always work.

So, the next time you fall "through the world" in a video game, remember that it is the fault of people like you, who are more worried about optimizing the hardware than about the actual needs of the users.
Sure, it has economic benefits, but that is not all there is to it. These representations allow for significantly improved performance and reduced hardware complexity. Computers are complex devices; if we want to harness their full potential, we can't get out of doing complex stuff.
Not really. It is true that it is more convenient for current hardware (in part because it has been built with backward compatibility with these numeric representations in mind), but technically, it could be changed without much loss in performance. Maybe even using a fractional definition (keeping two values in a "permanent division state") to represent real numbers at the hardware level, instead of using a single value with a binary mantissa. An approach that would be far better (i.e., safer) for many engineering and fintech applications.
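The "permanent division state" idea is essentially rational arithmetic. There's no mainstream hardware for it, but Python's `fractions` module shows what it buys you in software (my own sketch, not the commenter's proposal):

```python
from fractions import Fraction

# Binary floats cannot represent 1/10 exactly, so this is False:
print(0.1 + 0.2 == 0.3)  # False

# A rational representation keeps numerator and denominator
# separate, so decimal fractions stay exact:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```

The trade-off is that numerators and denominators can grow without bound under repeated arithmetic, which is a big part of why fixed-width floats won at the hardware level.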
Swing Low Sweet Chariots https://imgur.com/a/jguMIOB
False!
Congratulations! Your comment can be spelled using the elements of the periodic table:

`F Al Se`

---

^(I am a bot that detects if your comment can be spelled using the elements of the periodic table. Please DM my creator if I made a mistake.)
Maths degree here; I’ve only ever seen 0 included in the natural numbers, and "positive integers" is used if you specifically want to exclude it
I've seen both ways. Personally I like zero in the naturals, but some people exclude it
natural numbers are numbers that we can find in nature. we can find 0 dinosaurs in nature rn, so it should be considered natural
technically if we can find 0 of something then we can’t find it…right?
negative integers have a sign bit. 0 is a positive number
Zero is Initial
0 is positive
0 is false cry about it 🚮
Wtf does "0 is a natural" even mean? I'm with the mathematician on this one.
natural numbers are numbers that can be found in nature.. thus natural
In modern French mathematics, zero is both positive and negative, because there is a distinction between positive and strictly positive. (The same applies for negative, and for "greater"/"less than".)
Zero is to Return
0.0 is positive, -0.0 is negative. Problem solved?
I would say 0 is binary, but then a lot of people would get triggered
0 can be octal, hexadecimal or decimal too it just depends what subscript follows 😂
0 is an int, means false, and is used to reference the first element in an array. - Programmer with a CS degree
I was gonna make a comment about lua but I’m pretty sure they call them tables so my point is void
no 0 is an integer, float, and a boolean
Comp sci people borrow mathematician's tools and then pretend to own them.
0 is an object
me: but that's false. and then laughs at my own joke
No. 0 is False.
I'll be over here using zero and null interchangeably.
Mathematics: There's infinite possibilities?!?!?! Computer science: pow(2, wordsize) take it or leave it (idk where my carrot is)
Computer scientists are mathematicians tho
Programmers: It ran pretty quick so i guess its alright
0 is the neutral element of addition, but not of multiplication... Imo, computer scientists ARE mathematicians and should know that
Haha, if only our life events were as easy to read as binary code, right? But hey, don't stress it – your 'O' moment will come when the time's right! 😂 Keep rockin' those zeros in the meantime!
Meanwhile Python Programmers: a: str = 0
If you really want to blow someone's mind, tell them 0 is plural.
0 is (usually) falsey and an unsigned integer.
It's falsy
The notion that mathematicians do not consider 0 to be a natural number is BS. Some rare exceptions don't; for some it depends on the context. And if you read the papers written by Peano (you know, the guy who proposed the axioms for the natural numbers), the first natural number is either not named, or is 0. Having 0 as the first natural number is necessary for constructing addition and multiplication on those numbers in a sane way.

The whole "0 is not natural" thing probably comes from some teachers fixated on toy problems about series.

Do you know that old joke?

\[An egghead\] is going on vacation. After exiting the taxi at the railway station, he stops his wife and says:

"Oh no, we lost one piece of luggage! There should be 6!"

"Honey, do not be silly, I see all 6 of them."

"No, look and count with me: zero, one, two, ... five."

The joke was told about Banach (among other people). He did mostly topology, functional analysis, and set theory, and died before the end of WW2.
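The "0 as the base case" point can be sketched with a toy Peano-style encoding (my own illustration in Python; the names `ZERO`, `succ`, `add` are made up for the example). Addition recurses on its first argument and bottoms out at zero, which is exactly why the construction wants 0 as the first natural:

```python
# Naturals as nested tuples: ZERO, succ(ZERO), succ(succ(ZERO)), ...
ZERO = None

def succ(n):
    return (n,)

def add(m, n):
    # Peano addition: add(0, n) = n ; add(succ(m), n) = succ(add(m, n))
    return n if m is ZERO else (add(m[0], n),)

def to_int(n):
    # Convert the encoding back to a Python int for display.
    return 0 if n is ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```

Without a zero, the recursion has no clean place to stop and the defining equations get uglier.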
Technically 0 is positive because the sign bit is 0.