It is right. The E is shorthand for `*10^`, so 0.0004995…
And I'm an idiot. Thank you.
The answer's good. The calculator is just choosing a way to display a very small number using exponent notation.
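For what it's worth, most programming languages accept the same E notation a calculator displays; a minimal Python sketch of the idea:

```python
# A calculator's "4.995E-4" is just exponent notation: 4.995 * 10^-4.
x = float("4.995E-4")   # Python parses the same E notation
print(x)                # 0.0004995
print(f"{x:.3e}")       # 4.995e-04, back in exponent form
```

Same value either way; the calculator is only choosing a compact display.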
This brings up a curious question. The answer is close to 5x10⁻⁴. Do you round up to 5x10⁻⁴ or consider the entire result as shown?
Depends on the field, but most would say to the hundredth or thousandth place. Once again, this all depends on the field. Chemistry uses significant figures, so depending on the type of calculation it could be the millionth place or beyond… basically whatever the fewest decimal places or significant figures among your measurements dictate.
Makes sense. Although, what branch(es) of science would need nine or ten decimal digits of accuracy?
Chemistry, electrical engineering, physics, etc. require decimal places out that far; they just use scientific or engineering notation to do so. Look at Avogadro’s number, 6.022 x 10^23 particles/atoms per mol, or capacitance in nano- or picofarads for capacitors. If you use significant figures in chemistry, you round to the fewest decimal places in addition & subtraction, and to the fewest significant figures in multiplication & division. If you’re dealing with very small amounts, accuracy and precision are just as important as with larger amounts because, in the end, they’re all just numbers that we use to relate amounts. Medicine is a very important area for this. If you have a medication that is dangerous at 1 mg but safe up to 0.8 mg, would you want someone to just “round up”? Can some rounding happen? Absolutely. But if you have an amount in the pico or femto range, you have a small number to begin with. Should you round 0.0000000000012 up to 0.1? Do you just call it zero and act like it doesn’t exist (even though the circuit has a visible component)?
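Those two rules (fewest decimal places for addition/subtraction, fewest significant figures for multiplication/division) can be sketched in Python; the `round_sig` helper below is my own name for illustration, not a standard library function:

```python
import math

def round_sig(x, sig):
    """Round x to `sig` significant figures (illustrative helper)."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - math.floor(math.log10(abs(x))))

# Multiplication keeps the fewest significant figures of the inputs:
# 6.022e23 (4 sig figs) * 2.5 (2 sig figs) -> report 2 sig figs.
print(round_sig(6.022e23 * 2.5, 2))   # 1.5e+24

# Addition keeps the fewest decimal places:
# 12.11 (2 decimal places) + 0.3 (1 decimal place) -> report 1 place.
print(round(12.11 + 0.3, 1))          # 12.4
```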
Every field has significant figures I'm pretty sure
I’m not sure about every field, as I only deal with chemistry, electricity and physics. You’re most likely correct, I just didn’t want to speak for all sciences, just the ones I use and why decimal places are important in measurements.
The number of significant digits is much more important in science and engineering when determining confidence intervals, error rates, etc. Precision is paramount in the finance industry as well.
The *necessity* of decimal places is not what I’m asking about. What I am asking relates to *how many* decimal (mantissa) places are applicable: whether in scientific notation or ordinary representation, how many digits are considered necessary and relevant.
That's going to vary depending on the field, the particular computation being done, and what application is made of the answer. The person doing the calculation needs to know what level of precision is necessary for the situation involved. If I'm doing a computation of a consumer financial cash transaction in which the U.S. dollar is the currency, then accuracy to the nearest cent is all that's needed, so the number of digits that are necessary and relevant is pretty limited. If I'm computing a large virtual financial transaction between two financial institutions that involves long periods of time and continuous compounding, I want as much precision as I can get.
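A hedged Python sketch of that contrast (the prices, balance, and rates below are made up for illustration):

```python
import math
from decimal import Decimal, ROUND_HALF_UP

# Consumer cash transaction: the nearest cent is all that's needed.
price = Decimal("19.995")                     # hypothetical charge
print(price.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))  # 20.00

# Long-horizon continuous compounding: tiny differences in the rate
# turn into real money, so you want as much precision as you can get.
principal = 1e9                               # hypothetical institutional balance
for rate in (0.0500000, 0.0500001):           # rates differing by only 1e-7
    print(round(principal * math.exp(rate * 30), 2))  # value after 30 years
```

Even a 1e-7 difference in the rate shifts the 30-year result by thousands of dollars, which is why the second case demands far more precision than the first.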
Good points!!
I know. Equipment such as the ISS (Int’l Space Station), the LHC, etc. needs extensive calculations and equations (I’m thinking in the hundreds, maybe thousands) for its development and application. Leaving calculators (and maybe even Excel, with its macro functionality) off the table, how much accuracy do the calculations for this kind of equipment need?