1/3 = .333…

1/3 + 1/3 + 1/3 = 3/3 = 1

.333… + .333… + .333… = .999…

.999… = 1
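
A minimal worked version of the same claim, using the usual limit definition of a repeating decimal:

```latex
0.\overline{9} = \sum_{k=1}^{\infty} \frac{9}{10^{k}}
               = \lim_{n\to\infty}\left(1 - 10^{-n}\right)
               = 1
```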

Discuss

  • Saik0@lemmy.saik0.com · 1 year ago

    I don’t really think I’m “cherry picking”

    You really are.

    Virtually my whole last paragraph was ignored in my original comment. But you keep doing you.

    • myslsl@lemmy.world · 1 year ago

      I’m cherry picking, yet you cherry picked the sentence “I don’t really think I’m cherry picking” over the entirety of my previous comment to you?

      Virtually my whole last paragraph was ignored in my original comment.

      Did you not read the entire last paragraph of my first comment where I directly quoted and responded to the last paragraph of your original comment? Here, let me quote it for you. I see reading is not your strong suit.

      Quote I took from your last paragraph:

      But I do think it throws a wrench in other parts of math if we assume it’s universally true. Just like in programming languages… primarily float math that these types of issues crop up a lot, we don’t just assume that the 3.999999… is accurate, but rather that it intended 4 from the get-go, primarily because of the limits of the space we put the number in.

      My response:

      It definitely doesn’t throw a wrench into things in other parts of math (at least not in the sense of there being weird murky contradictions hiding in math due to something like this). IEEE floats just aren’t comparable. With IEEE floats you always have some finite collection of bits representing some number. The arrangement is similar to how we do scientific notation, but with a few weird quirks (like offsets in the exponent, for example) that make it kinda different. But there are only finitely many different numbers these kinds of standards can represent, because there are only finitely many bit patterns for your finite number of bits. The base-10 representation of a number does not have the same restriction on the number of digits you can use. When you write 0.999…, there aren’t just a lot (but finitely many) 9’s after the decimal point; there are infinitely many 9’s after the decimal point.
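
      As a quick illustration (a sketch in Python; the specific literal is just an example), a decimal with enough 9’s already gets rounded to the nearest representable double, which happens to be exactly 1.0:

      ```python
      import sys

      # IEEE 754 doubles carry 53 significand bits, so the gap from 1.0 to the
      # next larger representable double is 2**-52 (machine epsilon).
      print(sys.float_info.epsilon)     # 2.220446049250313e-16

      # This literal has 20 nines. The nearest representable double to that
      # decimal value is exactly 1.0, so the parser rounds it to 1.0.
      x = 0.99999999999999999999
      print(x == 1.0)                   # True

      # That is rounding to a finite bit pattern, not the mathematical claim
      # that 0.999... (infinitely many nines) equals 1.
      ```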

      In a programming context, once you start using floating point math you should avoid direct equality comparisons entirely and instead work within some error bound chosen for the kind of accuracy your problem needs. You might be able to get away with equating 4.000001 and 4 in some contexts, but in other contexts that extra 0.000001 might be significant. Ignoring these kinds of distinctions has historically been the cause of many weird and subtle bugs.
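
      A small sketch of that pattern (standard library only; the tolerances here are illustrative, not a recommendation for any particular problem):

      ```python
      import math

      a = 0.1 + 0.2
      b = 0.3

      # Direct equality fails: none of 0.1, 0.2, or 0.3 is exactly
      # representable as a binary float.
      print(a == b)                               # False
      print(a)                                    # 0.30000000000000004

      # Compare within an explicit error bound instead, chosen to match the
      # accuracy the problem actually needs.
      print(math.isclose(a, b, rel_tol=1e-9))     # True
      print(abs(a - b) < 1e-9)                    # True
      ```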

      Quote I took from your last paragraph:

      I have no reason to believe that this isn’t the case for our base10 numbering systems either.

      My response:

      The issue here is that you don’t understand functions, limits, base expansions of numbers or what the definition of notation like 0.999… actually is.
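
      For a concrete sense of the limit in question (a sketch using exact rational arithmetic), the partial sums 0.9, 0.99, 0.999, … fall short of 1 by exactly 1/10^n, a gap that can be made as small as you like:

      ```python
      from fractions import Fraction

      # Partial sums of 9/10 + 9/100 + ... computed exactly as rationals.
      # The gap to 1 is exactly 1/10**n; it shrinks toward 0, which is what
      # "0.999... = 1" means once the notation is defined as a limit.
      total = Fraction(0)
      for n in range(1, 8):
          total += Fraction(9, 10**n)
          print(n, total, 1 - total)    # e.g. 3 999/1000 1/1000
      ```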

      But you keep doing you.

      Lmao, be sure to work on that reading comprehension problem of yours.

      What are you even expecting? How am I supposed to read your mind and respond to all the super important and deep points you think you’ve made by misunderstanding basic arithmetic and calculus? Maybe the responsibility is on you to raise those points if you want a further response from me, not on me to somehow magically know what you want?