

So instead of explaining why and clarifying any misunderstanding, you chose to snarkily insult my intelligence. Very mature.
I fail to see a difference there. 10.0/3 = 3.33333333333, which you round down to 3.33 (or whatever fraction of a cent you are using), as you say, for all accounts, and then have to deal with the leftovers. If you are using fixed decimal as the article suggests, you get the same issue; if you are using integer fractions of a cent, say millicents, you get 1000000/3 = 333333, which gives you exactly the same rounding error.
This isn't a problem with the representation of numbers; it's a problem with trying to split a quantity into equal parts when it doesn't divide evenly. (And it should be noted the double gives the most accurate representation of 10/3 dollars here, so it would also be the most accurate if this operation sat in the middle of a series of calculations rather than immediately preceding an actual movement of money.)
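A quick Python sketch of that point (the whole-cent and millicent resolutions here are just illustrative choices):

```python
# Splitting $10 three ways: the one-unit leftover shows up in every
# representation, because 10 does not divide evenly by 3.

# Doubles, rounded to whole cents per account:
share = round(10.0 / 3, 2)             # 3.33
print(3 * share)                       # ~9.99 (plus a little binary noise)
print(round(10.0 - 3 * share, 2))      # 0.01 left over

# Integer millicents (fixed point): exactly the same leftover.
total_mc = 10 * 100 * 1000             # $10.00 = 1_000_000 millicents
share_mc = total_mc // 3               # 333_333 millicents per account
print(total_mc - 3 * share_mc)         # 1 millicent left over
```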
As I said before, doubles probably aren't the best way to handle money if you are dealing with high volumes or complex transactions, but they are not the disaster waiting to happen that single-precision floats are, and using a double representation, then converting to whole cents when you need to actually move real money (like a sale), is fine.
You are underestimating how precise doubles are. Summing one million doubles randomly selected from 0 to one trillion only gives a cumulative rounding error of roughly 60: that could be one million transactions of 0 to one billion dollars at 0.1-cent resolution, ending up off by a total of about 6 cents. Actually it would be better than that, since you could scale to something like thousands or millions of dollars to keep your number range closer to 1.
Sure, if you are doing very high volumes you probably don't want to do it, but for a lot of simple cases doubles are completely fine.
Edit: yeah, using the same million random numbers but dividing them all by 1000 before summing (so working in kilodollars rather than dollars) gave perfect accuracy: no rounding errors at all after one million additions of doubles in the 1e-3 to 1e9 range.
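A rough Python version of that experiment. This is a sketch, not the exact code used above: the error you see depends on the random seed and on summation order, and math.fsum is used here as the correctly rounded reference.

```python
import math
import random

random.seed(0)

# One million random "dollar amounts" between 0 and one trillion.
values = [random.uniform(0.0, 1e12) for _ in range(1_000_000)]

naive = 0.0
for v in values:                 # plain left-to-right double addition
    naive += v

exact = math.fsum(values)        # correctly rounded reference sum
print(f"total: {exact:.2f}")
print(f"error: {naive - exact:.4f}")    # cumulative rounding error, in dollars

# The rescaling from the edit: the same numbers in kilodollars.
kilo = [v / 1000.0 for v in values]
print(f"error (kilodollars): {sum(kilo) - math.fsum(kilo):.9f}")
```

Note that the measured error also depends heavily on the summation algorithm: pairwise summation (what numpy.sum does) or Kahan summation gives far smaller errors than the naive loop above.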
Single floats, sure, but doubles give plenty of accuracy unless you absolutely need zero error.
For example, taking 1000 random 12-digit ints, multiplying them by 1e9 as floats, taking pairwise differences between them, summing the answers, and dividing by 1e9 to get back to the ints gives a cumulative error of 1 part in 10^16. Assuming your original values were in dollars, that's roughly 0.001 cents of total error on a billion dollars. And that's going deliberately out of the way to make the transactions as perverse as possible.
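A hypothetical reconstruction of that experiment in Python. The original code isn't given, so the exact setup here, including summing absolute differences, is my assumption:

```python
import random
from itertools import combinations

random.seed(0)

# 1000 random 12-digit integers.
ints = [random.randrange(10**11, 10**12) for _ in range(1000)]

# Exact answer using Python's arbitrary-precision integers.
exact = sum(abs(a - b) for a, b in combinations(ints, 2))

# The same computation forced through doubles, scaled up by 1e9 first
# (values around 1e21, far beyond the 2**53 exact-integer range).
scaled = [float(n) * 1e9 for n in ints]
approx = sum(abs(a - b) for a, b in combinations(scaled, 2)) / 1e9

print(f"exact:          {exact}")
print(f"via doubles:    {approx:.1f}")
print(f"relative error: {abs(approx - exact) / exact:.3e}")
```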
I think her point was that you were doing the annoying "everyone is from the USA so I'll just talk like we all are" thing by bringing up Trump's tariffs when they were not the topic of conversation and are irrelevant to everyone outside the USA.
It won't do anything of the sort. Even if you accept the premise that artists are somehow being exploited by models learning from their previous works, all that will happen is that the AI companies will shift out of America to a jurisdiction that doesn't value extracting rents from IP above all else.
For all those cheering on the copyright mafia going after Anthropic, consider that some of the groups supporting Anthropic against this massive overreach of "we get to decide how you use our works" include:
Maybe this is not such a great thing?
Yes, I find it difficult to believe that they would mess up a dozen-line algo that sits in a prominent place in their training set, with no complicating factors. Despite what a lot of people here think, LLMs do have value for coding, even if the companies selling them make ridiculous claims about what they can do.
I find that very difficult to believe, if for no other reason than that there is an implementation on the Wikipedia page for Levenshtein distance (and Wikipedia is known to be very prominent in the training sets used for foundation models), and trying it just now gave a perfectly functional implementation.
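For reference, the textbook dynamic-programming version really is only about a dozen lines. A minimal Python sketch (the two-row variant, not necessarily the exact form on the Wikipedia page):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic two-row dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        curr = [i]                          # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
```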
No, you spouted some stuff about "trust me, I've seen it" (almost certainly relating to using single-precision floats), then an irrelevant tangent about how ten doesn't divide cleanly into three and how that's a problem for floats, when you have exactly the same problem with fixed-point/integer division.
Do you have an actual example of where double-precision floats would cause an issue? Preferably one that could be run to demonstrate it.