
178 comments

  • zcbenz

     

    4 hours ago


    It's a bug in MLX that was fixed a few days ago: https://github.com/ml-explore/mlx/pull/3083

    zozbot234

     

    4 hours ago


    So the underlying issue is that the iPhone 16 Pro SKU was misdetected as having Neural Accelerator (nax) support, and this caused silently wrong results. Not a problem with the actual hardware.

    TimByte

     

    2 hours ago


    From a debugging point of view, the author's conclusion was still completely reasonable given the evidence they had.

    llm_nerd

     

    3 hours ago


    Apple's documentation is utter garbage, but this code almost seems like a separate issue (and notably the MLX library uses loads of undocumented Metal properties, which isn't cool). The change looks like it used to allow the NAX kernel on the iPhone 17 or upcoming 18 if you're on 26.2 or later, and now allows it only on the iPhone 17 Pro or upcoming 18. I'm fairly sure the GPU arch on the A19 is 17. Limiting to the Pro variants of the "17" model is notable because the A19 Pro in the 17 Pro has a significantly changed GPU, including GPU tensor cores.
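
    For reference, gating like this keys off what the device reports to Metal. A minimal Swift sketch of the mechanism (the API calls are real Metal; the specific family check is illustrative, not MLX's actual predicate):

        import Metal

        // Libraries pick kernel paths based on the reported GPU identity/capabilities.
        if let device = MTLCreateSystemDefaultDevice() {
            print(device.name)                    // identifying string, e.g. "Apple A19 GPU"
            if device.supportsFamily(.apple9) {   // gen-style capability check
                // take the specialized (e.g. NAX) kernel path
            }
        }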

    zozbot234

     

    3 hours ago


    > The neural accelerator exists in iPhones going back many years.

    What has existed before is the Apple Neural Engine (ANE), which is very different from the newer Neural Accelerator support within the GPU blocks. In fact, MLX does not even support the ANE yet, since at least in previous versions it was hardware-limited to computing FP16 and INT8 MADDs, and not even that fast.

    llm_nerd

     

    3 hours ago


    Sure, I directly and explicitly talked about Apple's version of tensor cores in the GPU. But the ANE is by every definition a neural accelerator. Yes, I'm aware of Apple's weird branding for their tensor cores.

    "In fact MLX does not even support ANE yet"

    I didn't say otherwise. The ANE is a fantastic unit for small, power-efficient models, like extracting text from images, doing depth modelling, etc. It's not made for LLMs, or the other sorts of experimental stuff MLX is intended for. Though note that the MLX authors' reason for not supporting the ANE is that it has a "closed-source" API (https://github.com/ml-explore/mlx/issues/18#issuecomment-184...), making it unsuitable for an open-source project that didn't want to just lean on CoreML. But anyways, the ANE is fantastically fast at what it does, while sipping juice.

    In any case, the code change shown should have zero impact on the running of MLX on an iPhone 16 Pro. MLX tries to really leverage platform optimizations, so maybe another bifurcation is making the wrong choice.

    zozbot234

     

    2 hours ago


    The change's effects depend on what each SKU reports as its Metal architecture, both as an identifying string (the equivalent of running 'metal-arch' in the Mac CLI) and as a generation ('gen') number. Most likely you're misinterpreting the change as not affecting the iPhone 16 Pro, when in fact it does.

    The MLX folks have various rationales for not supporting the ANE (at least as of yet), but one of them is that any real support requires implementing explicit splits in the graph of computations, where ANE-suitable portions are to be dispatched to the ANE and everything else goes back to the GPUs. That's not necessarily trivial.
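
    A rough sketch of the kind of split being described, with hypothetical types (this is not MLX's API, just the shape of the problem):

        enum Backend { case ane, gpu }

        struct Op { let name: String; let aneCompatible: Bool }

        // Partition a linear chain of ops into maximal same-backend segments;
        // each ANE segment would be dispatched to the ANE, the rest to the GPU.
        func partition(_ graph: [Op]) -> [(Backend, [Op])] {
            var segments: [(Backend, [Op])] = []
            for op in graph {
                let backend: Backend = op.aneCompatible ? .ane : .gpu
                if segments.last?.0 == backend {
                    segments[segments.count - 1].1.append(op)
                } else {
                    segments.append((backend, [op]))
                }
            }
            return segments
        }

    Every backend switch costs a synchronization and a data transfer, which is part of why this is "not necessarily trivial".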

    pjmlp

     

    2 hours ago


    It used to be great, but those days are long gone, see the archived docs.

    embedding-shape

     

    4 hours ago


    Blog post dated 28 Jan 2026, the bug fix posted 29 Jan 2026, so I guess this story had a happy ending :)

    Still, it's a sad state of affairs that Apple seems to be fixing bugs based on which blog posts get the most attention on the internet, but I guess once they started that approach, it's hard to stop and go back to figuring out priorities on their own.

    dahcryn

     

    2 hours ago


    I think you overestimate the power of a blogpost and the speed of bugfixing at Apple for something like this.

    I almost guarantee there is no way they can read this blog post, escalate it internally, get the appropriate approval for the work item, actually work on the fix, get it through QA, and get it live in production in 3 days. That would only happen for really critical issues, and this is definitely not critical enough for that.

    spacedcowboy

     

    50 minutes ago


    Three days is, agreed, too short. A week is just about possible, though...

    I've seen a blog-post, authored a bug in Radar, assigned it to myself, and fixed it the same day. Whether it goes out in the next release is more a decision for the bug-review-board, but since the engineering manager (that would have been me) sits on that too, it's just a matter of timing and seeing if I can argue the case.

    To be fair, the closer we are to a release, the less likely a change is to be accepted unless you can really sweet-talk the rest of the BRB, and there's usually a week of baking before the actual release goes out, but that has sometimes been shrunk for developer-preview releases...

    embedding-shape

     

    2 hours ago


    Or, one of the developers of the library saw it and decided to fix it in their spare time (does that exist at Apple?) before it became a bigger thing.

    If not, talk about coincidence: someone reported an issue, and all of what you mentioned was already done before that happened, with the only thing missing being merging the code to the repository, which was done after the issue was reported. Not unheard of, but it feels less likely than "an engineer decided to fix it".

    jckahn

     

    4 hours ago


    Just goes to show that attention is all you need.

    rafaelcosta

     

    2 hours ago


    Extremely bad timing on my end then; should've waited a few more days.

    llm_nerd

     

    3 hours ago


    MLX is a fairly esoteric library seeing very little usage, mostly an attempt to foster a broader NN ecosystem on Apple devices. This isn't something that is widely affecting people, and most people simply aren't trying to run general LLMs on their iPhones.

    I don't think that fix is specific to this, but it's absolutely true that MLX tries to leverage every advantage it can find on specific hardware, so it's possible it made a bad choice on a particular device.

    mrtesthah

     

    2 hours ago


    How do you know it wasn't merely that the blog post prompted multiple people to file duplicates of the same bug in Apple's Radar system, which is ostensibly how they prioritize fixes?

    embedding-shape

     

    2 hours ago


    I don't, but the effect is the same: "this might land in the news, let's fix it before it does, since multiple people are reporting the same issue based on this public post someone made".

    syntaxing

     

    3 hours ago


    Kinda sucks how it seems like there’s no CI that runs on hardware.

  • csmantle

     

    16 hours ago


    Methodology is one thing; I can't really agree that deploying an LLM to do sums is great. Almost as hilarious as asking "What's moon plus sun?"

    But the phenomenon is another thing. Apple's numerical APIs are producing inconsistent results on a minority of devices. This is something worth Apple's attention.

    JimboOmega

     

    14 hours ago


    (This is a total digression, so apologies)

    My mind instantly answered that with "bright", which is what you get when you combine the sun and moon radicals to make 明(https://en.wiktionary.org/wiki/%E6%98%8E)

    Anyway, that question is not without reasonable answers. "Full Moon" might make sense too. No obvious deterministic answer, though, naturally.

    awesome_dude

     

    13 hours ago


    FTR the full moon was exactly 5 hours ago. (It's not without humour that this conversation occurs on the day of the full moon. :)

    layer8

     

    2 hours ago


    > Almost as hilarious as asking "What's moon plus sun?"

    It’s a reasonable Tarot question.

    CrispinS

     

    14 hours ago


    > What's moon plus sun?

    Eclipse, obviously.

    christophilus

     

    14 hours ago


    That’s sun minus moon. Moon plus sun is a wildly more massive, nuclear furnace of a moon that also engulfs the earth.

    spacedcowboy

     

    47 minutes ago


    Not sure about that. You can't have an eclipse without both the moon and the sun. Ergo, the eclipse is the totality (sorry!) of the sun and moon, or sun+moon (+very specific boundary conditions).

    Still think it was a good response :)

    SauntSolaire

     

    10 hours ago


    Reminds me of this AI word combination game recently shared on HN, with almost exactly these mechanics:

    https://neal.fun/infinite-craft/

    For the record, Sun+Moon is indeed eclipse.

    fsckboy

     

    10 hours ago


    >Moon plus sun is a wildly more massive, nuclear furnace of a moon that also engulfs the earth.

    I just looked up the mass of the sun vs the mass of the moon (roughly 2×10^30 kg vs 7×10^22 kg), and the elemental composition of the sun: the moon would entirely disappear into the insignificant digits of trace elements, which are in the range of 0.01% of the sun. I could be off by orders of magnitude all over the place and it would still disappear.
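
    Back-of-envelope, in code (masses from memory, roughly right):

        let sunMass  = 1.99e30   // kg
        let moonMass = 7.35e22   // kg
        print(moonMass / sunMass)   // ≈ 3.7e-8, i.e. about 0.0000037% of the sun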

    mcny

     

    13 hours ago


    Wait so moon plus sun != sun plus moon? :Thinking:

    chii

     

    10 hours ago


    celestial objects don't need to obey algebraic commutativity!

    direwolf20

     

    5 hours ago


    I wonder if SCP-1313 does

    dcrazy

     

    13 hours ago


    This thread reminds me of Scribblenauts, the game where you conjure objects to solve puzzles by describing them. I suspect it was an inspiration for Baba Is You.

    Der_Einzige

     

    13 hours ago


    Scribblenauts was also an early precursor to modern GenAI/word embeddings. I constantly bring it up in discussions of the history of AI for this reason.

    veqq

     

    10 hours ago


    Could you explain? :3

    lkjdsklf

     

    10 hours ago


    Here I was, like an idiot, thinking it was moonlight

    tessierashpool9

     

    4 hours ago


    but then eclipse + moon = sun, which doesn't make much sense either :/

    AuryGlenz

     

    12 hours ago


    Or potentially a sun that lasts slightly longer?

    nkrisc

     

    2 hours ago


    The set of celestial objects visible to the naked eye during the day.

    jraph

     

    7 hours ago


    Moon plus sun would be sun because the sun would be an absorbing element.

    acka

     

    7 hours ago


    Moon implies there is a planet the moon is orbiting. So unless the planet and its moon are too close to the sun the long term result could also be: solar system.

    jraph

     

    7 hours ago


    This just goes to show how awfully defined that plus operation is.

    speed_spread

     

    4 hours ago


    That's operator overloading for you.

    geuis

     

    13 hours ago


    Not obvious. Astronomers are actively looking for signatures of exomoons around exoplanets. So "sun plus moon" could mean that too.

    xattt

     

    13 hours ago


    The OP said moon + sun, rather than sun + moon. We have no idea yet if celestial math is non-communicative.

    BenjiWiebe

     

    11 hours ago


    *commutative

    tbrownaw

     

    11 hours ago


    Well, that too.

    godelski

     

    10 hours ago


    Well, you find the signature by looking for a dip in the sun's luminosity. So minus might be the better relationship here.

    IsTom

     

    6 hours ago


    INSUFFICIENT DATA FOR MEANINGFUL ANSWER.

    TimByte

     

    2 hours ago


    The scary part isn't "LLMs doing sums." It's that the same deterministic model, same weights, same prompt, same OS, produces different floating-point tensors on different devices.

    idk1

     

    7 hours ago


    As an aside, one of my very nice family members likes tarot card reading, and I think you'd get an extremely different answer to "What's moon plus sun?" Since they're opposites, I'd guess something like "mixed signals or insecurity get resolved by openness and real communication." The range of answers to that question is kind of fascinating. As a couple of other people have mentioned, it could mean loads of things, so I thought I'd add one in there.

    I'll just add that if you think this advice applies to you, that's the https://en.wikipedia.org/wiki/Barnum_effect

    nonoesp

     

    5 hours ago


    > What's moon plus sun?

    "Monsoon," says ChatGPT.

    embedding-shape

     

    4 hours ago


    "moonsun" says JavaScript, 1-0 to JS I'd say.

  • tgma

     

    8 hours ago


    The author is assuming that Metal is compiled to the ANE in MLX. MLX is by and large GPU-based and does not utilize the ANE, barring some community hacks.

    woadwarrior01

     

    29 minutes ago


    What community hacks?

  • DustinEchoes

     

    15 hours ago


    I wish he would have tried on a different iPhone 16 Pro Max to see if the defect was specific to that individual device.

    crossroadsguy

     

    14 hours ago


    So true! And as any sane Apple user or the standard-template Apple Support person would have suggested (and as they actually suggest): did they try reinstalling the OS from scratch after resetting the data (having of course backed it up first, preferably with a hefty iCloud+ plan)? Because that's the thing to do for issues like this, and it's very easy.

    post-it

     

    11 hours ago


    Reinstalling the OS sucks. I need to pull all my bank cards out of my safe and re-add their CVVs to the wallet, and sometimes authenticate over the phone. And re-register my face. And log back in to all my apps. It can take an hour or so, except it's spread out over weeks as I open an app and realize I need to log in a dozen times.

    RulerOf

     

    5 hours ago


    There was a magical period. I suspect it ended with the introduction of the Secure Enclave. But maybe it was a little later.

    An encrypted iTunes backup of a device was a perfect image. Take the backup, pull the SIM card, restore the backup to a new phone with the SIM card installed, and it was like nothing had happened.

    No reauthentication. No missing notifications. No lost data. Ever.

    It was nice.

    throwaway132448

     

    2 hours ago


    Security theatre killed this. Everyone must be assumed to be a moron incapable of living with the consequences of their own choices at all times.

    fragmede

     

    2 hours ago


    It's not theater. If an attacker can duplicate your device, that's a problem.

    throwaway132448

     

    1 hour ago


    Says who? How do you know what’s on my device, how much it matters to me, and what countless other options I have for recourse if that did happen?

    JCharante

     

    4 hours ago


    > And log back in to all my apps

    Isn’t this built in when transferring devices? Are backups different?

    TimByte

     

    2 hours ago


    Yeah, that would've been the cleanest experiment.

    jajuuka

     

    11 hours ago


    Latest update at the bottom of the page.

    "Well, now it's Feb. 1st and I have an iPhone 17 Pro Max to test with and... everything works as expected. So it's pretty safe to say that THAT specific instance of iPhone 16 Pro Max was hardware-defective."

    Someone

     

    8 hours ago


    That logic is somewhat [1] correct, but it doesn't say anything about whether all, some, or only this particular iPhone 16 Pro Max is hardware-defective.

    [1] as the author knows (“MLX uses Metal to compile tensor operations for this accelerator. Somewhere in that stack, the computations are going very wrong”) there’s lots of soft- and firmware in-between the code being run and the hardware of the neural engine. The issue might well be somewhere in those.

  • watt

     

    6 hours ago


    Does it bother anyone else that the author drops "MiniMax" there in the article without bothering to explain or footnote what that is? (I could look it up, but I think article authors should call out these things).

    embedding-shape

     

    4 hours ago


    There are tons of terms that aren't explained that some people (like me) might not understand. I think it's fine that some articles have a particular audience in mind and write specifically for them; in this case it seems to be "Apple mobile developers who make LLM inference engines", so it's not so unexpected that there are terms I (and others) don't understand.

    JCharante

     

    4 hours ago


    I think articles are worse when they have to explain everything someone off the street might not know.

    spockz

     

    4 hours ago


    Yes, maybe. But it would be nice if there were footnotes or tooltips. Putting the explanation in the text itself breaks the flow of the text, so that would indeed make it worse.

    cowsandmilk

     

    3 hours ago


    MiniMax is a company. It isn’t a term of art or something. It would be like defining Anthropic.

    fnord77

     

    20 minutes ago


    minimax is an algorithm for choosing the next move in a two-player zero-sum game, built on the minimax theorem John von Neumann proved in 1928
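
    For the curious, a minimal sketch of the algorithm on a toy game tree:

        // Toy minimax: maximize on your turn, assume the opponent minimizes.
        struct Node {
            let score: Int?            // non-nil at terminal positions
            var children: [Node] = []
        }

        func minimax(_ node: Node, maximizing: Bool) -> Int {
            if let score = node.score { return score }
            let scores = node.children.map { minimax($0, maximizing: !maximizing) }
            return maximizing ? scores.max()! : scores.min()!
        }

        let tree = Node(score: nil, children: [
            Node(score: nil, children: [Node(score: 3), Node(score: 5)]),  // opponent picks 3
            Node(score: nil, children: [Node(score: 2), Node(score: 9)]),  // opponent picks 2
        ])
        print(minimax(tree, maximizing: true))   // 3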

  • raincole

     

    17 hours ago


    Low-level numerical optimizations are often not reproducible. For example: https://www.intel.com/content/dam/develop/external/us/en/doc... (2013)

    But it's still surprising that the LLM doesn't work on the iPhone 16 at all. After all, LLMs are known for their tolerance to quantization.

    bri3d

     

    17 hours ago


    Yes, "floating point accumulation doesn't commute" is a mantra everyone should have in their head, and when I first read this article, I was jumping at the bit to dismiss it out of hand for that reason.

    But, what got me about this is that:

    * every other Apple device delivered the same results

    * Apple's own LLM silently failed on this device

    to me that behavior suggests an unexpected failure rather than a fundamental issue; it seems Bad (TM) that Apple would ship devices where their own LLM didn't work.
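
    (For anyone who hasn't internalized the mantra, a minimal Swift illustration of why accumulation order matters — the classic absorption example:)

        let a: Float = 1e8
        let b: Float = -1e8
        let c: Float = 1.0
        print((a + b) + c)   // 1.0
        print(a + (b + c))   // 0.0 — the 1.0 is absorbed into -1e8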

    DavidVoid

     

    5 hours ago


    I would go even further and state that "you should never assume that floating point functions will evaluate the same on two different computers, or even on two different versions of the same application", as the results of floating point evaluations can differ depending on platform, compiler optimizations, compilation-flags, run-time FPU environment (rounding mode, &c.), and even memory alignment of run-time data.

    There's a C++26 paper about compile time math optimizations with a good overview and discussion about some of these issues [P1383]. The paper explicitly states:

    1. It is acceptable for evaluation of mathematical functions to differ between translation time and runtime.

    2. It is acceptable for constant evaluation of mathematical functions to differ between platforms.

    So C++ has very much accepted the fact that floating point functions should not be presumed to give identical results in all circumstances.

    Now, it is of course possible to ensure that floating point-related functions give identical results on all your target machines, but it's usually not worth the hassle.

    [P1383]: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p13...
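
    One small, concrete instance of the compiler/hardware dependence: whether a multiply-add is fused. A Swift sketch (Float.addingProduct is the fused form, computed with a single rounding):

        let x: Float = 1.0000001
        let z: Float = -1.0000002
        print(x * x + z)              // two roundings: prints 0.0
        print(z.addingProduct(x, x))  // one rounding (FMA): prints ~1.42e-14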

    physicsguy

     

    5 hours ago


    Even the exact same source code compiled with different compilers, or the same compiler with different compiler options.

    The Intel compiler, e.g., uses less than IEEE 754 precision for floating-point ops by default.

    sva_

     

    15 hours ago


    > floating point accumulation doesn't commute

    It is commutative (except for NaN). It isn't associative though.

    ekelsen

     

    14 hours ago


    I think it commutes even when one or both inputs are NaN? The output is always NaN.

    DavidVoid

     

    7 hours ago


    Unless you compile with fast-math ofc, because then the compiler will assume that NaN never occurs in the program.

    addaon

     

    14 hours ago


    NaNs are distinguishable. /Which/ NaN you get doesn't commute.

    ekelsen

     

    13 hours ago


    I guess at the bit level, but not at the level of computation? Anything that relies on bit patterns of nans behaving in a certain way (like how they propagate) is in dangerous territory.

    addaon

     

    12 hours ago


    > Anything that relies on bit patterns of nans behaving in a certain way (like how they propagate) is in dangerous territory.

    Why? This is well specified by IEEE 754. Many runtimes (e.g. for Javascript) use NaN boxing. Treating floats as a semi-arbitrary selection of rational numbers plus a handful of special values is /more/ correct than treating them as real numbers, but treating them as actually specified does give more flexibility and power.

    ekelsen

     

    11 hours ago


    Can you show me where in the ieee spec this is guaranteed?

    My understanding is the exact opposite - that it allows implementations to return any NaN value at all. It need not be any that were inputs.

    It may be that JavaScript relies on it and that has become more binding than the actual spec, but I don't think the spec actually guarantees this.

    Edit: actually it turns out nan-boxing does not involve arithmetic, which is why it works. I think my original point stands, if you are doing something that relies on how bit values of NaNs are propagated during arithmetic, you are on shaky ground.

    xmcqdpt2

     

    3 hours ago


    See 6.2.3 in the 2019 standard.

    > 6.2.3 NaN propagation

    > An operation that propagates a NaN operand to its result and has a single NaN as an input should produce a NaN with the payload of the input NaN if representable in the destination format.

    > If two or more inputs are NaN, then the payload of the resulting NaN should be identical to the payload of one of the input NaNs if representable in the destination format. This standard does not specify which of the input NaNs will provide the payload.

    ekelsen

     

    44 minutes ago


    As the comment below notes, the language "should" means it is recommended, but not required. And there are indeed platforms that do not implement the recommendation.

    xmcqdpt2

     

    23 minutes ago


    Oh right sorry. That is confusing.

    addaon

     

    10 hours ago


    Don't have the spec handy, but specifically binary operations combining two NaN inputs must result in one of the input NaNs. For all of Intel SSE, AMD SSE, PowerPC, and ARM, the left-hand operand is returned if both are signaling or both are quiet. x87 does weird things (but when doesn't it?), and ARM does weird things when mixing signaling and quiet NaNs.

    ekelsen

     

    9 hours ago


    I also don't have access to the spec, but the people writing Rust do and they claim this: "IEEE makes almost no guarantees about the sign and payload bits of the NaN"

    https://rust-lang.github.io/rfcs/3514-float-semantics.html

    See also this section of wikipedia https://en.wikipedia.org/wiki/NaN#Canonical_NaN

    "On RISC-V, most floating-point operations only ever generate the canonical NaN, even if a NaN is given as the operand (the payload is not propagated)."

    And from the same article:

    "IEEE 754-2008 recommends, but does not require, propagation of the NaN payload." (Emphasis mine)

    I call bullshit on the statement "specifically binary operations combining two NaN inputs must result in one of the input NaNs." It is definitely not in the spec.

    j16sdiz

     

    7 hours ago


    Blame the long and confusing language in spec:

    > For an operation with quiet NaN inputs, other than maximum and minimum operations, if a floating-point result is to be delivered the result shall be a quiet NaN which should be one of the input NaNs.

    The same document says:

    > shall -- indicates mandatory requirements strictly to be followed in order to conform to the standard and from which no deviation is permitted (“shall” means “is required to”)

    > should -- indicates that among several possibilities, one is recommended as particularly suitable, without mentioning or excluding others; or that a certain course of action is preferred but not necessarily required; or that (in the negative form) a certain course of action is deprecated but not prohibited (“should” means “is recommended to”)

    i.e. it is required to be a quiet NaN, and recommended to be one of the input NaNs.
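
    A quick Swift probe of exactly this distinction (the result must be a quiet NaN; which payload survives is platform-defined):

        let n1 = Float(nan: 1, signaling: false)
        let n2 = Float(nan: 2, signaling: false)
        let r = n1 + n2
        print(r.isNaN)                          // true — required
        print(String(r.bitPattern, radix: 16))  // payload — recommended only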

    ekelsen

     

    55 minutes ago


    Thanks for the direct evidence that the output NaN is not required to be one of the input NaNs.

    danpalmer

     

    16 hours ago


    FYI, the saying is "champing at the bit", it comes from horses being restrained.

    mylifeandtimes

     

    14 hours ago


    hey, I appreciate your love of language and sharing with us.

    I'm wondering if we couldn't re-think "bit" to the computer science usage instead of the thing that goes in the horse's mouth, and what it would mean for an AI agent to "champ at the bit"?

    What new sayings will we want?

    nilamo

     

    14 hours ago


    Byting at the bit?

    odo1242

     

    14 hours ago


    chomping at the bit

    danpalmer

     

    14 hours ago


    Actually it was originally "champing" – to grind or gnash teeth. The "chomping" (to bite) alternative cropped up more recently as people misheard and misunderstood, but it's generally accepted as an alternative now.

    kortilla

     

    13 hours ago


    It’s actually accepted as the primary now and telling people about “champing” is just seen as archaic.

    danpalmer

     

    13 hours ago


    Do you have a source on this, or a definition for what it means to be "primary" here? All I can find is sources confirming that "champing" is the original and more technically correct, but that "chomping" is an accepted variant.

    BeetleB

     

    14 hours ago


    As a sister comment said, floating point computations are commutative, but not associative.

    a * b = b * a for all "normal" floating point numbers.

  • Buttons840

     

    17 hours ago


    I clicked hoping this would be about how old graphing calculators are generally better math companions than a phone.

    The best way to do math on my phone I know of is the HP Prime emulator.

    watersb

     

    10 hours ago

    [ - ]

    PCalc -- because it runs on every Apple platform since the Mac Classic:

    https://pcalc.com/mac/thirty.html

    My other favorite calculator is free42, or its larger display version plus42

    https://thomasokken.com/plus42/

    For a CAS tool on a pocket mobile device, I haven't found anything better than MathStudio (formerly SpaceTime):

    https://mathstud.io

    You can run that in your web browser, but they maintain a mobile app version. It's like a self-hosted Wolfram Alpha.

    Melatonic

     

    9 hours ago


    The last one was interesting but both apps haven't been updated in 4 years. Hard to pay for something like that.

    They do have some new AI math app that's regularly updated

    xoa

     

    14 hours ago


    My personal favorite is iHP48 (previously I used m48+ before it died) running an HP 48GX with MetaKernel installed, as I did through college. Still just so intuitive and fast to me.

    wolvoleo

     

    14 hours ago


    I still have mine. Never use it though as I'm not handy with RPN anymore. :'(

    xp84

     

    14 hours ago


    I was pretty delighted to realize I could now delete the lame Calculator.app from my iPhone and replace it with something of my choice. For now I've settled on NumWorks, which is apparently an emulator of a modern upstart physical graphing calc that has made some inroads into schools. And of course, you can make a Control Center button to launch an app, so that's what I did.

    Honestly, the main beef I have with Calculator.app is that on a screen this big, I ought to be able to see several previous calculations and scroll up if needed. I don't want an exact replica of a 1990s 4-function calculator like the default is (ok, it has more digits and the ability to paste, but besides that, adds almost nothing).

    vscode-rest

     

    37 minutes ago


    Calculator.app does have history now FWIW; it goes back to 2025 on my device. And you can make the default vertical layout a scientific calculator now too.

    Also it does some level of symbolic evaluation: sin^-1(cos^-1(tan^-1(tan(cos(sin(9)))))) == 9, which is a better result than many standalone calculators give.

    Also it has a library of built-in unit conversions, including live-updating currency conversions. You won't see that on a TI-89!

    And I just discovered it actually has built-in 2D/3D graphing. Now the question is whether it allows parametric graphing like the macOS one…

    All that said, obviously the TI-8X family holds a special place in my heart, as TI-BASIC was my first language. I just don't see a reason to use one day to day any more.

    Buttons840

     

    14 hours ago


    I looked at that calculator. But HP Prime and TI-89 have CAS systems that can do symbolic math, so I prefer to emulate them.

    TimByte

     

    2 hours ago


    The HP Prime emulator still wins for actually solving equations.

    VorpalWay

     

    17 hours ago


    I run a TI 83+ emulator on my Android phone when I don't have my physical calculator at hand. Same concept, just learned a different brand of calculators.

    varun_ch

     

    15 hours ago


    Built-in calculator apps are surprisingly underbaked... I'm surprised neither of the big two operating systems has elected to ship something comparable to a real calculator. It would be nice if we could preview the whole expression as we type it.

    I use the NumWorks emulator app whenever I need something more advanced. It's pretty good https://www.numworks.com/simulator/

    josephg

     

    8 hours ago


    That’s certainly an improvement - but why can’t I modify a previous expression? Or tap to select previous expressions?

    What I want is something like a repl. I want to be able to return to an earlier expression, modify it, assign it to a variable, use that variable in another expression, modify the variable and rerun and so on.
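
    A rough sketch of that idea using Foundation's NSExpression as a stand-in parser (string substitution for variables is a hack, but it shows the shape):

        import Foundation

        var bindings: [String: Double] = [:]

        // Evaluate a line, bind the result to a name, and allow earlier
        // names to appear in later expressions.
        func eval(_ line: String, as name: String) -> Double? {
            var expr = line
            for (k, v) in bindings {
                expr = expr.replacingOccurrences(of: k, with: "(\(v))")
            }
            guard let value = NSExpression(format: expr)
                .expressionValue(with: nil, context: nil) as? Double else { return nil }
            bindings[name] = value
            return value
        }

        print(eval("2 + 3 * 4", as: "a") ?? .nan)   // 14.0
        print(eval("a / 7", as: "b") ?? .nan)       // 2.0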

    varun_ch

     

    1 hour ago


    I think on the NumWorks you can use the arrow keys to pull up an old expression. I think it would be really cool if someone built out an interpreted, nicely rendered calculator language/REPL that could do variables and stuff. Might be an interesting idea.

    nickorlow

     

    11 hours ago


    Anytime I have to do some serious amount of math, I have to go dig around and find my TI-84; everything is just burned into muscle memory.

    shiroiuma

     

    12 hours ago


    I use the "RealCalc" app on my phone. It's pretty similar to my old HP48.

  • johngossman

     

    16 hours ago


    Posting some code that reproduces the bug could help not only Apple but you and others.
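
    Something quite small would do; a sketch in Swift, where `gpuMatmul` is a hypothetical stand-in for the MLX/Metal path under test:

        import Accelerate

        // Reference matmul on the CPU via vDSP (n x n, row-major).
        func cpuMatmul(_ a: [Float], _ b: [Float], n: Int) -> [Float] {
            var c = [Float](repeating: 0, count: n * n)
            vDSP_mmul(a, 1, b, 1, &c, 1, vDSP_Length(n), vDSP_Length(n), vDSP_Length(n))
            return c
        }

        func maxAbsDiff(_ x: [Float], _ y: [Float]) -> Float {
            zip(x, y).map { abs($0.0 - $0.1) }.max() ?? 0
        }

        // let diff = maxAbsDiff(cpuMatmul(a, b, n: 64), gpuMatmul(a, b, n: 64))
        // Small diffs are expected from accumulation order; results orders of
        // magnitude apart (as in the article) point at a real bug.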

  • _kulang

     

    16 hours ago


    Maybe this is why my damn keyboard predictive text is so gloriously broken

    sen

     

    16 hours ago


    Oh it's not just me?

    Typing on my iPhone in the last few months (~6 months?) has been absolutely atrocious. I've tried disabling/enabling every combination of keyboard settings I can think of, but the predictive text just randomly breaks, or it just gives up and stops correcting anything at all.

    macintux

     

    15 hours ago


    I haven't watched the video, but clearly there's a broad problem with the iOS keyboard recently.

    https://news.ycombinator.com/item?id=46232528 ("iPhone Typos? It's Not Just You - The iOS Keyboard is Broken")

    acdha

     

    15 hours ago


    It’s not just you, and it got bad on my work iPhone at the same time so I know it’s not failing hardware or some customization since I keep that quite vanilla.

    taneq

     

    15 hours ago


    It’s gotten so bad that I’m half convinced it’s either (a) deliberately trolling, or (b) ‘optimising’ for speech to text adoption.

  • TimByte

     

    2 hours ago


    The real lesson here isn't even about Apple. It's about debugging culture.

  • ryeguy_24

     

    3 hours ago


    What expense app are you building? I really want an app that helps me categorize transactions for budgeting purposes. Any recommendations?

  • Metacelsus

     

    14 hours ago


    >"What is 2+2?" apparently "Applied.....*_dAK[...]" according to my iPhone

    At least the machine didn't say it was seven!

    tolciho

     

    12 hours ago


    Maybe Trurl and Klapaucius were put in charge of QA.

  • mungoman2

     

    10 hours ago


    Good article. Would have liked to see them create a minimal test case, to conclusively show that the results of math operations are actually incorrect.

  • bri3d

     

    17 hours ago


    I love to see real debugging instead of conspiracy theories!

    Did you file a radar? (silently laughing while writing this, but maybe there's someone left at Apple who reads those)

    djmips

     

    9 hours ago


    IKR - this is very typical

  • nickorlow

     

    11 hours ago


    I'd think other apps that use the neural engine would also show weird behavior. It would've been interesting to try a few App Store apps and see whether they misbehave too.

  • swyx

     

    8 hours ago


    > Update on Feb. 1st:

    > Well, now it's Feb. 1st and I have an iPhone 17 Pro Max to test with and... everything works as expected. So it's pretty safe to say that THAT specific instance of iPhone 16 Pro Max was hardware-defective.

    nothing to see here.

  • z3t4

     

    8 hours ago


    Neural nets / AI are very bad at math; they can only produce what's in the training data. So if you have trained a model on 1+1 through 8+8, it can't do 9+9. It's not like a child's brain that can draw logical conclusions.

  • ftyghome

     

    13 hours ago


    I'd also like to see whether the same error happens on another phone of exactly the same model.

  • thinkbud

     

    6 hours ago


    So the LLM is working as intended?

  • refulgentis

     

    16 hours ago


    .

    bri3d

     

    16 hours ago


    Can you read the article a little more closely?

    > - MiniMax can't fit on an iPhone.

    They asked MiniMax on their computer to make an iPhone app that didn't work.

    It didn't work using the Apple Intelligence API. So then:

    * They asked Minimax to use MLX instead. It didn't work.

    * They Googled and found a thread where Apple Intelligence also didn't work for other people, but only sometimes.

    * They HAND WROTE the MLX code. It didn't work. They isolated the step where the results diverged.

    > Better to dig in a bit more.

    The author already did 100% of the digging and then some.

    Look, I am usually an AI rage-enthusiast. But in this case the author did every single bit of homework I would expect and more, and still found a bug. They rewrote the test harness code without an LLM. I don't find the results surprising insofar as I wouldn't expect MAC to converge across platforms, but the fact that Apple's own LLM doesn't work on their own hardware, and their own math is an order of magnitude off, is a reasonable bug report, in my book.

    refulgentis

     

    16 hours ago


    Emptied out post, thanks for the insight!

    Fascinating the claim is Apple Intelligence doesn't work altogether. Quite a scandal.

    EDIT: If you wouldn't mind, could you edit out "AI rage enthusiast" you edited in? I understand it was in good humor, as you describe yourself that way as well. However, I don't want to eat downvotes on an empty comment that I immediately edited when you explained it wasn't minimax! People will assume I said something naughty :) I'm not sure it was possible to read rage into my comment.

    LoganDark

     

    16 hours ago

    [ - ]

    > Fascinating the claim is Apple Intelligence doesn't work altogether. Quite a scandal.

    No, the claim is their particular device has a hardware defect that causes MLX not to work (which includes Apple Intelligence).

    > EDIT: If you wouldn't mind, could you edit out "AI rage enthusiast" you edited in? I understand it was in good humor, as you describe yourself that way as well. However, I don't want to eat downvotes on an empty comment that I immediately edited when you explained! People will assume I said something naughty :) I'm not sure it was possible to read rage into my comment.

    Your comment originally read:

    > This is blinkered.

    > - MiniMax can't fit on an iPhone.

    > - There's no reason to expect models to share OOMs for output.

    > - It is likely this is a graceful failure mode for the model being far too large.

    > No fan of Apple's NIH syndrome, or it manifested as MLX.

    > I'm also no fan of "I told the robot [vibecoded] to hammer a banana into an apple. [do something impossible]. The result is inedible. Let me post to HN with the title 'My thousand dollars of fruits can't be food' [the result I have has ~nothing to do with the fruits]"

    > Better to dig in a bit more.

    Rather than erase it, and invite exactly the kind of misreading you don't want, you can leave it... honestly, transparently... with your admission in the replies below. And it won't be downvoted as much as when you're trying to manipulate / make requests of others to try to minimize your downvotes. Weird... voting... manipulating... stuff, like that, tends to be frowned upon on HN.

    You have more HN karma than I do, even, so why care so much about downvotes...

    If you really want to disown something you consider a terrible mistake, you can email the HN mods to ask for the comment to be dissociated from your account. Then future downvotes won't affect your karma. I did this once.

    fragmede

     

    14 hours ago


    Oh no, all my meaningless internet points, gone!

    mikestew

     

    14 hours ago


    > Then future downvotes won't affect your karma.

    Who cares? The max amount of karma loss is 4 points; we can afford to eat our downvotes like adults.

    LoganDark

     

    14 hours ago


    Huh. I thought the minimum comment score was -4 (which would make the maximum amount of karma loss 5, since each comment starts at 1 point), but I didn't know if that was a cap on karma loss or just a cap on comment score.

  • dav43

     

    14 hours ago


    My thousand dollar iPhone can't even add a contact from a business card.