35 comments

  • vlovich123 (5 hours ago)

    > Additionally, in C++, requesting that heap memory also requires a syscall every time the container geometrically changes size

    That is not true: no allocator I know of (and certainly not the default glibc allocator) allocates memory that way. It only does a syscall when it doesn’t have free userspace memory to hand out, but it overallocates that memory and also reuses memory you’ve already freed.

  • aidenn0 (10 hours ago)

    Exceptions in C++ are never zero-overhead. There is a time-space tradeoff for performance of uncaught exceptions, and G++ picks space over time.

    mpyne (7 hours ago)

    There's a time-space tradeoff to basically any means of error checking.

    Including checking return codes instead of exceptions. It's even possible for exceptions as implemented by g++ in the Itanium ABI to be cheaper than the code that would be used for consistently checking return codes.

    netr0ute (9 hours ago)

    > G++ picks space over time

    By definition, that's zero-overhead because Ultrassembler doesn't care about space.

    aidenn0 (9 hours ago)

    Okay, then a traditional setjmp/longjmp implementation is zero-overhead because I don't care about space or time!

  • netr0ute (13 hours ago)

    Hi everyone, I'm the author of this article.

    Feel free to ask me any questions to break the radio silence!

    benreesman (13 hours ago)

    Nice work and good writeup. I think most of that is very sound practice.

    The codegen switch with the offsets is in everything; the first time I saw it was in the Rhino JS bytecode compiler in maybe 2006, and I've written it a dozen times since. Still clever that you worked it out from first principles.

    There are some modern C++ libraries that do frightening things with SIMD that might give your bytestring stuff a lift on modern stupid-wide, high-mispredict-penalty hardware. Anything by Lemire; stringzilla; take a look at zpp_bits for inspiration about theoretical-minimum data structure pack/unpack.

    But I think you got damn close to what can be done, niiicccee work.

    Sesse__ (10 hours ago)

    FWIW, this is basically an implementation of perfect hashing, and there's a myriad of different strategies. Sometimes “switch on length + well-chosen characters” are good, sometimes you can do better (e.g. just looking up in a table instead of a long if chain).

    The “value speculation” thing looks completely weird to me, especially with the “volatile” that doesn't do anything at all (volatile is generally a pointer qualifier in C++). If it works, I'm not really convinced it works for the reason the author thinks it works (especially since it refers to an article talking about a CPU from the relative stone age).

    inetknght (13 hours ago)

    Overall, this is a fantastic dive into some of RISC-V's architecture and how to use it. But I do have some comments:

    > However, in Chata's case, it needs to access a RISC-V assembler from within its C++ code. The alternative is to use some ugly C function like system() to run external software as if it were a human or script running a command in a terminal.

    Have you tried LLVM's C++ API [0]?

    To be fair, I do think there's merit in writing your own assembler with your own API. But you don't necessarily have to.

    I'm not likely to go back to assembly unless my employer needs that extra level of optimization. But if/when I do, and the target platform is RISC-V, then I'll definitely consider Ultrassembler.

    > It's not clear when exactly exceptions are slow. I had to do some research here.

    There are plenty of cppcon presentations [1] about exceptions, performance, caveats, blah blah. There are also other C++ conferences that have similar presentations (or even almost identical ones, because the presenters go to multiple conferences), though I don't have a link handy because I pretty much only attend cppcon.

    [0]: https://stackoverflow.com/questions/10675661/what-exactly-is...

    [1]: https://www.youtube.com/results?search_query=cppcon+exceptio...

    netr0ute (13 hours ago)

    > LLVM's C++ API

    I think I read something about this but couldn't figure out how to use it because the documentation is horrible. So, I found it easier to implement my own, and as it turns out, there are a few HORRIBLE bugs in the LLVM assembler (from cross reference testing) probably because nobody is using the C++ API.

    > There are plenty of cppcon presentations [1] about exceptions, performance, caveats, blah blah.

    I don't have enough time to watch these kinds of presentations.

    mpyne (6 hours ago)

    A specific presentation I'd point to is Khalil Estell's presentation on reducing exception code size on embedded platforms at https://www.youtube.com/watch?v=bY2FlayomlE

    But honestly you'd get vast majority of the benefit just by skimming through the slides at https://github.com/CppCon/CppCon2024/blob/main/Presentations...

    With a couple of symbols you define yourself, a lot of the associated g++ code size is sharply reduced while still allowing exceptions to work (slide 60 onward).

    0x98 (8 hours ago)

    > I think I read something about this but couldn't figure out how to use it because the documentation is horrible.

    Fair enough.

    > So, I found it easier to implement my own, and as it turns out, there are a few HORRIBLE bugs in the LLVM assembler (from cross reference testing)

    Interesting claim, do you have any examples?

    jclarkcom (10 hours ago)

    You might look into using memory-mapped I/O for reading input and writing your output files. This can save some memory allocations and file read/write time. I did this on a project where I got more than a 10x speedup. In many cases, file I/O is going to be your bottleneck.

    Sesse__ (10 hours ago)

    mmap-based I/O still needs to go through the kernel, including memory allocation (in the page cache) and all. If you've got 10x speedup from mmap, it is usually because your explicit I/O was very inefficient; there are situations where mmap is useful, but it's rarely a high-performance strategy, as it's really hard for it to guess what your intended I/O patterns are just from the page faults it's seeing.

    msla (12 hours ago)

    What's the difference between a Programming Furu and a Programming Guru? Is there a joke I'm missing?

    netr0ute (12 hours ago)

    Furus are "fake gurus." It comes from the Fintwit space where "furus" share their +1000% option trades as if they're geniuses in order to get you to sign up for their expensive Substack.

  • throwaway81523 (12 hours ago)

    I wonder if you thought about perfect hashing instead of that comparison tree. Also, flex (as in flex and bison) can generate what amounts to trees like that, I believe. I haven't benchmarked it compared to a really careful explicit tree though.

    netr0ute (12 hours ago)

    I thought about hashing, but found that hashing would be enormously slow to compute compared to a perfectly crafted tree.

    dafelst (11 hours ago)

    But did you think about using a perfect hash function and table? Based on my prior research, it seems like they are almost universally faster on small strings than trees and tries due to lower cache miss rates.

    dist1ll (10 hours ago)

    Ditto. Perfect hashing strings smaller than 8 bytes has been the fastest lookup method in my experience.

    netr0ute (10 hours ago)

    Problem is, there are a lot of RISC-V instructions way longer than that (like th.vslide1down.vx), so hashing is going to be slow.

    ashdnazg (9 hours ago)

    You could copy the instruction into a 16-byte buffer and hash the one or two int64s. Looking at the code sample in the article, there wasn't a single instruction longer than 5 characters, and I suspect that in general, instructions with short names are more common than those with long names.

    This last fact might actually support the current model, as its cost grows roughly linearly with the length of the instruction, instead of being constant like a hash.

    Lerc (6 hours ago)

    Is there a handy list of all RISC-V instructions?

    snvzz (7 hours ago)

    Note th.vslide1down.vx is a T-Head instruction, a vendor custom extension.

    It is not part of RISC-V, nor supported by any CPUs outside that vendor's own.

    Sesse__ (10 hours ago)

    You're probably thinking of gperf, not flex and bison.

    throwaway81523 (3 hours ago)

    I meant flex, for generating a switch table for that type of lexer. gperf is for hashing which is different. But, there may be better methods by now since the field has changed a lot.

  • IshKebab (13 hours ago)

    Neat, but it's not like assembly is really a bottleneck in any but the most extreme cases. LLVM and GAS are already very fast.

    I feel like this might mostly be useful as a reference, because currently RISC-V assembly's specification is mostly "what do GCC/Clang do?"

    drob518 (10 hours ago)

    Exactly. I don't know too many assembly language programmers who are griping about slow tools, particularly on today's hardware. Yeah, Orca/M on my old Apple II with 64k RAM and floppy drives was pretty slow, but since then, not so much. But sure, as a fun challenge to see how fast you can make it run, go for it.

    CyberDildonics (8 hours ago)

    ASM should compile at hundreds of MB/s. All the ASM you could write in your entire life would compile instantly. No one in decades has thought their assembler was too slow.

    benreesman (13 hours ago)

    ptxas comes to mind.

    gdiamos (12 hours ago)

    ptxas is a bit of a misnomer: it actually wraps the entire NVIDIA driver backend compiler.

    PTX isn't the assembly language; it's a virtual ISA, so you need a full backend compiler with tens to hundreds of passes to get to machine code.

    benreesman (12 hours ago)

    I appreciate that hitting sm_70 through sm_120 in one call isn't the same as hitting RISC-V in one call, but I do a lot of builds just for sm_120 which is closer to a fair comparison.

    It's imperfect, but I take any excuse to point out how bad monopolies are for customers. All you have to do is build the driver to see that "low priority" is a pretty broad term on the allegedly elite trillion dollar toolchain.

    I'm not saying CUDA is unimpressive; it's a very, very, very hard problem. But if they were in an uncorrupted market, ptxas would be fast instead of devastating znver5 workstations with 6400MT DDR5.

  • StilesCrisis (7 hours ago)

    “Here's one weird trick I haven't seen anywhere else.” … describes a simplistic lexer. Hmm.