• ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP
    9 months ago

    I’d wager that both AMD and Intel are very worried about RISC because of what Apple managed to do with the M1 architecture. This is a great technical write-up about it. The TL;DR is that there are two major benefits.

    The first is that the system-on-a-chip approach allows memory to be shared between components such as the CPU and GPU without going over a bus, which removes a major bottleneck: you no longer have to copy data from CPU memory to GPU memory, do some computation, and then copy the results back. This part is compatible with CISC architectures, and AMD has already been building SoC-style chips for a while now.
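    To make the copy-vs-shared distinction concrete, here is a toy Python model (not a real GPU API — the function names and the "doubling" workload are made up for illustration): with separate memory pools the data crosses the bus twice, while with unified memory both sides work on the same buffer.

```python
def discrete_gpu_double(cpu_data):
    """Classic discrete-GPU dance: copy over, compute, copy back."""
    gpu_buffer = list(cpu_data)               # copy CPU -> GPU over the bus
    gpu_buffer = [x * 2 for x in gpu_buffer]  # compute on the device
    return list(gpu_buffer)                   # copy GPU -> CPU over the bus

def unified_memory_double(shared_buffer):
    """Unified memory: compute in place, no transfer in either direction."""
    for i, x in enumerate(shared_buffer):
        shared_buffer[i] = x * 2
    return shared_buffer

buf = [1, 2, 3]
assert discrete_gpu_double(buf) == [2, 4, 6]   # two copies happened
assert unified_memory_double(buf) == [2, 4, 6] # zero copies happened
```

    The result is the same either way; what disappears in the unified case is the transfer cost, which on real hardware dominates for large buffers.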

    The second benefit is the one I suspect is making AMD and Intel nervous. Since RISC instructions have a fixed length, the chip can load a batch of instructions, work out which ones depend on each other and which ones don’t, and then execute all the independent instructions in parallel. That makes it possible to keep widening the decoder and adding execution units, with many instructions in flight at once.
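    Why fixed length matters for wide decode can be shown with a toy sketch (not a real ISA — the 4-byte width and function name are assumptions for illustration): when every instruction is the same size, the start of instruction k is just k × 4, so each boundary can be computed independently and a wide decoder can split a fetched block with no sequential work.

```python
WIDTH = 4  # bytes per instruction, known in advance

def fixed_length_boundaries(code, count):
    # Every boundary is an independent multiplication -- trivially
    # parallel, which is what lets a decoder be made arbitrarily wide.
    return [k * WIDTH for k in range(count)]

code = bytes(range(16))  # 16 bytes = 4 fixed-length instructions
assert fixed_length_boundaries(code, 4) == [0, 4, 8, 12]
```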

    It turns out this is basically impossible to do at scale on CISC chips, because the instructions are variable length: you don’t know where one instruction ends and the next begins until you’ve decoded the one before it, so finding the boundaries (and then the dependencies) simply doesn’t scale. AMD reportedly found that the overhead of working out boundaries and dependencies starts negating the benefits of parallel execution at around 3–4 instructions.
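    The variable-length case can be sketched the same way (a toy encoding, not real x86 — here the first byte of each instruction is assumed to encode its own length): boundary k can’t be known until boundaries 0 through k−1 have been decoded, so the scan is inherently sequential.

```python
def variable_length_boundaries(code):
    # Each step depends on the previous one: we must read this
    # instruction's length before the next start address is known.
    boundaries, pos = [], 0
    while pos < len(code):
        boundaries.append(pos)
        length = code[pos]  # toy rule: first byte = instruction length
        pos += length
    return boundaries

# four instructions of lengths 2, 3, 1, 4 packed back to back
code = bytes([2, 0, 3, 0, 0, 1, 4, 0, 0, 0])
assert variable_length_boundaries(code) == [0, 2, 5, 6]
```

    Real x86 decoders work around this by speculatively decoding at many candidate offsets at once, but that brute force is exactly the overhead that stops paying off after a few instructions.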

    This is why the Apple M-series chips are currently in a class of their own: they run faster than comparable x86 chips while using a lot less power.

    Now that Apple has shown the benefits of this architecture, I think it’s only a matter of time before we see a similar approach taken with RISC-V chips, which would basically make the x86 architecture obsolete.