AMD continues to chip away at Intel's x86 market share (tomshardware.com)
199 points by speckx 3 days ago | 127 comments
  • hengheng 3 days ago

    How's AMD's engineering support these days? I've heard through the grapevine that many laptops were mostly engineered by Intel engineers, creating a natural moat because the laptop brands are used to not having to do much PCB layout or thermal work.

    AMD, I heard, seemed less capable, or less interested, or couldn't justify it at their volumes, which meant their engineering support packages were good for ATX mainboards only, and maybe the occasional console.

    This must have changed a while ago, does anyone have the tea?

    • embedding-shape 3 days ago | parent

      > and maybe the occasional console.

      To me they seem to be dominating the console scene, doing the CPU and GPU for all consoles from the last two generations, except for Switch and Wii U.

      • mpyne 3 days ago | parent

        And even there, AMD did the GPU for the Wii U; that console was an evolution of the Wii (which was itself an evolution of the GameCube). AMD had acquired the makers of the Wii/GameCube graphics chip, and also separately designed the Wii U-specific upgraded GPU used for native Wii U games.

        • seabrookmx 3 days ago | parent

          Rest in Peace ATI.

    • refulgentis 3 days ago | parent

      That’s more done by e.g. Compal than by shrinking Intel. The myth that you could trust Intel was shattered by their insistence, up until 4 months before the release date, that Haswell(?) was going to hit its thermal envelope and perf targets. In 2018, IIRC, that was the beginning of the end. Apple had to ship a MacBook generation that struggled with thermals for 3 years and decided never again to be put in that position. Similarly at other important OEMs.

    • tw04 3 days ago | parent

      I’m not sure you're going to find anyone here who can personally comment on AMD engineering support, but I can say firsthand that ASUS Zephyrus laptops using AMD chips are rock solid.

    • mmis1000 3 days ago | parent

      > How's AMD's engineering support these days?

      From my recent experience buying an AMD mini-PC (a Minisforum AI HX370), I don't feel it exists (because there is no need for it). You just plug it into the power socket and it works (which is a good thing).

  • brian-armstrong 3 days ago

    Windows 10 EOL is probably helping to churn a lot of aging Intel chips out here. I can't imagine anyone in the know is building a new desktop with an Intel anything in it these days, either.

    • rewilder12 a day ago | parent

      Windows 10 will be the last MSFT OS I ever use. I rebuilt using an AMD CPU/GPU, booted up Fedora 42, and I have never had to run a single shell command to get anything to work. I don't even notice my OS. Work, games, local models (this one still takes some tweaking but is getting better), all work fine.

    • mmis1000 3 days ago | parent

      Unless you need to use AutoCAD; their software has garbage-level optimization on AMD CPUs. It's probably the only software where you can see an Intel i7-series CPU beat an AMD R9 by a big margin.

      • cyber_kinetist a day ago | parent

        Probably they're using the Intel MKL library for their linear algebra (which is severely gimped on AMD; SIMD paths are disabled and only the scalar fallback code runs).

        If they'd written SIMD code themselves, then the gap between the two shouldn't be big. (AMD's CPUs are actually better for SIMD nowadays, since recent models support the AVX-512 instruction set, while Intel dropped support for it due to the P/E-core split fiasco.)
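
The dispatch issue described above can be sketched as a toy illustration (not MKL's actual code; the function names and return strings are invented): a library that gates its fast path on the CPU vendor string sends an AVX-512-capable AMD chip down the scalar fallback anyway, while feature-gated dispatch does not.

```python
def pick_kernel_by_vendor(vendor: str, has_avx512: bool) -> str:
    # Vendor-gated dispatch: only "GenuineIntel" ever sees the fast path,
    # so an AMD chip that supports AVX-512 still gets scalar code.
    if vendor == "GenuineIntel" and has_avx512:
        return "avx512"
    return "scalar"

def pick_kernel_by_feature(has_avx512: bool) -> str:
    # Feature-gated dispatch: any CPU advertising AVX-512 gets the fast path.
    return "avx512" if has_avx512 else "scalar"

print(pick_kernel_by_vendor("AuthenticAMD", True))   # scalar
print(pick_kernel_by_feature(True))                  # avx512
```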

    • wtcactus a day ago | parent

      If you are doing integrated GPU transcoding, Intel is still the best option.

      That’s a bit niche, though. But for a NAS it's great.

      • cyber_kinetist a day ago | parent

        As a side note, Intel's discrete GPUs are also known for high-quality video transcoding; they were quite popular with streamers who needed a second helper PC just for OBS streaming.

    • colechristensen 3 days ago | parent

      How I pick a CPU:

      - Visit https://www.cpubenchmark.net/single-thread/ and pick the fastest CPU under $400

      - Visit https://www.cpubenchmark.net/multithread/ and verify there are no CPUs at a lower cost with a higher score

      It has been, for a long time, the latest-generation Intel CPU with a 2xxK or 2xxKF model number. These used to be "i7" models; now there's just a 7. I'm very vaguely annoyed at the branding change.

      It would be hard for anybody to convince me that there is a better price/performance optimum. I get it, there was a very disappointing generation or two a few years ago, but that hasn't put me off.

      The dominance of Apple CPUs might be putting me off both Intel and AMD, and pushing me to consider only buying Apple hardware, maybe even running something like Linux on a Mac Mini in addition to my macOS daily driver.
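
The two-step selection procedure described above can be sketched as a small script. The CPU names, prices, and scores below are invented placeholders, not real benchmark data:

```python
def pick_cpu(cpus, budget=400):
    """Step 1: pick the fastest single-thread ("st") score under budget.
    Step 2: flag any cheaper CPU with a higher multi-thread ("mt") score."""
    affordable = [c for c in cpus if c["price"] <= budget]
    best = max(affordable, key=lambda c: c["st"])
    red_flags = [c for c in affordable
                 if c["price"] < best["price"] and c["mt"] > best["mt"]]
    return best, red_flags

# Invented example data (not real scores):
cpus = [
    {"name": "A", "price": 285, "st": 4800, "mt": 46000},
    {"name": "B", "price": 250, "st": 4300, "mt": 48000},
    {"name": "C", "price": 550, "st": 5000, "mt": 60000},  # over budget
]
best, red_flags = pick_cpu(cpus)
print(best["name"], [c["name"] for c in red_flags])  # A ['B']
```

Here "B" is the step-2 red flag: it costs less than the winner but has a higher multi-thread score, so the single-thread pick deserves a second look.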

      • Aurornis 3 days ago | parent

        > - Visit https://www.cpubenchmark.net/single-thread/ and pick the fastest CPU under $400

        FYI www.cpubenchmark.com is a running joke for how bad it is. It’s not a good resource.

        There are a few variations of these sites like userbenchmark that have been primarily built for SEO spam and capturing Google visitors who don’t know where to go for good buying advice.

        Buying a CPU isn’t really that complicated. For gaming it’s easy to find gaming benchmarks or buyers guides. For productivity you can check Phoronix or even the GeekBench details in the compiler section if that’s what you’re doing.

        Most people can skip that and just read any buyers guide. There aren’t that many CPU models to choose from on the Pareto front of price and performance.
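
The "Pareto front of price and performance" can be computed mechanically; here is a minimal sketch with made-up entries (not real CPUs or scores):

```python
def pareto_front(cpus):
    # A CPU is dominated if some other CPU is no more expensive and no
    # slower, and strictly better on at least one of the two axes.
    def dominated(c):
        return any(
            o["price"] <= c["price"] and o["score"] >= c["score"]
            and (o["price"] < c["price"] or o["score"] > c["score"])
            for o in cpus
        )
    return sorted((c for c in cpus if not dominated(c)),
                  key=lambda c: c["price"])

# Made-up data: "B" costs more than "A" yet scores lower, so it drops out.
cpus = [
    {"name": "A", "price": 250, "score": 45000},
    {"name": "B", "price": 300, "score": 44000},  # dominated by A
    {"name": "C", "price": 480, "score": 62000},
]
print([c["name"] for c in pareto_front(cpus)])  # ['A', 'C']
```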

        • embedding-shape 3 days ago | parent

          > For productivity you can check Phoronix or even the GeekBench details in the compiler section

          I guess the reason people prefer something like cpubenchmark is that it seems way easier to get an overview / see data in aggregate. GeekBench (https://browser.geekbench.com/v6/cpu/multicore) for example just puts up a list of all benchmark runs, even when the CPU is the same. Not exactly conducive to finding the right CPU.

        • sznio 2 days ago | parent

          >FYI www.cpubenchmark.com is a running joke for how bad it is. It’s not a good resource.

          You might be confusing them with UserBenchmark.

          • Lord-Jobo 2 days ago | parent

            UserBenchmark is openly mega-biased and fudges its own test scores against AMD; it’s so bad it shouldn’t even be listed in search results.

            There are also criticisms of CPUBenchmark, but much more minor ones, like its oversimplified testing leading to weird anomalous score gaps between extremely similar CPUs.

            For the average consumer, I think CPUBenchmark is fine and probably as good as you can ask for without getting into the weeds, which defeats the purpose really.

        • IAmGraydon 3 days ago | parent

          >FYI www.cpubenchmark.com is a running joke for how bad it is. It’s not a good resource.

          That's not the prevailing opinion at all. Passmark is just fine and does a lot to keep their data solid like taking extra steps to filter overclocked CPUs. Then you go on to recommend GeekBench??? Right...

      • dangus 3 days ago | parent

        Flawed way to pick a CPU if you ask me.

        - Generic benchmarks don’t pick up unique CPU features, nor do they reflect real-world application performance. For example, Intel has no answer to the X3D V-Cache architecture that makes AMD chips better for gaming.

        - You can’t really ignore motherboard cost and the frequency of platform socket changes. AMD has cheaper boards that last longer (as in, they update their sockets less often, so you can upgrade chips more and keep the same board).

        - $400 is an arbitrary price ceiling and you’re not looking at dollars per performance unit, you’re just cutting off with a maximum price.

        - In other words, Intel chips are below $400 because they aren’t fast enough to be worth paying $400+ for.

        - If you’re looking for integrated graphics, you’re pretty much always better off with AMD over Intel

        • bfrog 3 days ago | parent

          I got a 265KF and motherboard for $350. Plenty fast, and it saves money for the real issue, which is GPU costs. Thankfully the B580 is actually a pretty good deal as well at $250 compared to green or red options. Team blue has some good deals out there if you aren't tied to a team color.

          • 3 days ago | parent
            [deleted]
        • lostlogin 3 days ago | parent

          ‘Stupid’ is more than a bit strong. Your points are good and the tone undermines them.

          • dangus 3 days ago | parent

            Modified to “flawed”

        • beeflet 3 days ago | parent

          I made the mistake of going with Intel because of SR-IOV, which they still haven't mainlined into the Linux kernel.

        • halJordan 3 days ago | parent

          When I read "here's how I choose...", at no point did I engage with it as anything other than "this is what some random dude does once every 5 years." Let him pick his CPU how he does it. You're overreacting, and frankly over-emphasizing things that don't matter, like needing V-Cache or AVX-512, or misapprehending his own price points.

          • dangus 3 days ago | parent

            How many people who buy desktop DIY systems don’t care about gaming performance?

            That market is like 90% gamers at least.

            3D v-cache is a key feature for that audience. It makes gaming performance significantly better.

        • colechristensen 3 days ago | parent

          > $400 is an arbitrary price ceiling and you’re not looking at dollars per performance unit, you’re just cutting off with a maximum price. So if there’s a $430 AMD CPU that’s 20% faster you’re going to forego that better price per performance value just because it’s slightly above your price target.

          My choice of CPU currently has the best value / performance on this benchmark aside from two very old AMD processors which are very slow and just happen to be extremely cheap. No new AMD processors are even remotely close.

          It's also currently $285; no top-tier performers are even close, except SKUs which are slight variations of the same CPU.

          https://www.cpubenchmark.net/cpu_value_available.html

          > benchmarks don’t pick up unique CPU features nor they pick up real world application performance. For example, Intel has no answer to the X3D V-cache architecture that makes AMD chips better for gaming.

          Happy to be convinced that there's a better benchmark out there, but if you're trying to tell me it's better but in a way that can't be measured, I don't believe you because that's just "bro science".

          > If you’re looking for integrated graphics, you’re pretty much always better off with AMD over Intel

          I never have been looking for integrated graphics, sometimes I have bought the CPU with it just because it was a little cheaper.

          > You can’t really ignore motherboard cost and the frequency of platform socket changes. AMD has cheaper boards that last longer (as in, they update their sockets less often so you can upgrade chips more and keep your same board)

          I've always bought a new motherboard with a CPU and either repurposed, sold, or given away the old CPU/motherboard combination, which seems like a much better use of money. The last one went to somebody's little brother. The one before that is my NAS. There's not a meaningful difference between comparable motherboards to me, particularly when the competing AMD CPUs are nearly double the cost or more.

          • dangus a day ago | parent

            It’s hard to take you seriously if you’re going to claim equivalent AMD processors cost double or more.

            Your example of tossing your motherboard away is not a very good one here. That was your choice to act illogically. My AMD AM4 motherboard started with a Ryzen 1600, then a 3600, and now runs a 5600X3D.

            Basically I’ve had this same motherboard for something like 6 or 7 years and the performance difference between a Ryzen 1600 and 5600X3D is completely wild. I’ve had no need to buy a new board for the better part of a decade. If you’re buying a new board with every processor purchase that’s a huge cost difference.

            When I say that generic benchmarks are bad, I mean that CPU benchmarks like the one you just linked are bad. You need more practical benchmarks, like in-game FPS, how long a turn takes in Stellaris, how long it takes to encode a video or open a ZIP file, etc.

            That is where the X3D chips play in as well. You might be able to buy an Intel chip with more cores and better productivity performance, but if you’re eyeing gaming performance like I imagine most desktop DIY builders are, you’d rather get better gaming oriented performance and sacrifice some productivity performance.

            If you are gaming and buy a 9800X3D, Intel literally does not make anything faster at any price. You can offer Intel $5,000 and they won't have anything to sell you that goes faster at playing games.

            At lower price points, AMD still ends up making a lot of sense for their long-supported sockets, low cost boards, better power/heat efficiency, and X3D chips performing well in gaming applications.
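
The socket-longevity argument above is simple arithmetic; the prices below are hypothetical, just to show the shape of it:

```python
# Hypothetical prices: three CPU upgrades over a socket's lifetime.
board_price = 150
cpu_prices = [200, 250, 300]

# Long-lived socket (AM4-style): buy one board, reuse it for every CPU.
reuse_total = board_price + sum(cpu_prices)

# Short-lived socket: a new board accompanies every CPU purchase.
rebuy_total = sum(board_price + p for p in cpu_prices)

print(reuse_total, rebuy_total)  # 900 1200
```

With these made-up numbers the short-socket platform costs a third more over the same three upgrades, before resale value is considered.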

        • tester756 3 days ago | parent

          >For example, Intel has no answer to the X3D V-cache architecture that makes AMD chips better for gaming.

          So, it should be visible in gaming benchmarks, right?

          >- If you’re looking for integrated graphics, you’re pretty much always better off with AMD over Intel

          What? Lunar Lake CPUs have a strong iGPU.

          • sosborn 3 days ago | parent

            https://gamersnexus.net/cpus/rip-intel-amd-ryzen-7-9800x3d-c...

      • khannn 3 days ago | parent

        I seriously want a Mac, but I hate Apple's pricing and stinginess with RAM/Storage sizes.

        • Mistletoe 3 days ago | parent

          Just buy an old one. Unless you are doing some sort of cutting edge work, an old one works fine. It's crazy how cheap they are. I assume because Apple users always like to churn to the newest thing.

          • khannn 3 days ago | parent

            I see the current base Mac Mini going for $499 new, but that's 16GB of unified RAM and a 256GB SSD. I'm currently using 17.5GB of memory on Win11, but most of that is Brave with a ton of extensions loaded and many tabs. I'd be using the Mac for typical office stuff with some occasional programming, probably with JetBrains IDEs. I'd like to do some AI stuff too; my current laptops are way too slow.

        • mptest 3 days ago | parent

          it does feel like, when you click the "pay $400 more for a $30 hardware upgrade" button, that Tim Apple himself is laughing at me, knowing their siren song has already worked and I am at their mercy, wallet open...

          • khannn 3 days ago | parent

            Running 40GB of RAM like a madman on my 2-year-old Ryzen laptop, for which the upgrade cost me $44.

            • vee-kay 3 days ago | parent

              Running 32GB RAM and a 1TB SATA SSD with Windows 10, like a mad scientist, on my thus-upgraded 15-year-old Sony Vaio laptop. SATA and SDRAM are backward compatible, so a couple of years ago I put a new 1TB SATA SSD in the old SATA1 slot, and two cheap DDR4 3200+ MHz SDRAM sticks in the RAM slots (I can upgrade again a few years later). This Sony Vaio notebook (for it is a cute little laptop) now purrs like a Jaguar waiting to be unleashed. Dual-booting Windows 10 and Linux Mint; the OS boots in a few seconds, and everything feels so snappy.

              Meanwhile, my Apple Mac Mini 2012 (Intel CPU) - which needed extraordinary efforts from me to make it triple-boot macOS, Windows 10, and Linux (trust Apple to make it hard to install other OSes on an Intel-CPU PC) - is slow and fussy because of its meagre RAM and old HDD (not SSD). But the Apple service center refused to upgrade this Mac Mini with new RAM and a new SSD, citing Apple policies that do not allow such upgrades. Apple has made it quite hard to custom-upgrade such iDevices, so this little PC is lying unused in my cupboard, waiting for the rainy day when I'll get the courage to tinker with it myself. And even if I did upgrade the hardware, this Mac Mini can only be upgraded to macOS Catalina, and it won't get security updates, because Apple has stopped supporting it.

              P.S.: I hate Apple.

              • deaux 2 days ago | parent

                > And even if I did upgrade the hardware, this Mac Mini can only be upgraded to MacOS Catalina, and it won't get security upgrades, because Apple has stopped supporting it.

                Your comment mostly makes sense, but this is a weird mention when Windows is even worse on this now, with Win11 not supporting much more recent machines.

              • khannn 3 days ago | parent

                I don't even want to fall down the rabbit hole of installing macOS on a normal laptop again, and my old 2014 ThinkPad with 8GB of RAM plus a 256GB SSD isn't going to light the world on fire performance-wise.

              • vardump 2 days ago | parent

                I wish PCs had a unified GPU with 400-1000 GB/s of bandwidth to main memory. Up to 256 GB (or even 512 GB in the Mac Studio). It's nice for AI. Thus I'm staying on Macs, at least for now.

        • andrewmcwatters 3 days ago | parent

          You’re not missing out on a lot. Coming from someone who has used their products for many years now. Their products have more compromises and trade-offs now than they did during Apple’s Intel era.

          What you will tangibly miss is low noise, low power draw hardware and very, very specific workloads being faster than the cutting edge AMD/Nvidia stack people are using today.

          • LeFantome 3 days ago | parent

            I would also like to hear about those compromises. I have been wanting this hardware and would like to know what I don’t know.

            • linguae 3 days ago | parent

              I have a work-issued M3 MacBook Pro, and at home my daily drivers are a Ryzen 9 3900 PC (still on Windows 10) and a Framework 13 laptop with a Ryzen 5 7640U running Windows 11. The hardware on my MacBook Pro is fantastic; I get amazing battery life that lasts far longer than my Framework 13, and the performance is excellent. I also love my MacBook Pro's build quality.

              However, the reason my personal laptop is a Framework 13 and not a MacBook Pro is because I value upgradability and user-serviceability. My Framework has 32GB of RAM, and I could upgrade it to 64GB at a later date. Its SSD, currently 1TB, is also upgradable. I miss the days of my 2006 Core Duo MacBook, which had user-serviceable RAM and storage. My Ryzen 9 3900 replaced a 2013 Mac Pro.

              Additionally, macOS doesn't spark the same type of joy that it used to; I used to use Macs as my personal daily drivers from 2006 to 2022. While macOS is less annoying than Windows to me, and while I love some of the bundled apps like Preview.app and Dictionary.app, the annoyances have grown over the years, such as needing to click a security prompt each time I run lldb on a freshly-compiled program. I also do not like the UI directions that macOS has been taking during the Tim Cook era; I didn't like the changes made during Yosemite (though I was able to live with them) and I don't plan to upgrade from Sequoia to Tahoe until I have to for security reasons.

              Apple's ARM hardware is appealing enough to me that I'd love to purchase a M4 Mac Mini to have a powerful, inexpensive, low-power ARM device to play with. It would be a great Linux or FreeBSD system, except due to the hardware being undocumented, the only OS that can run on the M4 Mac Mini for now is macOS. It's a shame; Apple could probably sell more Macs if they at least documented enough to make it easier for developers of alternative operating systems to write drivers for them.

          • klelatti 3 days ago | parent

            Genuine question: what compromises are you referring to here?

          • andrewmcwatters 2 days ago | parent

            [dead]

      • Aeolun 3 days ago | parent

        Huh, my method involves the same thing but filtering out all the Intel stuff before selecting the best AMD version.

      • brian-armstrong 3 days ago | parent

        Why would anyone use such an arbitrary method when you could have a 9800x3D for $40 more?

        • colechristensen 3 days ago | parent

          A 9800X3D is $479, my present choice of intel processor is $275.

          Is it that much better? Show me.

          • pixelpoet 3 days ago | parent

            So much sass for such a googleable thing, good grief.

            • colechristensen 2 days ago | parent

              A bunch of people telling me I'm wrong and that I should google why...

              I mean just don't say anything if what you're trying to add is just "go look it up yourself"

              • pixelpoet 2 days ago | parent

                And now you're being bossy? Good day, sir.

                • amypetrik8 2 days ago | parent

                  Do you have a source for that?

                  • pixelpoet 2 days ago | parent

                    For the 7800x3d and 9800x3d being really good CPUs? I worked on Cinebench and its rendering engine, so I'll often go by its single- and multi-threaded results, and in a past life I worked on Indigo Renderer and find IndigoBench still works great too: https://indigorenderer.com/indigobench

      • redox99 3 days ago | parent

        That's probably the worst benchmark you could choose.

        • nerdsniper 3 days ago | parent

          What's a better one where I can sort all CPUs?

          • homebrewer 3 days ago | parent

            Depends on what you're doing; I'm mostly interested in typical developer workloads, so I've been relying on

            https://openbenchmarking.org/test/pts/build-linux-kernel-1.1...

            There are lots more tests in the sidebar.

  • braiamp 3 days ago

    I find it interesting that despite many years of being reminded that the DIY market doesn't represent a significant portion of these sales... we are still thinking that individual customers are the ones driving the consumption. The ones driving this are big OEMs like Dell, HP, Lenovo, etc.

    • mmis1000 3 days ago | parent

      > The ones driving this are big OEMs like Dell, HP, Lenovo, etc.

      That's why a monopoly is a bad thing. It allows the manufacturer to pressure the vendor. The reason AMD doesn't sell much in the laptop market is extremely simple: you can't buy it. These vendors usually offer far fewer AMD laptop models than Intel ones, and those usually sell out quickly.

      • rincebrain 2 days ago | parent

        My experience with trying AMD CPU laptops is that they're not good.

        Every time I've tried it in the last 10 years, it's felt like I was teleported into the late 90s PC era - weird bugs in specific drivers that you can find lots of reports of for this specific model and no resolution, heat management that feels like someone in a basement strung things together in 5 minutes and never tested it again, strange failures in "plug and play" support for USB devices that work on every other machine flawlessly with the same cable and device, and don't get me started on Bluetooth. (My favorite ever might be the time that attempting to pair a specific pair of headphones to the laptop shut off every USB port, reproducibly, apparently because the BT adapter was connected over the USB M.2 pins to the root hub, and was crashing in firmware, so both Windows and Linux did the same style of dance of "try to reset it, once that fails, go up a level and turn off the complex to make sure other things keep working"...except up one level was the root.) (Though, to be fair to AMD, that was an Intel BT/wifi chip in an AMD laptop...)

        I really want to like and recommend AMD mobile hardware, but every time I've tried it has been a shitshow without fail.

        • user_7832 14 hours ago | parent

          I think this is to an extent also based on the OEM and the level of effort they put in. My AMD Framework has been as "plug and play" as my previous Intel laptops (minus me using the wrong version of AMD Adrenalin, and their naming scheme being a terrible mess, but that's with non-OEM drivers).

        • mmis1000 2 days ago | parent

          This is honestly not my experience, though. I'd like to know what vendor and model you are using (and getting so many problems with). I don't have experience with heat management issues either. On the other hand, the Intel Mac I used before was just underpowered and had giant heat management issues. Every single MacBook Pro in my office department had an early battery failure at a really low cycle count. (And it's also just underpowered, which made working painful.)

          • user_7832 14 hours ago | parent

            I replied to the earlier comment too, but wanted to agree - my AMD (Framework 13) has been about as seamless as reasonably possible (Microsoft shenanigans notwithstanding).

    • Hendrikto 2 days ago | parent

      And who buys from OEMs? We do have a lot of market power as consumers.

  • varispeed 3 days ago

    I had to buy two laptops recently, so I got an Intel Core Ultra 9 285K and a Ryzen AI 9. The latter on paper should be slower, but it's a night-and-day difference. The Intel laptop sounds like a hairdryer when opening a browser tab; the Ryzen's fans are far gentler on the ears and trigger less often. Still, both laptops are a league below even my old M1.

    • betaby 3 days ago | parent

      Are any of those laptops Linux compatible?

      • esseph 2 days ago | parent

        If you want Linux compatible, buy a Lenovo ThinkPad

      • teaearlgraycold 2 days ago | parent

        M1 is

  • shmerl 3 days ago

    Can anyone explain what prevents AMD from making x86_64 chips competitive with ARM on the lower end like in mobile phones? I doubt it's about ISA.

    • axiolite 3 days ago | parent

      Just price, I'd say. AMD and Intel are used to a certain margin on their products, while the low barrier to entry for creating ARM CPUs, and fierce competition from giants like Broadcom, keep margins very thin in this market.

      The original smart phones like the Nokia Communicator 9110i were x86 based.

      AMD previously had very impressive low-power CPUs, like the Geode, running under 1-watt.

      Intel took another run at it with Atom, and managed x86 phones (e.g. the ASUS ZenFone) slightly better than contemporary ARM-based devices, but the price for their silicon was quite a bit higher than ARM competitors'. And Intel had to sink so much money into Atom, in an attempt to dominate the phone/tablet market, that they couldn't be happy just eking out a small sliver of the market by being only slightly better at a significantly premium price.

      • aurareturn 2 days ago | parent

        > Just price, I'd say.

        I don't think it is price. Intel has had a bigger R&D budget for CPU designs than Apple. If you mean manufacturing price, I also doubt this, since AMD and Intel chips are often physically bigger than Apple chips in die size but still slower and less efficient. See the M4 Pro vs. AMD's Strix Halo as an example where Apple's chip is smaller, faster, and more efficient.

        • adrian_b 2 days ago | parent

          I have not seen any evidence that Apple's chip is smaller, faster and more efficient.

          Apple's CPU cores have been typically significantly bigger than any other CPU cores made with the same manufacturing process. This did not matter for Apple, because they do not sell them to others and because they have always used denser CMOS processes than the others.

          Apple's CPUs have much better energy efficiency than any others when running a single-threaded application. This is due to having a much higher IPC, e.g. up to 50% higher, and a correspondingly lower clock frequency.

          On the other hand, the energy-efficiency when running multithreaded applications has always been very close to Intel/AMD, the differences being explained by Apple having earlier access to the up-to-date manufacturing processes.

          Besides efficiency in single-threaded applications, the other point where Apple wins in efficiency is in the total system efficiency, because the Apple devices typically have lower idle power consumption than the competition, due to the integrated system design and the use of high-quality components, e.g. efficient displays. This better total system efficiency is what leads to longer battery lifetimes, not a better CPU efficiency.

          The Apple CPUs are fast for the kind of applications needed by most home users, but for applications that have greater demands for computational performance, e.g. with big numbers or with array operations, they are inferior to the AMD/Intel CPUs with AVX-512.
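
The single-thread efficiency point above can be illustrated with a first-order CMOS power model (a rough sketch with invented constants, not measured data): dynamic power scales roughly with V²f, supply voltage rises with frequency, and throughput is IPC × f, so a wider, lower-clocked core spends less energy per instruction at the same throughput.

```python
def energy_per_instruction(ipc, freq_ghz, volts_per_ghz=0.25):
    # Dynamic power ~ C * V^2 * f and throughput ~ IPC * f, so
    # energy/instruction ~ V^2 / IPC. Voltage is modeled as rising
    # linearly with frequency (a crude but standard first-order assumption;
    # volts_per_ghz is an invented constant).
    v = volts_per_ghz * freq_ghz
    return v * v / ipc

# Equal throughput (IPC * f = 36 Ginstr/s), two different design points:
wide_slow   = energy_per_instruction(ipc=9, freq_ghz=4.0)  # high-IPC core
narrow_fast = energy_per_instruction(ipc=6, freq_ghz=6.0)  # high-clock core
print(wide_slow < narrow_fast)  # True
```

Under this toy model the high-IPC core delivers the same instructions per second at roughly a third of the energy per instruction, which is the shape of the single-thread efficiency gap described above.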

          • aurareturn a day ago | parent

            You say you've never seen evidence that Apple's chips are smaller, faster, more efficient but you confidently proclaim that Apple CPU cores are typically bigger on the same node.

            Where is your source?

            There's plenty of die shots showing that Apple P cores are either smaller or around the same size as AMD and Intel P cores. Plenty of people on Reddit have done the analysis as well.

      • shmerl 3 days ago | parent

        I see, but why are others like Qualcomm doing it then? Are they OK with low margins?

        • ACCount37 2 days ago | parent

          Qualcomm has a massive "value add" because they own the modem. As well as a doom stack of patents on all things cellular.

          You need a modem if you want to make a smartphone. And Qualcomm makes sure to, first, make some parts of the modem a part of their SoC, and second, never give a better deal on a standalone modem than on a modem and SoC combo.

          Sure, AMD could make their own modem, but it took Apple ages to develop a modem in-house. And AMD could partner with someone like Mediatek and use their hardware - but, again, that would require Mediatek to prop up their competition in SoC space, so, don't expect good deals.

          • shmerl 2 days ago | parent

            Not every scenario for such chips is a smartphone, but as you said, AMD could as well develop their own modem.

            I would prefer them to start with WiFi though, since Intel made their latest chips impossible to use with AMD CPUs.

            • ACCount37 2 days ago | parent

              The problem is whether it's worth doing. As opposed to: putting the same amount of effort into CPU/GPU/NPU development and getting a better return.

            • axiolite 2 days ago | parent

              > AMD could as well develop their own modem.

              That didn't work out well when Intel tried it.

              • shmerl 2 days ago | parent

                What exactly went wrong?

                • axiolite 2 days ago | parent

                  https://www.xda-developers.com/intel-leaving-5g-business-com...

                  https://www.extremetech.com/mobile/302822-intel-blames-qualc...

                  • shmerl 2 days ago | parent

                    Seems more like a symptom of Intel's general issues, not of this being useless. But who knows.

                    I agree though that Qualcomm is causing a lot of anti-competitive problems.

    • wmf 3 days ago | parent

      Their lowest end chips are probably competitive already. I think x86 support was removed from Android though.

      • shmerl3 days ago |parent

        So why did for example Valve decide to use Qualcomm Snapdragon for Steam Frame and not some AMD APU?

        • kube-system3 days ago |parent

          I have seen speculation that mobile app architecture compatibility was part of it.

          • shmerl3 days ago |parent

            I see. But aren't they emulating x86_64 on ARM64 there anyway? Can't they emulate ARM64 on x86_64 the same way?

            • rincebrain2 days ago |parent

              Yes, in theory.

              But there's not a huge market (and therefore, FOSS dev time spent on it) for emulating AArch64 on x86 the way there is for x86 on AArch64. So if your options are to build your own AArch64 emulation for x86 (or drop a fortune into an existing FOSS option), or to build something based on AArch64 and use the existing x86 emulation implementations, one of these has much more predictable costs and outcomes today.

              • shmerl2 days ago |parent

                Sounds like a project for Valve to back then.

                • rincebrain2 days ago |parent

                  They could, but why?

                  If an ARM device both suits the goals and has lower risk, there's little upside other than forcing the project to exist.

                  And since there are very few pieces of AArch64-exclusive software that Valve is trying to support, that's not a goal that benefits the project.

                  (If I were guessing without doing much research: Switch emulators might be the largest open-source investment in running AArch64 things performantly on x86 systems, but that's certainly not a market segment Valve is targeting, so...)

                  • shmerl2 days ago |parent

                    To have flexibility and not be tied to a CPU choice simply because the software side is lacking. Besides, that kind of project would benefit more scenarios than their immediate need.

                    They didn't need to back a bunch of the projects they backed (radv/aco are a big example where you could claim there was redundancy, so they weren't strictly necessary), but the results paid off, even to the point where AMD is dropping their amdvlk in favor of radv/aco.

                    They didn't, strictly speaking, need to back zink either (OpenGL on Vulkan), but they decided to, and it gives them the long-term ability to keep OpenGL even if native drivers disappear, as well as the upside of bringing up OpenGL on any Vulkan-capable target out of the box.

                    It's just something in their style to do when there seems to be an obvious gap.

  • TinkersW3 days ago

    Nova Lake looks potentially pretty good: AVX512/APX and a very high core count, so maybe we will see AMD have some competition next year.

  • hereme8883 days ago

    This could swing so hard with sudden geopolitical triggers. I also see Intel positioning itself very strongly for its next generation chips.

    • general14653 days ago |parent

      Unless they do something stupid, like shipping 13th and 14th generation processors that degrade with use, then going to great lengths to deny it until finally being forced to fix it.

      https://www.digitec.ch/en/page/intel-has-a-big-problem-unsta...

      • hereme8882 days ago |parent

        I hear the frustration. I have an Intel 13th gen, and I hope it lasts long enough.

    • MangoCoffee3 days ago |parent

      Isn't TSMC Arizona and Japan the hedge for such geopolitical changes?

      Their best engineers are probably still going to be in Taiwan, but with the rate at which TSMC is building fabs overseas, it shouldn't matter much.

      • hereme8882 days ago |parent

        That fact completely skipped my mind. You're right.

    • fulafel2 days ago |parent

      Intel seems vulnerable to Trump tariffs, which seems a more likely problem than TSMC getting into trouble.

    • lostlogin3 days ago |parent

      Are you saying that Intel is well positioned if Trump lets China think it can invade Taiwan without consequences?

      If so, that’s a hell of a way for Intel to secure its future.

      • mschuster913 days ago |parent

        Intel is screwed because their foundry is nowhere near up to speed, but AMD has zero alternatives to TSMC in Taiwan, so they have it worse - at least for now; I don't think TSMC Arizona is ready yet.

        • kcb3 days ago |parent

          TSMC in Arizona has been at full-scale production of 4nm chips for a while now.

    • rewilder12a day ago |parent

      using what fab? lol

  • 647182836613 days ago

    Intel has better and more developed virtualization, security, and other hardware features. AMD seems to make what feels like an MVP that covers the core functionality but lacks the extra 20% that makes a better product.

    • ACCount372 days ago |parent

      And when has anyone ever needed those features?

      I'm serious. Those don't seem useful for anything short of the high-end server market, and even there the benefits are questionable.

    • esseph2 days ago |parent

      This is actually not true.

      https://onidel.com/amd-sev-snp-vs-intel-tdx-vps/

  • MBCook3 days ago

    I'd love to see a market share chart going back much further - at least to the middle of the 90s or so.

    I'm very impressed though. I had no idea they were near 1/3 of the desktop market. Good for them.

    • flomo3 days ago |parent

      I know when AMD had the K8/Opteron they were obviously doing really well, but their market share didn't really change because they were capacity-limited.

  • ksec2 days ago

    This is not the first time I've criticised AMD for not doing enough to steal market share. If Intel is doing so badly right now, and has been for 5 years, and AMD could still only take 20-30% market share, AMD really needs to think about their execution.

  • bee_rider3 days ago

    I imagine it would be kind of hard to switch away from Intel in the workstation/cluster space.

    Like, you have to replace OneAPI, which sounds easy because it’s just one thing, but do you really want to replace BLAS, LAPACK, MPI, ifort/icc… and then you still need to find a sparse matrix solver…

    • MITSardine2 days ago |parent

      What do you mean by this? I've been using those libraries on mac ARM and AMD processors, are you referring to intel-specific implementations? How about the sparse matrix solver, what do you use?

    • squidgyhead3 days ago |parent

      That is really more of a switch from CUDA to HIP; for most HPC applications, CPU speed isn't the question any more.

  • stevefan19992 days ago

    One nice thing about AMD is that CachyOS has znver4 and znver5 support baked in, so any Zen 4 laptop (7000, 8000 series) or Zen 5 one (Strix Halo AI Max) gets good performance early on. I got an 8745HS laptop for just $400, swapped the 1TB SSD and 32GB RAM for 2x2TB and 64GB, and switched to CachyOS. Except for a weird keyboard issue when resuming from sleep, and some Arch kernel shenanigans, I've had no problems so far.
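    As a rough way to see what those znver4/znver5 builds buy you: on a glibc-based distro (glibc 2.33+), the dynamic loader will report which x86-64 microarchitecture feature levels the CPU supports and which glibc-hwcaps directories it searches for optimized libraries. A minimal check, assuming a typical x86_64 Linux install (loader path may differ per distro):

    ```shell
    # Ask the glibc dynamic loader which x86-64 feature levels this CPU supports.
    # Zen 4 (znver4) implies x86-64-v4 (AVX-512 etc.); Zen 2/3 top out at x86-64-v3.
    /lib64/ld-linux-x86-64.so.2 --help | grep 'x86-64-v'
    ```

    On a Zen 4 machine this typically shows x86-64-v4 (plus v3/v2) as "supported, searched", which is what lets a distro ship libraries compiled for those higher levels.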

  • polski-g3 days ago

    AMD chips are just as fast but with lower thermal output. Why would anyone use Intel at this point?

    • ahartmetz3 days ago |parent

      Possibly lower idle power consumption - Intel chips seem to do better there. Anything else, Intel is at best on par with AMD.

    • AbuAssar3 days ago |parent

      availability

  • snovymgodym3 days ago

    (On desktop systems)

    • cmovq3 days ago |parent

      On data center as well. I think AMD rightly decided to focus on larger chips for data center instead of consumer laptops where margins are tiny in comparison and growth has been slow for a few years.

      • jauntywundrkind3 days ago |parent

        In general AMD seems to not want anything to do with down-market parts.

        They still have great laptop & desktop parts; in fact they're essentially the same parts as the servers (with fewer Core Complex Die (CCD) chiplets and a simpler IO die)! Their embedded chips, mobile chips are all the same chiplets too!!

        And there are some APU parts that are more consumer focused, which have been quite solid. And now Strix Halo, which, were it not for DDR5 prices shooting to the moon, would be an incredible prosumer APU.

        Where AMD is just totally missing is the low end. There's nothing like the Intel N100/N97/N150, which is a ragingly popular chip for consumer appliances like NASes. I'm hoping their Sound Wave design is real, materializes, and offers something a bit more affordable than their usual.

        The news at the end of October was that their new low-end lineup is going to be old Zen 2 & Zen 3 chips. That's mostly fine - still amazing chips, just not quite as fast & efficient. But still not a lot of small AMD parts. https://wccftech.com/amd-prepares-rebadged-zen-2-ryzen-10-an...

        It's crazy how AMD has innovated by building far fewer designs than in the past. There isn't a bunch of different chips designed for different price points; the whole range across all markets (for CPUs) is the same core, the same ~3 designs, variously built out.

        I do wish AMD had a better low-end story. The Steam Deck is such a killer machine, and no one else can make anything with such clear value, because no one else can buy a bunch of slightly weird old chips for cheap; everyone else has to buy much more expensive mainline chips. I really wish there were some smaller interesting APUs available.

        • iknowstuff3 days ago |parent

          Damn, I love the Strix Halo. The Framework Desktop idles at 10W and has modern standby consuming less than 1W while staying fully connected, so an Xbox controller can wake it over Bluetooth, etc.

          My 3080 SFF PC eats 70W at idle and 400W under load.

          Game performance is roughly the same from a normie point of view.

          • rubatuga3 days ago |parent

            How did you get Bluetooth wake working?!

            • p_l3 days ago |parent

              That's the true magic of "modern standby".

              The OS can just leave BT on and still get interrupt and service it.

          • zackify3 days ago |parent

            I have a 7840U Framework and it idles around 7-8W with not much happening.

        • init2null3 days ago |parent

          The Intel video encoding pipeline alone is reason to go Intel on the low end. Those low-power devices simply need better transcoding support than AMD can currently provide.

          • jauntywundrkind3 days ago |parent

            Updating this post. Found the review I was looking for!

            The newest RDNA4 fixes the previously weak encoder performance for game streaming and is now competitive. Unfortunately (at release, at least) AV1 is still pretty weak. https://youtu.be/kkf7q4L5xl8

            One thing noted is that AMD seems to have really good output at lower bitrates (~4min mark). It would be nice to have even deeper dives into this, and it would be curious to know whether the quality changes over time with driver updates. One of the comments details how a bunch of the asks in this video (split-frame encoding, improved AV1) already landed a month after the video. Hopefully progress continues for RDNA4! https://youtube.com/watch?v=kkf7q4L5xl8&lc=UgzYN-iSC7N097XZi...

        • overfeed3 days ago |parent

          > It's crazy how AMD has innovated by building far fewer designs than in the past. There isn't a bunch of different chips designed for different price points; the whole range across all markets (for CPUs) is the same core, the same ~3 designs, variously built out.

          AMD bet the farm on the chiplet architecture, and their risky bet has paid off in a big way. Intel's fortunately timed stumbling helped, but AMD ultimately made the right call on core scaling at a time when most games and software titles were not written to take advantage of multicore parallelism. IMO, AMD deserves much more than 25% market share, as Zen chips deliver amazing value.

        • 3 days ago |parent
          [deleted]
        • toast03 days ago |parent

          > Their embedded chips, mobile chips are all the same chiplets too!!

          Depends on where in embedded, but the laptop and APU chips are monolithic, not chiplet based.

      • embedding-shape3 days ago |parent

        I don't get the feeling that they've focused anywhere in particular (and maybe rightly so), they're in everything from low-powered consoles to high powered workstations and data centers, and seemingly everywhere in-between those too.

    • MBCook3 days ago |parent

      No, also laptops. The article had a chart for that too. It's all systems.

  • Neywiny3 days ago

    Goodness, I still can't stand his articles. My understanding of the situation was that everything before maybe Ryzen 2000-3000 was "meh, it's good enough". You can actually see a bump in Q1 2017 when Ryzen first came out. I really hoped to see annotated graphs, long-term analysis, etc.