Google Revisits JPEG XL in Chromium After Earlier Removal (windowsreport.com)
213 points by eln1 3 days ago | 99 comments
  • ksec 3 days ago

    While being a big supporter of JPEG-XL on HN, I just want to note AV2 is coming out soon, which should further improve image compression. (Edit: Also worth pointing out that the current JPEG-XL encoder is nowhere near its maximum potential in terms of quality / compression ratio.)

    But JPEG-XL is being quite widely used now, from PDF and medical images to camera lossless capture, as well as being evaluated at different stages of cinema / artist production workflows. Hopefully the Rust decoder will be ready soon.

    And from the wording, it seems to imply Google Chrome will officially support anything from AOM.

    • Nanopolygon 2 days ago | parent

      AVIF/AV1 is a codec that encodes both lossy and lossless files very slowly. JXL is significantly faster than AVIF. But AVIF provides better image quality than JXL even at lower settings. However, AV2 will require much more power and system resources for a small bandwidth gain.

      • spartanatreyu 2 days ago | parent

        > But AVIF provides better image quality than JXL even at lower settings.

        I don't think that's strictly true.

        The conventional reporting has been that JXL works better at regular web sizes, but AVIF starts to edge out at very low quality settings.

        However, the quality per size between the two is so close that there are comparisons showing JXL winning even where AVIF is supposed to outperform JXL. (e.g. https://tonisagrista.com/blog/2023/jpegxl-vs-avif/)

        Even at the point where AVIF should shine: when low bandwidth is important, JXL supports progressive decoding (AVIF is still trying to add this) so the user will see the images sooner with JXL rather than AVIF.

        ---

        There is one part where AVIF does beat JXL hands down, and that's animation (which makes sense considering AVIF comes from the modern AV1 video codec). However, any time you would want an animation in a file, you're better off just using a video codec anyway.

        • ksec 2 days ago | parent

          To be fair, those comparison image sizes aren't small enough. Had they been 30-50% of the tested sizes, AVIF should have had the advantage.

          But then the question is whether we should even be presenting this level of quality, or whether it is enough. I guess that is a different set of questions.

    • snvzz 2 days ago | parent

      >medical images

      Isn't JPEG-XL a lossy codec?

      • ksec 2 days ago | parent

        JPEG-XL is both a lossy and lossless codec. It is already being used in Camera DNG format, making the RAW image smaller.

        While lossy codecs are hard to compare and up for debate, JPEG-XL is actually better as a lossless codec in terms of compression ratio and compression complexity. There is only one other codec that beats it, but it is not open source.

        • cipehr 2 days ago | parent

          What is the non-open source codec?

          • Nanopolygon 2 days ago | parent

            HALIC is by far the best lossless codec in terms of speed/compression ratio. If a lossy mode were similarly available, we might not be discussing all these issues. I think its developer stopped working on HALIC a while ago due to lack of interest.

            Its developer is also developing HALAC (High Availability Lossless Audio Compression). He recently released the source code for the first version of HALAC. And I don't think anyone cared.

          • ksec 2 days ago | parent

            HALIC (High Availability Lossless Image Compression)

            https://news.ycombinator.com/item?id=38990568

      • MagnumOpus 2 days ago | parent

        It has both lossy and lossless modes.

        • snvzz 2 days ago | parent

          Good to hear.

          I sure hope they came up with a good, clear system to distinguish them.

          • lxgr 2 days ago | parent

            As in, a clear way to detect whether a given file is lossy or lossless?

            I was thinking that too, but on the other hand, even a lossless file can't guarantee that its contents aren't the result of going through a lossy intermediate format, such as a screenshot created from a JPEG.

            • snvzz 2 days ago | parent

              I meant like a filename convention, and tags in the file itself.

              • 149765 2 days ago | parent

                There is some sort of tag; jxlinfo can tell you if a file is "lossy" or "(possibly) lossless".

              • stavros 2 days ago | parent

                Presumably you can look at the file and tell which mode is used, though why would you care to know from the filename?

                • crazygringo 2 days ago | parent

                  I find it incredibly helpful to know that .jpg is lossy and .png is lossless.

                  There are so many reasons why it's almost hard to know where to begin. But it's basically the same reason why it's helpful for some documents to end in .docx and others to end in .xlsx. It tells you what kind of data is inside.

                  And at least for me, for standard 24-bit RGB images, the distinction between lossy and lossless is much more important than between TIFF and PNG, or between JPG and HEIC. Knowing whether an image is degraded or not is the #1 important fact about an image for me, before anything else. It says so much about what the file is for and not for -- how I should or shouldn't edit it, what kind of format and compression level is suitable for saving after editing, etc.

                  After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.

                  There's a good reason Microsoft Office documents aren't all just something like .msox, with an internal tag indicating whether they're a text document or a spreadsheet or a presentation. File extensions carry semantic meaning around the type of data they contain, and it's good practice to choose extensions that communicate the most important conceptual distinctions.

                  • lxgr 2 days ago | parent

                    > Knowing whether an image is degraded or not is the #1 important fact about an image for me

                    But how can you know that from the fact that it's currently losslessly encoded? People take screenshots of JPEGs all the time.

                    > After that comes whether it's animated or not, which is why .apng is so helpful to distinguish it from .png.

                    That is a useful distinction in my view, and there's some precedent for solutions, such as how Office files containing macros have an "m" added to their file extension.

                    • crazygringo 2 days ago | parent

                      Obviously nothing prevents people from taking PNG screenshots of JPEGs. You can make a PNG out of an out-of-focus camera image too. But at least I know the format itself isn't adding any additional degradation over whatever the source was.

                      And in my case I'm usually dealing with a known workflow. I know where the files originally come from, whether .raw or .ai or whatever. It's very useful to know that every .jpg file is meant for final distribution, whereas every .png file is part of an intermediate workflow where I know quality won't be lost. When they all have the same extension, it's easy to get confused about which stage a certain file belongs to, and accidentally mix up assets.

                  • ksec 2 days ago | parent

                    >I find it incredibly helpful to know that .jpg is lossy and .png is lossless.

                    Unfortunately we have been through this discussion and the authors of JPEG-XL strongly disagree with this. I understand where they are coming from, but I agree with you that it would have been easier to have the two separated in naming and extensions.

                  • stavros 2 days ago | parent

                    But JPEG has a lossless mode as well. How do you distinguish between the two now?

                    This is an arbitrary distinction, for example then why do mp3 and ogg (vorbis) have different extensions? They're both lossy audio formats, so by that requirement, the extension should be the same.

                    Otherwise, we should distinguish between bitrates with different extensions, eg mp3128, mp3192, etc.

                    • crazygringo 2 days ago | parent

                      In theory JPEG has a lossless mode (in the standard), but it's not supported by most applications (not even libjpeg) so it might as well not exist. I've certainly never come across a lossless JPEG file in the wild.

                      Filenames also of course try to indicate technical compatibility as to what applications can open them, which is why .mp3 and .ogg are different -- although these days, extensions like .mkv and .mp4 tell you nothing about what's in them, or whether your video player can play a specific file.

                      At the end of the day it's just trying to achieve a good balance. Obviously including the specific bitrate in a file extension goes too far.

                  • nothrabannosir 2 days ago | parent

                    Legacy. It’s how things used to be done. Just like Unix permissions, shared filesystem, drive letters in the file system root, prefixing urls with the protocol, including security designators in the protocol name…

                    Be careful to ascribe reason to established common practices; it can lead to tunnel vision. Computing is filled with standards which are nothing more than “whatever the first guy came up with”.

                    https://en.wikipedia.org/wiki/Appeal_to_tradition

                    Just because metadata is useful doesn’t mean it needs to live in the filename.

                    • Ukv 2 days ago | parent

                      If the alternative were putting the information in some hypothetical file attribute with a similar or greater level of support/availability (like for filtering across various search engines and file managers), then I'd agree there's no reason to keep it in the file extension in particular. But I feel the alternative here is just not really having it available in such a way at all, only an internal tag particular to the JXL format.

                  • account42 a day ago | parent

                    > .png is lossless

                    pngquant and similar tools disagree

                    • jorvi a day ago | parent

                      Well yeah, you can turn any lossless format lossy by introducing an intermediate step that discards some amount of information. You can't practically turn a lossy format into a lossless format by introducing a lossless intermediate step.

                      Although, if you're purely speaking perceptually, magic like RAISR comes pretty close.

                    • basilgohar a day ago | parent

                      pngquant does the lossy conversion, not the PNG format.

      • andybak 2 days ago | parent

        Surely something close to perceptually lossless is sufficient for most use cases?

        • AlotOfReading 2 days ago | parent

          Think of all the use cases where the output is going to be ingested by another machine. You don't know that "perceptually lossless" as designed for normal human eyeballs on normal screens in normal lighting environments is going to contain all the information an ML system will use. You want to preserve data as long as possible, until you make an active choice to throw it away. Even the system designer may not know whether it's appropriate to throw that information away, for example if they're designing digital archival systems and having to consider future users who aren't available to provide requirements.

    • eviks 2 days ago | parent

      > AV2 .... further improve the image compression. (Edit: Also worth pointing out current JPEG-XL encoder is nowhere near its maximum potential in terms of quality / compression ratio

      But at what cost? From the links below, en/decoding cost is much higher for those advanced video codecs, so they wouldn't be very suitable for various lower-powered devices?

      Also, can we expect "near max potential" with AV2/near future or is it an ever-unachievable goal that shouldn't stop adding "non-max" codecs?

      https://res.cloudinary.com/cloudinary-marketing/image/upload...

      https://cloudinary.com/blog/time_for_next_gen_codecs_to_deth...

      • jaffathecake a day ago | parent

        Fwiw, JPEG XL takes around 2.5x the time to decode as an equivalent AVIF, and has worse compression https://jakearchibald.com/2025/present-and-future-of-progres...

        • eviks 19 hours ago | parent

          Interesting, looks like another opportunity for Chrome to avoid the Safari mistake

          > slow. There's some suggestion that the Apple implementation is running on a single core, so maybe there's room for improvement.

          Though their own old attempt was even worse

          > of the old behind-a-flag Chromium JPEG XL decoder, and it's over 500% slower (6x) to decode than AVIF.

  • charcircuit 3 days ago

    Here are the direct links:

    blink-dev mailing list

    https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...

    Tracking Bug (reopened)

    https://issues.chromium.org/issues/40168998

    • IshKebab 2 days ago | parent

      Yeah note that Google only said they're now open to the possibility, as long as it is written in Rust (rightly so).

      The patch at the end of that thread uses a C++ implementation so it is a dead end.

      • surajrmal 2 days ago | parent

        Rick specifically asked for a commitment to long-term maintenance and meeting the usual standards for shipping. The implementation was abandoned in favor of a new one using Rust, so it's not necessarily a dead end.

        • IshKebab 2 days ago | parent

          I meant the C++ patch is a dead end; not JPEG XL support in general. Seems like there's a Rust library that will have to be used instead.

  • caminanteblanco 3 days ago

    My introduction to JPEG-XL was by 2kliksphillip on YouTube, he has a few really good analyses on this topic, including this video: https://youtu.be/FlWjf8asI4Y

  • eviks 2 days ago

    Maybe they'll do it right this time

    > The team explained that other platforms moved ahead. Safari supports JPEG XL, and Windows 11 users can add native support through an image extension from Microsoft Store. The format is also confirmed for use in PDF documents.

    glad those folks didn't listen to "the format is dead since the biggest browser doesn't support it" (and shame on Firefox for not doing the same)

  • jiggawatts 3 days ago

    2026 is nearly upon us, and Google, Microsoft, and Apple remain steadfast in their refusal to ever allow anyone to share wide-gamut or HDR images.

    Every year, I go on a rant about how my camera can take HDR images natively, but the only way to share these with a wider audience is to convert them to a slideshow and make a Rec.2020 HDR movie that I upload to YouTube.

    It's absolutely bonkers to me that we've all collectively figured out how to stream a Hollywood movie to a pocket device over radio with a quality exceeding that of a typical cinema theatre, but these multi-trillion market cap corporations have all utterly failed to allow users to reliably send a still image with the same quality to each other!

    Any year now, maybe in 2030s, someone will get around to a ticket that is currently at position 11,372 down the list below thousands of internal bullshit that nobody needed done, rearranging a dashboard nobody has ever opened, or whatever, and get around to letting computers be used for images. You know, utilising the screen, the only part billions of users ever look at, with their human eyes.

    I can't politely express my disgust at the ineptitude, the sloth, the foot-dragging, the uncaring unprofessionalism of people that get paid more annually than I get in a decade, who are all too distracted making Clippy 2.0 instead of getting right the most utterly fundamental aspect of consumer computing.

    If I could wave a magic wand, I would force a dev team from each of these companies to remain locked in a room until this was sorted out.

    • n8cpdx 3 days ago | parent

      I’m wondering if HDR means something different to me, because I see HDR images all the time. I can share HDR images via phones (this seems to be the default behavior on iPhone/Mac messages), I can see HDR PNG stills on the web (https://github.com/swankjesse/hdr-emojis), I can see wide gamut P3 images on the web as well (https://webkit.org/blog-files/color-gamut/).

      What am I missing?

      • jiggawatts 3 days ago | parent

        > I can share HDR images via phones

        Sure, me too! I can take a HDR P3 gamut picture with my iPhone and share it with all my friends and relatives... that have iPhones.

        What I cannot do is take a picture with a $4000 Nikon DSLR and share it in the same way... unless I also buy a Mac so I can encode it in the magic Apple-only format[1] that works... for Mac and iOS users. I have a Windows PC. Linux users are similarly out in the cold.

        This situation is so incredibly bad that I can pop the SD card of my camera into a reader plugged into my iPhone, process the RAW image on the iPhone with the Lightroom iPhone app in full, glorious HDR... and then be unable to export the HDR image onto the same device for viewing, because oh-my-fucking-god-why!?

        [1] They claim it is a standards-compliant HEIF file. No, it isn't. That's a filthy lie. My camera produces a HDR HEIF file natively, in-body. Everything opens it just fine, except all Apple ecosystem devices. I suspect the only way to get Apple to budge is to sue them for false advertising. But... sigh... they'll just change their marketing to remove "HEIF" and move on.

        • gen2brain 3 days ago | parent

          Not that I disagree, but HEIF is a container format. What is inside that container is essential. HEIC in HEIF, AVIF in HEIF, etc.

          • jiggawatts 3 days ago | parent

            Sure, but Apple doesn't fully support HEIC either.

            They support only a very specific subset of it, in a particular combination.

            Some Apple apps can open third-party HEIC-in-HEIF files, and even display the image correctly, but if you try anything more "complex", it'll start failing. Simply forwarding the image to someone else will result in thumbnails looking weirdly corrupted, brightness shifting, etc...

            I've even seen outright crashes, hangs, visible memory corruption, etc...

            I bet there's at least one exploitable security vulnerability in this code!

          • spider-mario a day ago | parent

            There’s more to it than that. Canon and Apple do HDR HEIC in mutually incompatible ways.

            https://www.dpreview.com/articles/8980170510/how-hdr-tvs-cou...

            > HEIF/HEIC is a broad standard, and the files from Canon and Apple are not cross-compatible with one another

    • mirsadm 3 days ago | parent

      It is incredibly annoying that instead of adopting JpegXL they decided to use UltraHDR. A giant hack which works very poorly.

      • lxgr 2 days ago | parent

        That's backwards compatibility for you.

        I think Ultra HDR (and Apple's take on it, ISO 21496-1) make a lot of sense in a scenario where shipping alternate formats/codecs is not viable because renderer capabilities are not known or vary, similarly to how HDR was implemented on Blu-Ray 4K discs with the backwards-compatible Dolby Vision profiles.

        It's also possible to do what Apple has done for HEIC on iOS: Store the modern format, convert to the best-known supported format at export/sharing time.
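
The gain-map idea described above can be sketched in a few lines: the file carries a backwards-compatible SDR base image plus a per-pixel gain, and an HDR-aware renderer boosts the base by the decoded gain, clamped to the display's available headroom. A minimal sketch, assuming a simplified recovery formula (HDR = SDR * 2^(gain * headroom)) rather than the full Ultra HDR / ISO 21496-1 math:

```python
def apply_gain_map(sdr_linear, gain_map, display_headroom_stops):
    """Reconstruct linear HDR pixel values from an SDR base image and a
    per-pixel gain map (values in 0..1), clamped to the headroom the
    display can actually show. Simplified from the Ultra HDR recovery
    formula; real gain maps also carry min/max boost metadata."""
    hdr = []
    for base, gain in zip(sdr_linear, gain_map):
        # A gain of 1.0 means "boost this pixel by the full headroom".
        boost_stops = gain * display_headroom_stops
        hdr.append(base * (2.0 ** boost_stops))
    return hdr

# On an SDR-only display (zero headroom) the base image passes through
# unchanged, which is exactly the backwards-compatibility trick.
sdr = [0.25, 0.5, 1.0]
gains = [0.0, 0.5, 1.0]
print(apply_gain_map(sdr, gains, 0.0))  # [0.25, 0.5, 1.0]
# On a display with 2 stops of headroom, highlights get up to a 4x boost.
print(apply_gain_map(sdr, gains, 2.0))  # [0.25, 1.0, 4.0]
```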

      • jiggawatts 3 days ago | parent

        > A giant hack which works very poorly.

        Indeed. I tried every possible export format from Adobe Lightroom including JPG + HDR gainmaps, and it looks... potato.

        With a narrow gamut like sRGB it looks only slightly better than JPG, but with a wider gamut you get terrible posterization. People's faces turn grey and green and blue skies get bands across them.

        Meanwhile my iPhone creates flawless 10-bit Dolby Vision video with the press of a button that I can share with anyone without it turning into a garbled mess.

        Just last week I checked up on the "state of the art" for HDR still image sharing with Gemini Deep Research and after ten minutes of trawling through obscure forum posts it came back with a blunt "No".

        We've figured out how to make machines think, but not how to exchange pictures in the quality that my 12-year-old DSLR is capable of capturing!

        ... unless I make a YouTube video with the images. That -- and only that -- works!

      • JyrkiAlakuijala a day ago | parent

        JPEG XL supports UltraHDR.

        JPEG XL's normal HDR capabilities were not harmed in the process when UltraHDR was added.

        It was added for reaching parity with JPEG1 and HEIF/AVIF for the needs of UltraHDR developers and believers.

    • alwillis 15 hours ago | parent

      > 2026 is nearly upon us, and Google, Microsoft, and Apple remain steadfast in the refusal to ever allow anyone to share wide-gamut or HDR images.

      It appears we’re getting closer to being able to exchange HDR images [1]:

      [1]: https://gregbenzphotography.com/hdr-photos/iso-21496-1-gain-...

    • zozbot234 3 days ago | parent

      Just use PNG: https://www.w3.org/TR/png-3/ (for HDR content, see the cICP, mDCV and cLLI chunks; also note that PNG supports up to 16-bit channel depth out of the box).
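
Those chunks are ordinary PNG chunks, so checking whether a PNG signals HDR takes only a few lines of stdlib code. A sketch, assuming the 4-byte cICP payload layout from the PNG third-edition spec (colour primaries, transfer function, matrix coefficients, full-range flag, one byte each); the file here is built in memory purely for illustration:

```python
import struct, zlib

def png_chunks(data):
    """Yield (type, payload) for each chunk of a PNG byte string."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    pos = 8
    while pos < len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])
        ctype = data[pos + 4:pos + 8]
        yield ctype, data[pos + 8:pos + 8 + length]
        pos += 12 + length  # 4 length + 4 type + payload + 4 CRC

def make_chunk(ctype, payload):
    crc = zlib.crc32(ctype + payload)
    return struct.pack(">I", len(payload)) + ctype + payload + struct.pack(">I", crc)

# A minimal 16-bit greyscale PNG carrying a cICP chunk that signals
# BT.2020 primaries (code point 9) and the PQ transfer function (16).
png = (b"\x89PNG\r\n\x1a\n"
       + make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 16, 0, 0, 0, 0))
       + make_chunk(b"cICP", bytes([9, 16, 0, 1]))
       + make_chunk(b"IEND", b""))

for ctype, payload in png_chunks(png):
    if ctype == b"cICP":
        primaries, transfer, matrix, full_range = payload
        print(primaries, transfer)  # 9 16 -> wide-gamut HDR signalling
```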

    • lxgr 2 days ago | parent

      > 2026 is nearly upon us, and Google, Microsoft, and Apple remain steadfast in the refusal to ever allow anyone to share wide-gamut or HDR images.

      Huh? Safari seems to render HDR JPEG XLs without any issues these days (e.g. [1]), and supports wide gamut in even more formats as far as I remember.

      [1] https://jpegxl.info/resources/hdr-test-page.html

      • jiggawatts 2 days ago | parent

        "Share" is the key word in my rant. I know spotty support exists here and there for one format or another.

        The problem is that I can't, in general, widely share an HDR image and have it be correctly displayed via ordinary chat applications, social media, email, or what have you. If it works at all, it only works with that One Particular Format in One Specific Scenario.

        If you disagree, find me something "trivial", such as a photo sharing site that supports HDR image uploads where those images are viewable as wide-gamut HDR on mobile devices, desktops, etc... without any endpoint ever displaying the image incorrectly, such as very dark, very bright, or with shifted colors.

    • charcircuit 2 days ago | parent

      The web has supported 16 bit pngs for decades. This is enough bits for more dynamic range than a human eye with a fixed pupil size.
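
The back-of-envelope arithmetic behind that claim, assuming a linearly coded channel (real PNGs are usually gamma-encoded, which stretches the usable range further): each extra bit doubles the ratio between the largest and smallest nonzero code values, so 16 bits spans about 16 stops, around or above the 10-14 stops commonly quoted for the eye at a fixed pupil size.

```python
import math

def linear_stops(bits):
    """Dynamic range, in photographic stops, between the smallest and
    largest nonzero code values of a linearly coded integer channel."""
    return math.log2(2 ** bits - 1)

print(round(linear_stops(8), 1))   # 8.0  -- 8-bit channel
print(round(linear_stops(16), 1))  # 16.0 -- 16-bit channel
```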

    • geocar 3 days ago | parent

      > the only way to share these with a wider audience is to convert them to a slideshow and make a Rec.2020 HDR movie that I upload to YouTube

      i understand some of this frustration, but really you just have to use ffmpeg to convert it to a web format (which can be done by ffmpeg.js running in a service worker if your cpu is expensive) and spell <img as <video muted autoplay playsinline which is only a little annoying

      > I can't politely express my disgust at the ineptitude, the sloth, the foot dragging, the uncaring unprofessionalism of people that get paid more annually then I get in a decade who are all too distracted making Clippy 2.0 instead of getting right the most utterly fundamental aspect of consumer computing.

      hear hear

      > If I could wave a magic wand, I would force a dev team from each of these companies to remain locked in a room until this was sorted out.

      i can think of a few better uses for such a wand...

      • jiggawatts 3 days ago | parent

        > <img as <video muted autoplay playsinline which is only a little annoying

        Doesn't work for sharing images in text messages, social media posts, email, Teams, Wikipedia, etc...

        > i can think of a few better uses for such a wand...

        We all have our priorities.

    • baq 3 days ago | parent

      I wish I could upvote this multiple times. Spot on, the situation is completely batshit bonkers insane.

    • swed420 2 days ago | parent

      > It's absolutely bonkers to me that we've all collectively figured out how to stream a Hollywood movie to a pocket device over radio with a quality exceeding that of a typical cinema theatre, but these multi-trillion market cap corporations have all utterly failed to allow users to reliably send a still image with the same quality to each other!

      You act like this is some kind of mistake or limit of technology, but really it's an obvious intentional business decision.

      Under late stage capitalism, it'd be weird if this wasn't the case in 2026.

      Address the underlying issue, or don't be surprised by the race to the bottom.

      • lxgr 2 days ago | parent

        This theory utterly fails Hanlon's razor (or whatever the organizational/societal equivalent is).

        On one hand, there have been (and still are!) several competing HDR formats for videos (HDR+, Dolby Vision, "plain" HLG, Dolby Vision in HLG etc.), and it took years for a winner to pull ahead – that race just started earlier, and the set of stakeholders is different (and arguably a bit smaller) than that for still images.

        On the other hand, there are also several still image HDR formats competing with each other right now (JPEG with depth map metadata, i.e. Ultra HDR and ISO 21496-1, Apple's older custom metadata, HEIF, AVIF, JPEG XL...), and JPEG XL isn't the clear winner yet.

        Format wars are messy, and always have been. Yes, to some extent they are downstream of the lack of a central standardization body, but there's no anti-HDR cabal anywhere. If anything, it's the opposite – new AV formats requiring new hardware is just about the best thing that can happen to device manufacturers.

    • fingerlocks 3 days ago | parent

      What are you talking about? You extract 3 exposure values from the raw camera buffer and merge and tone map them manually into a single HDR image. The final exported image format may not have the full supported color space, but that’s on you. Apple uses the P3 space by default.

      This has been supported by both Apple and third party apps for over a decade. I’ve implemented it myself.
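
The merge-and-tone-map step described above can be sketched as follows (a toy illustration, not any vendor's actual pipeline: exposures are averaged back in linear light using their exposure times to estimate scene radiance, then a simple global Reinhard curve squeezes the result into 0..1):

```python
def merge_exposures(exposures):
    """exposures: list of (pixels, exposure_time) pairs, where pixels are
    linear sensor readings in 0..1. Dividing a reading by its exposure
    time estimates relative scene radiance; averaging the estimates
    reduces noise. Clipped readings (near 1.0) are skipped."""
    n = len(exposures[0][0])
    radiance = []
    for i in range(n):
        usable = [pix[i] / t for pix, t in exposures if pix[i] < 0.99]
        if usable:
            radiance.append(sum(usable) / len(usable))
        else:
            # Every bracket clipped: fall back to a lower-bound estimate.
            radiance.append(max(pix[i] / t for pix, t in exposures))
    return radiance

def reinhard(radiance):
    """Global Reinhard tone mapping: x / (1 + x) maps [0, inf) into [0, 1)."""
    return [x / (1.0 + x) for x in radiance]

# Three brackets of the same scene at 1/4x, 1x and 4x exposure time;
# the sensor clips at 1.0.
scene = [0.1, 1.0, 8.0]  # "true" relative radiance
brackets = [([min(x * t, 1.0) for x in scene], t) for t in (0.25, 1.0, 4.0)]
hdr = merge_exposures(brackets)   # recovers ~[0.1, 1.0, 4.0]
ldr = reinhard(hdr)               # all values now displayable in 0..1
```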

      • jiggawatts 3 days ago | parent

        That's not HDR. That's pretend HDR in an SDR file, an artistic effect, nothing more.

        Actual HDR needs at least 10 bits per channel and a modern display with peak brightness far in excess of traditional monitors. Ideally over 1,000 nits compared to typical LCD brightness of about 200.

        You also don't need "three pictures". That was a hack used for the oldest digital cameras that had about 8 bits of precision in their analog to digital converters (ADC). Even my previous camera had a 14-bit ADC and in practice could capture about 12.5 bits of dynamic range, which is plenty for HDR imaging.

        Lightroom can now edit and export images in "true" HDR, basically the same as a modern HDR10 or Dolby Vision movie.

        The problem is that the only way to share the exported HDR images is to convert them to a movie file format, and share them as a slide show.

        There is no widely compatible still image format that can preserve 10-bit-per-channel colours, wide-gamut, and HDR metadata.

        • alwillis 2 days ago | parent

          > Actual HDR needs at least 10 bits per channel and a modern display with peak brightness far in excess of traditional monitors. Ideally over 1,000 nits compared to typical LCD brightness of about 200.

          In the Apple Silicon era, the MacBook Pro has a 1,000 nit display, with peak brightness at 1,600 nits when displaying HDR content.

          Affinity Studio [1] also supports editing and exporting "true" HDR images.

          [1]: https://www.affinity.studio

          • jiggawatts 2 days ago | parent

            I have a 4K HDR OLED plugged into my Windows PC that works just fine for editing and viewing my photos.

            I have no way, in general, to share those photos with you, not without knowing ahead of time what software you’re using. I’ll also have to whip up a web server with custom HTML and a bunch of hacks to encode my images that will work for you but not my friends with Android phones or Linux PCs.

        • fingerlocks 2 days ago | parent

          I never mentioned a file format. These operations are performed on the raw buffer; there is no hack. There is no minimum bit depth for HDR (except for maybe 2); that's just silly. High dynamic range images just remap the physical light waves to match human perception, but collecting those waves can be done at any resolution or bit depth.

          I wrote camera firmware. I've implemented HDR at both the firmware level and, later, at the higher client level when devices became faster. You're either overloading terminology to the point where we are just talking past each other, or you're very confused.

          • oktoberpaard 2 days ago | parent

            What you are talking about is also called HDR, but it has nothing to do with what the other person is talking about. The other person is talking about the still image equivalent of HDR video formats. When displayed on an HDR capable monitor, it will map the brightest parts of the image to the extended headroom of the monitor instead of tone mapping it to be displayed on a standard SDR monitor. So to be even more clear: it defines brightness levels beyond what is normally 100%.

            • fingerlocks 2 days ago | parent

              Even when HDR tone mapping in real time, such as in a game engine or a raw video feed, you would still be merging two or four multi-sampled tile memory blocks into a single output image. This is not fundamentally different, just a fancier pipeline on modern GPUs. And it's completely unrelated to OP's rant about stupid developers preventing them from sharing their HDR images or whatever.

              • oktoberpaard a day ago | parent

                HDR photos taken on iOS or Android devices are displayed as SDR images when opened on Windows. The gain map that they contain (see ISO 21496-1) is ignored. Before the ISO standard it didn’t even work between iOS and Android. This is what OP’s frustration is about.

  • AshleysBrain 3 days ago

    I think the article is slightly misleading: it says "Google has resumed work on JPEG XL", but I don't think they have - their announcement only says they "would welcome contributions" to implement JPEG XL support. In other words, Google won't do it themselves, but their new position is they're now willing to allow someone else to do the work.

    • jmgao 3 days ago | parent

      Describing it as 'Google' is misleading, because different arms of the company might as well be completely different companies. The Chrome org seems to have had the same stance as Firefox with regards to JPEG XL: "we don't want to add 100,000 lines of multithreaded C++ because it's a giant gaping security risk", and the JPEG XL team (in a completely separate org) is addressing those concerns by implementing a Rust version. I'd guess that needing the "commitment to long-term maintenance" is Chrome fighting with Google Research or whatever about long-term headcount allocation towards support: Chrome doesn't want the JPEG XL team to launch and abandon JPEG XL in Chrome and leave Chrome engineers to deal with the fallout.

    • jonsneyers3 days ago |parent

      It's technically correct. Googlers (at Google Research Zurich) have been working on jxl-rs, a Rust implementation of JPEG XL. Google Research has been involved in JPEG XL from the beginning, both in the design of the codec and in the implementation of libjxl and now jxl-rs.

      But until now, the position of other Googlers (in the Chrome team) was that they didn't want to have JPEG XL support in Chrome. And that changed now. Which is a big deal.

      • 3 days ago |parent
        [deleted]
    • IshKebab2 days ago |parent

      Yes, and they will also only accept it if the library is written in Rust. The patch in the thread that is referenced in the article uses libjxl, which is C++ and therefore cannot be used.

  • adzm3 days ago

    jxl-rs https://github.com/libjxl/jxl-rs was referenced as a possibility; what library is Safari using for jpegxl?

    • JimDabell3 days ago |parent

      libjxl:

      https://github.com/libjxl/libjxl

      https://github.com/WebKit/WebKit/blob/7879cb55638ec765dc033d...

      • gsneddersa day ago |parent

        The second link isn't applicable to Apple's WebKit ports — it's entirely built via Xcode.

        https://github.com/WebKit/WebKit/blob/39386f4547897c89c510d0... defines USE_JPEGXL only for macOS < 14 (which aren't actually supported any more!).

        All the in-tree JPEG XL support, e.g., https://github.com/WebKit/WebKit/blob/39386f4547897c89c510d0... is behind a "USE(JPEGXL)" ifdef — so none of that is compiled in.

        Instead, it's using what Apple ships at a system level, in Image I/O.

        https://github.com/WebKit/WebKit/blob/39386f4547897c89c510d0... defines HAVE_JPEGXL for recent versions of Apple's OSes. https://github.com/WebKit/WebKit/commit/932073284e4c73ce9884... is the commit which added this — there's really not much there, because it's just setting the define and adding it to the allowlist of image types.

        And yeah, currently I believe this is libjxl — or a fork thereof — hence the inclusion of libjxl in the Acknowledgements.rtf file on macOS.

  • particlo2 days ago

    If you wanna compare JXL vs AVIF by taking photos yourself and you have an Android phone, try this APK: https://github.com/particlo/camataca. I thought JXL was better going by its website benchmarks, but after trying it myself I find JXL generates ugly blocky artifacts.

  • mikae13 days ago

    The final piece of the JPEG XL puzzle!

    • free_bip3 days ago |parent

      It's a huge piece for sure, but not the only one. For example, Firefox and Windows both don't support it out of the box currently. Firefox requires nightly or an extension, and on Windows you need to download support from the Microsoft store.

      • zinekeller3 days ago |parent

        > on Windows you need to download support from the Microsoft store.

        To be really fair, on Windows:

        - H.264 is the only guaranteed (modern-ish) video codec (HEVC, VP9, and AV1 are not built in unless the device manufacturer bothered to add them)

        - JPEG, GIF, and PNG are the only guaranteed (widely-used) image codecs (HEIF, AVIF, and JXL are likewise not built in)

        - MP3 and AAC are the only guaranteed (modern-ish) audio codecs (Opus is another module)

        ... and all of them were already widely used when Windows 7 was released (before the modern codecs), so downloadable modules are apparently now the modern Windows Method™ for codecs.

        Note on pre-8 HEVC support: the codec (when not using VLC or other software bundling its own codecs) often comes from that CyberLink Blu-ray player, not a built-in one.
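
        Incidentally, all of these formats are easy to tell apart by their leading bytes; a small sniffing sketch (note JPEG XL has two signatures, one for the naked codestream and one for the ISO BMFF container):

```python
def sniff_image(data: bytes) -> str:
    """Guess an image format from its leading (magic) bytes."""
    if data.startswith(b"\xff\xd8\xff"):
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "png"
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "gif"
    if data.startswith(b"\xff\x0a"):
        return "jxl"  # naked JPEG XL codestream
    if data.startswith(b"\x00\x00\x00\x0cJXL \r\n\x87\n"):
        return "jxl"  # JPEG XL ISO BMFF container
    if len(data) >= 12 and data[4:8] == b"ftyp" and data[8:12] in (b"avif", b"avis"):
        return "avif"  # AV1 image file (ISO BMFF 'ftyp' box)
    return "unknown"

print(sniff_image(b"\x00\x00\x00\x0cJXL \r\n\x87\n"))  # jxl
```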

      • JyrkiAlakuijala3 days ago |parent

        Would PDF 2.0 (which also depends on JPEG XL and Brotli) put pressure on Firefox and Windows to add easier-to-use support?

        • GuB-42a day ago |parent

          Brotli? Is it still relevant now that we have Zstandard?

          Zstandard is much faster in just about every benchmark. Brotli sometimes has a small edge in compression ratio, but if you go for compression ratio over speed, LZMA2 beats them both.

          Both Zstandard (zstd) and LZMA2 (xz) are widely supported, I think better supported than Brotli outside of HTTP.
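
          The ratio-vs-speed tradeoff is easy to reproduce with the two codecs in the Python standard library (Brotli and Zstandard need third-party modules, so this sketch only compares deflate/zlib against LZMA/xz):

```python
import lzma
import zlib

# Compressible text whose repetition period exceeds zlib's 32 KB window,
# so LZMA's much larger dictionary can find matches that deflate cannot.
data = " ".join(f"token{i % 97} value{i % 31}" for i in range(5000)).encode()

deflated = zlib.compress(data, level=9)
xz = lzma.compress(data, preset=9)

print(f"raw={len(data)} zlib={len(deflated)} lzma={len(xz)}")
```

          On inputs like this, LZMA comes out well ahead of deflate on size; the price, as noted above, is decompression speed.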

          • JyrkiAlakuijalaa day ago |parent

            Brotli decompresses 3-5x faster than LZMA2, is within 0.6% of its compression density, and is much better for short documents.

            Zstandard decompresses ~2x faster than Brotli but is about 5% less dense, and even less dense for short documents or documents where Brotli's static dictionary can be used.

            Brotli is not slow to decompress; it is generally a little faster than deflate through zlib.

            Last time I measured, Brotli had ~2x smaller binary size than zstd (dec+enc).

            • GuB-42a day ago |parent

              Straight from the horse's mouth!

              The thing is that Brotli is clearly optimized for the web (it even has a built-in dictionary), while Zstandard is more generic, being used for tar archives and the like. I wonder how PDF fits in here.

        • jchw3 days ago |parent

          I don't think so: JPEG 2000, as far as I know, isn't generally supported for web use in web browsers, but it is supported in PDF.

          • alwillisa day ago |parent

            > I don't think so: JPEG 2000, as far as I know, isn't generally supported for web use in web browsers, but it is supported in PDF.

            Safari had supported JPEG 2000 since 2010 but removed it last year [1].

            [1]: https://bugs.webkit.org/show_bug.cgi?id=178758

          • RobotToaster2 days ago |parent

            So Firefox (or others) can't open a PDF with an embedded JPEG 2000/XL image? Or does pdf.js somehow support it?

            • lxgr2 days ago |parent

              Seems like it: https://github.com/mozilla/pdf.js.openjpeg

              This test renders correctly in Firefox, in any case: https://sources.debian.org/data/main/p/pdf2djvu/0.9.18.2-2/t...

            • jchw2 days ago |parent

              Apparently I really flubbed my wording for this comment. I'm saying they do support it inside of PDF, just not elsewhere in the web platform.

          • fmajid3 days ago |parent

            JPEG-XL is recommended as the preferred format for HDR content for PDFs, so it’s more likely to be encountered:

            https://www.theregister.com/2025/11/10/another_chance_for_jp...

            • bmicraft2 days ago |parent

              I'm not convinced HDR PDFs will be a common thing anytime soon, even without this chicken and egg problem of support

            • jchw3 days ago |parent

              What I mean to say is, I believe browsers do support JPEG 2000 in PDF, just not on the web.

              • Zardoz843 days ago |parent

                The last time I checked, I found that I needed to convert to JPEG to show the image in browsers.

                • jchw2 days ago |parent

                  A *PDF* with embedded JPEG 2000 data should, as far as I know, decode in modern browser PDF viewers; PDF.js and PDFium both use OpenJPEG. But despite that, browsers don't currently support JPEG 2000 in general.

                  I'm saying this to explain how JPEG XL support in PDF isn't a silver bullet. Browsers already support image formats in PDF that are not supported outside of PDF.

    • viktorcode2 days ago |parent

      A large and important piece, but not the final one. If it remains a web-only codec, that is, with no Android or iOS support for taking photos in JPEG XL, then web media will still be dominated by JPEG.

      • kasabali13 hours ago |parent

        Samsung reportedly supports it: https://cloudinary.com/blog/samsung-now-supports-dng-1-7-inc...

  • lousken3 days ago

    It is absolutely insane that Google has not implemented this yet. They implement all sorts of unimportant stuff but not the most critical image format of this decade, what a joke.

    • theandrewbailey3 days ago |parent

      And the things they do implement, they kill 8 or so years later.

      https://killedbygoogle.com/

      • lxgr2 days ago |parent

        If all goes well (which is anything but guaranteed), JPEG XL will take off sufficiently to make any future deprecation as unthinkable as e.g. deprecating GIF rendering support.

  • _ache_2 days ago

    It's a small step, but a step forward. JXL is on par with AVIF and WebP2 most of the time, but it is much better for sharing photography.

    There is no reason to block its adoption.

  • cgfjtynzdrfht2 days ago

    How quickly things turn. Hard not to support it given Chrome wants to support PDF natively.