Automating Image Compression (ramijames.com)
19 points by ramijames 4 days ago | 9 comments
  • blopker 2 days ago

    This post greatly oversimplifies how many issues come up when optimizing images. Also, the GitHub workflow doesn't commit the optimized images back to git, so it would have to run before packaging and would have to run on every image, not just newly added ones.

    Ideally, images are compressed _before_ getting committed to git. The other issue is that compression can leave images looking broken. Any compressed image should be verified before deploying. Using lossless encoders is safer. However, even then, many optimizers will strip ICC profile data, which will make colors look off or washed out (especially if the source is HDR).

    Finally, use webp. It's supported everywhere and doesn't have all the downsides of png and jpg. It's not worth it to deploy these older formats anymore. JPEG XL is even better, but support will take a while.
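
    As a concrete illustration of the "compress before committing, keep the ICC profile, verify the result" advice above, here is a minimal Python sketch using Pillow. It is only one possible pre-commit step, not the post's workflow; the file path, the lossless-WebP choice, and the round-trip check are assumptions.

        # Sketch: losslessly re-encode an image to WebP before committing,
        # carrying over the ICC profile and sanity-checking the result.
        from pathlib import Path
        from PIL import Image, ImageChops

        def optimize_for_commit(src: Path) -> Path:
            out = src.with_suffix(".webp")
            with Image.open(src) as im:
                icc = im.info.get("icc_profile")  # keep the colour profile if one exists
                im.save(out, "WEBP", lossless=True, icc_profile=icc or b"")

                # verify: a lossless round-trip should be pixel-identical
                with Image.open(out) as back:
                    diff = ImageChops.difference(im.convert("RGBA"), back.convert("RGBA"))
                    if diff.getbbox() is not None:
                        raise ValueError(f"{src} did not survive re-encoding")
            return out

        optimize_for_commit(Path("images/hero.png"))  # hypothetical path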

    Anyway, I made an ImageOptim clone that supports webp encoding a while ago[0]. I usually just chuck any images in there first, then commit them.

    [0]: https://github.com/blopker/alic

    • butvacuum 2 days ago | parent

      There's one thing JPEG has the edge on: true progressive loading.

      If you're clever, you can use fetch requests to render a thumbnail from the actual image by manually parsing the JPEG and stopping after some amount of detail. I'm more than a little surprised that no self-hosted photo solution uses this in any capacity (at least when I last checked).
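
      The same idea sketched in Python rather than a browser fetch call (the URL, the byte range, and the use of Pillow are illustrative assumptions; it only works if the server honours Range requests and the JPEG is actually encoded as progressive):

          # Fetch only the first chunk of a progressive JPEG and decode
          # whatever scans have arrived, as a rough preview/thumbnail.
          import urllib.request
          from io import BytesIO
          from PIL import Image, ImageFile

          ImageFile.LOAD_TRUNCATED_IMAGES = True  # let Pillow decode a partial stream

          req = urllib.request.Request(
              "https://example.com/photo-progressive.jpg",  # placeholder URL
              headers={"Range": "bytes=0-20000"},           # first ~20 KB only
          )
          with urllib.request.urlopen(req) as resp:
              partial = resp.read()

          # Assumes the header and at least one scan fit inside the chunk.
          preview = Image.open(BytesIO(partial))
          preview.load()                    # decodes only the scans that are present
          preview.save("preview.jpg", quality=75)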

      • blopker 18 hours ago | parent

        Google answers this question in the FAQ: https://developers.google.com/speed/webp/faq#does_webp_suppo...

        But in my experience, webp is better enough that the whole file loads around the same time the jpg's progressive loading kicks in. Given that progressive jpgs are larger than non-progressive ones (so it's not a 'free' feature), jpg is just a waste of bandwidth at this point.

    • tatersolid a day ago | parent

      Where do you store high quality original images in case future edits or recompression with better codecs are needed? Generation loss is a thing.

      I view the high-quality originals as “source” and resized+optimized images as “compiled” binaries. You generally want source in Git, not your compiled binaries.

      • blopker 18 hours ago | parent

        As always, it really depends on what the source is. Images are often created with software like Photoshop; would you commit the PSD file to git? If you're a photographer, would you commit 20 MB+ raw image files? It might make sense for a few images, but git is just not the right solution for binary data in general. Every modification has to duplicate the entire file, which makes working with the repo unpleasant very quickly.

        In general, I recommend people back up binary files to cloud storage, like S3, and only commit optimized, deployment-ready assets to git. There's also Git LFS, but it's clunky to use.
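
        For what it's worth, the "originals to object storage, optimized assets to git" split can be a short script. A sketch, assuming boto3, configured AWS credentials, and made-up bucket and paths:

            # Back up heavyweight source files (PSDs, RAWs) to S3 instead of git.
            from pathlib import Path
            import boto3

            s3 = boto3.client("s3")
            BUCKET = "my-image-originals"  # placeholder bucket name

            for src in Path("originals").iterdir():
                if src.suffix.lower() in {".psd", ".cr3", ".dng", ".tif"}:
                    s3.upload_file(str(src), BUCKET, f"sources/{src.name}")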

      • ramijames a day ago | parent

        I like this take. I tend to agree.

  • treavorpasan 2 days ago

    Not trying to diss, but Figma literally has a dropdown to change the image quality, and thus the file size, for PNG.

    https://help.figma.com/hc/en-us/articles/13402894554519-Expo...

    • ramijames 2 days ago | parent

      You're looking at JPGs. PNGs do not have that dropdown.

  • TacticalCoder 2 days ago

    For me, TFA gives the same link for the "Compressed using pngquant - 59,449 bytes" image as for the first one: "test1.png", which is about 191 KiB.

    I think it's a copy/paste error, since replacing the link with test3.png instead of test1.png gives the correct file.

    For the curious, here are the results of compressing both losslessly with WebP:

        195558 test1.png
        102750 test1.webp
    
         59449 test3.png
         38304 test3.webp
    
    P.S.: that picture is called a "test card": https://en.wikipedia.org/wiki/Test_card
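
    (If anyone wants to reproduce the comparison, a rough Pillow equivalent is below; exact byte counts will differ between encoders, and the file names are just the ones from the post.)

        # Rough reproduction of the lossless-WebP size comparison above.
        import os
        from PIL import Image

        for name in ("test1.png", "test3.png"):
            out = name.replace(".png", ".webp")
            with Image.open(name) as im:
                im.save(out, "WEBP", lossless=True)
            print(f"{os.path.getsize(name):>8} {name}")
            print(f"{os.path.getsize(out):>8} {out}")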