“Optimal” size of a JPEG image in terms of its dimensions












I plan to write a script that will scan 100,000+ JPEG images and re-compress them if they are "too big" in terms of file size. Scripting is the easy part, but I am not sure how to categorize an image as being "too big".



For example, there is a 2400x600 px image with a file size of 1.81 MB. Photoshop's Save for Web command produces a 540 KB file at quality 60 and the same dimensions, which is about 29% of the original size.



Now I am thinking about using these numbers as a guideline. Something like 540KB / (2,400 * 600 / 1,000,000) = 375KB per megapixel. Any image larger than this is considered big. Is this the correct approach or is there a better one?
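
For illustration, here is a minimal sketch of that heuristic (Python with Pillow); the 375 KB-per-megapixel threshold is just the figure derived above, not a recommended value:

    import os
    from PIL import Image

    THRESHOLD_KB_PER_MEGAPIXEL = 375  # derived from the 2400x600 example above

    def is_too_big(path):
        # Flag a JPEG whose file size exceeds the chosen KB-per-megapixel budget.
        with Image.open(path) as img:
            width, height = img.size
        megapixels = (width * height) / 1_000_000
        size_kb = os.path.getsize(path) / 1024
        return size_kb > THRESHOLD_KB_PER_MEGAPIXEL * megapixels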



Edit 1: the images need to be optimized for display on websites.



Edit 2: I can determine the desired output quality by experimenting. What I need to know is whether the images are too big in terms of file size relative to their dimensions and therefore need to be re-saved at a lower quality.










asked Jan 9 at 11:19 by Salman A, edited Jan 11 at 6:14
Tags: image-quality, jpeg, file-size




















  • What quality to choose when converting to JPG? – xiota, Jan 9 at 12:00

  • xiota's first comment should be the answer! By the way, what is your priority? If for some reason you just need small files, the quality may suffer sometimes. It is easy to create unreasonably big JPEG files with no perceivable gain in quality. Detecting and recompressing such images is a good idea; simply use the JPEG quality setting, like xiota said. – szulat, Jan 9 at 12:36

  • "Optimal" for what purpose? Even saying 'web usage' is a little broad these days. Are the anticipated viewers going to be looking at the images on a compact phone? A larger smartphone? A pad or tablet? A notebook? A large computer monitor? A 60" 8K TV? A jumbotron? – Michael C, Jan 9 at 14:35

  • Possible duplicate of What quality to choose when converting to JPG? – Hueco, Jan 10 at 16:53

  • If the scripting is the easy part, here's what I'd try in your situation: set a numerically defined limit to how much the compressed image is allowed to differ from the original (e.g. the sum of the luminosity differences of every pixel). Start with a lower quality (like 60), export, and if the difference from the original is too high, export again with a higher quality until your condition is satisfied (you may need to tweak the calculation, e.g. use an exponential scale or something fancier, to get the best result). A sketch of this idea follows the comments. – Pavel, Jan 10 at 17:06
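
A rough sketch of the approach Pavel describes, assuming Python with Pillow; the tolerance value and the quality steps are illustrative assumptions, not tested settings:

    from io import BytesIO
    from PIL import Image, ImageChops, ImageStat

    def recompress_within_tolerance(path, max_mean_diff=2.0):
        # Try increasing quality until the recompressed image differs from the
        # current file by less than max_mean_diff (average per-channel difference).
        original = Image.open(path).convert("RGB")
        for quality in range(60, 96, 5):            # 60, 65, ..., 95
            buffer = BytesIO()
            original.save(buffer, "JPEG", quality=quality)
            buffer.seek(0)
            candidate = Image.open(buffer).convert("RGB")
            diff = ImageChops.difference(original, candidate)
            mean_diff = sum(ImageStat.Stat(diff).mean) / 3  # average over R, G, B
            if mean_diff <= max_mean_diff:
                return buffer.getvalue(), quality   # lowest quality that passes
        return None, None                           # nothing met the tolerance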
















8 Answers
On average, JPEG's sweet spot is around one bit per pixel.



This will of course vary depending on image content, because certain types of graphics (e.g. flat areas and smooth gradients) compress better than others (noise, text), so it's not a robust method to apply blindly to every image.



You also have the problem of not having an uncompressed reference image to compare with, so you don't really know for sure what the current quality of your images is, or how much further you can lower the quality while keeping the result acceptable. The quality can be guessed to some extent from the quantization tables in the JPEGs, but that is not a reliable method either (in particular, ImageMagick's quality estimate is very inaccurate for JPEGs with custom, optimized quantization tables).



Having said that, there is a reasonable practical approach:




  1. Pick a maximum JPEG quality setting you're happy with (somewhere in the 70 to 85 range).

  2. Recompress images to that quality level.

  3. If the recompressed image is smaller by more than ~10%, then keep the recompressed image.


It's important not to keep merely the smaller file, but to require a significant drop in file size instead. Recompressing a JPEG almost always reduces the file size slightly, due to the loss of detail caused by JPEG's lossy nature and the conversion to 8-bit RGB, so a small drop in file size can come with a disproportionately large drop in quality that isn't worth it.
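
A minimal sketch of this workflow, assuming Python with Pillow; quality 80 and the 10% threshold follow the suggestions above, and the temporary file name is illustrative. (From the shell, ImageMagick's convert -quality 80 input.jpg output.jpg would perform the same recompression step.)

    import os
    from PIL import Image

    def recompress_if_worthwhile(path, quality=80, min_saving=0.10):
        # Recompress to the chosen quality and keep the result only if it is
        # smaller than the original by more than min_saving (here ~10%).
        tmp_path = path + ".recompressed.jpg"
        with Image.open(path) as img:
            img.convert("RGB").save(tmp_path, "JPEG", quality=quality)
        old_size = os.path.getsize(path)
        new_size = os.path.getsize(tmp_path)
        if new_size < old_size * (1 - min_saving):
            os.replace(tmp_path, path)   # keep the recompressed version
            return True
        os.remove(tmp_path)              # saving too small to justify the quality loss
        return False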






6 votes – answered Jan 9 at 18:22 by Kornel, edited Jan 12 at 23:27





















  • This is exactly what I did in the end. I used one bit per pixel as a guide to filter out 30,000 images out of 100,000+ and re-compressed them using ImageMagick at 85% quality. If the resulting image was more than 50% smaller, I kept the new one. It worked in my case because the "big images" had been created in Photoshop at 100% quality. The other 70,000+ images were OK-ish in terms of file size, and re-compressing them did not generate enough savings (percentage-wise) or there was a noticeable loss in quality. – Salman A, Jan 14 at 11:20



































The size of files compressed with JPEG varies depending on the complexity of the image. Trying to control file sizes the way you describe will result in highly variable perceived image quality.



Consider the following options instead:




  • The good-enough approach. Use a quality setting that you find acceptable, like 75. Compare the size of the result with the original image, and keep the smaller file (a batch sketch of this follows the list). See What quality to choose when converting to JPG?


  • Use a JPEG minimizer, like JPEGmini or jpeg-recompress from jpeg-archive. They are essentially designed to do what you seem to be trying to do, but with more awareness of JPEG algorithm internals.


  • Generate thumbnails of various sizes, as Nathancahill suggests, from a web-developer perspective.
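
A minimal batch sketch of the good-enough approach, assuming Python with Pillow; quality 75 follows the suggestion above, and replacing files in place is just one possible policy:

    import os
    from PIL import Image

    def recompress_tree(root, quality=75):
        # Walk a directory tree, recompress every JPEG at the chosen quality,
        # and keep whichever file (original or recompressed) is smaller.
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if not name.lower().endswith((".jpg", ".jpeg")):
                    continue
                path = os.path.join(dirpath, name)
                tmp = path + ".tmp.jpg"
                with Image.open(path) as img:
                    img.convert("RGB").save(tmp, "JPEG", quality=quality)
                if os.path.getsize(tmp) < os.path.getsize(path):
                    os.replace(tmp, path)   # recompressed file is smaller: keep it
                else:
                    os.remove(tmp)          # original was already smaller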







28 votes – answered Jan 9 at 12:52 by xiota, edited Jan 11 at 14:50





















  • Or if you want to go "extreme" on the JPEG minimisation, guetzli. Do note the memory and time requirements. – Philip Kendall, Jan 9 at 13:19

  • I tried guetzli, but wasn't very impressed. It's very slow and only reduces sizes by about 20-30%. With jpeg-recompress, files can be reduced 80% with the smallfry algorithm. – xiota, Jan 9 at 13:22

































No. This is a wrong approach.



The image size in pixels does, yes, have something to do with the final file size, but it is not the only factor.



Make a test. Take a completely white file of the same 2400x600px, and save it as JPG.



Now take a photo of a forest (same 2400x600px) with lots of details and save it. This file will be larger using the same compression settings.



The final size depends on these 3 factors:




  • Pixel Size

  • Compression settings

  • Content (Detail and complexity of the image)


So you cannot, and should not, define the expected file size based on pixel dimensions alone.





But I understand your problem.



Without analyzing the current compression of the image, it is hard to define the "optimal" file size (which is relative to the observer, or to the intended usage of the images).

You can probably just define a compression setting and recompress all of them. I don't know whether you want to do that before uploading; recompressing everything will probably cost less time than you would save by trying to skip some of them.

There are some tools that analyze an image and calculate its current compression ratio. But I doubt that is so important.






18 votes – answered Jan 9 at 12:22 by Rafael, edited Jan 9 at 12:37


























  • I understand the part about the white image vs the forest image. Would you suggest that I take a random sample of images, re-save them using Photoshop (70 quality) and use the largest pixel:filesize ratio as a reference? I am guessing those with a lower ratio would be the ones with less detail. – Salman A, Jan 9 at 12:54

  • Regarding your last phrase: the compression ratio is actually roughly what the OP is calculating, since it's jpeg size / raw size, and raw size = pixel size * number of pixels, pixel size being 3 octets for a 24-bit RGB color space. And as you say yourself, this metric is not enough to determine whether an image is sufficiently compressed. – zakinster, Jan 9 at 15:41

  • @SalmanA No, I'd suggest that you drop this approach altogether. JPEGs are as large as they need to be to give the specified quality. Your proposal of seeing how big the largest image in your sample is at 70% quality is just choosing a level of image complexity and saying "Anything more complex than that is too complex and will be degraded." However, if almost all images are smaller than this threshold at 70% quality, what is the problem with having a small number of "too big" files? – David Richerby, Jan 9 at 18:33

  • This seems to correspond to a conclusion I came to when I was considering an approach to determine which of a series of pictures of an identical subject but different resolutions and quality was the "best" (e.g. closest to the original) image. – Michael, Jan 10 at 2:32

































Web developer here. Here's how I'd approach this:



1. Determine the displayed image dimensions and required screen resolutions.



Your first task is determining what pixel sizes the images will be displayed at. Are they product photos in an online store? A photo gallery? User profile photos? Multiple different sizes? Make a list of the pixel dimensions you'll need. Check if you'll need @2x images for high-resolution screens like recent phones and tablets.



2. Use a thumbnail script to create new image files.



These are called thumbnail scripts but can be used for a lot more than just thumbnails. There are many scripts out there or you can write your own. By not resizing the original files, you can do it over if you make a mistake in your script or realize down the road that you need a higher resolution image. A common practice is to specify a suffix in the output filename. For example:



lena.jpg (Original, 2000x3000)
lena-thumb.jpg (100x150)
lena-thumb@2x.jpg (200x300)
lena-product.jpg (400x600)
lena-product@2x.jpg (800x1200)


3. Compress.



The thumbnail script should specify the jpg compression when you cut the new image files. However, there are other minifiers out there that might cut down the file size even further.
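
A small sketch of steps 2 and 3 combined, assuming Python with Pillow; the sizes and the "-thumb"/"@2x" suffix convention mirror the example filenames above, and the actual dimensions should come from your templates:

    from PIL import Image

    SIZES = {
        "thumb": (100, 150),
        "thumb@2x": (200, 300),
        "product": (400, 600),
        "product@2x": (800, 1200),
    }

    def make_variants(path, quality=75):
        # Create resized, recompressed copies next to the untouched original.
        base = path.rsplit(".", 1)[0]
        with Image.open(path) as original:
            for suffix, size in SIZES.items():
                variant = original.copy()
                variant.thumbnail(size)   # resizes in place, preserving aspect ratio
                variant.convert("RGB").save(f"{base}-{suffix}.jpg", "JPEG", quality=quality)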






10 votes
























  • That is how I will handle this in the future: ask the photographers to place high-res originals in a directory, then use a script to generate smaller sizes (various size thumbnails and larger ones for desktop and mobile) and place them under www with URL rewriting. But right now I do not have access to the originals. – Salman A, Jan 11 at 7:17

































While @Rafael's answer has explained the ins and outs of JPEG compression, I will try to address your web and upload problem.

Using an image on a website (for design or content) dictates some requirements: what will the image be used for? Logo, cover photo, thumbnail, photo in a blog post, fullscreen photo for a gallery... Also, if you use it for multiple purposes (e.g. a photo and its gallery thumbnail), you want to produce it in all the sizes required. However, unless you are building your very own website, most web services nowadays will generate the smaller-size images they need from your bigger picture.

Now that you know your image's purpose, the website (or CMS or front-end framework) will always require a maximum size in pixels for your image to comply with. Logos might be 600x600px max, a background cover might be 1280x720px max, a content photo for fullscreen display 1920x1080, or the camera's native resolution for absolute detail conservation. Check the correct size for the website you want to upload to. You want to match at least one of the maximum pixel dimensions required, depending on the aspect ratio you want to achieve. Beware: some services will crop or stretch your image if the aspect ratio is not the same. In that case, you'll have to re-crop your image to fit the required maximum size and ratio.

Then, the website may impose a file size limit (or may not, depending on the image's purpose). Regarding page loading time, the lighter the better. In your example of a high-resolution image at 2400x600px, 300 to 500 kB is a totally fine size for loading time. Content pictures (such as photos) can be heavier if their purpose requires it (e.g. fullscreen display), up to the native resolution of your camera if necessary. If no indication is given, the file size limit can be hard to guess, as it can depend on the audience's equipment (mobile, desktop...), the audience's country and network quality, and so on. For maximum quality and service, treat photos one by one to get the minimum file size without visible artifacts. For convenience or speed of processing, script the resize using an overall satisfactory compression level (around 70 should be fine). Or find an in-between: process your flat-color images together with a high compression level and your heavily detailed images in a second batch with a lower compression level. @xiota's answer might also point to the tool you need. Set your own standard here.

TL;DR: the image's purpose on the website is the key to how much to resize and compress.






6 votes













































What you are calculating is the average compressed size of an image pixel; if you divide that by the raw pixel size (usually 3 octets for 24-bit RGB), you get the compression ratio.

This is a good metric that gives you information about the compression state of the image, but it's not enough to judge whether the image is sufficiently compressed, because the compression ratio does not depend only on the compression profile (algorithm = JPEG, quality = 60/100) but also on the compression potential of the image: different images with the same raw size and the same compression profile will yield different JPEG sizes, because images are more or less easy to compress (a blank image is very easy to compress, white noise is not).

Because of this, and because the "last used" quality profile is not stored in the image (neither in the metadata nor in the JPEG header structure), the most common approach when re-publishing images with a target size/quality profile is actually to just recompress (and potentially resize) everything, automatically, regardless of the initial state of each image.

Yes, you may recompress when it's not necessary; yes, you may even lose space if you recompress with a higher quality profile. But those are edge cases, and at large scale it's the easiest way to ensure a target quality profile. Of course, you only want to do this once, in order not to gradually degrade the images, and you should probably store two image libraries: the initial "untouched" one and the "to be published/recompressed" one.

There are a lot of tools to recompress a bunch of files; you can also script your own, and with the right technical stack (C++ and libjpeg, mostly) it can be pretty damn fast even for >100k files.

If you want to implement a smarter/more complex process, you could try experimenting with an iterative recompress-and-compare-size logic to estimate the original quality profile (re-compressing with the same quality should yield roughly the same size, a higher quality should slightly increase the size, and a lower quality should significantly decrease the size). This would, of course, consume a lot more CPU power.
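
A hedged sketch of that iterative idea, assuming Python with Pillow; the quality range, the step, and the "closest file size" criterion are illustrative assumptions:

    import os
    from io import BytesIO
    from PIL import Image

    def estimate_quality(path):
        # Recompress at several quality settings and report the one whose
        # output size is closest to the existing file's size.
        current_size = os.path.getsize(path)
        with Image.open(path) as img:
            rgb = img.convert("RGB")
            best_quality, best_gap = None, None
            for quality in range(30, 96, 5):
                buffer = BytesIO()
                rgb.save(buffer, "JPEG", quality=quality)
                gap = abs(buffer.tell() - current_size)
                if best_gap is None or gap < best_gap:
                    best_quality, best_gap = quality, gap
        return best_quality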






3 votes


























  • JPG images usually sub-sample the chroma with 4:2:2 or 4:2:0 (en.wikipedia.org/wiki/Chroma_subsampling#4:2:2), so the "raw" data that JPG is compressing has 2x or 4x as many luma pixels as each chroma channel (halved horizontally and maybe also vertically). You might want to take that into account when considering "how compressed" an image is. But yeah, as you say, that's not a great metric across unknown image contents. – Peter Cordes, Jan 10 at 3:40

  • +1 for rescaling. At some point, you get better image quality by downscaling than by reducing the bits per pixel even further. Unlike modern video codecs like h.264 or h.265 (which can signal the decoder to do more smoothing and deblocking) or the still-image version, HEIF, which is an HEVC (h.265) I-frame, JPEG doesn't have any of that and will just get blocky with lots of ringing artifacts if you starve it of bits. So you need to downscale instead of just reducing quality if you have very high resolution input images. – Peter Cordes, Jan 10 at 3:44



































    For example there is a 2400x600px image with a file size of 1.81MB.
    Photoshop's save for web command creates a 540KB file at 60 quality and same dimensions.
    This is about 29% of original size.

The original uncompressed size is 2400 x 600 x 3 = 4,320,000 bytes (4.1 MB), because 24-bit color is always three bytes of RGB data per pixel. There is no way around this absolute truth.

However, JPG size also depends on image detail. Large smooth areas (like sky or painted walls) compress better, but areas of greater detail (like a tree full of leaves) do not compress as well. So there is no absolute numerical indicator.

But 540 KB is 0.540/4.1 = 13% of the 4.1 MB original size. It might be 29% of the previous JPG size, but it is 13% of the original uncompressed size. That is roughly 1/8 of the original uncompressed size, which is usually considered "decent" quality. Not optimum, not maximum quality, but generally decent, perhaps good enough for some uses. Just saying, it is already small.

A larger JPG file means better image quality, and a smaller one means lower image quality. You have to decide what is good enough, but a JPG is never "too big", since image quality only decreases with JPG compression. 24-bit color has three bytes per pixel uncompressed.

So the decision is whether you want it small or you want it good.

But re-saving an existing JPG at a larger size is still worse: more JPG artifacts are added, and once it has been made small, the data has changed and it will never get better.

JPG artifacts typically show up in two ways: as visible 8x8-pixel blocks of one color in smooth areas with no detail, or as visible rough edges around detail edges.

If you edit and re-save a JPG, additional JPG artifacts are added. If that is required, it is good practice to re-save with a setting that matches the original compression.






2 votes
























  • The 4.1 MB number is only true if there is no compression at all; however, even a JPEG with perfect quality can have a smaller file size due to lossless compression. – Marv, Jan 9 at 15:35

  • Yes, that is why I called it "uncompressed", which is how every digital image starts out, and which is of course the actual and original size of the data, which is why it is important. Yes, even the highest JPG level (100) is compressed substantially smaller, and is not lossless. Lossless JPG is a misnomer; we have no programs offering it. Its uses call it something else (Wikipedia says DNG and some Raw). However, JPEG2 can offer lossless compression, but it has other issues; for example, web browsers do not support showing JPEG2, and photo print shops likely do not accept it. – WayneF, Jan 9 at 17:57

  • There is no way around this absolute truth... except for chroma sub-sampling, which JPEG uses. JPEG compresses in the YUV colorspace (brightness plus two colour components), not RGB, usually 4:2:2 or 4:2:0, reducing the number of pixels in each of the two chroma channels by 2x or 4x (en.wikipedia.org/wiki/Chroma_subsampling#4:2:2). After transforming from RGB to YUV and sub-sampling, that color resolution information is totally gone and is not part of what JPEG spends bits to encode. If you want to look at bits/pixel, it should be in the color format of the JPEG you're considering. – Peter Cordes, Jan 10 at 3:50

  • Come on, read the text. The second absolute truth is that it specifically said and referred to "uncompressed" and said that 24-bit color was always three bytes per pixel. :) – WayneF, Jan 10 at 7:15

































    Photoshop's "Save for Web" is actually a pretty good compromise between file size and quality, so unless you have more specific requirements, you should go with that. A typical advice for web developers is to stick to 50-70% quality range. Of course, there are exceptions: you will want 90-95% quality on a company logo which has to look great at all times (or even convert it to a lossless format), and go as low as 30% on a large but barely visible page background.



    Also don't forget to rescale your images. A 2400x600 picture will look great on a 4K display, but will be rescaled on smaller screens, wasting data bandwidth with no visual improvement for the user. Check the website template you will be using to find out the optimal width for the images. Typically, at the time of writing that will be somewhere around 1200-1300 pixels (see most popular resolution here).



    Remember to keep the originals of pictures you convert to Web quality. If you'll ever need to rework or print this material, you'll regret to only have it in 60% quality and 1 Mpix resolution.






0 votes

























      Your Answer








      StackExchange.ready(function() {
      var channelOptions = {
      tags: "".split(" "),
      id: "61"
      };
      initTagRenderer("".split(" "), "".split(" "), channelOptions);

      StackExchange.using("externalEditor", function() {
      // Have to fire editor after snippets, if snippets enabled
      if (StackExchange.settings.snippets.snippetsEnabled) {
      StackExchange.using("snippets", function() {
      createEditor();
      });
      }
      else {
      createEditor();
      }
      });

      function createEditor() {
      StackExchange.prepareEditor({
      heartbeatType: 'answer',
      autoActivateHeartbeat: false,
      convertImagesToLinks: false,
      noModals: true,
      showLowRepImageUploadWarning: true,
      reputationToPostImages: null,
      bindNavPrevention: true,
      postfix: "",
      imageUploader: {
      brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
      contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
      allowUrls: true
      },
      noCode: true, onDemand: true,
      discardSelector: ".discard-answer"
      ,immediatelyShowMarkdownHelp:true
      });


      }
      });














      draft saved

      draft discarded


















      StackExchange.ready(
      function () {
      StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fphoto.stackexchange.com%2fquestions%2f104118%2foptimal-size-of-a-jpeg-image-in-terms-of-its-dimensions%23new-answer', 'question_page');
      }
      );

      Post as a guest















      Required, but never shown

























      8 Answers
      8






      active

      oldest

      votes








      8 Answers
      8






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      6














      On average, JPEG's sweet spot is around one bit per pixel.



      This will of course vary depending on image content, because certain types of graphics (e.g. flat areas and smooth gradients) compress better than others (noise, text), so it's not a robust method to apply blindly to every image.



      You also have a problem of not having an uncompressed reference image to compare with, so you don't really know for sure what's the current quality of the images you have, and how much more you can lower the quality to be still acceptable. The quality can be guessed to a certain extent from quantization tables in JPEGs, but it's not a reliable method either (specifically, ImageMagick's quality judgement is very incorrect for JPEGs with custom, optimized quantization tables).



      Having said that, there is a reasonable practical approach:




      1. Pick a maximum JPEG quality setting you're happy with (somewhere in 70 to 85 range).

      2. Recompress images to that quality level.

      3. If the recompressed image is smaller by more than ~10%, then keep the recompressed image.


      It's important not to pick merely the smaller file size, and require a significant drop in file size instead. That's because recompression of JPEG tends to always drop the file size slighly due to loss of detail caused by lossy nature of JPEG and conversion to 8-bit RGB, so small drops in file size can have disproportionally large drop in quality that isn't worth it.






      share|improve this answer





















      • 3





        This is exactly what I did in the end. I used the one bit per pixel as a guide to filter out 30,000 images out of 100,000+ and re-compressed them using imagemagick with 85% quality. If the resulting image was more than 50% smaller then I kept the new one. It worked in my case because the "big images" were created using Photoshop using 100% quality. The other 70,000+ images were OK-ish in terms of filesize and re-compressing them did not generate enough savings (percentage wise) or there was a noticeable loss in quality.

        – Salman A
        Jan 14 at 11:20


















      6














      On average, JPEG's sweet spot is around one bit per pixel.



      This will of course vary depending on image content, because certain types of graphics (e.g. flat areas and smooth gradients) compress better than others (noise, text), so it's not a robust method to apply blindly to every image.



      You also have a problem of not having an uncompressed reference image to compare with, so you don't really know for sure what's the current quality of the images you have, and how much more you can lower the quality to be still acceptable. The quality can be guessed to a certain extent from quantization tables in JPEGs, but it's not a reliable method either (specifically, ImageMagick's quality judgement is very incorrect for JPEGs with custom, optimized quantization tables).



      Having said that, there is a reasonable practical approach:




      1. Pick a maximum JPEG quality setting you're happy with (somewhere in 70 to 85 range).

      2. Recompress images to that quality level.

      3. If the recompressed image is smaller by more than ~10%, then keep the recompressed image.


      It's important not to pick merely the smaller file size, and require a significant drop in file size instead. That's because recompression of JPEG tends to always drop the file size slighly due to loss of detail caused by lossy nature of JPEG and conversion to 8-bit RGB, so small drops in file size can have disproportionally large drop in quality that isn't worth it.






      share|improve this answer





















      • 3





        This is exactly what I did in the end. I used the one bit per pixel as a guide to filter out 30,000 images out of 100,000+ and re-compressed them using imagemagick with 85% quality. If the resulting image was more than 50% smaller then I kept the new one. It worked in my case because the "big images" were created using Photoshop using 100% quality. The other 70,000+ images were OK-ish in terms of filesize and re-compressing them did not generate enough savings (percentage wise) or there was a noticeable loss in quality.

        – Salman A
        Jan 14 at 11:20
















      6












      6








      6







      On average, JPEG's sweet spot is around one bit per pixel.



      This will of course vary depending on image content, because certain types of graphics (e.g. flat areas and smooth gradients) compress better than others (noise, text), so it's not a robust method to apply blindly to every image.



      You also have a problem of not having an uncompressed reference image to compare with, so you don't really know for sure what's the current quality of the images you have, and how much more you can lower the quality to be still acceptable. The quality can be guessed to a certain extent from quantization tables in JPEGs, but it's not a reliable method either (specifically, ImageMagick's quality judgement is very incorrect for JPEGs with custom, optimized quantization tables).



      Having said that, there is a reasonable practical approach:




      1. Pick a maximum JPEG quality setting you're happy with (somewhere in 70 to 85 range).

      2. Recompress images to that quality level.

      3. If the recompressed image is smaller by more than ~10%, then keep the recompressed image.


      It's important not to pick merely the smaller file size, and require a significant drop in file size instead. That's because recompression of JPEG tends to always drop the file size slighly due to loss of detail caused by lossy nature of JPEG and conversion to 8-bit RGB, so small drops in file size can have disproportionally large drop in quality that isn't worth it.






      share|improve this answer















      On average, JPEG's sweet spot is around one bit per pixel.



      This will of course vary depending on image content, because certain types of graphics (e.g. flat areas and smooth gradients) compress better than others (noise, text), so it's not a robust method to apply blindly to every image.



      You also have a problem of not having an uncompressed reference image to compare with, so you don't really know for sure what's the current quality of the images you have, and how much more you can lower the quality to be still acceptable. The quality can be guessed to a certain extent from quantization tables in JPEGs, but it's not a reliable method either (specifically, ImageMagick's quality judgement is very incorrect for JPEGs with custom, optimized quantization tables).



      Having said that, there is a reasonable practical approach:




      1. Pick a maximum JPEG quality setting you're happy with (somewhere in 70 to 85 range).

      2. Recompress images to that quality level.

      3. If the recompressed image is smaller by more than ~10%, then keep the recompressed image.


      It's important not to pick merely the smaller file size, and require a significant drop in file size instead. That's because recompression of JPEG tends to always drop the file size slighly due to loss of detail caused by lossy nature of JPEG and conversion to 8-bit RGB, so small drops in file size can have disproportionally large drop in quality that isn't worth it.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Jan 12 at 23:27

























      answered Jan 9 at 18:22









      KornelKornel

      1764




      1764








      • 3





        This is exactly what I did in the end. I used the one bit per pixel as a guide to filter out 30,000 images out of 100,000+ and re-compressed them using imagemagick with 85% quality. If the resulting image was more than 50% smaller then I kept the new one. It worked in my case because the "big images" were created using Photoshop using 100% quality. The other 70,000+ images were OK-ish in terms of filesize and re-compressing them did not generate enough savings (percentage wise) or there was a noticeable loss in quality.

        – Salman A
        Jan 14 at 11:20
















      • 3





        This is exactly what I did in the end. I used the one bit per pixel as a guide to filter out 30,000 images out of 100,000+ and re-compressed them using imagemagick with 85% quality. If the resulting image was more than 50% smaller then I kept the new one. It worked in my case because the "big images" were created using Photoshop using 100% quality. The other 70,000+ images were OK-ish in terms of filesize and re-compressing them did not generate enough savings (percentage wise) or there was a noticeable loss in quality.

        – Salman A
        Jan 14 at 11:20










      3




      3





      This is exactly what I did in the end. I used the one bit per pixel as a guide to filter out 30,000 images out of 100,000+ and re-compressed them using imagemagick with 85% quality. If the resulting image was more than 50% smaller then I kept the new one. It worked in my case because the "big images" were created using Photoshop using 100% quality. The other 70,000+ images were OK-ish in terms of filesize and re-compressing them did not generate enough savings (percentage wise) or there was a noticeable loss in quality.

      – Salman A
      Jan 14 at 11:20







      This is exactly what I did in the end. I used the one bit per pixel as a guide to filter out 30,000 images out of 100,000+ and re-compressed them using imagemagick with 85% quality. If the resulting image was more than 50% smaller then I kept the new one. It worked in my case because the "big images" were created using Photoshop using 100% quality. The other 70,000+ images were OK-ish in terms of filesize and re-compressing them did not generate enough savings (percentage wise) or there was a noticeable loss in quality.

      – Salman A
      Jan 14 at 11:20















      28














      The size of files compressed with JPEG vary depending on the complexity of the image. Trying the control the file sizes the way you describe will result in highly variable perceived image quality.



      Consider the following options instead:




      • The good-enough approach. Use a quality setting that you find acceptable, like 75. Compare the size of the result with the original image, and keep the smaller file. See What quality to choose when converting to JPG?


      • Use a JPEG minimizer, like JPEGmini or jpeg-recompress from jpeg-archive. They are essentially designed to do what you seem to be trying to do, but with more awareness of JPEG algorithm internals.


      • Generate thumbnails of various sizes, as Nathancahill suggests, from a web-developer perspective.







      share|improve this answer





















      • 7





        Or if you want to go "extreme" on the JPEG minimisation, guetzli. Do note the memory and time requirements.

        – Philip Kendall
        Jan 9 at 13:19






      • 2





        I tried guetzli, but wasn't very impressed. It's very slow and only reduces sizes by about 20-30%. With jpeg-recompress, files can be reduced 80% with the smallfry algorithm.

        – xiota
        Jan 9 at 13:22
















      28














      The size of files compressed with JPEG vary depending on the complexity of the image. Trying the control the file sizes the way you describe will result in highly variable perceived image quality.



      Consider the following options instead:




      • The good-enough approach. Use a quality setting that you find acceptable, like 75. Compare the size of the result with the original image, and keep the smaller file. See What quality to choose when converting to JPG?


      • Use a JPEG minimizer, like JPEGmini or jpeg-recompress from jpeg-archive. They are essentially designed to do what you seem to be trying to do, but with more awareness of JPEG algorithm internals.


      • Generate thumbnails of various sizes, as Nathancahill suggests, from a web-developer perspective.







      share|improve this answer





















      • 7





        Or if you want to go "extreme" on the JPEG minimisation, guetzli. Do note the memory and time requirements.

        – Philip Kendall
        Jan 9 at 13:19






      • 2





        I tried guetzli, but wasn't very impressed. It's very slow and only reduces sizes by about 20-30%. With jpeg-recompress, files can be reduced 80% with the smallfry algorithm.

        – xiota
        Jan 9 at 13:22














      28












      28








      28







      The size of files compressed with JPEG vary depending on the complexity of the image. Trying the control the file sizes the way you describe will result in highly variable perceived image quality.



      Consider the following options instead:




      • The good-enough approach. Use a quality setting that you find acceptable, like 75. Compare the size of the result with the original image, and keep the smaller file. See What quality to choose when converting to JPG?


      • Use a JPEG minimizer, like JPEGmini or jpeg-recompress from jpeg-archive. They are essentially designed to do what you seem to be trying to do, but with more awareness of JPEG algorithm internals.


      • Generate thumbnails of various sizes, as Nathancahill suggests, from a web-developer perspective.







      share|improve this answer















      The size of files compressed with JPEG vary depending on the complexity of the image. Trying the control the file sizes the way you describe will result in highly variable perceived image quality.



      Consider the following options instead:




      • The good-enough approach. Use a quality setting that you find acceptable, like 75. Compare the size of the result with the original image, and keep the smaller file. See What quality to choose when converting to JPG?


      • Use a JPEG minimizer, like JPEGmini or jpeg-recompress from jpeg-archive. They are essentially designed to do what you seem to be trying to do, but with more awareness of JPEG algorithm internals.


      • Generate thumbnails of various sizes, as Nathancahill suggests, from a web-developer perspective.








      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Jan 11 at 14:50

























      answered Jan 9 at 12:52









      xiotaxiota

      9,59631653




      9,59631653








      • 7





        Or if you want to go "extreme" on the JPEG minimisation, guetzli. Do note the memory and time requirements.

        – Philip Kendall
        Jan 9 at 13:19






      • 2





        I tried guetzli, but wasn't very impressed. It's very slow and only reduces sizes by about 20-30%. With jpeg-recompress, files can be reduced 80% with the smallfry algorithm.

        – xiota
        Jan 9 at 13:22














      • 7





        Or if you want to go "extreme" on the JPEG minimisation, guetzli. Do note the memory and time requirements.

        – Philip Kendall
        Jan 9 at 13:19






      • 2





        I tried guetzli, but wasn't very impressed. It's very slow and only reduces sizes by about 20-30%. With jpeg-recompress, files can be reduced 80% with the smallfry algorithm.

        – xiota
        Jan 9 at 13:22








      7




      7





      Or if you want to go "extreme" on the JPEG minimisation, guetzli. Do note the memory and time requirements.

      – Philip Kendall
      Jan 9 at 13:19





      Or if you want to go "extreme" on the JPEG minimisation, guetzli. Do note the memory and time requirements.

      – Philip Kendall
      Jan 9 at 13:19




      2




      2





      I tried guetzli, but wasn't very impressed. It's very slow and only reduces sizes by about 20-30%. With jpeg-recompress, files can be reduced 80% with the smallfry algorithm.

      – xiota
      Jan 9 at 13:22





      I tried guetzli, but wasn't very impressed. It's very slow and only reduces sizes by about 20-30%. With jpeg-recompress, files can be reduced 80% with the smallfry algorithm.

      – xiota
      Jan 9 at 13:22











      18














      No. This is a wrong approach.



      File size in pixels, yes, has something to do with the final weight, but it is not the only factor.



      Make a test. Take a completely white file of the same 2400x600px, and save it as JPG.



      Now take a photo of a forest (same 2400x600px) with lots of details and save it. This file will be larger using the same compression settings.



      The final size depends on these 3 factors:




      • Pixel Size

      • Compression settings

      • Content (Detail and complexity of the image)


      So you can not and should not define the weight based on pixel size.





      But I understand your problem.



      Without analyzing the current compression of the image, it is hard to define the "optimal" weight (which is relative to the observer, or usage of the images)



      You probably can define a compression setting and recompress "all of them". I don't know if you want to do that before "uploading", which probably will save you more time than the saved skipping some of them.



      There are some tools that analyze an image and calculates the current compression ratio. But I doubt it is that important.






      share|improve this answer


























      • I understand the part about white image vs forest image. Would you suggest that I take a random sample of images, re-save them using photoshop (70 quality) and use the largest pixel:filesize ratio as reference? I am guessing those with lower ratio would be those with less detail.

        – Salman A
        Jan 9 at 12:54













      • Regarding your last phrase. The compression ratio is actually roughly what OP is calculating since it's jpeg size / raw size and raw size = pixel size * number of pixel, pixel size being 3 octets for a 24bit RGB color space. And as you say yourself, this metric is not enough to determine if an image is sufficiently compressed.

        – zakinster
        Jan 9 at 15:41








      • 9





        @SalmanA No, I'd suggest that you drop this approach altogether. JPEGs are as large as they need to be to give the specified quality. Your proposal of seeing how big the largest image in your sample is at 70% quality is just choosing a level of image complexity and saying "Anything more complex than that is too complex and will be degraded." However, if almost all images are smaller than this threshold at 70% quality, what is the problem with having a small number of "too big" files?

        – David Richerby
        Jan 9 at 18:33











      • This seems to correspond to a conclusion I came to when I was considering an approach to determine which of a series of pictures of an identical subject but different resolutions and quality was the "best" (e.g. closest to the original) image.

        – Michael
        Jan 10 at 2:32
















      18














      No. This is a wrong approach.



      File size in pixels, yes, has something to do with the final weight, but it is not the only factor.



      Make a test. Take a completely white file of the same 2400x600px, and save it as JPG.



      Now take a photo of a forest (same 2400x600px) with lots of details and save it. This file will be larger using the same compression settings.



      The final size depends on these 3 factors:




      • Pixel Size

      • Compression settings

      • Content (Detail and complexity of the image)


      So you can not and should not define the weight based on pixel size.





      But I understand your problem.



      Without analyzing the current compression of the image, it is hard to define the "optimal" weight (which is relative to the observer, or usage of the images)



      You probably can define a compression setting and recompress "all of them". I don't know if you want to do that before "uploading", which probably will save you more time than the saved skipping some of them.



      There are some tools that analyze an image and calculates the current compression ratio. But I doubt it is that important.






      share|improve this answer


























      • I understand the part about white image vs forest image. Would you suggest that I take a random sample of images, re-save them using photoshop (70 quality) and use the largest pixel:filesize ratio as reference? I am guessing those with lower ratio would be those with less detail.

        – Salman A
        Jan 9 at 12:54













      • Regarding your last phrase. The compression ratio is actually roughly what OP is calculating since it's jpeg size / raw size and raw size = pixel size * number of pixel, pixel size being 3 octets for a 24bit RGB color space. And as you say yourself, this metric is not enough to determine if an image is sufficiently compressed.

        – zakinster
        Jan 9 at 15:41








      • 9





        @SalmanA No, I'd suggest that you drop this approach altogether. JPEGs are as large as they need to be to give the specified quality. Your proposal of seeing how big the largest image in your sample is at 70% quality is just choosing a level of image complexity and saying "Anything more complex than that is too complex and will be degraded." However, if almost all images are smaller than this threshold at 70% quality, what is the problem with having a small number of "too big" files?

        – David Richerby
        Jan 9 at 18:33











      • This seems to correspond to a conclusion I came to when I was considering an approach to determine which of a series of pictures of an identical subject but different resolutions and quality was the "best" (e.g. closest to the original) image.

        – Michael
        Jan 10 at 2:32














      10














Web developer here. Here's how I'd approach this:

1. Determine the displayed image dimensions and required screen resolutions.

Your first task is determining what pixel sizes the images will be displayed at. Are they product photos in an online store? A photo gallery? User profile photos? Multiple different sizes? Make a list of the pixel dimensions you'll need. Check whether you'll need @2x images for high-resolution screens like recent phones and tablets.

2. Use a thumbnail script to create new image files.

These are called thumbnail scripts but can be used for a lot more than just thumbnails. There are many scripts out there, or you can write your own. By not resizing the original files, you can do it over if you make a mistake in your script or realize down the road that you need a higher-resolution image. A common practice is to specify a suffix in the output filename. For example:

lena.jpg (Original, 2000x3000)
lena-thumb.jpg (100x150)
lena-thumb@2x.jpg (200x300)
lena-product.jpg (400x600)
lena-product@2x.jpg (800x1200)

3. Compress.

The thumbnail script should specify the JPEG compression when it cuts the new image files. However, there are other minifiers out there that might cut the file size down even further.

– nathancahill (answered Jan 9 at 23:30)
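As a rough illustration of such a thumbnail script (my sketch, not the answer's own code; it assumes Python with Pillow, and the suffixes and pixel sizes are just the placeholder values from the list above):

from pathlib import Path
from PIL import Image

# (suffix, max width, max height) for each derived size.
VARIANTS = [
    ("thumb", 100, 150),
    ("thumb@2x", 200, 300),
    ("product", 400, 600),
    ("product@2x", 800, 1200),
]

def make_variants(original, quality=70):
    """Write resized JPEG copies next to the original, leaving it untouched."""
    original = Path(original)
    img = Image.open(original).convert("RGB")
    for suffix, max_w, max_h in VARIANTS:
        copy = img.copy()
        copy.thumbnail((max_w, max_h))  # keeps aspect ratio, never upscales
        out = original.with_name(f"{original.stem}-{suffix}{original.suffix}")
        copy.save(out, format="JPEG", quality=quality, optimize=True)

make_variants("lena.jpg")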






      • That is how I will handle this in the future: ask the photographers to place high-res originals in a directory, then use a script to generate smaller sizes (various size thumbnails and larger ones for desktop and mobile) and place them under www with URL rewriting. But right now I do not have access to the originals.

        – Salman A
        Jan 11 at 7:17
















      6














While @Rafael's answer has explained the ins and outs of JPEG compression, I will try to address the web and upload side of your problem.

Using an image on a website (for design or content) dictates some constraints: what will the image be used for? A logo, a cover photo, a thumbnail, a photo in a blog post, a fullscreen photo in a gallery? Also, if you use it for multiple purposes (e.g. a photo and its gallery thumbnail), you will want to produce it in all the required sizes. That said, unless you are building your very own website, most web services nowadays will generate the smaller in-site images from your larger picture for you.

Once you know the image's purpose, the website (or CMS, or front-end framework) will usually specify a maximum size in pixels for your image to comply with. Logos might be 600x600px max, a background cover might be 1280x720px max, a content photo for fullscreen display 1920x1080, or camera-native resolution for absolute detail preservation. Check the correct sizes for the website you want to upload to.
You want to match at least one of the maximum pixel sizes required, depending on the aspect ratio you are aiming for. Beware: some services will crop or stretch your image if the aspect ratio does not match. In that case you will have to recrop your image to fit the required maximum size and ratio.

Then, the website may impose a file size limit (or may not, depending on the image's purpose). For page loading time, the lighter the better. In your example of a high-resolution image at 2400x600px, 300 to 500 kB is a perfectly fine size for loading time. Content pictures (such as photos) can be heavier if their purpose requires it (e.g. fullscreen display), up to your camera's native resolution if necessary.
If no limit is given, the right file size can be hard to guess, since it depends on the audience's equipment (mobile, desktop...) and on network quality in the audience's country.
For maximum quality, treat photos one by one to get the minimum file size without visible artifacts. For convenience or speed, script the resizing with an overall satisfactory compression level (around 70 should be fine). Or find a middle ground: process your flat-color images in one batch with a high compression level and your heavily detailed images in a second batch with a lower compression level. @xiota's answer might also be the tool you need. Set your own standard here.

TL;DR: the image's purpose on the website is what determines how much to resize and compress.

– jihems (answered Jan 9 at 17:01)
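A minimal sketch of that "fit the site's maximum pixel size, then save at a single overall quality" approach (my illustration, assuming Python with Pillow; the 1920x1080 maximum and the file names are just example values):

from PIL import Image

def fit_for_site(src, dst, max_size=(1920, 1080), quality=70):
    """Shrink src to fit within max_size (never upscale) and save as JPEG."""
    img = Image.open(src).convert("RGB")
    img.thumbnail(max_size)  # in-place; preserves aspect ratio
    img.save(dst, format="JPEG", quality=quality, optimize=True)

fit_for_site("cover-original.jpg", "cover-1920.jpg")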






              3














What you are calculating is the average compressed size of a pixel; divide that by the raw pixel size (usually 3 octets for 24-bit RGB) and you get the compression ratio.

This is a useful metric that tells you something about the compression state of the image, but it is not enough to judge whether the image is sufficiently compressed, because the compression ratio depends not only on the compression profile (algorithm = JPEG, quality = 60/100) but also on the compression potential of the image: different images with the same raw size and the same compression profile will yield different JPEG sizes, because some images are easier to compress than others (a blank image is very easy to compress, white noise is not).

Because of this, and because the last-used quality profile is not stored in the image (neither in the metadata nor in the JPEG header structure), the most common approach when re-publishing images with a target size/quality profile is simply to recompress (and potentially resize) everything automatically, regardless of the initial state of each image.

Yes, you may recompress when it is not necessary, and yes, you may even lose space if you recompress with a higher quality profile, but those are edge cases, and at large scale it is the easiest way to guarantee a target quality profile. Of course, you only want to do this once, so as not to gradually degrade the images, and you should probably keep two image libraries: the initial "untouched" one and the "to be published/recompressed" one.

There are plenty of existing tools to recompress a batch of files; you can also script your own, and with the right technical stack (mostly C++ and libjpeg) it can be very fast even for >100k files.

If you want a smarter, more complex process, you could experiment with an iterative recompress-and-compare-size logic to estimate the original quality profile (re-compressing at the same quality should yield roughly the same size, a higher quality should slightly increase the size, and a lower quality should significantly decrease the size). This would, of course, consume a lot more CPU power.

– zakinster (answered Jan 9 at 15:55, edited Jan 9 at 16:17)
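A rough sketch of that iterative idea (my own illustration, assuming Python with Pillow; the 20% threshold is an arbitrary example value): re-encode at the target quality and only rewrite files where the saving is worthwhile.

import io
import os

from PIL import Image

def recompress_would_help(path, quality=60, min_saving=0.20):
    """Return True if re-encoding at the target quality shrinks the file
    by at least min_saving (20% by default, an arbitrary threshold)."""
    original_bytes = os.path.getsize(path)
    img = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality, optimize=True)
    return buf.getbuffer().nbytes <= original_bytes * (1.0 - min_saving)

# Example loop: only rewrite files where recompression buys a real saving.
# for path in jpeg_paths:
#     if recompress_would_help(path):
#         ...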






              • JPG images usually sub-sample the chroma with 4:2:2 or 4:2:0 (en.wikipedia.org/wiki/Chroma_subsampling#4:2:2), so the "raw" pixels that JPG is compressing has 2x or 4x as many luma pixels as each chroma channel. (Halved horizontally and maybe also vertically). You might want to take that into account when considering "how compressed" an image is. But yeah, as you say that's not a great metric across unknown image contents.

                – Peter Cordes
                Jan 10 at 3:40











              • +1 for rescaling. At some point, you get better image quality by downscaling than by reducing the bits per pixel even further. Unlike modern video codecs like h.264 or h.265 (which can signal the decoder to do more smoothing and deblocking) or the still-image version, HEIF, which is an HEVC(h.265) I-frame, JPEG doesn't have any of that and will just get blocky with lots of ringing artifacts if you starve it of bits. So you need to downscale instead of just reducing quality if you have very high resolution input images.

                – Peter Cordes
                Jan 10 at 3:44
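To make the subsampling arithmetic in the first comment concrete, a quick back-of-the-envelope of my own (assuming 4:2:0 and the 2400x600 example used in the answers):

w, h = 2400, 600
rgb_raw = w * h * 3                            # 4,320,000 bytes of 24-bit RGB
yuv420_raw = w * h + 2 * (w // 2) * (h // 2)   # luma plane + 2 half-resolution chroma planes
print(rgb_raw, yuv420_raw, yuv420_raw / rgb_raw)   # 4320000 2160000 0.5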


















              2














For example there is a 2400x600px image with a file size of 1.81MB.
Photoshop's save for web command creates a 540KB file at 60 quality and same dimensions.
This is about 29% of original size.

The original uncompressed size is 2400 x 600 x 3 = 4,320,000 bytes (4.1 MB), because 24-bit color is always three bytes of RGB data per pixel. There is no way around this absolute truth.

However, JPG size also depends on image detail. Large smooth areas (like sky or painted walls) compress better, but areas of greater detail (like a tree full of leaves) do not compress as well. So there is no absolute numerical indicator.

But 540 KB is 0.540/4.1 = 13% of the 4.1 MB original size. It might be 29% of the previous JPG size, but it is 13% of the original uncompressed size. That is about 1/8 of the original uncompressed size, which is usually considered "decent" quality: not optimum, not maximum quality, but generally decent, perhaps good enough for some uses. Just saying, it is already small.

A larger JPG file means better image quality, and a smaller one means lower image quality. You have to decide what is good enough, but a JPG is never "too big", since image quality only decreases with JPG compression.

So the decision is whether you want it small or whether you want it good.

But re-saving an existing JPG at a higher quality (making the file larger again) does not help either: more JPG artifacts are added, and once the data has been compressed small and changed, it will never get better.

JPG artifacts typically show up in two ways: as visible 8x8-pixel blocks of one color in smooth areas with no detail, or as visible rough edges around detail edges.

If you edit and re-save a JPG, additional JPG artifacts are added. If that is required, it is good practice to re-save at a setting that matches the original compression.

– WayneF (answered Jan 9 at 15:18)
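A small sketch of that arithmetic as a script (my illustration, assuming Python with Pillow; the file name is hypothetical): compare a JPEG's on-disk size with its uncompressed 24-bit RGB size to see roughly how aggressively it has already been compressed.

import os
from PIL import Image

def compression_stats(path):
    with Image.open(path) as img:
        w, h = img.size
    uncompressed = w * h * 3              # 3 bytes per pixel for 24-bit RGB
    compressed = os.path.getsize(path)
    print(f"{path}: {w}x{h}")
    print(f"  uncompressed: {uncompressed:,} bytes")
    print(f"  on disk:      {compressed:,} bytes "
          f"({compressed / uncompressed:.1%} of uncompressed)")

compression_stats("example-2400x600.jpg")  # hypothetical file name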






              • The 4.1 MB number is only true if there is no compression at all, however, even a JPEG with perfect quality can have a smaller file size due to lossless compression.

                – Marv
                Jan 9 at 15:35













              • Yes, that is why I called it "uncompressed", which is how every digital image starts out, which is of course the actual and original size of the data, which is why it is important. Yes, even highest level JPG 100 is compressed substantially smaller, not lossless. Lossless JPG is a misnomer. We have no programs offering it. Its uses call it something else (Wikipedia says DNG and some Raw). However JPEG2 can offer lossless compression, but which has other issues, for example web browsers do not support showing JPEG2, and photo print shops likely do not accept it.

                – WayneF
                Jan 9 at 17:57













              • There is no way around this absolute truth. ... except for chroma sub-sampling, which JPEG uses. JPEG compresses in YUV colorspace (brightness + two colour components), not RGB. Usually 4:2:2 or 4:2:0, reducing the number of pixels in each of the two chroma channels by 2x or 4x. en.wikipedia.org/wiki/Chroma_subsampling#4:2:2. After transforming from RGB to YUV and sub-sampling, that color resolution information is totally gone, and not part of what JPEG is spending bits to encode. If you want to look at bits/pixel, it should be in the color format of the JPEG you're considering.

                – Peter Cordes
                Jan 10 at 3:50








              • 2





                Come on, read the text. The second absolute truth is that it specifically said and referred to "uncompressed" and said that 24 bit color was always three bytes per pixel. :)

                – WayneF
                Jan 10 at 7:15
















              2














              For example there is a 2400x600px image with a file size of 1.81MB.
              Photoshop's save for web command creates a 540KB file at 60 quality and same dimensions.
              This is about 29% of original size.


              The original uncompressed size is 2400 x 600 x 3 = 4,320,000 bytes (4.1 MB), because 24 bit color is always three bytes of RGB data per pixel. There is no way around this absolute truth.



              However, JPG size also depends on image detail. Large smooth areas (like sky or painted walls) compress better, but areas of greater detail (like a tree full of leaves) do not compress as well. So there is no absolute numerical indicator.



              But 540 KB is 0.540/4.1 = 13% of the 4.1 MB original size.
              It might be 29% of previous JPG size, but it is 13% of original uncompressed size.
              So that is 1/8 of original uncompressed size, which is usually considered "decent" quality. Not optimum, not maximum quality, but generally decent, perhaps good enough for some uses.
              Just saying, it is already small.



              A larger JPG file is better image quality, and smaller is less image quality. You have to decide what is good enough, but JPG is never "too big", since image quality decreases with JPG compression. 24 bit color has three bytes per pixel uncompressed.



              So the decision is if you want it small or if you want it good.



              But making an existing JPG larger is still worse, since more JPG artifacts are added, and once small, the data is changed, and it will Never get better.



              JPG artifacts typically show two ways, as visible 8x8 pixel blocks of one color in the smooth areas with no detail, or as visible rough edges around the detail edges.



              If editing and re-saving a JPG, additional JPG artifacts are added. If that is required, it is good practice to always re-save to match the original compression setting.






              share|improve this answer
























              • The 4.1 MB number is only true if there is no compression at all, however, even a JPEG with perfect quality can have a smaller file size due to lossless compression.

                – Marv
                Jan 9 at 15:35













              • Yes, that is why I called it "uncompressed", which is how every digital image starts out, which is of course the actual and original size of the data, which is why it is important. Yes, even highest level JPG 100 is compressed substantially smaller, not lossless. Lossless JPG is a misnomer. We have no programs offering it. Its uses call it something else (Wikipedia says DNG and some Raw). However JPEG2 can offer lossless compression, but which has other issues, for example web browsers do not support showing JPEG2, and photo print shops likely do not accept it.

                – WayneF
                Jan 9 at 17:57













              • There is no way around this absolute truth. ... except for chroma sub-sampling, which JPEG uses. JPEG compresses in YUV colorspace (brightness + two colour components), not RGB. Usually 4:2:2 or 4:2:0, reducing the number of pixels in each of the two chroma channels by 2x or 4x. en.wikipedia.org/wiki/Chroma_subsampling#4:2:2. After transforming from RGB to YUV and sub-sampling, that color resolution information is totally gone, and not part of what JPEG is spending bits to encode. If you want to look at bits/pixel, it should be in the color format of the JPEG you're considering.

                – Peter Cordes
                Jan 10 at 3:50








              • 2





                Come on, read the text. The second absolute truth is that it specifically said and referred to "uncompressed" and said that 24 bit color was always three bytes per pixel. :)

                – WayneF
                Jan 10 at 7:15














              2












              2








              2







              For example there is a 2400x600px image with a file size of 1.81MB.
              Photoshop's save for web command creates a 540KB file at 60 quality and same dimensions.
              This is about 29% of original size.


              The original uncompressed size is 2400 x 600 x 3 = 4,320,000 bytes (4.1 MB), because 24 bit color is always three bytes of RGB data per pixel. There is no way around this absolute truth.



              However, JPG size also depends on image detail. Large smooth areas (like sky or painted walls) compress better, but areas of greater detail (like a tree full of leaves) do not compress as well. So there is no absolute numerical indicator.



              But 540 KB is 0.540/4.1 = 13% of the 4.1 MB original size.
              It might be 29% of previous JPG size, but it is 13% of original uncompressed size.
              So that is 1/8 of original uncompressed size, which is usually considered "decent" quality. Not optimum, not maximum quality, but generally decent, perhaps good enough for some uses.
              Just saying, it is already small.



              A larger JPG file is better image quality, and smaller is less image quality. You have to decide what is good enough, but JPG is never "too big", since image quality decreases with JPG compression. 24 bit color has three bytes per pixel uncompressed.



              So the decision is if you want it small or if you want it good.



              But making an existing JPG larger is still worse, since more JPG artifacts are added, and once small, the data is changed, and it will Never get better.



              JPG artifacts typically show two ways, as visible 8x8 pixel blocks of one color in the smooth areas with no detail, or as visible rough edges around the detail edges.



              If editing and re-saving a JPG, additional JPG artifacts are added. If that is required, it is good practice to always re-save to match the original compression setting.






              share|improve this answer













              For example there is a 2400x600px image with a file size of 1.81MB.
              Photoshop's save for web command creates a 540KB file at 60 quality and same dimensions.
              This is about 29% of original size.


              The original uncompressed size is 2400 x 600 x 3 = 4,320,000 bytes (4.1 MB), because 24 bit color is always three bytes of RGB data per pixel. There is no way around this absolute truth.



              However, JPG size also depends on image detail. Large smooth areas (like sky or painted walls) compress better, but areas of greater detail (like a tree full of leaves) do not compress as well. So there is no absolute numerical indicator.



              But 540 KB is 0.540/4.1 = 13% of the 4.1 MB original size.
              It might be 29% of previous JPG size, but it is 13% of original uncompressed size.
              So that is 1/8 of original uncompressed size, which is usually considered "decent" quality. Not optimum, not maximum quality, but generally decent, perhaps good enough for some uses.
              Just saying, it is already small.



              A larger JPG file is better image quality, and smaller is less image quality. You have to decide what is good enough, but JPG is never "too big", since image quality decreases with JPG compression. 24 bit color has three bytes per pixel uncompressed.



              So the decision is if you want it small or if you want it good.



              But making an existing JPG larger is still worse, since more JPG artifacts are added, and once small, the data is changed, and it will Never get better.



              JPG artifacts typically show two ways, as visible 8x8 pixel blocks of one color in the smooth areas with no detail, or as visible rough edges around the detail edges.



              If editing and re-saving a JPG, additional JPG artifacts are added. If that is required, it is good practice to always re-save to match the original compression setting.







              share|improve this answer












              share|improve this answer



              share|improve this answer










              answered Jan 9 at 15:18









              WayneFWayneF

              9,8741924




              9,8741924













              • The 4.1 MB number is only true if there is no compression at all, however, even a JPEG with perfect quality can have a smaller file size due to lossless compression.

                – Marv
                Jan 9 at 15:35













              • Yes, that is why I called it "uncompressed", which is how every digital image starts out, which is of course the actual and original size of the data, which is why it is important. Yes, even highest level JPG 100 is compressed substantially smaller, not lossless. Lossless JPG is a misnomer. We have no programs offering it. Its uses call it something else (Wikipedia says DNG and some Raw). However JPEG2 can offer lossless compression, but which has other issues, for example web browsers do not support showing JPEG2, and photo print shops likely do not accept it.

                – WayneF
                Jan 9 at 17:57













• There is no way around this absolute truth. ... except for chroma sub-sampling, which JPEG uses. JPEG compresses in YUV colorspace (brightness + two colour components), not RGB. Usually 4:2:2 or 4:2:0, reducing the number of pixels in each of the two chroma channels by 2x or 4x. en.wikipedia.org/wiki/Chroma_subsampling#4:2:2. After transforming from RGB to YUV and sub-sampling, that color resolution information is totally gone, and not part of what JPEG is spending bits to encode. If you want to look at bits/pixel, it should be in the color format of the JPEG you're considering. (A short sketch of setting this when saving follows these comments.)

                – Peter Cordes
                Jan 10 at 3:50








• Come on, read the text. The second absolute truth is that it specifically said and referred to "uncompressed" and said that 24 bit color was always three bytes per pixel. :)

                – WayneF
                Jan 10 at 7:15
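
Picking up the chroma sub-sampling point from the comments above: when re-saving with Pillow you can set the sub-sampling explicitly. A minimal sketch, where the file names and quality=60 are placeholders:

    # In Pillow's JPEG plugin, subsampling=0 keeps full colour resolution (4:4:4)
    # and subsampling=2 gives the common 4:2:0.
    from PIL import Image

    with Image.open("photo.jpg") as img:
        img.save("photo_444.jpg", "JPEG", quality=60, subsampling=0)  # 4:4:4
        img.save("photo_420.jpg", "JPEG", quality=60, subsampling=2)  # 4:2:0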



















              0














              Photoshop's "Save for Web" is actually a pretty good compromise between file size and quality, so unless you have more specific requirements, you should go with that. A typical advice for web developers is to stick to 50-70% quality range. Of course, there are exceptions: you will want 90-95% quality on a company logo which has to look great at all times (or even convert it to a lossless format), and go as low as 30% on a large but barely visible page background.



Also, don't forget to rescale your images. A 2400x600 picture will look great on a 4K display, but it will be downscaled on smaller screens, wasting bandwidth with no visual improvement for the user. Check the website template you will be using to find the optimal width for the images; at the time of writing that will typically be somewhere around 1200-1300 pixels (based on the most popular screen resolutions).



Remember to keep the originals of the pictures you convert to web quality. If you ever need to rework or print this material, you will regret only having it at 60% quality and 1-megapixel resolution.
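
Putting those suggestions together, a minimal batch sketch with Pillow might look like the following. The "originals" and "web" folders, the 1300 px width, and quality 60 are all placeholders taken from the rough guidelines above; writing to a separate folder is what keeps the originals safe:

    from pathlib import Path
    from PIL import Image

    SRC = Path("originals")      # untouched source images
    DST = Path("web")            # re-compressed copies go here
    MAX_WIDTH = 1300
    QUALITY = 60

    DST.mkdir(exist_ok=True)
    for src in SRC.glob("*.jpg"):
        with Image.open(src) as img:
            if img.width > MAX_WIDTH:
                new_height = round(img.height * MAX_WIDTH / img.width)
                img = img.resize((MAX_WIDTH, new_height), Image.LANCZOS)
            img.save(DST / src.name, "JPEG", quality=QUALITY, optimize=True)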











edited Jan 10 at 12:47

answered Jan 10 at 12:26

Dmitry Grigoryev





























