Working with Images

Dennis Brunnenmeyer dennisb at chronometrics.com
Tue Feb 5 11:36:57 PST 2008


Rant begins...

Well, I've had enough of this nonsensical babble. None of you seem to 
understand what you are talking about when it comes to dealing with 
screenshots and raster images (a.k.a. bitmapped images), as opposed 
to vector or line art.

First of all, display devices, whether printers or monitors, have an 
upper limit on their ability to resolve (print or display) image 
detail, which by the way is what "resolution" is a measure 
of...meaningful detail. The best my aging but faithful laser printer 
can do is 600 dpi, while my uppity LCD monitor can display up to 100 
dpi, with its 1600 x 1200 native resolution on an LCD panel that is 
exactly 16" wide x 12" tall. You cannot see or capture anything and 
create a screenshot image with higher resolution than the display 
device can show. You cannot print anything with higher resolution 
than the printer can resolve. If you feed a high-resolution image to 
a medium-resolution printer, it will interpolate (resample) the image 
down to medium-resolution quality. It has to, as it cannot put all of 
that information on paper. If you take a very high resolution (total 
pixel count) image of, say, 4000 x 3000 pixels (12 megapixels) and 
display the full image on a monitor like mine, you will not see all 
of the detail in the image, and hence you will not be able to capture 
all of that detail in a screenshot.
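
The arithmetic is easy to check for yourself. Here's a 
back-of-the-envelope sketch in Python, using my panel's numbers; 
substitute your own display's pixel count and physical width:

    def device_dpi(pixels_across, inches_across):
        """Resolving power of a display, in dots (pixels) per inch."""
        return pixels_across / inches_across

    print(device_dpi(1600, 16.0))  # 100.0 -- my LCD at native resolution

    # A 4000 x 3000 (12 MP) photo shown whole on a 1600 x 1200 screen
    # can contribute at most 1600 x 1200 = 1.92 MP of its detail to a
    # screenshot.
    print(1600 * 1200)             # 1,920,000 pixels, not 12,000,000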

Most of you seem to appreciate this, but some of you think you can 
improve resolution by artificial means. No, you cannot.

A true measure of the resolution of an image is the original size of 
the image in total pixels, assuming the image is faithful to begin 
with. That is, assuming a perfect digital camera with a perfect lens 
and the ability to produce a "raw" bitmap (rather than a compressed 
JPEG file), a 12-megapixel CCD image sensor will produce a 
significant improvement in the resulting image over a 2-megapixel CCD 
sensor. That image quality is NOT described by either ppi or dpi. It 
is a function of the number of pixels in the X direction and the 
number of pixels in the Y direction.

Now the plot thickens when I return to the subject of screenshots, 
because if I run my graphics card at 1600 x 1200, the type, icons and 
dialog boxes are uncomfortably small for me to read on the monitor, 
so I set the graphics card to display its images at 1280 x 960. At 
that setting, the maximum resolution at which anything can be 
displayed, and hence captured, is 80 ppi. That's 1280 divided by 16. 
[Unfortunately, since the graphics card's resolution doesn't match 
the native resolution of the LCD panel, the on-screen picture is not 
as crisp as it could be. This is a result of "aliasing" artifacts, 
but that's a topic for a different thread.]
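
To put numbers on that bracketed note, a quick Python sketch; the 
non-integer stretch factor is exactly why the picture goes soft:

    logical_width_px = 1280   # what the graphics card now draws
    native_width_px = 1600    # what the panel physically has
    panel_width_in = 16.0

    print(logical_width_px / panel_width_in)   # 80.0 ppi effective
    print(native_width_px / logical_width_px)  # 1.25 physical pixels per
                                               # logical pixel -- not crisp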

Note that in the above paragraph, I switched from dpi for display 
devices to ppi when describing image size. The physical size of a 
digital image (as printed or displayed) should be described in ppi. 
The ability of a device to display or print an image should be 
described in dpi, or alternatively lpi for lines per inch, or pixel 
spacing, as in 0.25 mm. There is a tendency to intermix this 
terminology and hence confuse the issues you are discussing.

Now that I have set my graphics card to 1280 x 960 for this monitor, 
the maximum resolution of any image I capture from the screen is 80 
ppi, regardless of whether I capture a whole screen or just a region 
of it. If I set the "resolution" of the screen capture program 
(Snag-It or HyperSnap) to 80 ppi, then the resulting image will be 
the same physical size as it appeared on the screen, 100%. If I set 
the capture "resolution" to 160 ppi, then the image will be half the 
physical size it appeared on the screen, BUT IT WILL HAVE EXACTLY 
THE SAME NUMBER OF PIXELS. The resolution has not been improved, as 
no more detail has been added.
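
You can demonstrate this outside any capture tool. Here's a sketch 
using the Pillow imaging library (not Snag-It or HyperSnap; a blank 
1280 x 960 bitmap stands in for a full-screen capture):

    from PIL import Image

    shot = Image.new("RGB", (1280, 960))       # stand-in screenshot

    shot.save("shot_80.png", dpi=(80, 80))     # "captured at 80 ppi"
    shot.save("shot_160.png", dpi=(160, 160))  # "captured at 160 ppi"

    for name in ("shot_80.png", "shot_160.png"):
        img = Image.open(name)
        xdpi, ydpi = img.info["dpi"]
        # Same 1280 x 960 pixels both times; only the implied print
        # size changes: roughly 16" x 12" at 80 ppi, 8" x 6" at 160.
        print(name, img.size, (img.width / xdpi, img.height / ydpi))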

Upsampling and/or downsampling using any kind of pixel resampling 
(a.k.a. interpolation), whether bicubic or otherwise, ALWAYS removes 
detail from the image. In either case, new pixels are created that 
are some kind of average of the original ones. They're guesses at 
what should be there at that point in the image, and not real 
information that wasn't there before. No new detail or image 
improvement can be added by interpolation.
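
If you doubt it, run the round trip yourself. Another Pillow sketch 
(any detailed image will do for screenshot.png):

    from PIL import Image, ImageChops

    original = Image.open("screenshot.png").convert("RGB")
    half = original.resize((original.width // 2, original.height // 2),
                           Image.LANCZOS)
    restored = half.resize(original.size, Image.BICUBIC)

    # The round trip interpolated every pixel; the difference is not
    # empty, and nothing will bring the lost detail back.
    diff = ImageChops.difference(original, restored)
    print(diff.getbbox())   # a bounding box, not None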

Now, however, you can re-scale an image in programs like Photoshop by 
keeping the same number of pixels (do not interpolate) and altering 
the size of the image in the X and Y directions equally. For example, 
if I took the 160 ppi screenshot described in the previous paragraph 
and re-scaled it in Photoshop without resampling the image, and if I 
prescribed a new size of 80 ppi, the resulting image would grow back 
to 100% in size and still have exactly the same number of pixels as 
before. The resolving power of the image has not changed, and no more 
detail has been provided. This is a correct way to get an image to 
the size you want it in your document. Another way is to import it as 
is and resize it in Frame using the image's corner anchor points 
while holding the Shift key down.
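
The trade-off there is pure arithmetic: the pixel count is fixed, so 
the declared ppi and the physical size simply move in opposite 
directions.

    pixels_wide = 1280              # the screenshot's fixed pixel count

    for ppi in (160, 80):
        print(ppi, "ppi ->", pixels_wide / ppi, "inches wide")
    # 160 ppi -> 8.0 inches wide  (half size, same pixels)
    # 80 ppi -> 16.0 inches wide  (back to 100%, still the same pixels)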

Don't mislead yourselves and others by thinking that a higher 
"resolution" setting in your screenshot capture application gives you 
better results, and don't mislead yourselves by thinking you can add 
more resolution by upsampling (or rescaling, for that matter) to a 
different ppi or by adding more artificial pixels.

Now, on another topic, there seems to be a rule of thumb that "most 
SVGA screens are 96dpi." Someone came up with the statement that a 
20" screen with a 1280 x 1024 display is, of course, 96 dpi. That's 
utter nonsense. Given that screen size is measured on the diagonal, 
and assuming the old standard 4:3 aspect ratio, a 20" screen is 16" 
wide and 12" tall...rather like my Samsung LCDs. With 1280 pixels in 
the X (horizontal) direction, the screen resolution is 80 dpi, not 96 
dpi. Any way you manipulate the numbers, 96 dpi is not a result. By 
the way, here I assumed a 4:3 aspect ratio, which is the ratio of 
width to height. If I ran my graphics card at 1280 x 1024, circles 
would be egg-shaped, since that resolution calls for a screen with a 
5:4 aspect ratio. Of course, wide screens have a different aspect 
ratio, but the principles are exactly the same.
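
The 20" arithmetic, sketched out, in case anyone wants to try other 
screen sizes (the aspect ratio here is the assumed 4:3):

    import math

    def screen_dpi(diagonal_in, pixels_x, aspect_w=4, aspect_h=3):
        """dpi of a screen, from its diagonal and aspect ratio."""
        width_in = diagonal_in * aspect_w / math.hypot(aspect_w, aspect_h)
        return pixels_x / width_in

    print(screen_dpi(20, 1280))        # 80.0 -- not 96

    # And 1280 x 1024 is 5:4, so on a 4:3 panel each pixel gets
    # stretched about 6.7% wider than it is tall: egg-shaped circles.
    print((16 / 1280) / (12 / 1024))   # ~1.067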

I have no idea what David meant by this statement:  "Again, referring 
to my last post, monitor resolution only counts if
capturing an entire screen." Monitor size DOES count if you're trying 
to calculate the resolving power of your monitor in dpi and hence the 
maximum resolution attainable in a screenshot. It's the horizontal 
resolution of your graphics card setting divided by the width of the 
display area in inches or centimeters, or in the example given, 
1280/16 = 80 dpi.

End of rant ...

Flame away...but be sure you know what you are talking about and quit 
misleading others if you don't understand this.

Dennis Brunnenmeyer
***************************************************************************************


At 09:09 AM 2/5/2008, David Creamer wrote:
> > How can SnagIt capture an image at a higher resolution than what
> > the screen is set to?  A 20" screen at 1280 x 1024, for example,
> > is 96 DPI.  How do you get 200 DPI out of that?
>
>Screen size (20") is meaningless, only the monitor resolution counts.
>Again, referring to my last post, monitor resolution only counts if
>capturing an entire screen.

Dennis Brunnenmeyer
Director of Engineering
CEDAR RIDGE SYSTEMS
15019 Rattlesnake Road
Grass Valley, CA 95945-8710
Office: (530) 477-9015
Fax:  (530) 477-9085
Mobile: (530) 320-9025
eMail:  dennisb /at/ chronometrics /dot/ com


