Imaging basics

Before you begin: Lessons are based on community-maintained, continually updated online sources such as Wikipedia. Relevant terms for this lesson are listed under Topics and presented in narrative format in the Read about sections. Click each linked item to visit the Wikipedia article and get the most out of the lesson, then hit the Back button on your browser to return to the lesson.


Goals

  • To be familiar with the factors related to image file size
  • To be able to distinguish between lossy and lossless image compression algorithms
  • To understand the principles behind image tiling and pyramids


Topics

pixel, resolution, color depth, image editing, lossy compression, lossless compression, compression ratio, image file formats, digital imaging, pyramid

Read about

Pixels: dynamic range, density and color

As discussed previously in the article on digital data, images consist of pixels. A higher density of pixels in a given space yields higher image resolution (i.e. more dots per inch). Each pixel takes on one color. The range of colors that can be represented increases with the number of bits assigned to color identities, which is known as color depth (e.g. 8-bit, 16-bit, or 24-bit color).
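The pixels-per-inch relationship can be sketched with a quick calculation (the image and print dimensions below are illustrative assumptions, not values from the lesson):

```python
# Resolution as pixel density: pixels (dots) per inch of a printed image.
width_px = 3000        # image width in pixels (illustrative)
print_width_in = 10    # intended print width in inches (illustrative)

dpi = width_px / print_width_in
print(dpi)  # 300.0 pixels per inch
```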

Digital images are based on the RGB color model, which uses combinations of red, green, and blue to produce a wide spectrum of colors. The human eye is believed to be able to differentiate roughly 10 million colors. With 24-bit color, 8 bits are assigned to each color "channel" (red, green, and blue), giving (2^8)^3 = 16,777,216 possible combinations. This is generally referred to as truecolor and is used for high-quality images and graphics.
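The truecolor arithmetic above can be checked directly (a minimal sketch using only the lesson's own numbers):

```python
# 24-bit "truecolor": 8 bits for each of the red, green, and blue channels.
bits_per_channel = 8
values_per_channel = 2 ** bits_per_channel   # 256 intensity levels per channel
total_colors = values_per_channel ** 3       # (2**8)**3 combinations
print(total_colors)  # 16777216
```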

So why don't we just use 24-bit color and high pixel density for all images? The flip side of high pixel density at the greatest possible color depth is the cost in file size. Bigger image files require more space in memory and storage, and opening, modifying, and sending them over networks takes more time. In pathology, a single glass slide may take up several gigabytes of storage. Given that surgical volumes may exceed 50,000 specimens per year, with multiple slides per specimen, the space requirements for digitizing slides may delay widespread use of whole-slide imagers. Digital imaging also allows image editing to be performed relatively easily, which may become a security/authentication consideration in the healthcare setting.
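To see where the gigabytes come from: uncompressed size is simply width x height x bytes per pixel. The slide-scan dimensions below are an illustrative assumption, not a measured figure:

```python
# Uncompressed size = width * height * (bits per pixel) / 8.
# Dimensions are an illustrative guess at a whole-slide scan, not real data.
width_px, height_px = 80_000, 60_000
bits_per_pixel = 24

size_bytes = width_px * height_px * bits_per_pixel // 8
size_gib = size_bytes / 2**30
print(f"{size_gib:.1f} GiB")  # 13.4 GiB before any compression
```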

Sampling the Optical Image: Nyquist-Shannon sampling theorem

Nyquist-Shannon sampling theorem


CCD (charge-coupled device) and CMOS (complementary metal-oxide-semiconductor) are image sensors that use two different technologies for capturing images digitally. What is the difference between CCD and CMOS image sensors in a digital camera (from HowStuffWorks)?
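In one dimension, the sampling theorem's requirement can be sketched as follows (the optical-resolution figure is an illustrative assumption, not from the lesson):

```python
# Nyquist-Shannon: to capture detail of spatial frequency f without
# aliasing, the sensor must sample at a rate greater than 2 * f.
max_spatial_freq = 500               # finest detail, line pairs per mm (illustrative)
nyquist_rate = 2 * max_spatial_freq  # minimum samples (pixels) per mm
print(nyquist_rate)  # 1000
```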

Image compression

Because of the space issues involved in storing digital images (discussed above), various image compression algorithms have been created to decrease the size of image files, achieving various compression ratios. These may be classified into lossy (e.g. JPEG) or lossless (e.g. PNG) algorithms. Both work by reducing redundancy, for example in areas where all the pixels share the same color.
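Run-length encoding is one of the simplest lossless schemes that exploits exactly that kind of redundancy. This is a generic sketch for illustration, not the actual algorithm inside JPEG or PNG:

```python
# Run-length encoding (RLE): a simple lossless scheme that compresses
# runs of identical pixels, as found in flat-color image regions.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1        # extend the current run
        else:
            runs.append([p, 1])     # start a new run
    return runs

def rle_decode(runs):
    return [p for p, n in runs for _ in range(n)]

row = [255] * 10 + [0] * 5 + [255] * 10   # 25 pixels
encoded = rle_encode(row)
assert rle_decode(encoded) == row          # lossless: exact round trip
print(encoded)  # [[255, 10], [0, 5], [255, 10]] -- 3 runs instead of 25 values
```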

Image formats

Why are there so many different image file formats?

Tiling and pyramids

Image tiling and pyramids support the display of high-resolution images at high performance. At low magnification (e.g. of a glass slide), a large area of the image is displayed ("zoomed out") at low resolution; at high magnification ("zoomed in"), only a small area of the image needs to be displayed at any given time. Image tiling and pyramids take advantage of this trade-off to make imaging efficient.

Tiling segments an image into small rectangles, called tiles. At the highest resolution (e.g. "100x"), only a portion of the image appears in view at any given time, so only a subset of the tiles needs to be displayed.
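The bookkeeping can be sketched as below; the 256-pixel tile size and the viewport numbers are illustrative assumptions, not values from the lesson:

```python
# Which tiles overlap the viewport? Only those need to be fetched and drawn.
TILE = 256  # tile edge in pixels (a common choice; illustrative here)

def visible_tiles(view_x, view_y, view_w, view_h):
    """Return (col, row) indices of tiles intersecting the viewport."""
    first_col, first_row = view_x // TILE, view_y // TILE
    last_col = (view_x + view_w - 1) // TILE
    last_row = (view_y + view_h - 1) // TILE
    return [(c, r) for r in range(first_row, last_row + 1)
                   for c in range(first_col, last_col + 1)]

# A 1024x768 viewport at the image origin touches 4 x 3 = 12 tiles,
# no matter how enormous the full image is.
print(len(visible_tiles(0, 0, 1024, 768)))  # 12
```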

An image pyramid is composed of a base image and a series of successively smaller sub-images, each at a lower resolution than the previous one. In other words, the sub-images represent the same image area at lower resolution levels (i.e. "zooming out").
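A pyramid's levels can be generated by repeatedly halving the base dimensions, a common (though not the only) construction; the base dimensions and stopping size here are illustrative assumptions:

```python
# Build pyramid level sizes by halving each dimension of the base image
# until the level fits within one tile-sized square.
def pyramid_levels(width, height, min_size=256):
    levels = [(width, height)]          # level 0: full-resolution base
    while width > min_size or height > min_size:
        width, height = max(1, width // 2), max(1, height // 2)
        levels.append((width, height))  # each level "zooms out" by 2x
    return levels

for level, (w, h) in enumerate(pyramid_levels(80_000, 60_000)):
    print(level, w, h)
```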


Online Resources


Questions

  1. How is the file size of an image calculated?
  2. Given a certain number of pixels, what determines the resolution of an image?
  3. What is the difference between image tiling and image pyramids?

Advanced courses

Expert corner

This page was last modified by gsahota on Sep 27, 2009, 9:56 am.