| Term 
 
        | How would you compute the arithmetic mean of an image? |  | Definition 
 
        | 
The arithmetic mean of an image is just its average pixel value: add up all the pixel values in the image and then divide this total by the number of pixels in the image.
 
 |  | 
        |  | 
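A minimal Python/NumPy sketch of this computation (the image array here is hypothetical, not from the notes):

import numpy as np

# Hypothetical 8-bit grayscale image; any 2D array of pixel values works here.
img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)

# Add up all the pixel values, then divide by the number of pixels.
mean = img.sum(dtype=np.float64) / img.size
print(mean)  # same result as np.mean(img)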
        
        | Term 
 
        | 
What is the variance of an image? How is it computed? |  | Definition 
 
        | The image variance is a measure of the distribution of pixel values within an image. If the pixel values are distributed such that most are close to the mean value, the variance will be small. If pixel values vary widely from the mean, then the variance will be larger.
 var = sum((P(x,y) - mean)^2) / (X*Y - 1), where the sum runs over all pixels.
 Variance is a measure of overall variation in image pixel values. An equivalent sum-of-squares form (dividing by X*Y rather than X*Y - 1) is var = sum(P(x,y)^2) / (X*Y) - mean^2.
 |  | 
        |  | 
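A minimal NumPy sketch of both variance forms above (the image array is hypothetical):

import numpy as np

img = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # hypothetical image
pix = img.astype(np.float64)
mean = pix.mean()

# Sample variance: sum of squared deviations from the mean over (X*Y - 1).
var = np.sum((pix - mean) ** 2) / (pix.size - 1)

# Sum-of-squares form, dividing by X*Y (population variance).
var_pop = np.sum(pix ** 2) / pix.size - mean ** 2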
        
        | Term 
 
        | What would be the uses of the variance? |  | Definition 
 
        | A guide in adjusting color balance/contrast |  | 
        |  | 
        
        | Term 
 
        | What is the use of the arithmetic mean? |  | Definition 
 
        | As a measure of the overall brightness of an image, useful as a guide in adjusting image or threshold values. |  | 
        |  | 
        
        | Term 
 
        | What is the histogram of an image? |  | Definition 
 
        | A histogram is an additional statistical image measure. It provides additional information
 about the distribution of the image pixel values. It is a tabular representation of the image data which shows how many image pixels have each of the possible pixel values.
 It is a graphical representation of the number of pixels having each possible value in an image.  The x-axis of this 2D bar graph is the pixel value while the y-axis is the number of pixels having each pixel value.
 |  | 
        |  | 
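A short NumPy sketch of building such a tabulation for a hypothetical 8-bit image:

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical image

# One bin per possible 8-bit value: hist[i] = number of pixels whose value is i.
hist = np.bincount(img.ravel(), minlength=256)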
        
        | Term 
 
        | 6. What are some of the uses for a histogram? |  | Definition 
 
        | The graphical representation of a histogram can be useful to quickly understand the range and distribution of intensity values in an image. This understanding is helpful in choosing, for example, corrections to image intensity, contrast and color balance. Image software often calculates the histogram data and uses it to make automatic decisions about corrective manipulations of the image.
 A guide in understanding the distribution of image pixel values, as a guide for picking threshold values, as a guide for adjusting pixel values.
 |  | 
        |  | 
        
        | Term 
 
        | 8. What is a point operation on an image? |  | Definition 
 
        | Point operations are functions that are performed on each pixel of an image, one at a time, independent of the other pixels in the image.
 Operates on pixels one at a time. New pixel value is based only on the initial corresponding pixel value, not on any surrounding pixels.
 |  | 
        |  | 
        
        | Term 
 
        | 7. Given only the histogram of an image, how would you compute its arithmetic mean or average value?
 |  | Definition 
 
        | The arithmetic mean of an image is just its average pixel value. (The median is the pixel value such that one half of the image pixels have values greater than the median, and half have values below the median.)
 From the histogram: mean = sum(i * Ni) / sum(Ni), where i is the pixel value, Ni is the number of pixels with the value i, and 0 <= i <= max pixel value.
 |  | 
        |  | 
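A sketch of the formula above in NumPy, using a histogram built from a hypothetical image:

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical image
hist = np.bincount(img.ravel(), minlength=256)                  # N_i for i = 0..255

# mean = sum(i * N_i) / sum(N_i)
i = np.arange(256)
mean = np.sum(i * hist) / np.sum(hist)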
        
        | Term 
 
        | 10. What kind of point function is needed to invert an image (make the negative image)? Is this operation reversible?
 |  | Definition 
 
        | An inversion function: f(x) = max - x, where max is the maximum possible pixel value. It inverts an image to produce its negative image. Its transfer function curve is linear with a slope of -1.0. Yes, this operation is reversible: applying it a second time recovers the original image.
 |  | 
        |  | 
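A sketch of f(x) = max - x for a hypothetical 8-bit image, including a check that the operation is reversible:

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical 8-bit image

# Negative image: f(x) = max - x, with max = 255 for 8-bit data.
negative = 255 - img

# Inverting a second time recovers the original image, so the operation is reversible.
assert np.array_equal(255 - negative, img)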
        
        | Term 
 
        | 11. How would you decrease the contrast of an image? Is this operation reversible?
 |  | Definition 
 
        | 
 The image contrast decreasing transfer function is a linear curve with a slope less than 1.0: increase the darkest pixel values and decrease the brightest pixel values. For example, f(x) = x * (max - min)/256 + min for an 8-bit image, where [min, max] is the narrower output range. No, it is not reversible.
 |  | 
        |  | 
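A sketch of the linear contrast-reduction function for a hypothetical 8-bit image; the output range [new_min, new_max] is an arbitrary choice:

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical image

# Map [0, 255] linearly into the narrower range [new_min, new_max]; the slope
# (new_max - new_min) / 256 is less than 1.0, so contrast is reduced.
new_min, new_max = 64, 192
low_contrast = (img.astype(np.float64) * (new_max - new_min) / 256 + new_min).astype(np.uint8)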
        
        | Term 
 
        | 9. What is the unity point operation on an image? What is its effect? |  | Definition 
 
        | The unity function makes the output image identical to the input image. Its transfer function curve is linear with a slope of 1.0: f(x) = x. Pixel values do not change, so it has no effect on the image.
 |  | 
        |  | 
        
        | Term 
 
        | 
 12. What does it mean to threshold an image? Is this operation reversible?
 |  | Definition 
 
        | For input pixel values below some specified threshold value, all output pixels are set to black, the minimum value 0. For input values at or above the specified threshold, all output pixels are set to white, the maximum value 255.
 f(x) = 0 for x < T
 f(x) = max for x >= T
 where T is the threshold pixel value. This operation is not reversible.
 |  | 
        |  | 
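A sketch of thresholding a hypothetical 8-bit image (the threshold value T is an arbitrary choice):

import numpy as np

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical image
T = 128  # hypothetical threshold value

# Pixels below T become black (0); pixels at or above T become white (255).
binary = np.where(img < T, 0, 255).astype(np.uint8)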
        
        | Term 
 
        | 
 15. What does it mean to add two images? To subtract them?
 |  | Definition 
 
        | Arithmetic operations can be used to combine multiple images to create new images. Binary operations combine two images using arithmetic operations to create a resultant image. The arithmetic operation is performed at each (x, y) pixel location, independent of the other pixel locations in the image.
 Add - on a pixel by pixel basis, add the pixel values of the two images.
 Subtract - on a pixel by pixel basis, subtract the pixel values of one image from the other.
 |  | 
        |  | 
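A sketch of pixel by pixel addition and subtraction of two hypothetical images, clipping back into the 8-bit range:

import numpy as np

a = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical images
b = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)

# Work in a wider integer type, then clip back into the valid 8-bit range.
added = np.clip(a.astype(np.int16) + b, 0, 255).astype(np.uint8)
subtracted = np.clip(a.astype(np.int16) - b, 0, 255).astype(np.uint8)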
        
        | Term 
 
        | 14. What are arithmetic functions on images? |  | Definition 
 
        | Generally involves two or more images.  Operates on a pixel by pixel basis, computing new pixel values by combining corresponding  initial pixel values from multiple images using some arithmetic operation such as addition or subtraction, etc.
 |  | 
        |  | 
        
        | Term 
 
        | 13. How would you make an RGB image more cyan? Is this operation reversible? |  | Definition 
 
        | For color images, typically composed of three primary color components such as red, green, and blue, each color component or color channel can be operated on independently. The color balance of a digital image can be modified by applying independent point transfer functions to each color component.
 Either decrease the red pixel values or increase both the green and blue pixel values.  Generally not reversible.
 |  | 
        |  | 
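A sketch of one option, reducing the red channel of a hypothetical RGB image by an arbitrary factor:

import numpy as np

rgb = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)  # hypothetical RGB image

# Decreasing the red channel (index 0) shifts the color balance toward cyan.
more_cyan = rgb.copy()
more_cyan[..., 0] = (more_cyan[..., 0] * 0.8).astype(np.uint8)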
        
        | Term 
 
        | 17. What is alpha blending of images? |  | Definition 
 
        | New image pixel values: C(x,y) = alpha * A(x,y) + (1 - alpha) * B(x,y), where A(x,y) and B(x,y) are the pixels in the two images to be blended and alpha is the blending value.
 |  | 
        |  | 
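A sketch of the blending formula for two hypothetical images and an arbitrary alpha:

import numpy as np

a = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # hypothetical images A and B
b = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
alpha = 0.3  # hypothetical blending value

# C = alpha * A + (1 - alpha) * B, applied at every pixel location.
blended = (alpha * a + (1 - alpha) * b).astype(np.uint8)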
        
        | Term 
 
        | 18. What is image compositing? How are mattes used in this process? |  | Definition 
 
        | Arithmetic point operations are the fundamental basis for most image compositing operations. Using a matte image we can superimpose a foreground element over a background image. An example would be a weatherman superimposed over a computer generated background weather map; another would be placing a synthetic dinosaur in a natural jungle environment.
 Additional images, called matte images or simply mattes, are often used in compositing operations. A matte image is an image that contains pixel by pixel blending values; these blending values are also known as alpha values.
 One approach to creating matte images is called blue screen (or green screen): a foreground element is filmed in front of a blue or green screen background, and color values are used to separate foreground from background. Pixels where the image is blue (or green) will have a matte value of 0; where they are not blue (or green) the matte value will be 1.
 |  | 
        |  | 
        
        | Term 
 
        | 19. What are neighborhood image operations? How do they compare to point image operations?
 |  | Definition 
 
        | Neighborhood operations are those that combine information from a region or neighborhood of input image pixels to generate each new output image pixel.
 Neighborhood operations make use of pixel values from an image region around the pixel of interest.  Point operations only use pixel values corresponding to the single pixel of interest.
 |  | 
        |  | 
        
        | Term 
 
        | 
 16. What is image averaging? What would be a use for image averaging?
 |  | Definition 
 
        |   On a pixel by pixel basis add the pixel values of several images, then divide these pixel sums by the number of images averaged.  This can have the effect of reducing noise in image sequences such as successive video images.
   |  | 
        |  | 
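A sketch of averaging a hypothetical stack of noisy frames of the same scene:

import numpy as np

# Hypothetical stack of 10 noisy frames of the same scene (frames, rows, cols).
frames = np.random.randint(0, 256, size=(10, 64, 64), dtype=np.uint8)

# Sum the frames pixel by pixel, then divide by the number of frames averaged.
average = (frames.astype(np.float64).sum(axis=0) / frames.shape[0]).astype(np.uint8)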
        
        | Term 
 
        | 
 20. What is convolution? Why do we care?
 |  | Definition 
 
        | A local area of pixels is combined to produce each desired output pixel value.
 Convolution can modify the rate of variation or spatial frequency characteristics of images.
 See section 1-9 of the course notes. We care because many useful image filtering operations are implemented using convolution.
 |  | 
        |  | 
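A sketch of convolving a hypothetical image with a simple 3x3 averaging (blur) kernel, using scipy.ndimage:

import numpy as np
from scipy.ndimage import convolve

img = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)  # hypothetical image

# 3x3 box blur: each output pixel is the average of its 3x3 input neighborhood.
kernel = np.ones((3, 3)) / 9.0
blurred = convolve(img, kernel, mode='nearest')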
        
        | Term 
 
        | 23. What is sub-pixel sampling? |  | Definition 
 
        | Geometric transformations result in mappings that require values in between the pixels of the source image.
 Sub-pixel sampling is determining a value for an image location that falls in between the sampled image pixel locations.
 |  | 
        |  | 
        
        | Term 
 
        | 22. What is a Laplacian filter? What does it do to the image? |  | Definition 
 
        | This filter is omnidirectional, enhancing edges of all orientations.
 When this kernel is passed over an image area that has the same value at every pixel, the resulting output pixels will be 0 or black, because the sum of each pixel times its mask value will yield zero. If there is an edge or contrast boundary in the image area, this filter will emphasize it.
 |  | 
        |  | 
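A sketch using one common form of the 3x3 Laplacian kernel (the exact coefficients vary between texts) on a hypothetical image:

import numpy as np
from scipy.ndimage import convolve

img = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)  # hypothetical image

# Coefficients sum to zero, so uniform regions produce 0 and edges of any
# orientation are emphasized.
laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)
edges = convolve(img, laplacian, mode='nearest')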
        
        | Term 
 
        | 21. If we applied the following 3 x 3 convolution kernel to an image, what would be the effect on the output image?
 |  | Definition 
 
        | 
 This is for a kernel with -5 in the middle and zeros elsewhere.
 This convolution kernel would have no effect on the image, since all of the kernel values except the center one are zero. It would multiply the current pixel value by -5, but then it would be divided by the sum of the kernel coefficients, which in this case would be -5. The net result is to multiply the pixel values by 1, producing no change in the image.
 |  | 
        |  | 
        
        | Term 
 
        | 24. Compare nearest neighbor sampling with bi-linear sampling. |  | Definition 
 
        | Nearest neighbor: this method simply selects the value of the existing pixel that is closest to the desired location.
 Bi-linear: uses proportional amounts of the surrounding four-pixel neighborhood to compute the needed output value; linear interpolation is done in both the horizontal and the vertical directions.
 Nearest neighbor sub-pixel sampling just determines which sampled pixel location is closest to the desired sub-pixel location and uses that pixel's value as the sub-pixel value. The bi-linear approach uses linear interpolation to derive a sub-pixel value based on the sampled pixel values surrounding the sub-pixel location. The interpolation is first applied in one pixel array direction, and those interpolated values are then interpolated in the other direction.
 |  | 
        |  | 
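A minimal sketch of the two sampling methods (boundary handling is omitted; the image array is hypothetical):

import numpy as np

def sample_nearest(img, x, y):
    # Use the value of the sampled pixel closest to (x, y).
    return img[int(round(y)), int(round(x))]

def sample_bilinear(img, x, y):
    # Interpolate along the top and bottom rows of the surrounding 2x2
    # neighborhood, then interpolate vertically between those two results.
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x0 + 1] * fx
    bottom = img[y0 + 1, x0] * (1 - fx) + img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

img = np.arange(16, dtype=np.float64).reshape(4, 4)  # hypothetical 4x4 image
print(sample_nearest(img, 1.75, 0.5), sample_bilinear(img, 1.75, 0.5))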
        
        | Term 
 
        | 26. Compare forward transformation with inverse transformation as they relate to geometric image operations. |  | Definition 
 
        | Forward: Adds the horizontal and vertical translation values.
 Inverse: Subtracts the horizontal and vertical translation values.
 In the 'forward' transformation, for each pixel in the source image, a location for that data is determined in the target image.
 In the 'inverse' transformation, for each pixel in the target image, the appropriate value is computed based on the source image data.
 |  | 
        |  | 
        
        | Term 
 
        | 
 25. What are geometric operations on images?
 |  | Definition 
 
        | translation, scaling and rotation
 Geometric operations actually move pixel values around in the image.  These include operations such as rotation, skewing, scaling, and image warping.
 |  | 
        |  | 
        
        | Term 
 
        | 27. Given four adjacent pixels whose 8-bit gray scale values (base 10) are as follows: P(11,6) = 69, P(12,6) = 65,
 P(11,7) = 73, P(12,7) = 77.
 What would be the value of the sub-pixel sample of the image at location [11.75, 6.5]?
 |  | Definition 
 
        | Interpolating 75% across the top and bottom rows gives the values 66 and 76.  Then interpolating 50% between the interpolated row values gives the final sub-pixel value of 71.
 |  | 
        |  | 
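The same arithmetic written out as a short Python check:

# 75% of the way across the top row (69 -> 65) and the bottom row (73 -> 77).
top = 69 + 0.75 * (65 - 69)       # 66.0
bottom = 73 + 0.75 * (77 - 73)    # 76.0

# 50% of the way between the two interpolated row values.
value = top + 0.5 * (bottom - top)  # 71.0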
        
        | Term 
 
        | 28. If you wanted to low-pass filter an image by manipulating its frequency domain power spectra, how would you accomplish this?
 |  | Definition 
 
        | Low-pass filtering means only allowing spatial frequencies below a maximum frequency in the output image.
 First convert the image to its equivalent frequency power spectra.
 Next, remove power spectra components that correspond to frequencies greater than the desired maximum frequency.
 Last, convert the edited power spectra back into its corresponding spatial domain image.
 |  | 
        |  | 
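A NumPy sketch of those three steps for a hypothetical image; the cutoff radius is an arbitrary choice:

import numpy as np

img = np.random.randint(0, 256, size=(64, 64)).astype(np.float64)  # hypothetical image

# 1. Convert to the frequency domain and shift the zero frequency to the center.
spectrum = np.fft.fftshift(np.fft.fft2(img))

# 2. Zero out components farther from the center than the chosen cutoff frequency.
rows, cols = img.shape
r, c = np.ogrid[:rows, :cols]
dist = np.sqrt((r - rows / 2) ** 2 + (c - cols / 2) ** 2)
cutoff = 16  # hypothetical maximum spatial frequency
spectrum[dist > cutoff] = 0

# 3. Convert the edited spectrum back into a spatial domain image.
low_passed = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))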
        
        | Term 
 
        | 29. What are the steps involved in JPEG compression? If we modified the JPEG image compression scheme to use only the DC coefficients, what would the resulting "compressed" images contain? |  | Definition 
 
        | First, divide the image into 8x8 pixel blocks.  Then, for each block, perform a Discrete Cosine Transform to get a representation of its frequency components.  This will be in the form of a set of 64 coefficients. Depending on the compression quality desired, some number of these coefficients will be encoded in the compressed image.  This encoding will use a Huffman coding scheme. In addition, for each image block a zero frequency or 'DC' coefficient will be determined.  This DC component represents the average pixel value for the 8x8 pixel block.  If we only used these DC components to represent the image, the result would be an image composed of 8x8 pixel blocks where each block of pixels would have a uniform value consisting of the average of the pixels in that block in the original image.
 |  | 
        |  | 
        
        | Term 
 
        | 30. What is image warping? What is image morphing? |  | Definition 
 
        | Warping can be produced by selecting the desired destination image control points, and can be generated if four points in the source image are mapped to desired corresponding locations in the output image.
 Morphing is the method which makes one object appear to change into another.
 Image warping is applying some location-based function to an image that moves its pixel values to new locations; this results in some geometric 'distortion' of the image.
 Image morphing is the combination of an alpha blending operation between two images with an image warping operation.
 |  | 
        |  | 
        
        | Term 
 | Definition 
 
        | This is the Laplacian filter, which enhances edges in the image. |  | 
        |  | 
        
        | Term 
 
        | Sobel Filter Kernel turned?  |  | Definition 
 
        | This is an X-directional Sobel filter. It detects image edges aligned in the X direction. |  | 
        |  | 
        
        | Term 
 
        | Prewitt Filter Kernel turned?  |  | Definition 
 
        | This is an X-directional Prewitt filter. It detects image edges aligned in the X direction. |  | 
        |  | 
        
        | Term 
 | Definition 
 
        | This is a 3x3 Gaussian filter. It blurs or low-pass filters the image. |  | 
        |  | 
        
        | Term 
 
        | 33. Demonstrate that the order in which geometric transformations are applied will affect the resultant image.
 |  | Definition 
 
        | When applying several of these transformations in sequence, the order in which they are applied determines the result. Applying the same transforms but in a different order will usually create a different result.
 |  | 
        |  | 
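A small numeric demonstration with homogeneous 2D transform matrices (the specific translation and rotation are arbitrary choices):

import numpy as np

# Translate by (5, 0) and rotate 90 degrees counterclockwise about the origin.
T = np.array([[1, 0, 5],
              [0, 1, 0],
              [0, 0, 1]], dtype=np.float64)
R = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 1]], dtype=np.float64)

p = np.array([1, 0, 1], dtype=np.float64)  # a sample point in homogeneous coordinates

print(T @ R @ p)  # rotate first, then translate -> [5. 1. 1.]
print(R @ T @ p)  # translate first, then rotate -> [0. 6. 1.]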
        
        | Term 
 
        | 34. What is the discrete Fourier transform?
 |  | Definition 
 
        | 
 The DFT is a mathematical operation that can be used to convert an image from its spatial domain representation to its corresponding frequency domain representation.
 |  | 
        |  | 
        
        | Term 
 
        | 35. What is the Fast Fourier transform? What is it used for? |  | Definition 
 
        | 
 The FFT is a fast algorithm for computing the discrete Fourier transform, tailored to the needs of digital imagery.
 This process is performed first on each row of image pixels and then on each column of image pixels. The result is a two-dimensional array of values, called the image power spectra.
 |  | 
        |  | 
        
        | Term 
 
        | 38. How would you do high-pass filtering on an image using its power spectra? |  | Definition 
 
        | By eliminating the low frequency components clustered near the center of the power spectra.
 |  | 
        |  | 
        
        | Term 
 
        | 36. What is the power spectra of an image? What does it show? |  | Definition 
 
        | A two-dimensional array of values that represents the strength or power of the various frequency components of the original image.
 |  | 
        |  | 
        
        | Term 
 
        | 37. How would you do low-pass filtering on an image using its power spectra? |  | Definition 
 
        | By eliminating all but the central low frequency components.
 |  | 
        |  | 
        
        | Term 
 
        | 40. Are these domains equivalent? If so, how? |  | Definition 
 
        | Yes. Converting an image to its frequency domain representation and back again reproduces the original image, so the two domains carry the same information. In addition, the accuracy of frequency domain operations is often higher than if they were performed in the spatial domain.
 |  | 
        |  | 
        
        | Term 
 
        | 39. How are spatial domain and frequency domain related for images? |  | Definition 
 
        | The spatial and frequency domains are related by the Fourier transform: the DFT converts a spatial domain image into its frequency domain representation, and the inverse transform converts it back. Some image manipulations can be performed easily on the frequency domain
 representation of an image, while the equivalent operation in the spatial domain involves cumbersome and time-consuming convolutions.
 |  | 
        |  | 
        
        | Term 
 
        | 1. Give an example of an “arithmetic” operation on images.  What would be the effect of your example arithmetic operation?   |  | Definition 
 
        | 
             These operations generally require two or more input images.  On a pixel by pixel basis, the operation computes the new output pixel by doing an arithmetic operation between the input pixels. One example would be the image averaging function, where pixels from several images are added together.  The summed pixel values are divided by the number of images to get the average value for each pixel.  If the images were a sequence of video images, this operation would tend to reduce noise in the image.   Another operation would be to subtract one image from another, subtracting the pixel values of one image from the pixel values of the other image; this would emphasize the differences between the two images. |  | 
        |  | 
        
        | Term 
 
        |   2.  Give an example of a “point” image operation.  What would be the effect of your example point operation?   
 |  | Definition 
 
        | A simple example would be the unity operation, where each pixel just gets its original value.  Other examples include the invert operation, the threshold operation, increasing or decreasing contrast, and gamma correction. |  | 
        |  | 
        
        | Term 
 
        | 3.  Given a uniform gray image (uniform means all the pixels have the same value).  For this image the pixel value is 63.  What is the “mean” for this image?  What is the “variance” for this image? What is its “standard deviation?” 
 |  | Definition 
 
        |               Since all pixels have the same value, the 'mean' is simply that value, 63.  Also, since they all have the same value there is no variation, so the variance and standard deviation are both 0. |  | 
        |  | 
        