Identifying Resolution of Imaging Systems

Resolution is a measure of an imaging system’s ability to resolve the object being imaged. Test targets are the standard tools used to check the resolution of an imaging system. The most popular targets consist of “groups” of six “elements”, where each element consists of three horizontal and three vertical bars of well-defined, equal width and spacing. The vertical bars are used to measure horizontal resolution, and the horizontal bars are used to measure vertical resolution. Analyzing a test target image amounts to identifying the Group and Element numbers of the highest spatial frequency at which either the horizontal or the vertical lines are still distinguishable. Here is an example of such a test target image:

To look at the pixel values we have chosen one row close to the center of the image. The maximum pixel brightness is 120 counts at the center and about 95 counts at the edge of the test target image. The maximum theoretical pixel value for an 8-bit image is 255 counts, so only about half of the sensor dynamic range is used in this test. Groups are labeled by numbers in order of increasing frequency, and the groups with the highest resolution are near the image center. The part of the image where groups 8 and 9 are located is shown here:

To avoid repetition, we show only the calculation results for the resolution along the horizontal (X) direction. To reduce noise in the calculation of image contrast, each group image of the three vertical bars was averaged along the Y direction over the extent of the black bars. The resulting averaged amplitudes along the X direction for all the elements of group 8 are shown here:

The signal amplitude difference between the black bars and the white spaces is clear for all six elements of group 8, i.e., all six elements are distinguishable. The next plot shows the same kind of curves for all the elements of group 9:

The signal amplitude difference between the black bars and the white spaces is discernible for elements 1-5 but not for element 6 of group 9. The resolution of this imaging system is therefore Group 9, Element 5, with a line width of 0.62 µm, i.e., a frequency of 806 line pairs per mm. The resolution of an imaging system can be affected by factors such as object/test target contrast, lighting source intensity, and software correction. Increasing the illumination intensity and choosing proper camera parameter settings can improve the resolution of the imaging system.
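As an illustration of this kind of analysis, the sketch below averages an element’s vertical-bar region along Y and reports a contrast value for the resulting X profile, and converts a bar width to line pairs per mm. It is a minimal sketch, not the actual analysis code: the function names and the use of Michelson contrast as the distinguishability metric are our assumptions, and the region coordinates must be supplied by the caller.

```python
import numpy as np

def element_contrast(image, rows, cols):
    """Average the element's vertical-bar region along Y (to suppress noise),
    then return the Michelson contrast of the resulting X profile."""
    profile = image[rows[0]:rows[1], cols[0]:cols[1]].astype(float).mean(axis=0)
    i_max, i_min = profile.max(), profile.min()
    return (i_max - i_min) / (i_max + i_min)

def lp_per_mm(line_width_um):
    """One line pair is one black bar plus one equal-width white space."""
    return 1000.0 / (2.0 * line_width_um)

print(round(lp_per_mm(0.62)))    # Group 9, Element 5: 0.62 um bars -> 806 lp/mm
```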

Lossless Image Compression Example

For storage and transmission of large image files it is desirable to reduce the file size. For consumer-grade images this is achieved by lossy image compression, in which image details that are not very noticeable to humans are discarded. For scientific images, however, discarding any image details may be unacceptable. Still, all images except completely random ones contain some redundancy. This permits lossless compression, which decreases the image file size while preserving all the image details.

The simplest file compression can be achieved with well-known arithmetic encoding of the image data. The achievable compression can be calculated from the Shannon entropy, which is just minus the average base-2 logarithm of the probabilities of all the values taken by the image pixels. The Shannon entropy gives the average number of bits per pixel needed to arithmetically encode the image. If, say, the original image is a monochrome one with 8 bits per pixel, then for a completely random image the entropy equals 8; for non-random images it is less than 8.

Let’s consider a simple example: a NASA infrared image of the Earth, shown here in false color:

This image is an 8-bit monochrome one and has an entropy of 5.85, which means arithmetic encoding can decrease the image file size by a factor of 1.367. This is better than nothing, but not great. A significant improvement can be achieved by transforming the image first. If we use a standard lossless wavelet transform (LWT), one step of the LWT turns the initial image into four smaller ones:

Three of these four smaller images contain only low pixel values, which are not visible in the picture above. Zooming in on them saturates the top left corner but makes small details near the other corners visible (notice the changed scale on the right):

Now the entropy of the top left corner is 5.85, which is close to the entropy of 5.87 of the complete initial image. The entropies of the other three corners are 1.83, 1.82 and 2.82, so after only one LWT step the lossless compression ratio would be 2.6, which is significantly better than 1.367.

Our proprietary adaptive prediction lossless compression algorithm shows a small prediction residue for the complete image:

The actual lossless compression ratio achieved here is about 4.06. It is remarkable that while the last picture looks quite different from the original NASA image, it contains all the information necessary to completely recover the initial image. Due to the lossless nature of the compression, the last picture, using arithmetic encoding, can be saved to a file 4.06 times smaller than the initial NASA picture file. Our proprietary algorithm applied to this smaller file completely recovers the initial picture, accurate to the last bit. No bit left behind.
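The entropy arithmetic above is easy to reproduce. The sketch below (a minimal illustration, not our proprietary algorithm) computes the Shannon entropy of a pixel array and applies one integer Haar lifting step as a stand-in for the unspecified LWT; it assumes the image is already loaded as an array with even dimensions.

```python
import numpy as np

def shannon_entropy(values):
    """Average bits per symbol: -sum(p * log2(p)) over the value histogram."""
    _, counts = np.unique(np.asarray(values).ravel(), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def haar_lwt_step(img):
    """One integer (lossless) Haar lifting step: split the image into the
    LL (averages) band and the LH, HL, HH (difference) quarter-size bands."""
    a = np.asarray(img, dtype=np.int32)
    lo = (a[:, 0::2] + a[:, 1::2]) // 2      # horizontal averages
    hi = a[:, 0::2] - a[:, 1::2]             # horizontal differences
    ll = (lo[0::2, :] + lo[1::2, :]) // 2    # vertical averages
    lh = lo[0::2, :] - lo[1::2, :]
    hl = (hi[0::2, :] + hi[1::2, :]) // 2
    hh = hi[0::2, :] - hi[1::2, :]
    return ll, lh, hl, hh

# Compression-ratio arithmetic with the band entropies quoted in the article
# (each band is a quarter of the image, so a plain average applies):
bands = [5.85, 1.83, 1.82, 2.82]                 # LL, LH, HL, HH
print(8.0 / 5.85)                                # ~1.37: no transform
print(8.0 / (sum(bands) / len(bands)))           # ~2.6: after one LWT step
```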

3D Image Analysis

Several manufacturers sell 3D cameras that use 2D sensor arrays sensitive to the phase of reflected laser light. All of them spread the laser light so that it continuously illuminates the complete scene of interest, and the laser light is modulated synchronously with the pixel sensitivity. The phase of the reflected laser light arriving back at the sensor pixels depends on the distance to the reflection points, and this is the basis for calculating the XYZ positions of the illuminated surface points. While this basic principle of operation is the same for a number of 3D cameras, there are many technical details that determine the quality of the obtained data.

The best known of these 3D cameras is the Microsoft Kinect. It also provides the best distance measurement resolution: according to our measurements, the standard deviation of the distance to both white and relatively dark objects is below 2 mm. Most 3D cameras have higher distance measurement noise, often unacceptably high even for a relatively high target reflectivity of 20 percent.

Here we show example data obtained with one not-so-good European 3D camera with a 2D array of laser-light time-of-flight-sensitive pixels. We used the default settings of the camera to calculate distances to a white target at 110 cm from the camera, which is close to the default calibration setup distance for this camera. Deviations of the distance from a smooth approximating surface in the center of the image are shown by the point cloud here:

Here X and Y are distances from the image center measured in pixels. Here is a histogram of the distance noise:

Both figures show that, in addition to Gaussian-looking noise, there are some points with very large deviations. Such large deviations are caused by strong fixed-pattern noise (differences between pixel sensitivities). While the noise of this camera is at least 8 times higher than the noise of the Kinect, more problems become visible when looking at 2D projections of the 3D point cloud. Projected onto the camera plane, the color-coded distances, shown in cm, do not look too bad for a simple scanned scene:

Here X index and Y index are just the pixel numbers in the x and y directions. The picture becomes more interesting when looking at the projection of the same point cloud onto the X-Distance plane:

We can clearly see stripes separated by about 4 cm in distance. Nothing like these spurious stripes can be seen in point clouds from good 3D cameras such as the Kinect. So the conclusion of our 3D image analysis of this European 3D camera is that it is not competitive with the best available 3D cameras.
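The distance-noise estimate described above can be sketched as follows: remove a smooth surface from a central patch of the distance image and examine the residuals. This is a minimal illustration under our own assumptions; in particular, the quadratic surface model, the patch size, and the variable names are placeholders, not the analysis we actually used.

```python
import numpy as np

def distance_noise(distance_map, half=50):
    """Fit a quadratic surface to a central (2*half x 2*half) patch of the
    distance image and return the residuals and their standard deviation."""
    cy, cx = distance_map.shape[0] // 2, distance_map.shape[1] // 2
    patch = distance_map[cy - half:cy + half, cx - half:cx + half].astype(float)
    y, x = np.mgrid[-half:half, -half:half]
    # design matrix for z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(),
                         (x**2).ravel(), (x*y).ravel(), (y**2).ravel()])
    coeffs, *_ = np.linalg.lstsq(A, patch.ravel(), rcond=None)
    residuals = patch.ravel() - A @ coeffs
    return residuals, residuals.std()

# residuals, sigma = distance_noise(dist_mm)       # dist_mm: 2D distance map
# hist, edges = np.histogram(residuals, bins=100)  # distance-noise histogram
```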

Image Processing Case Study

Let’s look at a case of extensive image processing from the transportation industry. Two video cameras were looking at boxes moving fast on a conveyor belt. To provide high enough image resolution the cameras were placed close to the belt, but then they could not cover the whole belt cross-section. They were placed on the sides of the belt and could each see only part of the boxes. The customer wanted good images of the texture on the top of the boxes, so the images from the two cameras needed to be stitched.

Two cameras see the same object at different angles and distances. Before merging the images from the different cameras, the images must be transformed from the coordinate systems of the cameras to one common coordinate system and placed in one common plane in XYZ space. The software we developed performs this transformation automatically, based on the known geometry of the camera positions relative to the conveyor belt. Still, after this transformation, the multi-megapixel grayscale images from the left and right cameras are shifted in the common plane relative to each other:

Here the grayscale images from the two cameras are shown in false color. The scale on the right shows the relation between the 8-bit pixel signal strength and the false color. We can see that the two images also have different brightness. Our algorithms adjust the brightness and shift the images from the left and right cameras to make merging of the two images into one possible. The resulting combined image is shown using a different choice of false colors:

The right image pixels are shown in magenta and the left image pixels in green. Here is a zoomed view of the overlap region of the stitched image:

If the stitching were perfect, all the pixels in the overlap region would be gray. While there are small fringes of color on the edges of the black digits and stripes, the overall stitching accuracy is good. This is not trivial, as stitching images obtained by different cameras looking at a nearby object from different angles is not easy. For comparison, here is an example of a not very successful stitching:

Avantier Inc.’s engineering team, with over 30 years of experience, developed software for the customer to perform all the necessary transformations automatically, without any operator intervention.
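The magenta/green overlap check used above is simple to reproduce. The sketch below is a minimal illustration, not the production stitching code: it assumes the two grayscale images have already been warped into the common plane and cropped to the same overlap region, and the mean-based brightness matching is just a placeholder for the actual adjustment.

```python
import numpy as np

def overlay_magenta_green(left, right):
    """Show the left image in green and the right image in magenta (red + blue);
    wherever the two aligned images agree, the overlay comes out gray."""
    left = left.astype(float)
    # crude brightness matching: scale the right image to the left image's mean
    right = right.astype(float) * (left.mean() / right.mean())
    rgb = np.zeros(left.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = np.clip(right, 0, 255).astype(np.uint8)   # red   <- right camera
    rgb[..., 1] = np.clip(left, 0, 255).astype(np.uint8)    # green <- left camera
    rgb[..., 2] = np.clip(right, 0, 255).astype(np.uint8)   # blue  <- right camera
    return rgb

# overlay = overlay_magenta_green(left_warped, right_warped)  # same-size overlap crops
```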
