Image Recovery and Image Reconstruction for an Imaging System

Blurring is a significant source of image degradation in an imperfect imaging system. The optical system's point spread function (PSF) describes the blur of a given imaging system and is often used in image reconstruction or image recovery algorithms. Below is an example of using the inverse PSF to remove degradation from a barcode image.

Barcodes are found on many everyday consumer products. A typical 1-D (one-dimensional) barcode is a series of vertical lines (called bars) and spaces of varying width. An example of the popular GS1-128 symbology barcode is shown here: The signal amplitude of the barcode image changes only in the horizontal (X) direction, so for the imaging system used to capture and decode the barcode it is sufficient to look at a one-dimensional intensity profile along the X direction. In good conditions the profile may look like this:

Using such a good scan, it is trivial to recover the original binary (pure Black and pure White) barcode. One can set a threshold midway between the maxima and minima of the received signal, assign whatever is above the threshold to White, and whatever is below the threshold to Black. However, when the point spread function (PSF) of the imaging system is poor, it may be difficult or impossible to set a proper threshold. See the example below:

The PSF is the impulse response of an imaging system; it carries information about the image formation, systematic aberrations, and imperfections. To correctly decode a barcode in such situations, one may try to use inverse PSF information to improve the received signal. The idea is to deduce the inverse PSF from multiple signals obtained from many scans of different barcodes of the same symbology. All barcodes of the same symbology, such as GS1-128, share common features defined by the symbology standards. This permits us to calculate inverse PSF coefficients by minimizing the deviation of the received signals from ideal barcode profile signals.
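The mid-threshold recovery described above can be sketched in a few lines (the scan values below are synthetic, for illustration only):

```python
import numpy as np

def binarize_scan(profile):
    """Recover a binary bar/space pattern from a clean 1-D scan profile.

    The threshold is set halfway between the signal extrema, as
    described above; 1 = White, 0 = Black.
    """
    profile = np.asarray(profile, dtype=float)
    threshold = (profile.max() + profile.min()) / 2.0
    return (profile > threshold).astype(np.uint8)

# A clean synthetic profile: bright spaces and two dark bars
scan = np.array([200, 200, 40, 45, 198, 202, 38, 42, 201, 199])
print(binarize_scan(scan))  # → [1 1 0 0 1 1 0 0 1 1]
```

This works only while the blurred signal still crosses the mid-threshold at every bar edge; the inverse-PSF correction below addresses the case where it does not.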
A small number, such as 15, of inverse PSF coefficients may be used to correct the received signals, making them as close to ideal barcode signals as possible in the least-squares sense. The inverse PSF coefficients were found and used to convert the poor received signal shown previously into the better signal shown in red in the next picture: While the recovered red signal is not ideal, it does permit setting a threshold and correctly recovering the scanned barcode.
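The least-squares fit of the inverse PSF can be sketched as follows, assuming a simple FIR (tapped-delay-line) model for the correction filter. The 15-tap length matches the number quoted above; the blur kernel and signals in the demo are illustrative assumptions, not data from an actual scanner:

```python
import numpy as np

def fit_inverse_psf(received, ideal, n_taps=15):
    """Fit FIR inverse-PSF taps h minimizing ||W h - ideal||^2 in the
    least-squares sense, where each row of W is a sliding window of
    the received signal centered on the corresponding output sample."""
    half = n_taps // 2
    padded = np.pad(np.asarray(received, dtype=float), half, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, n_taps)
    taps, *_ = np.linalg.lstsq(windows, np.asarray(ideal, dtype=float),
                               rcond=None)
    return taps

def apply_inverse_psf(received, taps):
    """Correct a received signal with previously fitted taps."""
    half = len(taps) // 2
    padded = np.pad(np.asarray(received, dtype=float), half, mode='edge')
    windows = np.lib.stride_tricks.sliding_window_view(padded, len(taps))
    return windows @ taps

# Demo: blur an ideal binary profile with a hypothetical PSF, then restore
ideal = np.repeat([1, 0, 1, 1, 0, 0, 1, 0, 1, 1], 6).astype(float)
psf = np.array([0.05, 0.15, 0.2, 0.2, 0.2, 0.15, 0.05])
received = np.convolve(ideal, psf, mode='same')
taps = fit_inverse_psf(received, ideal)
restored = apply_inverse_psf(received, taps)
```

In practice the fit would pool windows from many scans of different barcodes, as described above, rather than from a single signal.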

Read more
Case Study: Objective Lens Design

Design for Manufacturing (DFM) Case Study: Objective Lens Design for Trapping and Imaging Single Atoms

At Avantier we offer Design for Manufacturing (DFM) services, optimizing product design with our extensive knowledge of manufacturing constraints, costs, and methods. Avantier Inc. received a request from a university physics department to custom design a long working distance, high numerical aperture objective. Our highly skilled and knowledgeable engineers designed and deployed state-of-the-art technologies to develop a single-atom trapping and imaging system in which multiple laser beams are collimated at various angles and overlapped on dichroic mirrors before entering the objective lens. The objective lens focuses the input laser beams to create optical tweezer arrays that simultaneously trap single atoms and image the trapped atoms over the full field of view of the microscope objective. The objective lens not only has high transmission but also renders the same point spread function, i.e. diffraction-limited performance, for all traps over the full field of view.

Typical requirements for an objective lens used for trapping and imaging single atoms: Custom objective lens example: the objective lens focuses high-power laser beams to create optical tweezers at 6 wavelengths (420 nm, 795 nm, 813 nm, 840 nm, 1013 nm, and 1064 nm) and images the trapped atoms at a wavelength of 780 nm.

Read more
Lossless Image Compression Example

For storage and transmission of large image files it is desirable to reduce the file size. For consumer-grade images this is achieved by lossy image compression, where image details not very noticeable to humans are discarded. However, for scientific images discarding any image details may not be acceptable. Still, all images, except completely random ones, include some redundancy. This permits lossless compression, which decreases the image file size while preserving all the image details.

The simplest file compression can be achieved using well-known arithmetic encoding of the image data. The achievable compression can be calculated from the Shannon entropy, which is just minus the averaged base-2 logarithm of the probabilities of all the values taken by the image pixels. The Shannon entropy gives the average number of bits per pixel necessary to arithmetically encode the image. If, say, the original image is a monochrome one with 8 bits per pixel, then for a completely random image the entropy will be equal to 8. For non-random images the entropy will be less than 8.

Let's consider a simple example: a NASA infrared image of the Earth, shown here using false color. This image is an 8-bit monochrome one and has an entropy of 5.85, which means arithmetic encoding can decrease the image file size by a factor of 1.367. This is better than nothing, but not great. A significant improvement can be achieved by transforming the image first. If we use a standard lossless wavelet transform (LWT), after one LWT step the initial image is transformed into 4 smaller ones: 3 of these 4 smaller images contain only low pixel values which are not visible in the picture above. Zooming in on them saturates the top-left corner but makes small details near the other corners visible (notice the changed scale on the right). Now the entropy of the top-left corner is 5.87, close to the entropy 5.85 of the complete initial image, while the entropies of the other 3 corners are 1.83, 1.82, and 2.82.
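The Shannon entropy calculation described above is straightforward. A minimal sketch, using synthetic images since the NASA image itself is not reproduced here:

```python
import numpy as np

def shannon_entropy(image):
    """Average bits per pixel needed to arithmetically encode the
    image: H = -sum(p * log2(p)) over the pixel-value histogram."""
    _, counts = np.unique(image, return_counts=True)
    p = counts / counts.sum()
    return float((p * np.log2(1.0 / p)).sum())

rng = np.random.default_rng(0)
# A completely random 8-bit image approaches the maximum entropy of 8
noise = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(shannon_entropy(noise))   # close to 8.0
# A constant image has zero entropy
print(shannon_entropy(np.zeros((256, 256), dtype=np.uint8)))
```

The estimated arithmetic-coding compression ratio for an 8-bit image is then simply 8 divided by this entropy, which is how the 1.367 figure above follows from the entropy 5.85.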
So, after only one LWT step the lossless compression ratio would be 2.6, which is significantly better than 1.367. Our proprietary adaptive-prediction lossless compression algorithm shows a small prediction residue for the complete image: The actual lossless compression ratio achieved here is about 4.06. It is remarkable that while the last picture looks quite different from the original NASA image, it contains all the information necessary to completely recover the initial image. Due to the lossless nature of the compression, the last picture can, using arithmetic encoding, be saved to a file 4.06 times smaller than the initial NASA picture file. Our proprietary algorithm applied to this smaller file completely recovers the initial picture, accurate to the last bit. No bit left behind.
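One reversible wavelet step of the kind described above can be sketched with the integer Haar S-transform. This is an illustrative assumption: the article's LWT and the proprietary prediction algorithm may differ in detail, but the S-transform shows the key property, exact bit-for-bit invertibility:

```python
import numpy as np

def haar_lwt_step(img):
    """One reversible (integer) Haar wavelet step via the S-transform
    s = (a + b) // 2, d = a - b.  Returns four quarter images."""
    a = img.astype(np.int32)
    s = (a[:, 0::2] + a[:, 1::2]) // 2   # row averages
    d = a[:, 0::2] - a[:, 1::2]          # row differences
    ll = (s[0::2] + s[1::2]) // 2        # top-left: averaged twice
    lh = s[0::2] - s[1::2]
    hl = (d[0::2] + d[1::2]) // 2
    hh = d[0::2] - d[1::2]
    return ll, hl, lh, hh

def haar_lwt_inverse(ll, hl, lh, hh):
    """Exact inverse of haar_lwt_step: a = s + (d + 1) // 2, b = a - d."""
    s = np.empty((ll.shape[0] * 2, ll.shape[1]), dtype=np.int32)
    s[0::2] = ll + (lh + 1) // 2
    s[1::2] = s[0::2] - lh
    d = np.empty_like(s)
    d[0::2] = hl + (hh + 1) // 2
    d[1::2] = d[0::2] - hh
    a = np.empty((s.shape[0], s.shape[1] * 2), dtype=np.int32)
    a[:, 0::2] = s + (d + 1) // 2
    a[:, 1::2] = a[:, 0::2] - d
    return a

# Round trip on a random 8-bit image: recovery is exact, bit for bit
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
recovered = haar_lwt_inverse(*haar_lwt_step(img))
```

With the quarter entropies quoted above, the one-step compression estimate is just 8 divided by the mean entropy of the four quarters.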

Read more
3D Image Analysis

Image Analysis of a 3D Image

Several manufacturers sell 3D cameras that use 2D sensor arrays sensitive to the phase of reflected laser light. All of them spread the laser light so that it continuously illuminates the complete scene of interest. The laser light is modulated synchronously with the pixel sensitivity. The phase of the reflected laser light returning to the sensor pixels depends on the distance to the reflection points. This is the basis for calculating the XYZ positions of the illuminated surface points.

While this basic principle of operation is the same for a number of 3D cameras, many technical details determine the quality of the obtained data. The best known of these 3D cameras is the Microsoft Kinect. It also provides the best distance measurement resolution: according to our measurements, the standard deviation of the distance to both white and relatively dark objects is below 2 mm. Most 3D cameras have higher distance measurement noise, often unacceptably high even for a relatively high target reflectivity of 20 percent.

Here we show example data obtained using one not-so-good European 3D camera with a 2D array of time-of-flight-sensitive pixels. We used the default settings of the camera to calculate distances to a white target at 110 cm from the camera, which is close to the default calibration setup distance for this camera. Deviations of the distance from a smooth approximating surface in the center of the image are shown by the point cloud here: Here X and Y are distances from the image center measured in pixels. Here is a histogram of the distance noise: Both figures show that, in addition to Gaussian-looking noise, there are some points with very large deviations. Such large deviations are caused by strong fixed-pattern noise (differences between pixel sensitivities).
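The phase-to-distance relation at the heart of such continuous-wave time-of-flight cameras can be sketched as follows. The 20 MHz modulation frequency is a hypothetical value for illustration, not a specification of any camera mentioned here:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad, mod_freq_hz):
    """Distance from the measured phase shift of amplitude-modulated
    light: d = c * phi / (4 * pi * f).  The factor 4*pi (not 2*pi)
    accounts for the round trip to the target and back."""
    return C * phase_rad / (4.0 * np.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz):
    """Phase wraps at 2*pi, so measured distances repeat every c/(2f)."""
    return C / (2.0 * mod_freq_hz)

f_mod = 20e6  # hypothetical 20 MHz modulation
print(ambiguity_range(f_mod))               # ≈ 7.495 m unambiguous range
print(phase_to_distance(np.pi / 2, f_mod))  # ≈ 1.874 m
```

Fixed-pattern noise of the kind described above shows up as a per-pixel offset in the measured phase, which this simple model does not include.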
While the noise of this camera is at least 8 times higher than the noise of the Kinect, more problems become visible when looking at 2D projections of the 3D point cloud. When projected onto the camera plane, the color-coded distances, shown in cm, do not look too bad for a simple scanned scene: Here X index and Y index are just pixel numbers in the x and y directions. The picture becomes more interesting when looking at the projection of the same point cloud onto the X-Distance plane: we can clearly see stripes separated by about 4 cm in distance. Nothing like these spurious stripes can be seen in point clouds from good 3D cameras such as the Kinect. So the conclusion of our 3D image analysis of this European 3D camera is that it is not competitive with the best available 3D cameras.

Read more
Image Processing Case Study

Let's look at a transportation-industry case of extensive image processing. Two video cameras looked at boxes moving fast on a conveyor belt. To provide high enough image resolution the cameras were placed close to the belt, but then they could not cover the whole belt cross-section. They were placed on the sides of the belt, and each could see only part of the boxes. The customer wanted good images of the texture on the top of the boxes, so the images from the two cameras needed to be stitched.

Two cameras see the same object at different angles and distances. Before merging, the images from the different cameras must be transformed from the coordinate systems of the cameras to one common coordinate system and placed in one common plane in XYZ space. The software we developed performed this transformation automatically, based on the known geometry of the camera positions relative to the conveyor belt. Still, after such a transformation, the multi-megapixel grayscale images from the left and the right cameras are shifted in the common plane relative to each other: Here the grayscale images from the two cameras are shown in false color. The scale on the right shows the relation between the 8-bit pixel signal strength and the false color. We can see that the two images also have different brightness. Our algorithms adjust the brightness and shift the images from the left and right cameras to make merging of the two images into one possible. The resulting combined image is shown using a different choice of false colors: right-image pixels are shown in magenta, and left-image pixels in green. Here is a zoomed version of the overlap region of the stitched image: If the stitching were perfect, all the pixels in the overlap region would be gray. While there are small fringes of color on the edges of black digits and stripes, the overall stitching accuracy is good.
This is not trivial: stitching images obtained by different cameras looking at a nearby object from different angles is not easy. For comparison, here is an example of a not very successful stitching: Avantier Inc.'s engineering team, with over 30 years of experience, developed software for the customer that performs all the necessary transformations automatically, without any operator intervention.
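The brightness-matching and merging steps described above can be sketched as follows, assuming the two images have already been transformed into a common plane and aligned up to a known horizontal overlap. The single-gain brightness model and plain averaging in the overlap are illustrative simplifications of the actual algorithms:

```python
import numpy as np

def match_brightness(left, right, overlap_cols):
    """Scale the right image so its mean matches the left image's mean
    inside the shared overlap region (a simple single-gain model)."""
    gain = left[:, -overlap_cols:].mean() / right[:, :overlap_cols].mean()
    return np.clip(right * gain, 0, 255)

def stitch(left, right, overlap_cols):
    """Merge two aligned grayscale images, averaging the overlap."""
    left = left.astype(float)
    right = match_brightness(left, right.astype(float), overlap_cols)
    width = left.shape[1] + right.shape[1] - overlap_cols
    out = np.zeros((left.shape[0], width))
    out[:, :left.shape[1]] = left
    out[:, left.shape[1]:] = right[:, overlap_cols:]
    # average where the two cameras see the same pixels
    out[:, left.shape[1] - overlap_cols:left.shape[1]] = (
        left[:, -overlap_cols:] + right[:, :overlap_cols]) / 2.0
    return out.astype(np.uint8)

# Toy demo: the right camera is darker; after gain matching the seam vanishes
left = np.full((4, 6), 100, dtype=np.uint8)
right = np.full((4, 6), 50, dtype=np.uint8)
merged = stitch(left, right, overlap_cols=2)
```

Residual color fringes in the green/magenta overlay correspond to pixels where the two terms of the overlap average still disagree after alignment.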

Read more
Reverse Optical Engineering Case Studies from Avantier

At Avantier, we are proud of our track record of helping customers solve problems using reverse optical engineering. Here are three case studies.

Case Study 1: Reverse Engineering an OFS 20x APO Objective Lens for Bioresearch

Genetic engineering requires precision optics to view and edit the genomes of plants or animals. One world-renowned bioresearch lab has pioneered a new method to speed plant domestication by means of genome editing. While ordinary plant domestication typically requires decades of hard work to produce bigger and better fruit, their methods speed up the process through careful editing of the plants' genome. To accomplish this editing, the lab used a high-end OFS 20x Mitutoyo APO SL infinity-corrected objective lens. The objective lens performed as desired, but there was just one problem: the high-energy continuous wave (CW) laser light involved in the project would damage the sensitive optics, causing the objective lens to fail. This became a recurrent problem, and the lab found itself constantly replacing the very expensive objective. It wasn't long before the cost became untenable.

We were approached with the details of this problem and asked if we could design a microscope objective lens with the same long working distance and high numerical aperture as the OFS 20x Mitutoyo but with better resistance to laser damage. The problem was a complex one, but after intensive study and focused effort we succeeded in reverse engineering the objective lens and improving the design with a protective coating. The new objective lens was produced and integrated into the bioresearch lab's system. More than three years later, it continues to be used in close proximity to laser beams without any hint of failure or compromised imaging.
Case Study 2: Reverse Engineering an OTS 10x Objective Lens for Biomedical Research

Fluorescence microscopy is used by a biomedical research company to study embryo cells in a hot, humid incubator. This company used an OTS Olympus microscope objective lens to view the incubator environment up close and determine the presence, health, and signals of labeled cells, but the objective was failing over time. Constant exposure to temperatures above 37 °C and humidity of 70% was causing fungal spores to grow in the research environment and on the microscope objective. These fungal spores, after settling on the cover glass, developed into living organisms that digested the oils and lens coatings. Hydrofluoric acid, produced by the fungi as a waste product, slowly destroyed the lens coating and etched the glass.

The Olympus OTS 10x lens cost several thousand dollars, and the company soon realized that regular replacement due to fungal growth would cost far more than they were willing to pay. They approached us to ask if we would reverse engineer an objective that performed in a manner equivalent to the one they were using, but with a resistance to fungal growth that the original objective did not have. Our optical and coating engineers worked hard on this problem and succeeded in producing an equivalent microscope objective with a special protective coating. This microscope lens can be used in humid, warm environments for a long period of time without the damage the Olympus objective sustained.

Case Study 3: Reverse Engineering a High Precision Projection Lens

A producer of consumer electronics was designing a home planetarium projector and found themselves in need of a high precision projection lens that could project an enhanced image. Nothing on the market seemed to suit, so they approached us to ask if we would reverse engineer a now-obsolete high quality lens that exactly fit their needs.
We were able to study the lens and create our own design for a projector lens with outstanding performance. Not only did this lens exceed our customer’s expectations, it was also affordable to produce and suitable for high volume production.

Read more
Case Study: Infrared Lens Design

Design for Manufacturing (DFM) Case Study: Infrared Lens Design for Scientific Equipment

At Avantier we offer Design for Manufacturing (DFM) services, optimizing product design with our extensive knowledge of manufacturing constraints, costs, and methods. Measuring the relative concentrations of carbon monoxide, products of combustion, and unburned hydrocarbons is the basis of flare monitoring and is typically accomplished with infrared spectral imaging. For real-time continuous monitoring, a multispectral infrared imager can be used.

We were approached by a scientific equipment supplier for DFM help on a particular infrared lens (50 mm f/1) used in their infrared imager. The lens was designed to offer the high image quality and low distortion needed for scientific research, but though it performed as desired, two major manufacturing problems made it expensive to produce. The first was expensive aspheric lens elements. The second was the inclusion of GaAs to improve the modulation transfer function (MTF) and to correct chromatic aberration. GaAs is a highly toxic material, and incorporating it in the design complicates the manufacturing process and increases the cost.

Taking into account both lens performance and our client's manufacturing budget, Avantier redesigned the infrared lens with DFM principles. Our final design included no aspheric lens elements and no hazardous material, yet met all requirements for distortion, MTF, and image height offset over the working spectral range. Using a combination of 5 spherical lens elements and 1 filter, our 50 mm f/1 design reduced the lens cost by about 60%. The image below shows the configuration of the 50 mm f/1 lens. For a wavelength range of approximately 3 to 5 µm the MTF was 0.7, well within our client's requirements. The next image shows the modulation transfer function (MTF) plots for the redesigned lens, and the last image shows the lens distortion plot.
The maximum field was 6.700 degrees, and the maximum f-tan distortion was 0.6526%. Whatever your need might be, our engineers are ready to put their design for manufacturing experience to work for you. Call us to schedule a free initial consultation or to discuss manufacturing possibilities.

Read more