3D Image Analysis

Several manufacturers sell 3D cameras that use 2D sensor arrays sensitive to the phase of reflected laser light. All of them spread the laser light so that it continuously illuminates the complete scene of interest. The laser light is modulated synchronously with the pixel sensitivity, and the phase of the reflected light arriving at each sensor pixel depends on the distance to the reflection point. This is the basis for calculating the XYZ positions of the illuminated surface points. While this basic principle of operation is shared by a number of 3D cameras, many technical details determine the quality of the obtained data.

The best known of these 3D cameras is the Microsoft Kinect. It also provides the best distance measurement resolution: according to our measurements, the standard deviation of the distance to both white and relatively dark objects is below 2 mm. Most 3D cameras have higher distance measurement noise, often unacceptably high even for a relatively high target reflectivity of 20 percent.

Here we show an example of data obtained with one not-so-good European 3D camera with a 2D array of Time-of-Flight-sensitive pixels. We used the camera's default settings to calculate distances to a white target at 110 cm from the camera, which is close to the default calibration setup distance for the camera. The first figure shows, as a point cloud, the deviations of distance from a smooth approximating surface in the center of the image; here X and Y are distances from the image center measured in pixels. The second figure shows a histogram of the distance noise. Both figures show that, in addition to Gaussian-looking noise, there are some points with very large deviations. Such large deviations are caused by strong fixed-pattern noise (differences between pixel sensitivities).
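The phase-to-distance relation described above can be sketched in a few lines. This is a minimal illustration of the measurement principle, not any camera's actual firmware; the modulation frequency and phase values are made-up examples:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad: float, f_mod_hz: float) -> float:
    """Convert the measured phase shift of the modulated laser light
    into a target distance. The light travels to the target and back,
    which is where the extra factor of 2 in the denominator comes from."""
    # One full 2*pi phase cycle corresponds to one modulation wavelength
    # of round-trip travel, so the unambiguous range is c / (2 * f_mod).
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# Example: 20 MHz modulation, 90 degree measured phase shift
d = phase_to_distance(math.pi / 2, 20e6)  # about 1.87 m
```

Note the range ambiguity this implies: at 20 MHz modulation, distances repeat every c / (2 f) ≈ 7.5 m, which is one of the technical details real cameras must handle.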
While the noise of this camera is at least 8 times higher than that of the Kinect, more problems become visible when looking at 2D projections of the 3D point cloud. Projected onto the camera plane, the color-coded distances (shown in cm) do not look too bad for a simple scanned scene; here the x index and y index are just pixel numbers in the x and y directions. The picture becomes more interesting when looking at the projection of the same point cloud onto the X-distance plane: we can clearly see stripes separated by about 4 cm in distance. Nothing like these spurious stripes appears in point clouds from good 3D cameras such as the Kinect. So the conclusion of our 3D image analysis is that this European 3D camera is not competitive with the best available 3D cameras.
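The noise figure quoted above (the standard deviation of distance residuals after removing a smooth approximating surface) can be estimated with a sketch like the following. It assumes the depth image is available as a NumPy array in millimeters and uses a least-squares plane as the smooth surface for a flat target; the data here is synthetic, for illustration only:

```python
import numpy as np

def distance_noise(depth: np.ndarray) -> float:
    """Estimate distance noise as the standard deviation of residuals
    after subtracting a least-squares plane (the smooth approximating
    surface for a flat target)."""
    h, w = depth.shape
    y, x = np.mgrid[0:h, 0:w]
    # Fit z = a*x + b*y + c by least squares.
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, depth.ravel(), rcond=None)
    residuals = depth.ravel() - A @ coeffs
    return residuals.std()

# Synthetic flat target at 110 cm with 2 mm Gaussian noise:
rng = np.random.default_rng(0)
flat = 1100.0 + rng.normal(0.0, 2.0, size=(120, 160))  # mm
print(distance_noise(flat))  # close to 2.0 mm
```

On real data, the fixed-pattern outliers mentioned above would inflate this estimate, so a robust statistic (for example, one based on the median absolute deviation) is preferable in practice.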

Image Processing Case Study

Let’s look at a case of extensive image processing from the transportation industry. Two video cameras looked at boxes moving fast on a conveyor belt. To provide high enough image resolution the cameras had to be placed close to the belt, but then neither camera could cover the whole belt cross-section. They were therefore placed on the sides of the belt, each seeing part of the boxes. The customer wanted good images of the texture on the top of the boxes, so the images from the two cameras needed to be stitched.

Two cameras see the same object at different angles and distances. Before merging, the images from the different cameras must be transformed from the coordinate systems of the cameras to one common coordinate system and placed in one common plane in XYZ space. The software we developed performed this transformation automatically, based on the known geometry of the camera positions relative to the conveyor belt. Still, after such a transformation, the multi-megapixel grayscale images from the left and right cameras are shifted in the common plane relative to each other, as the first figure shows; there the grayscale images from the two cameras are rendered in false color, and the scale on the right shows the relation between 8-bit pixel signal strength and the false color. The two images also have different brightness. Our algorithms adjust the brightness and shift the left and right images so that the two images can be merged into one.

The resulting combined image is shown using a different choice of false colors: right-image pixels in magenta, and left-image pixels in green. A zoomed view of the overlap region of the stitched image follows. If the stitching were perfect, all the pixels in the overlap region would be gray. While there are small fringes of color on the edges of the black digits and stripes, the overall stitching accuracy is good.
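The shift-and-brightness alignment step described above can be sketched as follows. This is a simplified illustration only: it assumes phase correlation for the shift estimate and a mean-ratio gain for the brightness match, neither of which is claimed to be the method used in the production software:

```python
import numpy as np

def estimate_shift(ref: np.ndarray, moved: np.ndarray):
    """Estimate the integer (dy, dx) translation of `moved` relative
    to `ref` using phase correlation."""
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large indices around to negative shifts.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def match_brightness(ref: np.ndarray, other: np.ndarray) -> np.ndarray:
    """Scale `other` so its mean brightness matches `ref`."""
    return other * (ref.mean() / other.mean())

# Synthetic check: translate an image and recover the translation.
rng = np.random.default_rng(1)
base = rng.random((128, 128))
shifted = np.roll(base, (5, -3), axis=(0, 1))
print(estimate_shift(base, shifted))  # recovers (5, -3)
```

Phase correlation handles pure translation only; the perspective differences between the two cameras are what the geometry-based transformation to a common plane removes beforehand.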
This is not trivial: stitching images obtained by different cameras looking at a nearby object from different angles is not easy. For comparison, an example of a not very successful stitching is also shown. Avantier Inc.’s engineering team, with over 30 years of experience, developed software for the customer that performs all the necessary transformations automatically, without any operator intervention.

Case Study: Infrared Lens Design

Design for Manufacturing (DFM) Case Study: Infrared Lens Design for Scientific Equipment

At Avantier we offer Design for Manufacturing (DFM) services, optimizing product design with our extensive knowledge of manufacturing constraints, costs, and methods. Measuring the relative concentrations of carbon monoxide, products of combustion, and unburned hydrocarbons is the basis of flare monitoring and is typically accomplished with infrared spectral imaging. For real-time continuous monitoring, a multispectral infrared imager can be used.

We were approached by a scientific equipment supplier for DFM help on a particular infrared lens (50 mm f/1) used in their infrared imager. The lens was designed to offer the high image quality and low distortion needed for scientific research, but though it performed as desired, two major manufacturing problems made it expensive to produce. The first was costly aspheric lens elements. The second was the inclusion of GaAs, used to improve the modulation transfer function (MTF) and to correct chromatic aberration. GaAs is a highly toxic material, and incorporating it in the design complicates the manufacturing process and increases the cost.

Taking into account both lens performance and our client’s manufacturing budget, Avantier redesigned the infrared lens following DFM principles. Our final design included no aspheric lens elements and no hazardous material, yet met all requirements for distortion, MTF, and image height offset over the working spectral range. Using a combination of 5 spherical lens elements and 1 filter, our redesigned 50 mm f/1 lens reduced the lens cost by about 60%.

The first image below shows the configuration of the 50 mm f/1 lens. For the wavelength range of approximately 3 to 5 µm the MTF was 0.7, well within our client’s requirements. The next image shows the modulation transfer function (MTF) plots for the redesigned lens, and the last image shows the lens distortion plot.
The maximum field was 6.700 degrees, and the maximum f-tan(θ) distortion was 0.6526%. Whatever your need might be, our engineers are ready to put their design-for-manufacturing experience to work for you. Call us to schedule a free initial consultation or to discuss manufacturing possibilities.
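The distortion figure quoted above follows the standard f-tan(θ) definition: the deviation of the real chief-ray image height from the ideal height f·tan(θ). A minimal sketch of the computation, using illustrative numbers rather than the actual ray-trace data:

```python
import math

def f_tan_distortion_pct(real_height_mm: float, focal_mm: float,
                         field_deg: float) -> float:
    """Percent f-tan(theta) distortion: deviation of the real image
    height from the ideal height f * tan(theta)."""
    ideal = focal_mm * math.tan(math.radians(field_deg))
    return 100.0 * (real_height_mm - ideal) / ideal

# Ideal image height at the 6.7 degree maximum field of a 50 mm lens:
ideal = 50.0 * math.tan(math.radians(6.7))   # about 5.87 mm
# A real height 0.6526% above ideal reproduces the reported figure:
real = ideal * 1.006526
print(f_tan_distortion_pct(real, 50.0, 6.7))  # 0.6526
```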
