LiDAR in Autonomous Vehicles

Today’s advanced driver assistance systems take advantage of AI-enhanced cameras and radar or sonar systems, but most manufacturers have been waiting for advances in machine vision technology to go one step further into autonomous self-driving cars. Today, that technology is ready to roll out. We call it LiDAR: light detection and ranging. LiDAR in autonomous vehicles creates a 3D understanding of the vehicle’s environment, providing a self-driving car with a dynamic, highly accurate map of anything within 400 meters.

Understanding LiDAR

LiDAR works by sending out laser pulses that reach a target, then bounce back to a LiDAR sensor that measures the time the round trip took. This enables the LiDAR system to create a point map giving the exact location of everything within reach of the laser beam. While the reach depends on the laser type used, the lasers used in autonomous cars can now provide accurate data on objects up to 400 meters away. Since LiDAR systems use laser light from a moving source on the car to ‘see’, the technology is not dependent on ambient light and functions just as well at night as during the day.

LiDAR is used in more than just self-driving cars. It has become important in land surveying, forestry, farming, and mining applications. LiDAR technology was used to map the topography of Mars, and is used today in a program measuring the distance between the surfaces of the Moon and the Earth. It can provide soil profiling, forest canopy measurements, and even cloud profiling.

LiDAR in Autonomous Vehicles

Ten years ago, LiDAR was expensive and clunky, but that didn’t stop autonomous driving pioneers from incorporating it into their prototypes. Google designed a car with a $70,000 LiDAR system sitting right on top of the vehicle, and ran a series of very successful tests in Mountain View, California and around the U.S. There was just one problem: tacking an extra $70,000 onto an already expensive car leads to something that is simply not practical for anything besides research.

Today, Waymo manufactures self-driving cars using what they learned from those original experiments, and each of these cars is fitted with a similar LiDAR system. The design has been improved over the years, but the most significant change is a very promising one: advances in technology have enabled Waymo to bring the cost of the LiDAR system down by 90%.

Now LiDAR technology is available to any manufacturer of autonomous vehicles, and our LiDAR optical design specialists can help you come up with a LiDAR system that meets your budget and requirements. Contact us for more information or to chat with one of our engineers.
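As a minimal illustration of the time-of-flight ranging principle described above, the sketch below converts a measured pulse round-trip time into a distance. The function names are our own, not part of any LiDAR vendor’s API.

```python
# Time-of-flight distance estimation: a minimal sketch of the LiDAR
# ranging principle. Function and variable names are illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(t_round_trip_s: float) -> float:
    """Return target distance in meters from a pulse round-trip time.

    The pulse travels to the target and back, so the one-way
    distance is half the total path length.
    """
    return SPEED_OF_LIGHT * t_round_trip_s / 2.0

# A target 400 m away returns the pulse in about 2.67 microseconds.
t = 2 * 400.0 / SPEED_OF_LIGHT
print(f"round trip: {t * 1e6:.3f} us -> {distance_from_round_trip(t):.1f} m")
```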

Peak to Valley and Root Mean Square in Optics

How Peak to Valley (PV) and Root Mean Square (RMS) Affect the Quality of Your Optics

Just how smooth is the surface of your optic? Qualitative descriptions are of limited value, and often you’ll want to put a number to it. Two ways of quantifying deviations from an ideal optical surface are peak-to-valley (PV) and root-mean-square (RMS). While root-mean-square provides more information, peak-to-valley measurements have been used more often historically. Both methods have their pros and cons. Here we’ll look into both in more detail.

Understanding Peak to Valley (PV) Measurements

A peak-to-valley measurement represents the distance between the highest point and lowest point on the surface of an optic. Theoretically, this number should be quite useful: for instance, it allows an optical designer to make worst-case predictions of optical performance. In practice, though, it is much more problematic. Using peak-to-valley measurements assumes that all measurements are precise and the surface has no noise. It is most useful if there are large features on the optic, or if the order of aberration is low.

An actual optic will have numerous imperfections at scales smaller than a millimeter across the aperture. Peak-to-valley measurements will be taken with an instrument that is far from ideal; typically, it will have a significant noise level and a large MTF difference. Because of the inherent problems with measurement, optical manufacturers typically have their own measuring method, which does little but mask the condition of the optic. Often the measuring instruments are only used to interpolate the standard parameters in a method based more on a study of optical properties than on the actual optic under consideration. Unfortunately, cost and manufacturability are often expected to depend entirely on the value of PV.

For instance, we have had customers come in with a minimal budget requesting a 1/10 wave PV optic, with no specifications on testing conditions. This type of demand can only lead to a prohibitively expensive optic, an unworkable project, or a specification reinterpreted by the vendor. Standard manufacturing processes do not allow for 1/10 wave PV.

Understanding Root Mean Square (RMS) Measurements

A root-mean-square (RMS) measurement describes the average deviation of the actual optical surface from an ideal surface. While PV gives the ‘worst case scenario’, we can think of RMS as providing the overall surface variation.

RMS values are also dependent on measurement, and especially on the relative area of the optic sampled. However, they are typically much more informative than PV values.

While some may attempt to convert between PV and RMS measurements by multiplying by 3.5, this is inaccurate and misleading. The two numbers measure very different things, and there is no simple relationship between them. A surface may have a large gouge that results in a large PV, but if the rest of the optic is smooth the RMS will reflect that. Similarly, if a surface has a relatively small PV but is rough overall, the RMS will be larger than a look at the PV would suggest.

Comparing RMS and PV Measurements

Below are images showing plots of 1 micron PV for basic aberration terms, along with the corresponding RMS. We see that the RMS is noticeably different for each basic aberration term.

When computer-controlled sub-aperture polishing comes into play, we often see cyclic errors.
The plots below again show 1-micron PV, but here we have different frequencies of cyclic error. Though you can compensate for pretty much all of the basic aberrations in your optical system by alignment, cyclic form error will typically not be compensated for. In these images you can see that the PV and RMS measurements end up being essentially the same as for the basic aberration terms.

Now consider the image below, which shows the simulated plot of an optic where the error consists of many small imperfections and a few larger, localized imperfections. A well fabricated optic will typically be of this kind. In this situation, the PV is still close to the 1 micron we saw in our other plots; the RMS, however, is much lower. These plots demonstrate how RMS reflects the actual surface quality of the optical system, while PV provides hardly any useful information.

RMS is a better specification than PV, but as always, the key is to partner with a trustworthy optics supplier that can work with you to manufacture an optic with the best performance possible at your price point. At Avantier Inc., that’s what we do every day. Contact us for a free consultation, and put our 50+ years of optical experience to work for you.
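To make the difference between the two metrics concrete, here is a minimal sketch (our own illustration, not Avantier’s metrology code) that computes PV and RMS for a surface height map and shows how a single gouge inflates PV while barely moving RMS.

```python
import numpy as np

def pv(surface: np.ndarray) -> float:
    """Peak-to-valley: distance between highest and lowest point."""
    return float(surface.max() - surface.min())

def rms(surface: np.ndarray) -> float:
    """Root-mean-square deviation from the mean surface level."""
    return float(np.sqrt(np.mean((surface - surface.mean()) ** 2)))

rng = np.random.default_rng(0)

# A smooth surface plus one deep, localized gouge (heights in microns).
smooth = rng.normal(0.0, 0.01, size=(256, 256))
gouged = smooth.copy()
gouged[100, 100] = -1.0  # single 1-micron-deep defect

print(f"smooth: PV={pv(smooth):.3f} um, RMS={rms(smooth):.4f} um")
print(f"gouged: PV={pv(gouged):.3f} um, RMS={rms(gouged):.4f} um")
# PV jumps by roughly an order of magnitude; RMS barely changes.
```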

Introduction to Microscopes and Objective Lenses

A microscope is an optical device designed to magnify the image of an object, enabling details indiscernible to the human eye to be differentiated. A microscope may project the image onto the human eye or onto a camera or video device.

Historically, microscopes were simple devices composed of two elements. Like a magnifying glass today, they produced a larger image of an object placed within the field of view. Today, microscopes are usually complex assemblies that include an array of lenses, filters, polarizers, and beamsplitters. Illumination is arranged to provide enough light for a clear image, and sensors are used to ‘see’ the object. Although today’s microscopes are usually far more powerful than the microscopes used historically, they are used for much the same purpose: viewing objects that would otherwise be indiscernible to the human eye.

Here we’ll start with a basic compound microscope and go on to explore the components and function of larger, more complex microscopes. We’ll also take an in-depth look at one of the key parts of a microscope, the objective lens.

Compound Microscope: A Closer Look

While a magnifying glass consists of just one lens element and can magnify any object placed within its focal length, a compound lens, by definition, contains multiple lens elements. A relay lens system is used to convey the image of the object to the eye or, in some cases, to camera and video sensors.

A basic compound microscope could consist of just two elements acting in relay: the objective and the eyepiece. The objective relays a real image to the eyepiece, while magnifying that image anywhere from 4-100x. The eyepiece magnifies the received real image, typically by another 10x, and conveys a virtual image to the sensor.

There are two major specifications for a microscope: the magnification power and the resolution. The magnification tells us how much larger the image is made to appear. The resolution tells us how far apart two points must be to be distinguishable. The smaller the resolution, the larger the resolving power of the microscope. The highest resolution you can get with a light microscope is about 0.2 microns (200 nm), but this depends on the quality of both the objective and the eyepiece.

Both the objective lens and the eyepiece contribute to the overall magnification of the system. If an objective lens magnifies the object by 10x and the eyepiece by 2x, the microscope will magnify the object by 20x. If the objective lens magnifies the object by 10x and the eyepiece by 10x, the microscope will magnify the object by 100x. This multiplicative relationship is the key to the power of microscopes, and the prime reason they perform so much better than simple magnifying glasses.

In modern microscopes, neither the eyepiece nor the microscope objective is a simple lens. Instead, a combination of carefully chosen optical components work together to create a high quality magnified image. A basic compound microscope can magnify up to about 1000x. If you need higher magnification, you may wish to use an electron microscope, which can magnify up to a million times.

Microscope Eyepieces

The eyepiece or ocular lens is the part of the microscope closest to your eye when you bend over to look at a specimen. An eyepiece usually consists of two lenses: a field lens and an eye lens. If a larger field of view is required, a more complex eyepiece that increases the field of view can be used instead.
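The multiplicative magnification relationship described above is simple enough to capture in a few lines. The sketch below is our own illustration, with hypothetical function names:

```python
def total_magnification(objective_mag: float, eyepiece_mag: float) -> float:
    """Total magnification of a compound microscope is the product
    of the objective and eyepiece magnifications."""
    return objective_mag * eyepiece_mag

# The pairings from the text: a 10x objective with 2x and 10x eyepieces,
# plus two common higher-power combinations.
for obj, eye in [(10, 2), (10, 10), (40, 10), (100, 10)]:
    print(f"{obj}x objective + {eye}x eyepiece -> "
          f"{total_magnification(obj, eye):.0f}x")
```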
Microscope Objectives

Microscope objective lenses are typically the most complex part of a microscope. Most microscopes will have three or four objective lenses, mounted on a rotating turret for ease of use. A scanning objective lens will provide 4x magnification, a low power objective will provide magnification of 10x, and a high power objective offers 40x magnification. For higher magnification, you will need to use oil immersion objectives. These can provide 50x, 60x, or 100x magnification and increase the resolving power of the microscope, but they cannot be used on live specimens. A microscope objective may be either reflective or refractive. It may also be either finite conjugate or infinite conjugate.

Refractive Objectives

Refractive objectives are so called because their elements bend, or refract, light as it passes through the system. They are well suited to machine vision applications, as they can provide high resolution imaging of very small objects or ultra fine details. Each element within a refractive objective is typically coated with an anti-reflective coating. A basic achromatic objective is a refractive objective that consists of just an achromatic lens and a meniscus lens, mounted within an appropriate housing. The design is meant to limit the effects of chromatic and spherical aberration by bringing two wavelengths of light to focus in the same plane. Plan apochromat objectives can be much more complex, with up to fifteen elements. They can be quite expensive, as would be expected from their complexity.

Reflective Objectives

A reflective objective works by reflecting light rather than bending it. Primary and secondary mirror systems both magnify and relay the image of the object being studied. While reflective objectives are not as widely used as refractive objectives, they offer many benefits. They can work deeper into the UV or IR spectral regions, and they are not plagued by the same aberrations as refractive objectives. As a result, they tend to offer better resolving power.

Microscope Illumination

Many microscopes rely on background illumination such as daylight or a lightbulb rather than a dedicated light source. In brightfield illumination (also known as Koehler illumination), two convex lenses, a collector lens and a condenser lens, are placed so as to saturate the specimen with external light admitted into the microscope from behind. This provides a bright, even, steady light throughout the system.

Key Microscope Objective Lens Terminology

There are some important specifications and terminology you’ll want to be aware of when designing a microscope or ordering microscope objectives. Here is a list of key terminology.

Numerical Aperture

Numerical aperture (NA) denotes the ability of an objective to gather light and resolve fine detail. It is defined as NA = n sin θ, where n is the refractive index of the medium between the lens and the specimen and θ is the half-angle of the maximum cone of light the objective can accept.
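As an illustration of how NA sets the resolution limit mentioned earlier, the short sketch below applies the standard Rayleigh criterion (d = 0.61 λ / NA). It is our own example, not taken from the article; the objective parameters are representative values.

```python
import math

def numerical_aperture(n: float, half_angle_deg: float) -> float:
    """NA = n * sin(theta), with theta the half-angle of the
    objective's acceptance cone."""
    return n * math.sin(math.radians(half_angle_deg))

def rayleigh_resolution_um(wavelength_um: float, na: float) -> float:
    """Smallest resolvable separation per the Rayleigh criterion."""
    return 0.61 * wavelength_um / na

# A dry objective (n = 1.0) vs. an oil immersion objective (n = 1.515).
for n, theta in [(1.0, 40.0), (1.515, 67.5)]:
    na = numerical_aperture(n, theta)
    d = rayleigh_resolution_um(0.55, na)  # green light, 550 nm
    print(f"NA = {na:.2f} -> resolution ~ {d:.2f} um")
# The oil immersion case lands near the ~0.2 micron limit quoted above.
```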

Lossless Image Compression Example

For storage and transmission of large image files it is desirable to reduce the file size. For consumer-grade images this is achieved by lossy image compression, in which image details not very noticeable to humans are discarded. However, for scientific images discarding any image details may not be acceptable. Still, all images, except completely random ones, include some redundancy. This permits lossless compression, which decreases image file size while preserving all the image details.

The simplest file compression can be achieved by using well-known arithmetic encoding of the image data. The degree of compression achievable with arithmetic encoding can be calculated from the Shannon entropy, H = −Σᵢ pᵢ log₂(pᵢ), where pᵢ is the probability of pixel value i. This Shannon entropy gives the average number of bits per pixel necessary to arithmetically encode the image. If, say, the original image is a monochrome one with 8 bits per pixel, then for a completely random image the entropy will be equal to 8. For non-random images the entropy will be less than 8.

Let’s consider a simple example: a NASA infrared image of the Earth, shown here using false color. This image is an 8-bit monochrome one, and has an entropy of 5.85. This means arithmetic encoding can decrease the image file size by a factor of 8/5.85 = 1.367. This is better than nothing, but not great.

Significant improvement can be achieved by transforming the image. If we use a standard lossless wavelet transform (LWT), after one step of the LWT the initial image will be transformed into 4 smaller ones. Three of these 4 smaller images contain only low pixel values, which are not visible in the picture above. Zooming in on them saturates the top left corner, but makes small details near the other corners visible (notice the changed scale on the right). Now the entropy of the top left corner is 5.87, which is close to the entropy of 5.85 of the complete initial image. The entropies of the other 3 corners are 1.83, 1.82 and 2.82. So, after only one LWT step the lossless compression ratio would be 2.6, which is significantly better than 1.367.

Our proprietary adaptive prediction lossless compression algorithm shows small prediction residue for the complete image. The actual lossless compression ratio achieved here is about 4.06. It is remarkable that while the last picture looks quite different from the original NASA image, it contains all the information necessary to completely recover the initial image. Due to the lossless nature of the compression, the last picture, using arithmetic encoding, can be saved to a file 4.06 times smaller than the initial NASA picture file. Our proprietary algorithm applied to this smaller file completely recovers the initial picture, accurately to the last bit. No bit left behind.
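Here is a minimal sketch of the entropy calculation described above (our own illustration, not the proprietary algorithm): it computes the Shannon entropy of an 8-bit image and the corresponding ideal arithmetic-coding compression ratio.

```python
import numpy as np

def shannon_entropy_bits(image: np.ndarray) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over the histogram of
    pixel values: the average number of bits per pixel needed by an
    ideal arithmetic encoder."""
    counts = np.bincount(image.ravel(), minlength=256)
    p = counts[counts > 0] / image.size
    return float(-(p * np.log2(p)).sum())

# For an 8-bit image the ideal lossless compression ratio is 8 / H.
# E.g. an entropy of 5.85 bits/pixel gives 8 / 5.85 = 1.367, as in the text.
img = np.random.default_rng(0).integers(0, 256, size=(512, 512)).astype(np.uint8)
h = shannon_entropy_bits(img)
print(f"entropy: {h:.2f} bits/pixel, ratio: {8 / h:.3f}")  # ~8.00, ~1.0 (random)
```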

3D Image Analysis

Image Analysis of a 3D Image

Several manufacturers sell 3D cameras which use 2D sensor arrays sensitive to the phase of reflected laser light. All of them spread laser light so that it continuously illuminates the complete scene of interest. The laser light is modulated synchronously with the pixel sensitivity. The phase of the reflected laser light arriving back at the sensor pixels depends on the distance to the reflection points. This is the basis for calculating the XYZ positions of the illuminated surface points. While this basic principle of operation is the same for a number of 3D cameras, there are many technical details which determine the quality of the obtained data.

The best known of these 3D cameras is the Microsoft Kinect. It also provides the best distance measurement resolution. According to our measurements, the standard deviation of distance to both white and relatively dark objects is below 2 mm. Most 3D cameras have higher distance measurement noise, often unacceptably high even for a relatively high target reflectivity of 20 percent.

Here we show an example of data obtained using one not-so-good European 3D camera with a 2D array of time-of-flight-sensitive pixels. We used the default settings of the camera to calculate distances to a white target at 110 cm from the camera, which is close to the default calibration setup distance for the camera. Deviations of distance from a smooth approximating surface in the center of the image are shown by the point cloud here. Here X and Y are distances from the image center, measured in pixels. Next is a histogram of the distance noise. Both figures show that, in addition to Gaussian-looking noise, there are some points with very large deviations. Such large deviations are caused by strong fixed-pattern noise (differences between pixel sensitivities).

While the noise of this camera is at least 8 times higher than the noise of the Kinect, more problems become visible when looking at a 2D projection of the 3D point cloud. Projected onto the camera plane, the color-coded distances, shown in cm, do not look too bad for a simple scanned scene. Here X index and Y index are just pixel numbers in the x and y directions. The picture becomes more interesting when looking at the projection of the same point cloud onto the X-distance plane: we can clearly see stripes separated by about 4 cm in distance. Nothing like these spurious stripes can be seen in point clouds from the good 3D cameras, such as the Kinect. So the conclusion of our 3D image analysis of this European 3D camera is that it is not competitive with the best available 3D cameras.
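The phase-to-distance relation behind these cameras is straightforward. Below is a minimal sketch (our own illustration, not any camera’s firmware; the 20 MHz modulation frequency is an assumed typical value) converting a measured modulation phase into distance, including the ambiguity interval beyond which the phase wraps around.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase shift of amplitude-modulated light:
    d = c * phi / (4 * pi * f). Valid only within the unambiguous
    range c / (2 * f); beyond it, the measured phase wraps around."""
    return SPEED_OF_LIGHT * phase_rad / (4.0 * math.pi * mod_freq_hz)

f_mod = 20e6  # assumed modulation frequency, 20 MHz
print(f"unambiguous range: {SPEED_OF_LIGHT / (2 * f_mod):.2f} m")
print(f"phase pi/2 -> {distance_from_phase(math.pi / 2, f_mod):.3f} m")
```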

Image Processing Case Study

Let’s look at a transportation-industry case of extensive image processing. Two video cameras were looking at boxes moving fast on a conveyor belt. To provide high enough image resolution the cameras were placed close to the belt, but then they could not cover the whole belt cross-section. They were placed on the sides of the belt, and each could see part of the boxes. The customer wanted good images of the texture on the top of the boxes, so the images from the two cameras needed to be stitched.

Two cameras see the same object at different angles and distances. Before merging the images from the different cameras, the images must be transformed from the coordinate systems of the cameras to one common coordinate system, and placed in one common plane in XYZ space. The software we developed performed this transformation automatically, based on the known geometry of the camera positions relative to the conveyor belt. Still, after such a transformation, the multi-megapixel grayscale images from the left and the right cameras are shifted in the common plane relative to each other. Here the grayscale images from the two cameras are shown in false color. The scale on the right shows the relation between 8-bit pixel signal strength and the false color. We see that the two images also have different brightness. Our algorithms adjust the brightness and shift the images from the left and right cameras to make merging the two images into one possible.

The resulting combined image is shown using a different choice of false colors: right-image pixels are shown in magenta, and left-image pixels in green. Here is a zoomed version of the overlap region of the stitched image. If the stitching were perfect, then in the overlap region all the pixels would be gray. Our engineers found that while there are small fringes of color on the edges of the black digits and stripes, the overall stitching accuracy is good. This is not trivial, as stitching images obtained by different cameras looking at a nearby object from different angles is not easy. For comparison, here is an example of a not very successful stitching.

Avantier Inc.’s engineering team, with over 30 years of experience, developed software for the customer to perform all the necessary transformations automatically, without any operator intervention.
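A common way to implement the plane-to-plane transformation described above is a homography warp. The sketch below is a minimal illustration using OpenCV, not the customer’s production code; the corner correspondences, file names, and output size are made up for the example.

```python
import cv2
import numpy as np

# Hypothetical correspondences: four belt-plane points as seen by the
# right camera, and their coordinates in the common stitching plane.
src = np.float32([[120, 80], [1800, 60], [1850, 1000], [90, 1040]])
dst = np.float32([[0, 0], [1700, 0], [1700, 960], [0, 960]])

H = cv2.getPerspectiveTransform(src, dst)  # 3x3 plane-to-plane homography

right = cv2.imread("right_cam.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file
left_warped = cv2.imread("left_warped.png", cv2.IMREAD_GRAYSCALE)
# (assume left_warped is already 1700x960 in the common plane)

# Warp the right image into the common plane, then equalize mean
# brightness, mirroring the brightness adjustment described above.
right_warped = cv2.warpPerspective(right, H, (1700, 960))
right_warped = cv2.convertScaleAbs(
    right_warped, alpha=left_warped.mean() / max(right_warped.mean(), 1))

# Simple 50/50 blend in the overlap; a production stitcher would feather it.
stitched = cv2.addWeighted(left_warped, 0.5, right_warped, 0.5, 0)
cv2.imwrite("stitched.png", stitched)
```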

Reverse Optical Engineering Case Studies from Avantier

At Avantier, we are proud of our track record of helping customers solve problems using reverse optical engineering. Here are three case studies.

Case Study 1: Reverse Engineering an OFS 20x APO Objective Lens for Bioresearch

Genetic engineering requires precision optics to view and edit the genomes of plants or animals. One world-renowned bioresearch lab has pioneered a new method to speed plant domestication by means of genome editing. While ordinary plant domestication typically requires decades of hard work to produce bigger and better fruit, their methods speed up the process through careful editing of the plants’ genome.

To accomplish this editing, the bioresearch lab used a high end OFS 20x Mitutoyo APO SL infinity corrected objective lens. The objective lens performed as desired, but there was just one problem. The high energy continuous wave (CW) laser beams involved in the project would damage the sensitive optics, causing the objective lens to fail. This became a recurrent problem, and the lab found itself constantly replacing the very expensive objective. It wasn’t long before the cost became untenable. We were approached with the details of this problem and asked if we could design a microscope objective lens with the same long working distance and high numerical aperture performance as the OFS 20x Mitutoyo, but with better resistance to laser damage.

The problem was a complex one, but after intensive study and focused effort we succeeded in reverse engineering the objective lens and improving the design with a protective coating. The new objective lens was produced and integrated into the bioresearch lab’s system. More than three years later, it continues to be used in close proximity to laser beams without any hint of failure or compromised imaging.

Case Study 2: Reverse Engineering an OTS 10x Objective Lens for Biomedical Research

Fluorescence microscopy is used by a biomedical research company to study embryo cells in a hot, humid incubator. This company used an OTS Olympus microscope objective lens to view the incubator environment up close and determine the presence, health, and signals of labeled cells, but the objective was failing over time. Constant exposure to temperatures above 37 °C and humidity of 70% was causing fungal spores to grow in the research environment and on the microscope objective. These fungal spores, after settling on the cover glass, developed into living organisms that digested the oils and lens coatings. Hydrofluoric acid, produced by the fungi as a waste product, slowly destroyed the lens coating and etched the glass.

The Olympus OTS 10x lens cost several thousand dollars, and this research company soon realized that regular replacement due to fungal growth would cost far more than they were willing to pay. They approached us to ask if we would reverse engineer an objective that performed in a manner equivalent to the objective they were using, but with a resistance to fungal growth that the original objective did not have.

Our optical and coating engineers worked hard on this problem, and succeeded in producing an equivalent microscope objective with a special protective coating. This microscope lens can be used in humid, warm environments for long periods of time without sustaining the damage the Olympus objective did.
Case Study 3: Reverse Engineering a High Precision Projection Lens

A producer of consumer electronics was designing a home planetarium projector, and found themselves in need of a high precision projection lens that could project an enhanced image. Nothing on the market seemed to suit, and they approached us to ask if we would reverse engineer a high quality lens that exactly fit their needs but was now obsolete. We were able to study the lens and create our own design for a projector lens with outstanding performance. Not only did this lens exceed our customer’s expectations, it was also affordable to produce and suitable for high volume production.

Case Study: Infrared Lens Design

Design for Manufacturing (DFM) Case Study: Infrared Lens Design for Scientific Equipment

At Avantier we offer Design for Manufacturing (DFM) services, optimizing product design with our extensive knowledge of manufacturing constraints, costs, and methods. Measuring the relative concentrations of carbon monoxide, products of combustion, and unburned hydrocarbons is the basis of flare monitoring, and is typically accomplished with infrared spectral imaging. For real time continuous monitoring, a multispectral infrared imager can be used.

We were approached by a scientific equipment supplier for DFM help on a particular infrared lens (50 mm f/1) used in their infrared imager. The lens was designed to offer the high image quality and low distortion needed for scientific research, but though it performed as desired, there were two major manufacturing problems that made it expensive to produce. The first issue was expensive aspheric lens elements. The second was the inclusion of GaAs to improve the modulation transfer function (MTF) and to correct chromatic aberration. GaAs is a highly toxic material, and incorporating it in the design complicates the manufacturing process and increases the cost.

Taking into account both lens performance and our client’s manufacturing budget, Avantier redesigned the infrared lens on DFM principles. Our final design included no aspheric lens elements and no hazardous materials, yet met all requirements for distortion, MTF, and image height offset over the working spectral band. Using a combination of 5 spherical lens elements and 1 filter, our 50 mm f/1 design reduced the lens cost by about 60%.

The first image below shows the configuration of the 50 mm f/1 lens. For a wavelength range of approximately 3 to 5 µm the MTF was 0.7, well within our client’s requirements. The next image shows the modulation transfer function (MTF) plots for the redesigned lens. The last image shows the lens distortion plot. The maximum field was 6.700 degrees, and the maximum f-tan(θ) distortion was 0.6526%.

Whatever your need might be, our engineers are ready to put their design for manufacturing experience to work for you. Call us to schedule a free initial consultation or to discuss manufacturing possibilities.
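For readers unfamiliar with the f-tan(θ) distortion figure quoted above, the sketch below (our own illustration, using the standard definition and illustrative numbers, not the actual design data) shows how it is typically computed: the real chief-ray image height is compared against the ideal height f·tan(θ) of a distortion-free lens.

```python
import math

def f_tan_distortion_pct(real_height_mm: float, focal_mm: float,
                         field_deg: float) -> float:
    """Percent deviation of the real chief-ray image height from the
    ideal height f * tan(theta) of a distortion-free lens."""
    ideal = focal_mm * math.tan(math.radians(field_deg))
    return 100.0 * (real_height_mm - ideal) / ideal

# For a 50 mm lens at the 6.7 degree maximum field, the ideal image
# height is about 5.872 mm; a 0.6526% distortion would correspond to a
# real height of roughly 5.911 mm (illustrative numbers only).
ideal = 50.0 * math.tan(math.radians(6.7))
real = ideal * 1.006526
print(f"ideal height: {ideal:.3f} mm, distortion: "
      f"{f_tan_distortion_pct(real, 50.0, 6.7):.4f}%")
```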
