How to Read an Optical Drawing

An optical drawing is a detailed plan that allows us to manufacture optical components according to a design and given specifications. When optical designers and engineers come up with a design, they condense it into an optical drawing that can be understood by manufacturers anywhere. ISO 10110 is the most widely used standard for optical drawings; it describes all optical parts in terms of geometric dimensions and tolerances.

The image below shows the standard format of an optical drawing. Notice the three main fields: the upper third, shown here in blue, is called the drawing field; under it, the green area is known as the table field; and below that is the title field or, alternatively, the title block (shown here in yellow). Once an optical drawing is completed, it will look something like this. Notice the three fields: the drawing field, the table field, and the title field. We'll look at each of them in turn.

Field 1: Drawing Field

The drawing field contains a sketch or schematic of the optical component or assembly. In the drawing here, we see key information on surface texture, lens thickness, and lens diameter.

P3 means level 3 polished and describes the surface texture. Surface texture tells us how close our surface is to a perfectly flat ideal plane, and how extensive the deviations are.

63 refers to the lens diameter, the physical measurement of the diameter of the front-most part of the lens.

12 refers to the lens thickness, the distance along the optical axis between the two surfaces of the lens.

After reviewing the drawing field we know this is a polished bi-convex lens, and we know exactly how large and how thick it is. But there is more we need to know before we begin production. To find this additional information, we look at the table field.

Field 2: Table Field

In our example, the optical component has two optical surfaces, and the table field is broken into three subfields. The left subfield gives the specifications of the left surface, the right subfield gives the specifications of the right surface, and the middle subfield gives the specifications of the material.

Surface specifications: Sometimes designers will indicate "CC" or "CX" after the radius of curvature; CC means concave and CX means convex.

Material specifications:

1/ : Bubbles and inclusions. Usually written as 1/AxB, where A is the number of allowed bubbles or inclusions in the lens and B is the side length of a reference square in mm.

2/ : Homogeneity and striae. Usually written as 2/A;B, where A is the class number for homogeneity and B is the class for striae. (A worked reading of these codes is sketched at the end of this article.)

Field 3: Title Field

The last field on an optical drawing is called the title field, and it is here that all the bookkeeping happens. The author of the drawing, the date it was drawn, and the project title will be listed here, along with applicable standards. Often there will also be room for an approval, a revision count, and the project company. A final crucial piece of information is the scale: is the drawing done at 1:1, or at some other scale?

Now you know how to read an optical drawing and where to find the information you're looking for. If you have any other questions, feel free to contact us!
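To make the table-field notation concrete, here is a minimal Python sketch that interprets the two material-specification codes described above. It follows only the 1/AxB and 2/A;B readings given in this article; the example code strings are hypothetical, and a real ISO 10110 drawing carries many more indications than these.

```python
# Minimal sketch: interpret the ISO 10110 material codes described above.
# Only the 1/AxB and 2/A;B forms from this article are handled, and the
# example strings below are hypothetical.

def read_bubbles(code: str) -> str:
    """Interpret a '1/AxB' bubbles-and-inclusions indication."""
    a, b = code.removeprefix("1/").split("x")
    return (f"up to {a} bubbles/inclusions allowed, each at most equivalent "
            f"to a square of side {b} mm")

def read_homogeneity(code: str) -> str:
    """Interpret a '2/A;B' homogeneity-and-striae indication."""
    a, b = code.removeprefix("2/").split(";")
    return f"homogeneity class {a}, striae class {b}"

if __name__ == "__main__":
    print(read_bubbles("1/3x0.16"))    # hypothetical example
    print(read_homogeneity("2/1;2"))   # hypothetical example
```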

Infrared (IR) Lenses

An IR lens is an optical lens designed to collimate, focus, or collect infrared light. At Avantier Inc., we produce high performance IR optics such as IR lenses for use with near-infrared (NIR), short-wave infrared (SWIR), mid-wave infrared (MWIR), and long-wave infrared (LWIR) spectra. These infrared lenses can be customized for specific areas of the infrared spectrum, and are suitable for applications in defense, life science, medical, research, security, surveillance, and other industries.

Why Choose Avantier for Your Infrared Optics Needs

Whether you require one-off production of a single infrared (IR) lens assembly for a specialized research project or a large quantity of fixed-focus IR lenses for industrial use, you need to know you can count on your provider. When you work with Avantier, you know you are getting the best product possible at the best possible price. Our engineers design for manufacturability and work hard to ensure you get an optimized product at an optimal price and within an optimal time frame. That's because we've done it, again and again. Our extensive experience in infrared optics enables us to both design and produce the highest quality lenses and assemblies for IR light. State-of-the-art metrology and a robust quality control program mean that every lens with the Avantier name on it will perform exactly as intended, and we check and double check that each component meets your full specification. Our manufacturing processes meet all applicable ISO and MIL standards, and our IR lenses are well known throughout the world.

Types of Infrared Lenses

Infrared light spans the wavelengths between about 700 nm and 1 mm. Infrared radiation can be further divided into several categories. The substrate chosen for a lens will depend partly on which IR region it is designed for. For instance, calcium fluoride (CaF2) lenses are a good choice for radiation between approximately 0.18 μm and 8 μm and so would be ideal for NIR and SWIR wavelengths. Zinc selenide (ZnSe) has optimal transmission from 8 to 12 μm, although it offers partial transmission over 0.45 μm to 21.5 μm. Zinc sulfide (ZnS) offers good transmission in the 8-12 µm band and partial transmission from 0.35 to 14 µm.

Avantier and IR Lens Design

Our experienced engineers and consultants can help you determine which substrate and which antireflective or reflective coating best fit your application. Every situation is unique, and we can help you find a cost effective solution that meets your needs. Whether you need special resistance to mechanical and thermal shock, or good performance in rugged environments, we can select the perfect substrate for you. We can also help design your IR lens or optical lens assembly. From basic lens selection (singlet, aspherical lens, spherical lens, cylindrical lens, custom shape lens) to the design of aspheric lenses arranged in a complex opto-mechanical device, or any other infrared optical assembly, we have you covered.

Avantier can provide lenses in chalcogenide material. Chalcogenide is an amorphous glass and is easier to process than traditional IR crystalline materials. Chalcogenide glass is an ideal material for both high performance infrared imaging systems and high volume commercial applications. Chalcogenide glass is available in a variety of chemical compositions, but BD6, composed of arsenic and selenium (As40Se60), is the best choice in terms of cost and ease of production.
Chalcogenide infrared glass materials and lenses are also an excellent alternative to expensive, commodity price-driven materials such as Ge, ZnSe, and ZnS. Chalcogenide glass primarily transmits in the MWIR and LWIR wavelength bands, making it suitable for infrared imaging applications. Please contact us if you'd like to schedule a free consultation or request a quote on your next project.
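As a rough illustration of how the transmission ranges quoted above narrow down the substrate choice, here is a small Python sketch. The ranges come from this article (CaF2 about 0.18-8 μm, ZnSe 0.45-21.5 μm, ZnS 0.35-14 μm); the MWIR/LWIR coverage assumed for BD6 chalcogenide is an assumption, and real substrate selection also weighs coatings, thermal behavior, durability, and cost.

```python
# Illustrative sketch: shortlist candidate IR substrates for a target waveband,
# using the approximate transmission ranges quoted in this article.
# Real substrate selection also depends on coatings, thermal behavior, cost, etc.

SUBSTRATES = {
    # name: (min wavelength in um, max wavelength in um)
    "CaF2": (0.18, 8.0),
    "ZnSe": (0.45, 21.5),
    "ZnS": (0.35, 14.0),
    "BD6 chalcogenide (As40Se60)": (3.0, 12.0),  # assumed MWIR/LWIR coverage
}

def candidate_substrates(band_min_um: float, band_max_um: float) -> list[str]:
    """Return substrates whose quoted transmission range covers the band."""
    return [name for name, (lo, hi) in SUBSTRATES.items()
            if lo <= band_min_um and band_max_um <= hi]

if __name__ == "__main__":
    print("SWIR 1-3 um:", candidate_substrates(1.0, 3.0))
    print("LWIR 8-12 um:", candidate_substrates(8.0, 12.0))
```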

Identifying Resolution of Imaging Systems

Resolution is a measurement of an imaging system's ability to resolve the object being imaged. Test targets are tools typically used to check the resolution of an imaging system. The most popular targets consist of "groups" of six "elements", where each element consists of three horizontal and three vertical bars, equally spaced and with well-defined widths. The vertical bars are used to calculate horizontal resolution, and the horizontal bars are used to calculate vertical resolution. Analyzing a test target image means identifying the group and element number of the highest spatial frequency at which either the horizontal or the vertical lines are still distinguishable. Here is an example of such a test target image:

To look at the pixel values we have chosen one row close to the center of the image. We see that the maximum pixel brightness is 120 counts at the center and about 95 counts at the edge of the test target image. The maximum theoretical pixel value for an 8-bit image is 255 counts, so only about half of the sensor's dynamic range is used in this test.

Groups are labeled by numbers in order of increasing frequency, and the groups with the highest resolution are near the image center. The part of the image where groups 8 and 9 are located is shown here:

To avoid repetition, we only show the calculation results for the resolution along the horizontal (X) direction. To reduce the noise in the calculation of image contrast, the image of each element's three vertical bars was averaged along the Y direction over the extent of the black bars. The resulting averaged amplitudes along the X direction for all the elements of group 8 are shown here:

The signal amplitude difference between the black bars and the white spaces is recognizable for all six elements of group 8, i.e., all six elements are distinguishable. The next picture shows the same kind of plot for all the elements of group 9:

The signal amplitude difference between the black bars and the white spaces is still discernible for elements 1-5 but not for element 6 of group 9. The resolution of this imaging system is therefore Group 9, Element 5, with a line width of 0.62 µm, i.e., a frequency of about 806 line pairs per mm.

The resolution of an imaging system can be affected by factors such as object/test target contrast, lighting source intensity, and software correction. Increasing the illumination intensity and choosing proper camera parameter settings can improve the resolution of the imaging system.
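Assuming the target follows the common USAF 1951 layout (which matches the group/element structure described above), the spatial frequency of any group and element can be computed with the standard formula sketched below. The small difference from the 806 lp/mm quoted above comes from rounding the bar width to 0.62 µm before converting back to frequency.

```python
# Sketch: convert a resolution-target group/element reading to spatial frequency,
# assuming the standard USAF 1951 layout described in this article.

def lp_per_mm(group: int, element: int) -> float:
    """Spatial frequency in line pairs per mm for a USAF 1951 group/element."""
    return 2.0 ** (group + (element - 1) / 6.0)

def line_width_um(group: int, element: int) -> float:
    """Width of a single bar (half a line pair) in micrometers."""
    return 1000.0 / (2.0 * lp_per_mm(group, element))

if __name__ == "__main__":
    f = lp_per_mm(9, 5)
    w = line_width_um(9, 5)
    print(f"Group 9, Element 5: {f:.0f} lp/mm, bar width {w:.2f} um")
    # -> about 813 lp/mm and a 0.62 um bar width; quoting the rounded 0.62 um
    #    width back as a frequency gives the ~806 lp/mm cited in the text.
```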

Case Study: Objective Lens Design

Design for Manufacturing (DFM) Case Study: Objective Lens Design for Trapping and Imaging Single Atoms

At Avantier we offer Design for Manufacturing (DFM) services, optimizing product design with our extensive knowledge of manufacturing constraints, costs, and methods. Avantier Inc. received a request from a university physics department to custom design a long working distance, high numerical aperture objective. Our highly skilled and knowledgeable engineers designed and deployed state-of-the-art technologies to develop a single-atom trapping and imaging system in which multiple laser beams are collimated at various angles and overlapped on dichroic mirrors before entering the objective lens. The objective lens focuses the input laser beams to create optical tweezer arrays that simultaneously trap single atoms and image the trapped atoms over the full field of view of the microscope objective. The objective lens not only has high transmission but also renders the same point-spread function, i.e., diffraction-limited performance, for all traps over the full field of view.

Typical requirements for the objective lens used for trapping and imaging single atoms:

Custom objective lens example: the objective lens focuses high-power laser beams to create optical tweezers at six wavelengths (420 nm, 795 nm, 813 nm, 840 nm, 1013 nm, and 1064 nm) and images the trapped atoms at a wavelength of 780 nm.
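Since the design target is diffraction-limited performance across all trapping wavelengths, a quick way to see how spot size scales with wavelength is the Airy-disk radius 0.61 λ / NA. The numerical aperture used in the sketch below is a hypothetical placeholder; the actual NA of this objective is not stated in this case study.

```python
# Sketch: diffraction-limited (Airy) spot radius r = 0.61 * lambda / NA
# for the trapping/imaging wavelengths listed above.
# NA = 0.5 is a hypothetical placeholder, not the actual specification.

WAVELENGTHS_NM = [420, 780, 795, 813, 840, 1013, 1064]
NA = 0.5  # assumed for illustration only

for wl in WAVELENGTHS_NM:
    r_um = 0.61 * (wl * 1e-3) / NA  # wavelength converted from nm to um
    print(f"{wl} nm -> Airy radius ~{r_um:.2f} um at NA={NA}")
```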

Introduction to Microscopes and Objective Lenses

A microscope is an optical device designed to magnify the image of an object, enabling details indiscernible to the human eye to be differentiated. A microscope may project the image onto the human eye or onto a camera or video device.

Historically, microscopes were simple devices composed of two elements. Like a magnifying glass today, they produced a larger image of an object placed within the field of view. Today, microscopes are usually complex assemblies that include an array of lenses, filters, polarizers, and beamsplitters. Illumination is arranged to provide enough light for a clear image, and sensors are used to 'see' the object. Although today's microscopes are usually far more powerful than the microscopes used historically, they are used for much the same purpose: viewing objects that would otherwise be indiscernible to the human eye. Here we'll start with a basic compound microscope and go on to explore the components and function of larger, more complex microscopes. We'll also take an in-depth look at one of the key parts of a microscope, the objective lens.

Compound Microscope: A Closer Look

While a magnifying glass consists of just one lens element and can magnify any object placed within its focal length, a compound microscope, by definition, contains multiple lens elements. A relay lens system is used to convey the image of the object to the eye or, in some cases, to camera and video sensors. A basic compound microscope could consist of just two elements acting in relay: the objective and the eyepiece. The objective relays a real image to the eyepiece while magnifying that image anywhere from 4x to 100x. The eyepiece magnifies the real image it receives, typically by another 10x, and conveys a virtual image to the sensor.

There are two major specifications for a microscope: the magnification power and the resolution. The magnification tells us how much larger the image is made to appear. The resolution tells us how far apart two points must be to be distinguishable; the smaller the resolution, the larger the resolving power of the microscope. The highest resolution you can get with a light microscope is about 0.2 microns (200 nm), but this depends on the quality of both the objective and the eyepiece.

Both the objective lens and the eyepiece contribute to the overall magnification of the system. If an objective lens magnifies the object by 10x and the eyepiece by 2x, the microscope will magnify the object by 20x. If the objective magnifies the object by 10x and the eyepiece by 10x, the microscope will magnify the object by 100x. This multiplicative relationship is the key to the power of microscopes, and the prime reason they perform so much better than simple magnifying glasses.

In modern microscopes, neither the eyepiece nor the microscope objective is a simple lens. Instead, a combination of carefully chosen optical components work together to create a high quality magnified image. A basic compound microscope can magnify up to about 1000x. If you need higher magnification, you may wish to use an electron microscope, which can magnify up to a million times.

Microscope Eyepieces

The eyepiece or ocular lens is the part of the microscope closest to your eye when you bend over to look at a specimen. An eyepiece usually consists of two lenses: a field lens and an eye lens. If a larger field of view is required, a more complex eyepiece that increases the field of view can be used instead.
Microscope Objective

Microscope objective lenses are typically the most complex part of a microscope. Most microscopes will have three or four objective lenses mounted on a rotating turret for ease of use. A scanning objective lens will provide 4x magnification, a low power objective will provide 10x magnification, and a high power objective offers 40x magnification. For higher magnification, you will need to use oil immersion objectives. These can provide 50x, 60x, or 100x magnification and increase the resolving power of the microscope, but they cannot be used on live specimens. A microscope objective may be either reflective or refractive. It may also be either finite conjugate or infinite conjugate.

Refractive Objectives

Refractive objectives are so called because their elements bend, or refract, light as it passes through the system. They are well suited to machine vision applications, as they can provide high resolution imaging of very small objects or ultra fine details. Each element within a refractive objective is typically coated with an anti-reflective coating. A basic achromatic objective is a refractive objective that consists of just an achromatic lens and a meniscus lens, mounted within an appropriate housing. The design is meant to limit the effects of chromatic and spherical aberration by bringing two wavelengths of light to focus in the same plane. Plan apochromat objectives can be much more complex, with up to fifteen elements. They can be quite expensive, as would be expected from their complexity.

Reflective Objectives

A reflective objective works by reflecting light rather than bending it. Primary and secondary mirror systems both magnify and relay the image of the object being studied. While reflective objectives are not as widely used as refractive objectives, they offer many benefits. They can work deeper into the UV or IR spectral regions, and they are not plagued by the same aberrations as refractive objectives. As a result, they tend to offer better resolving power.

Microscope Illumination

Many microscopes rely on background illumination such as daylight or a lightbulb rather than a dedicated light source. In brightfield illumination (commonly implemented as Koehler illumination), two convex lenses, a collector lens and a condenser lens, are placed so as to saturate the specimen with external light admitted into the microscope from behind. This provides bright, even, steady light throughout the system.

Key Microscope Objective Lens Terminology

There are some important specifications and terminology you'll want to be aware of when designing a microscope or ordering microscope objectives. Here is a list of key terminology.

Numerical Aperture

Numerical aperture (NA) denotes the light-gathering ability of the objective; it is defined as NA = n sin(θ), where n is the refractive index of the medium between the objective and the specimen and θ is the half-angle of the cone of light the objective can accept.
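To tie the magnification and numerical aperture discussion together, here is a small Python sketch. The multiplicative magnification rule and the NA definition come from the text above; the resolution estimate uses the common Rayleigh form 0.61 λ / NA, and the specific objective, eyepiece, and half-angle values are illustrative only.

```python
import math

# Sketch: total magnification and an approximate resolution limit for a
# compound microscope. The example numbers are illustrative, not a spec.

def total_magnification(objective_mag: float, eyepiece_mag: float) -> float:
    """Overall magnification is the product of the two stages."""
    return objective_mag * eyepiece_mag

def numerical_aperture(n: float, half_angle_deg: float) -> float:
    """NA = n * sin(theta), with theta the half-angle of the accepted cone."""
    return n * math.sin(math.radians(half_angle_deg))

def rayleigh_resolution_um(wavelength_um: float, na: float) -> float:
    """Approximate minimum resolvable separation, d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_um / na

if __name__ == "__main__":
    print("10x objective with 10x eyepiece ->", total_magnification(10, 10), "x")
    na = numerical_aperture(n=1.0, half_angle_deg=40)  # dry objective, illustrative
    print(f"NA = {na:.2f}")
    print(f"Resolution at 550 nm ~ {rayleigh_resolution_um(0.55, na):.2f} um")
```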

Lossless Image Compression Example

For storage and transmission of large image files it is desirable to reduce the file size. For consumer-grade images this is achieved with lossy image compression, in which image details that are not very noticeable to humans are discarded. For scientific images, however, discarding any image detail may not be acceptable. Still, all images except completely random ones contain some redundancy. This permits lossless compression, which decreases the image file size while preserving all the image details.

The simplest file compression can be achieved with well-known arithmetic encoding of the image data. The degree of compression achievable with arithmetic encoding can be estimated from the Shannon entropy, H = -sum_i p_i * log2(p_i), where p_i is the probability of each value taken by the image pixels. The Shannon entropy gives the average number of bits per pixel needed to arithmetically encode the image. If, say, the original image is monochrome with 8 bits per pixel, then for a completely random image the entropy will be equal to 8; for non-random images the entropy will be less than 8.

Let's consider a simple example: a NASA infrared image of the Earth, shown here in false color. This image is 8-bit monochrome and has an entropy of 5.85. This means arithmetic encoding can decrease the image file size by a factor of 8/5.85, or about 1.367 times. This is better than nothing, but not great.

Significant improvement can be achieved by transforming the image. If we use a standard lossless wavelet transform (LWT), one LWT step transforms the initial image into four smaller ones. Three of these four smaller images contain only low pixel values, which are not visible in the picture above. Zooming in on them saturates the top left corner but makes small details near the other corners visible (notice the changed scale on the right). Now the entropy of the top left corner is 5.85, which is close to the entropy of 5.87 for the complete initial image. The entropies of the other three corners are 1.83, 1.82, and 2.82. So, after only one LWT step the lossless compression ratio would be 2.6, which is significantly better than 1.367.

Our proprietary adaptive-prediction lossless compression algorithm produces a small prediction residual for the complete image. The actual lossless compression ratio achieved here is about 4.06. It is remarkable that while the last picture looks quite different from the original NASA image, it contains all the information necessary to completely recover the initial image. Due to the lossless nature of the compression, the last picture can be saved, using arithmetic encoding, to a file 4.06 times smaller than the initial NASA picture file. Our proprietary algorithm, applied to this smaller file, completely recovers the initial picture, accurate to the last bit. No bit left behind.
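As a concrete illustration of the entropy estimate used above, here is a short Python sketch that computes the Shannon entropy of an 8-bit grayscale image and the corresponding arithmetic-coding compression ratio. It uses NumPy and a synthetic test array as a stand-in for the NASA image; it is only the entropy estimate, not the proprietary adaptive-prediction algorithm described in the article.

```python
import numpy as np

# Sketch: Shannon entropy of an 8-bit grayscale image and the resulting
# estimate of arithmetic-coding compression (bits per pixel = entropy).
# This illustrates only the entropy bound discussed above.

def shannon_entropy_bits(image: np.ndarray) -> float:
    """Entropy H = -sum(p * log2(p)) over the 8-bit pixel-value histogram."""
    counts = np.bincount(image.ravel(), minlength=256)
    p = counts[counts > 0] / image.size
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    # Stand-in image: a smooth gradient plus noise (the article's real example
    # was a NASA infrared image of the Earth with entropy ~5.85 bits/pixel).
    rng = np.random.default_rng(0)
    img = np.linspace(0, 200, 256)[None, :] + rng.normal(0, 5, (256, 256))
    img = np.clip(img, 0, 255).astype(np.uint8)

    h = shannon_entropy_bits(img)
    print(f"entropy ~ {h:.2f} bits/pixel, compression ratio ~ {8.0 / h:.2f}x")
```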

3D Image Analysis

Image Analysis of a 3D Image

Several manufacturers sell 3D cameras that use 2D sensor arrays sensitive to the phase of reflected laser light. All of them spread the laser light so that it continuously illuminates the entire scene of interest. The laser light is modulated synchronously with the pixel sensitivity, and the phase of the reflected laser light arriving back at the sensor pixels depends on the distance to the reflection points. This is the basis for calculating the XYZ positions of the illuminated surface points. While this basic principle of operation is the same for a number of 3D cameras, there are many technical details that determine the quality of the obtained data.

The best known of these 3D cameras is the Microsoft Kinect. It also provides the best distance measurement resolution: according to our measurements, the standard deviation of the distance to both white and relatively dark objects is below 2 mm. Most 3D cameras have higher distance measurement noise, often unacceptably high even for a relatively high target reflectivity of 20 percent.

Here we show an example of data obtained using one not-so-good European 3D camera with a 2D array of time-of-flight-sensitive pixels. We used the default settings of the camera to calculate distances to a white target at 110 cm from the camera, which is close to the default calibration setup distance for this camera. Deviations of the measured distance from a smooth approximating surface in the center of the image are shown by the point cloud here, where X and Y are distances from the image center measured in pixels. Next is a histogram of the distance noise.

Both figures show that, in addition to Gaussian-looking noise, there are some points with very large deviations. Such large deviations are caused by strong fixed-pattern noise (differences between pixel sensitivities). While the noise of this camera is at least 8 times higher than the noise of the Kinect, there are further problems that become visible when looking at 2D projections of the 3D point cloud. When projected onto the camera plane, the color-coded distances, shown in cm, do not look too bad for a simple scanned scene; here the X index and Y index are just pixel numbers in the x and y directions. The picture becomes more interesting when looking at the projection of the same point cloud onto the X-distance plane: we can clearly see stripes separated by about 4 cm in distance. Nothing like these spurious stripes can be seen in point clouds from good 3D cameras such as the Kinect. The conclusion of our 3D image analysis of this European 3D camera is that it is not competitive with the best available 3D cameras.
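For reference, the phase-based ranging principle described above can be written as a one-line conversion: for a modulation frequency f, a measured phase shift Δφ corresponds to a distance d = c · Δφ / (4π f). The sketch below illustrates this relation; the modulation frequency used is a hypothetical value, not a parameter of any camera discussed here.

```python
import math

# Sketch of the ToF ranging principle described above: the phase shift of the
# modulated, reflected laser light encodes the round-trip distance.
# d = c * delta_phi / (4 * pi * f_mod); the unambiguous range is c / (2 * f_mod).
# f_mod below is a hypothetical value, not the spec of any camera mentioned here.

C = 299_792_458.0  # speed of light, m/s

def distance_m(phase_rad: float, f_mod_hz: float) -> float:
    """Distance corresponding to a measured phase shift (within the ambiguity range)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

if __name__ == "__main__":
    f_mod = 20e6  # 20 MHz modulation, illustrative only
    print(f"Unambiguous range: {C / (2 * f_mod):.2f} m")
    print(f"Phase shift of pi/2 -> {distance_m(math.pi / 2, f_mod):.3f} m")
```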

Image Processing Case Study

Let's look at a transportation industry case involving extensive image processing. Two video cameras looked at boxes moving quickly along a conveyor belt. To provide high enough image resolution the cameras were placed close to the belt, but then they could not cover the full cross-section of the belt. They were placed on the sides of the belt, and each could see only part of the boxes. The customer wanted good images of the texture on the tops of the boxes, so the images from the two cameras needed to be stitched.

Two cameras see the same object at different angles and distances. Before merging the images from the different cameras, the images must be transformed from the coordinate systems of the cameras to one common coordinate system and placed in one common plane in XYZ space. The software we developed performed this transformation automatically, based on the known geometry of the camera positions relative to the conveyor belt. Still, after such a transformation, the multi-megapixel grayscale images from the left and right cameras are shifted in the common plane relative to each other. Here the grayscale images from the two cameras are shown in false color, and the scale on the right shows the relation between the 8-bit pixel signal strength and the false color. We see that the two images also have different brightness.

Our algorithms adjust the brightness and shift the images from the left and right cameras to make it possible to merge the two images into one. The resulting combined image is shown using a different choice of false colors: right-image pixels are shown in magenta, and left-image pixels in green. Here is the zoomed version of the overlap region of the stitched image. If the stitching were perfect, all the pixels in the overlap region would be gray. Our engineers found that while there are small fringes of color on the edges of the black digits and stripes, the overall stitching accuracy is good. This is not trivial, as stitching images obtained by different cameras looking at a nearby object from different angles is not easy. For comparison, here is an example of less successful stitching.

Avantier Inc.'s engineering team, with over 30 years of experience, developed software for the customer to perform all the necessary transformations automatically, without any operator intervention.
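The gray-versus-color check described above is easy to reproduce: if the two aligned grayscale images are placed into the green channel and the magenta (red plus blue) channels of an RGB image, any misregistration shows up as colored fringes while well-aligned pixels stay gray. Here is a minimal NumPy sketch of that visualization; it assumes the two images have already been transformed into the common plane, as described in the case study, and uses synthetic stand-in data.

```python
import numpy as np

# Sketch: green/magenta overlay for checking stitching alignment, as described
# above. Assumes `left` and `right` are already warped into the common plane
# and cropped to the same overlap region (8-bit grayscale arrays).

def overlay_green_magenta(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Left image -> green channel, right image -> magenta (red + blue).
    Where the images agree the result is gray; misalignment shows as color."""
    return np.stack([right, left, right], axis=-1).astype(np.uint8)  # R, G, B

if __name__ == "__main__":
    # Synthetic stand-in data: the right image is shifted by 2 pixels to
    # mimic a small residual misalignment.
    rng = np.random.default_rng(1)
    left = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)
    right = np.roll(left, shift=2, axis=1)
    overlay = overlay_green_magenta(left, right)
    print(overlay.shape, overlay.dtype)  # (100, 100, 3) uint8
```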

Reverse Optical Engineering Case Studies from Avantier

At Avantier, we are proud of our track record of helping customers solve problems using reverse optical engineering. Here are three case studies.

Case Study 1: Reverse Engineering an OFS 20x APO Objective Lens for Bioresearch

Genetic engineering requires precision optics to view and edit the genomes of plants or animals. One world-renowned bio research lab has pioneered a new method to speed plant domestication by means of genome editing. While ordinary plant domestication typically requires decades of hard work to produce bigger and better fruit, their methods speed up the process through careful editing of the plants' genome.

To accomplish this editing, the bio research lab used a high end OFS 20x Mitutoyo APO SL infinity corrected objective lens. The objective lens performed as desired, but there was just one problem: the high energy continuous wave (CW) laser beams involved in the project would damage the sensitive optics, causing the objective lens to fail. This became a recurrent problem, and the lab found itself constantly replacing the very expensive objective. It wasn't long before the cost became untenable.

We were approached with the details of this problem and asked if we could design a microscope objective lens with the same long working distance and high numerical aperture performance as the OFS 20x Mitutoyo, but with better resistance to laser damage. The problem was a complex one, but after years of intensive study and focused effort we succeeded in reverse engineering the objective lens and improving the design with a protective coating. The new objective lens was produced and integrated into the bio research lab's system. More than three years later, it continues to be used in close proximity to laser beams without any hint of failure or compromised imaging.

Case Study 2: Reverse Engineering an OTS 10x Objective Lens for Biomedical Research

Fluorescence microscopy is used by a biomedical research company to study embryo cells in a hot, humid incubator. This company used an OTS Olympus microscope objective lens to view the incubator environment up close and determine the presence, health, and signals of labeled cells, but the objective was failing over time. Constant exposure to temperatures above 37 °C and humidity of 70% was causing fungal spores to grow in the research environment and on the microscope objective. These fungal spores, after settling on the cover glass, developed into living organisms that digested the oils and lens coatings. Hydrofluoric acid, produced by the fungi as a waste product, slowly destroyed the lens coating and etched the glass.

The Olympus OTS 10x lens cost several thousand dollars, and the research company soon realized that regular replacement due to fungal growth would cost far more than they were willing to pay. They approached us to ask if we would reverse engineer an objective that performed equivalently to the objective they were using, but with a resistance to fungal growth that the original objective did not have. Our optical and coating engineers worked hard on this problem and succeeded in producing an equivalent microscope objective with a special protective coating. This microscope lens can be used in humid, warm environments for long periods of time without the damage the Olympus objective sustained.
Case Study 3: Reverse Engineering a High Precision Projection Lens

A producer of consumer electronics was designing a home planetarium projector and found themselves in need of a high precision projection lens that could project an enhanced image. Nothing on the market seemed to suit, so they approached us to ask if we would reverse engineer a high quality lens that exactly fit their needs but was now obsolete. We were able to study the lens and create our own design for a projector lens with outstanding performance. Not only did this lens exceed our customer's expectations, it was also affordable to produce and suitable for high volume production.

Case Study: Infrared Lens Design

Design for Manufacturing (DFM) Case Study: Infrared Lens Design for Scientific Equipment

At Avantier we offer Design for Manufacturing (DFM) services, optimizing product design with our extensive knowledge of manufacturing constraints, costs, and methods. Measuring the relative concentrations of carbon monoxide, products of combustion, and unburned hydrocarbons is the basis of flare monitoring, and it is typically accomplished with infrared spectral imaging. For real-time continuous monitoring, a multispectral infrared imager can be used.

We were approached by a scientific equipment supplier for DFM help on a particular infrared lens (50 mm f/1) used in their infrared imager. The lens was designed to offer the high image quality and low distortion needed for scientific research, but although it performed as desired, two major manufacturing problems made it expensive to produce. The first issue was expensive aspheric lens elements. The second was the inclusion of GaAs to improve the modulation transfer function (MTF) and to correct chromatic aberration. GaAs is a highly toxic material, and incorporating it in the design complicates the manufacturing process and increases the cost.

Taking into account both lens performance and our client's manufacturing budget, Avantier redesigned the infrared lens with DFM principles. Our final design included no aspheric lens elements and no hazardous material, yet we met all requirements for distortion, MTF, and image height offset over the working spectral range. Using a combination of five spherical lens elements and one filter, our redesigned 50 mm f/1 lens reduced the lens cost by about 60%.

The image below shows the configuration of the 50 mm f/1 lens. For a wavelength range of approximately 3 to 5 µm the MTF was 0.7, well within our client's requirements. The next image shows the modulation transfer function (MTF) plots for the redesigned lens, and the last image shows the lens distortion plot. The maximum field was 6.700 degrees, and the maximum f-tan(θ) distortion was 0.6526%.

Whatever your need might be, our engineers are ready to put their design for manufacturing experience to work for you. Call us to schedule a free initial consultation or to discuss manufacturing possibilities.
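For readers unfamiliar with the f-tan(θ) distortion figure quoted above, it compares the real chief-ray image height with the ideal height f·tan(θ). The short sketch below evaluates this at the 6.7 degree maximum field of the 50 mm lens; the "real" image height used here is back-computed from the quoted 0.6526% figure purely for illustration and is not a measured value.

```python
import math

# Sketch: f-tan(theta) distortion = (h_real - f*tan(theta)) / (f*tan(theta)) * 100%.
# f = 50 mm and max field = 6.7 deg come from the case study; the "real" image
# height below is back-computed from the quoted 0.6526% just to illustrate.

def f_tan_distortion_percent(h_real_mm: float, f_mm: float, field_deg: float) -> float:
    h_ideal = f_mm * math.tan(math.radians(field_deg))
    return (h_real_mm - h_ideal) / h_ideal * 100.0

if __name__ == "__main__":
    f_mm, field_deg = 50.0, 6.7
    h_ideal = f_mm * math.tan(math.radians(field_deg))  # ~5.87 mm
    h_real = h_ideal * (1 + 0.6526 / 100.0)             # illustrative only
    print(f"Ideal image height: {h_ideal:.3f} mm")
    print(f"Distortion: {f_tan_distortion_percent(h_real, f_mm, field_deg):.4f} %")
```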
