Optical Filters for AR/MR/VR – Part 5

Key Takeaways

Avantier specializes in advanced optical filters for AR/VR/MR applications, offering anti-reflective coatings to reduce glare, color filters for vibrant visuals, polarizing filters to enhance contrast, and neutral density filters. Their high-quality neutral density filters optimize light entry, preventing eye strain.

Optical Filters for AR/VR/MR

Optical filters are used in AR/VR/MR (Augmented Reality, Virtual Reality, and Mixed Reality) to enhance the visual experience and make it more realistic. Here are a few examples of how filters are used in these fields.

Anti-reflective coatings

AR/VR/MR devices often contain multiple lenses and screens, which can cause reflections and glare that distract from the virtual experience. Anti-reflective coatings on these surfaces help reduce reflections and improve the clarity of the images. The coating processes Avantier uses fall into two main categories: physical vapor deposition (PVD) and chemical vapor deposition (CVD). PVD is a family of techniques that deposit a thin film of material onto a substrate by physical means, such as thermal evaporation or electron beam evaporation. CVD deposits a thin film onto a substrate by chemical means, using processes such as plasma-enhanced CVD and low-pressure CVD.

Color Optical Filters

Color filters can be used in AR/VR/MR to enhance the colors of the virtual environment and make them more vivid. For example, a red filter can be used to intensify the red in a virtual scene. Avantier colored glass filters are high-quality absorption filters made of colored glass; they allow certain wavelengths of light to pass unimpeded while blocking other wavelength ranges to a designated extent. Rather than using thin film coatings to achieve filtering effects, these filters rely on the absorption and transmission properties of the colored glass. Precision is achieved through careful control of the thickness of the material as well as of the concentration of the colorant. Colored glass filters are often categorized as longpass, shortpass, or bandpass.

Polarizing Optical Filters

Polarizing filters can be used to reduce glare and improve contrast in the virtual environment. This can be particularly useful in outdoor settings or bright environments where glare is a problem. At Avantier, we also specialize in polarizing coatings, which can be formed from a very thin film of a birefringent material or, alternately, by means of interference effects in a multi-layer dielectric coating. If desired, polarizers can be designed to work at an incidence angle of 45 degrees, producing a beam reflected at a 90 degree angle. Under certain circumstances, a polarizing coating on a lens or optical window can replace polarizing prisms in an optical assembly.

Neutral Density Optical Filters

Neutral density filters can be used to reduce the amount of light entering the AR/VR/MR device. This helps prevent eye strain and improves the overall comfort of the user. At Avantier, we produce high-quality neutral density filters for visible light as well as for ultraviolet and infrared applications. Our neutral density filter kit provides a set of filters with varying optical densities that can be used separately or in stacked configurations; a short worked example of how stacked densities combine is sketched below. Stepped optical filters, also known as stepped neutral density filters, are another option where imaging with a wide range of light transmission is required. They are designed to provide a discrete range of optical densities on a single filter.
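To make the stacking behavior concrete, here is a minimal sketch (not Avantier's software; the OD values are illustrative, not a specific Avantier kit) of how optical density relates to transmission, and why densities simply add when filters are stacked:

```python
# Transmission of a neutral density filter: T = 10**(-OD).
# Stacking filters multiplies transmissions, so optical densities add.
# The OD values below are illustrative only.

def transmission(od: float) -> float:
    """Fraction of light transmitted by a filter of optical density `od`."""
    return 10.0 ** (-od)

def stacked_od(ods) -> float:
    """Effective optical density of several filters stacked in series."""
    return sum(ods)

kit = [0.3, 0.6, 1.0]  # hypothetical filter set
print(transmission(stacked_od(kit)))                              # 10**(-1.9) ≈ 0.0126
print(transmission(0.3) * transmission(0.6) * transmission(1.0))  # same value
```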
These are just a few examples of how filters are used in AR/VR/MR. There are many other types of filters and applications in these fields, depending on the specific device and the requirements of the user. Please contact us if you’d like to schedule a consultation or request a quote on your next project.

RELATED CONTENT:

Innovative AR/MR/VR Filter Applications – Part 3

Holographic filters in AR/MR/VR enhance realism by manipulating light, addressing optical challenges, and improving image quality. Notable applications include expanding the field of view and reducing device components, exemplified by Microsoft HoloLens 2. This article explores unique applications and offers consultations for holographic filter projects in AR/MR/VR.

Image Recovery or Image Reconstruction of an Imaging System

Blurring is a significant source of image degradation in an imperfect imaging system. The optical system’s point spread function (PSF) describes the measure of blur in a given imaging system and is often used in image reconstruction or image recovery algorithms. Below is an example of using an inverse PSF to eliminate barcode image degradation.

Barcodes are found on many everyday consumer products. A typical 1-D (one-dimensional) barcode is a series of vertical lines (called bars) and spaces of varying width; the popular GS1-128 symbology is a common example. The signal amplitude of the code image changes only in the horizontal direction (the X-direction), so for the imaging system used to capture and decode the barcode it is sufficient to look at a one-dimensional intensity profile along the X-direction. In good conditions the profile shows clearly separated maxima and minima, and recovering the initial binary (only Black and only White) barcode is trivial: one can set a threshold midway between the maxima and minima of the received signal, assign whatever is above the threshold to White, and whatever is below to Black. However, when the point spread function of the imaging system is poor, it may be difficult or impossible to set a proper threshold.

The PSF is the impulse response of an imaging system; it carries information about the image formation process, systematic aberrations, and imperfections. To correctly decode a barcode in such situations, one may use inverse PSF information to improve the received signal. The idea is to deduce the inverse PSF from the signals obtained from many scans of different barcodes of the same symbology. All barcodes of the same symbology, such as GS1-128, share common features defined by the symbology standards. This permits calculating inverse PSF coefficients by minimizing the deviation of the received signals from the ideal barcode profile signals. A small number of inverse PSF coefficients, such as 15, may be used to correct the received signals to make them as close to ideal barcode signals as possible in the least squares sense; a minimal sketch of this approach is given below. The inverse PSF coefficients, once found, converted a poor received signal into a markedly cleaner one. While the recovered signal is not ideal, it does permit setting a threshold and correctly recovering the scanned barcode.
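The following is a minimal sketch of the least-squares inverse-PSF idea described above, not the production algorithm. The function names, the zero-padding choice, and the training setup (pairs of blurred scans and their known ideal binary profiles) are illustrative assumptions; the 15-tap filter length is taken from the article:

```python
import numpy as np

def fit_inverse_psf(received_scans, ideal_profiles, n_taps=15):
    """Fit FIR coefficients h so that filtering a received scan with h
    approximates the ideal binary profile, in the least-squares sense."""
    pad = n_taps // 2
    rows, targets = [], []
    for scan, ideal in zip(received_scans, ideal_profiles):
        padded = np.pad(scan, pad)             # zero-pad the scan edges
        for i in range(len(scan)):
            rows.append(padded[i:i + n_taps])  # window centered on pixel i
            targets.append(ideal[i])           # desired binary value there
    h, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(targets), rcond=None)
    return h

def recover_barcode(scan, h):
    """Apply the inverse filter, then threshold midway between extremes."""
    pad = len(h) // 2
    corrected = np.correlate(np.pad(scan, pad), h, mode="valid")
    threshold = 0.5 * (corrected.max() + corrected.min())
    return corrected > threshold               # True = White, False = Black
```

In practice the ideal training profiles would come from known-good scans of barcodes of the same symbology, exactly as the article describes.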

Case Study: Objective Lens Design

Design for Manufacturing (DFM) Case Study: Objective Lens Design for Trapping and Imaging Single Atoms

At Avantier we offer Design for Manufacturing (DFM) services, optimizing product design with our extensive knowledge of manufacturing constraints, costs, and methods. Avantier Inc. received a request from a university physics department to custom design a long working distance, high numerical aperture objective. Our highly skilled and knowledgeable engineers designed and deployed state-of-the-art technologies to develop a single-atom trapping and imaging system in which multiple laser beams are collimated at various angles and overlapped on dichroic mirrors before entering the objective lens. The objective lens focuses the input laser beams to create optical tweezer arrays that simultaneously trap single atoms and image the trapped atoms over the full field of view of the microscope objective. The objective lens not only has high transmission but also renders the same point spread function, i.e. diffraction-limited performance, for all traps over the full field of view.

The typical requirements for an objective lens used for trapping and imaging single atoms are illustrated by this custom objective lens example: the objective focuses high-power laser beams to create optical tweezers at six wavelengths (420 nm, 795 nm, 813 nm, 840 nm, 1013 nm, and 1064 nm) and images the trapped atoms at a wavelength of 780 nm. A rough diffraction-limit check for these wavelengths is sketched below.
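As a quick sanity check on what diffraction-limited performance implies at these wavelengths, here is a short calculation using the Rayleigh criterion. The numerical aperture is a placeholder assumption, since the actual requirement table is not reproduced here:

```python
# Diffraction-limited (Airy) spot radius: r = 0.61 * wavelength / NA.
# NA = 0.5 is a hypothetical value for illustration only; the case study's
# actual numerical aperture requirement is not reproduced here.
NA = 0.5

wavelengths_nm = [420, 780, 795, 813, 840, 1013, 1064]  # from the case study
for wl in wavelengths_nm:
    r = 0.61 * wl / NA
    print(f"{wl} nm: diffraction-limited spot radius ~ {r:.0f} nm")
```

The spot radius scales linearly with wavelength, which is why holding the same diffraction-limited performance across all six tweezer wavelengths and the 780 nm imaging wavelength is a demanding design constraint.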

LiDAR in Autonomous Vehicles

Today’s advanced driver assistance systems take advantage of AI-driven cameras and radar or sonar systems, but most manufacturers have been waiting for advances in machine vision technology to go one step further into autonomous self-driving cars. Today, that technology is ready to roll out. We call it LiDAR: light detection and ranging. LiDAR in autonomous vehicles creates a 3D understanding of the vehicle’s environment, providing a self-driving car with a dynamic, highly accurate map of anything within 400 meters.

Understanding LiDAR

LiDAR works by sending out laser pulses that reach a target, then bounce back to a LiDAR sensor that measures the time the round trip took (a minimal sketch of this time-of-flight relation follows this section). This enables the LiDAR system to create a point map giving the exact location of everything within the reach of the laser beam. While the reach depends on the laser type used, the systems used in autonomous cars can now provide accurate data on objects up to 400 meters away. Since LiDAR systems use laser light from a moving source on the car to ‘see’, the technology is not dependent on ambient light and functions just as well at night as during the day.

LiDAR is used in more than just self-driving cars. It has become important in land surveying, forestry, farming, and mining applications. LiDAR technology was used to discover the topology of Mars, and is being used today in a program studying the distance between the surfaces of the Moon and Earth. It can provide soil profiling, forest canopy measurements, and even cloud profiling.

LiDAR in Autonomous Vehicles

Ten years ago, LiDAR was expensive and clunky, but that didn’t stop autonomous driving pioneers from incorporating it into their prototypes. Google designed a car with a $70,000 LiDAR system sitting right on top of the vehicle, and ran a series of very successful tests in Mountain View, California and around the U.S. There was just one problem: tacking an extra $70,000 onto an already expensive car produces something that is simply not practical for anything besides research.

Today, Waymo manufactures self-driving cars using what it learned from those original experiments, and each of these cars is fitted with a similar LiDAR system. The design has been improved over the years, but the most significant change is a very promising one: advances in technology have enabled Waymo to bring the cost of the LiDAR system down 90%. Now LiDAR technology is available to any vehicle manufacturer, and our LiDAR optical design specialists can help you come up with a LiDAR system that meets your budget and requirements. Contact us for more information or to chat with one of our engineers.
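For concreteness, here is a minimal sketch of the time-of-flight ranging relation described above; the function name is illustrative:

```python
# Time-of-flight ranging: the pulse travels to the target and back,
# so the one-way distance is half the round-trip path.
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to the target given the measured round-trip time."""
    return C * t_seconds / 2.0

# The 400 m range quoted above corresponds to a round trip of ~2.7 µs:
print(range_from_round_trip(2.668e-6))  # ≈ 400 meters
```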

Image Processing Case Study

Let’s look at a case of extensive image processing from the transportation industry. Two video cameras observed boxes moving quickly on a conveyor belt. To provide high enough image resolution, the cameras were placed close to the belt, but from that distance neither could cover the full cross-section of the belt. They were therefore placed on either side of the belt, each seeing part of the boxes. The customer wanted good images of the texture on the tops of the boxes, so the images from the two cameras needed to be stitched.

The two cameras see the same object at different angles and distances. Before merging, the images must be transformed from the coordinate systems of the cameras to one common coordinate system, and placed in one common plane in XYZ space. The software we developed performed this transformation automatically, based on the known geometry of the camera positions relative to the conveyor belt. Still, after such a transformation, the multi-megapixel grayscale images from the left and right cameras are shifted in the common plane relative to each other, and the two images also have different brightness. Our algorithms adjust the brightness and shift the left and right images to make merging the two images into one possible.

The result can be checked by combining the images in false color: right-image pixels shown in magenta, and left-image pixels in green. If the stitching were perfect, all the pixels in the overlap region would be gray. In this case, while there were small fringes of color on the edges of black digits and stripes, the overall stitching accuracy was good. This is not trivial, as stitching images obtained by different cameras looking at a nearby object from different angles is not easy; for comparison, an example of far less successful stitching was also examined. Avantier Inc.’s engineering team, with over 30 years of experience, developed software for the customer that performs all the necessary transformations automatically, without any operator intervention. A minimal sketch of such a pipeline is given below.
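As an illustration only (this is not Avantier’s software), here is a minimal sketch of such a stitching pipeline using OpenCV. The homography matrices H_left and H_right stand in for the transforms derived from the known camera geometry, and the gain matching and green/magenta overlay follow the description above; all names are illustrative:

```python
import cv2
import numpy as np

def stitch_false_color(left, right, H_left, H_right, out_size):
    """Warp both grayscale images into a common plane, match brightness,
    and overlay them in false color (left -> green, right -> magenta)."""
    # Transform each camera's image into the common conveyor-belt plane.
    L = cv2.warpPerspective(left, H_left, out_size).astype(np.float32)
    R = cv2.warpPerspective(right, H_right, out_size).astype(np.float32)

    # Brightness matching: scale the right image so its mean matches the
    # left image's mean over the overlap (pixels valid in both images).
    overlap = (L > 0) & (R > 0)
    if overlap.any() and R[overlap].mean() > 0:
        R *= L[overlap].mean() / R[overlap].mean()

    # False-color overlay (BGR): right -> blue + red (magenta), left -> green.
    # Wherever the two warped images agree, the overlay comes out gray.
    R8 = np.clip(R, 0, 255).astype(np.uint8)
    L8 = np.clip(L, 0, 255).astype(np.uint8)
    return np.dstack([R8, L8, R8])
```

Here out_size is the (width, height) of the common plane in pixels; in practice the homographies could be computed once from surveyed reference points on the belt, for example with cv2.getPerspectiveTransform.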
