
The Crucial Role of Lens Focusing in Embedded Vision Systems

FRAMOS

June 30, 2025

Embedded systems are ubiquitous. They are found in aircraft as flight control systems, in cars as infotainment or advanced driver assistance systems (ADAS), and in everyday life as the basis of smartphones. Embedded vision systems are a key technology within these devices, enabling real-time image processing and analysis for a wide range of applications.

One area of application is autonomous mobile robots (AMRs). Without the ability to perceive their environment, they are blind. Embedded cameras are critical components for these robots, providing the visual data needed for navigation and perception. Alongside LiDAR, stereo cameras and time-of-flight cameras are the first choice when it comes to sensors that let AMRs see their surroundings. The reliability of the generated data is, as you would expect, crucial. Developers build visual SLAM algorithms (VSLAM, Visual Simultaneous Localization and Mapping) so that AMRs can navigate safely with the help of cameras, and these algorithms must be able to reliably detect and process image features. That is only possible if the built-in sensor and its optics are perfectly matched to each other and to the application. Consulting imaging experts can streamline the camera integration journey, as they offer guidance on selecting and customizing embedded vision components.

Companies with a long-standing presence in the embedded vision space have demonstrated expertise in developing embedded vision and machine vision solutions for a variety of industries, contributing significantly to the advancement of these technologies.

The right choice of sensor and lens

First of all, it is of course crucial that the image sensor and lens are suitable for the application. Sensor size plays a key role in lens compatibility and image coverage: the selected lens must match the sensor for optimal results. For example, in logistics, goods may need to be scanned on conveyor belts, which calls for wide-angle lenses, while AMRs typically require stereo camera setups to see spatially and time-of-flight sensors to measure distances. Depending on the field of view (FOV) requirements, options such as wide-angle, zoom, and telephoto lenses can be considered. For all applications, whether AMRs or conveyor belt scanners, the lens must be selected so that all effective pixels are illuminated at the correct incident angle. In technical terms, the light must fall completely on the focal plane so that a sharp image is produced. When selecting a lens, it is also important to consider the focal length and the vertical FOV to ensure the lens matches the application's needs.
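
As a rough illustration of how sensor size and focal length determine the field of view, here is a minimal Python sketch using the standard pinhole approximation; the sensor dimensions and focal length are hypothetical example values, not recommendations:

```python
import math

def fov_deg(sensor_dim_mm: float, focal_length_mm: float) -> float:
    """Angular field of view for one sensor dimension (pinhole approximation)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# Example values (hypothetical): a 1/2.3" sensor (~6.17 x 4.55 mm) with a 6 mm lens
print(f"Horizontal FOV: {fov_deg(6.17, 6.0):.1f} deg")  # ~54.4 deg
print(f"Vertical FOV:   {fov_deg(4.55, 6.0):.1f} deg")  # ~41.5 deg
```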

What can vary greatly between applications, however, is whether the camera has a fixed mounting position and how far away the objects to be captured are. The working distance, the physical distance between the lens and the object, significantly impacts focus and lens choice and allows a preliminary selection of lenses to be made. This selection should also take pixel count, pixel size, and overall optical performance into account. For example, a wide-angle lens is required to capture a wide scene, while telephoto lenses are ideal for more distant objects.
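
To make the role of the working distance concrete, the following sketch estimates the required focal length from the working distance and the scene width using a thin-lens approximation; the conveyor-belt numbers are assumptions for illustration:

```python
def required_focal_length_mm(working_distance_mm: float,
                             object_width_mm: float,
                             sensor_width_mm: float) -> float:
    """Thin-lens estimate: f ~ working_distance * sensor_width / object_width
    (valid when the working distance is much larger than the focal length)."""
    return working_distance_mm * sensor_width_mm / object_width_mm

# Hypothetical conveyor-belt example: image a 400 mm wide belt from 600 mm away
# onto a sensor with a 6.17 mm wide active area.
print(f"{required_focal_length_mm(600, 400, 6.17):.1f} mm")  # ~9.3 mm
```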

Image quality can be affected by lens distortion (such as barrel distortion) and vignetting, especially with wide-angle lenses, so these factors should be evaluated during selection. It is also important to consider the broad types of cameras available, to match the host platform to the processing needs, and to explore customization options that tailor the system to specific requirements. In some embedded vision applications, surround view and the position of the object plane are further considerations for effective imaging.

Application details before lens focusing

The first step, before even thinking about focusing the lens, is to determine the application requirements. These narrow down the set of components that might be suitable. Once the image sensor and a suitable lens have been found, the next step is to assemble the parts. Choosing components with easy setup and using pre-configured camera modules can significantly streamline integration and reduce complexity.

However, the environmental conditions on site are often underestimated. Factors such as ambient lighting can greatly impact camera performance, especially in applications that require accurate imaging under varying light conditions. Thermal stress can not only introduce noise artifacts into the image but also defocus the lens if it is not professionally integrated. Sound integration relies on reliable testing through simulations until a suitable prototype can be built. ISP tuning is also crucial at this stage to optimize image quality and ensure the camera system meets the application's requirements. Besides correctly addressing heat generation, moisture (e.g., in cameras exposed to rain) and contamination, such as soot particles, can also play a role.

When defining system requirements, it is also important to consider the processors needed to handle high-bandwidth video data and to support advanced features such as depth cameras and the processing of depth data, which are essential for today's 3D vision applications.
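
A quick back-of-the-envelope calculation shows why high-bandwidth video processing matters; the resolution, frame rate, and bit depth below are assumed example values:

```python
def raw_bandwidth_gbps(width: int, height: int, fps: float, bits_per_pixel: int) -> float:
    """Uncompressed video data rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

# A single 1080p30 stream with 12-bit raw output (example values):
print(f"{raw_bandwidth_gbps(1920, 1080, 30, 12):.2f} Gbit/s")  # ~0.75 Gbit/s
```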

Ensure Precision with Expert Lens Calibration

Don’t leave the active alignment and calibration of the lens to chance:
Contact us for reliable and long-lasting functionality of your lens!

The right lens setting: lens focusing in a few steps

Even more important, however, is determining the correct distance to the object being photographed. The ideal distance between the lens and the object or scene is called the focus distance. This raises the question of how large the depth of field (DOF) is. The depth of field is the range between the closest and the furthest object within which acceptable sharpness can still be achieved. Achieving best focus is crucial for maximizing image sharpness, as it ensures that details are rendered clearly at the sensor.

The depth of field is therefore the leeway around the focus distance that a given application allows, and it varies from application to application. The aperture size is described by the f-number (f/#), which controls how much light enters the lens. Closing the aperture (increasing the f-number) extends the depth of field and generally makes more of the scene appear sharp. In low-light situations, however, as much light as possible must reach the sensor, which may require a lower f-number (wider aperture). There is also a limit in the other direction: as the f-number increases, diffraction increases and the achievable resolution decreases.
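
The diffraction limit can be estimated with the familiar Airy disk formula (diameter ~ 2.44 · λ · N); the following sketch assumes green light and a hypothetical 3.45 µm pixel for comparison:

```python
def airy_disk_diameter_um(f_number: float, wavelength_nm: float = 550.0) -> float:
    """Approximate Airy disk diameter: 2.44 * lambda * N (green light by default)."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

# At f/2.8 vs f/11, compared with a hypothetical 3.45 um pixel:
for n in (2.8, 11.0):
    print(f"f/{n}: Airy disk ~ {airy_disk_diameter_um(n):.2f} um")
# f/2.8: ~3.76 um  -> roughly one pixel, near the diffraction limit
# f/11:  ~14.76 um -> the blur spot spans several pixels, resolution visibly drops
```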

Image sharpness and resolution are often measured in terms of spatial frequency, such as line pairs per millimeter (lp/mm). A line pair consists of one dark and one bright line, and the ability to distinguish individual line pairs is a key indicator of a lens's resolving power: the number of line pairs that can be resolved determines the level of detail and contrast in the image. At the sensor, the pixel is the fundamental unit affected by focus and aberrations, since blur or misfocus causes light to bleed into adjacent pixels and reduces sharpness.
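
Since one line pair needs at least two pixels to be resolved, the sensor-side resolution limit (the Nyquist limit) follows directly from the pixel pitch; a small sketch with assumed pixel pitches:

```python
def nyquist_lp_per_mm(pixel_pitch_um: float) -> float:
    """Sensor Nyquist limit: one line pair needs at least two pixels."""
    pitch_mm = pixel_pitch_um / 1000.0
    return 1.0 / (2.0 * pitch_mm)

# Example pixel pitches (hypothetical sensors):
for pitch in (2.0, 3.45, 5.86):
    print(f"{pitch} um pixels -> {nyquist_lp_per_mm(pitch):.0f} lp/mm")
# 2.0 um -> 250 lp/mm, 3.45 um -> 145 lp/mm, 5.86 um -> 85 lp/mm
```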

You therefore have to weigh up the pros and cons to achieve the perfect result. In high-speed imaging applications, such as line scan or fast-motion photography, the need for short exposure times further impacts the balance between aperture, focus, and image quality.
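
For instance, to keep motion blur below one pixel, the exposure time must not exceed the time an object needs to move across one pixel footprint; a minimal estimate with assumed conveyor values:

```python
def max_exposure_us(object_speed_mm_s: float,
                    pixel_footprint_mm: float,
                    max_blur_pixels: float = 1.0) -> float:
    """Longest exposure that keeps motion blur below max_blur_pixels."""
    return max_blur_pixels * pixel_footprint_mm / object_speed_mm_s * 1e6

# Hypothetical conveyor example: belt moving at 2 m/s, each pixel covering 0.2 mm
print(f"{max_exposure_us(2000.0, 0.2):.0f} us")  # ~100 us
```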

Calculating the depth of field for optimal lens focus

To determine how large the depth of field is, i.e. the range within which an acceptably sharp image is obtained, you can use the depth of field formula:

DOF ≈ (2 · u² · N · c) / f²

Here, u stands for the distance to the object to be photographed, N denotes the f-number, c is the circle of confusion, and f is the focal length.
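
As a worked example, the formula translates directly into code; the working distance, f-number, circle of confusion, and focal length below are assumed values for illustration:

```python
def depth_of_field_mm(u_mm: float, f_number: float,
                      coc_mm: float, focal_length_mm: float) -> float:
    """Approximate DOF = 2 * u^2 * N * c / f^2
    (valid when u is well below the hyperfocal distance)."""
    return 2.0 * u_mm**2 * f_number * coc_mm / focal_length_mm**2

# Hypothetical setup: 1 m working distance, f/4, 0.01 mm circle of confusion, 12 mm lens
print(f"{depth_of_field_mm(1000.0, 4.0, 0.01, 12.0):.0f} mm")  # ~556 mm
```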

Imaging experts use this formula and determine optimum sharpness in the laboratory for various distances, aperture sizes, circles of confusion, and focal lengths. For precise measurements, a high-quality camera with a high-resolution sensor is needed to accurately assess image sharpness and detail. The experts work with test charts and special software under controlled lighting conditions, checking the edges in the image to achieve optimum sharpness. Alternatively, collimators are used: devices that bundle rays into parallel beams and thus simulate very small or very large object distances without the corresponding physical setup. When optimizing for specific requirements, such as low-light applications, selecting the appropriate lens and sensor combination is crucial to ensure sufficient image quality and performance. This is of course only a small part of the process of building the optimal solution: tailoring the vision system and customizing components for each embedded vision application ensures the best results across diverse use cases. Beyond fine-tuning the focus distance, further optimization work is necessary, such as immunizing against thermal defocus and refining the assembly process in general, which we will discuss in other articles.
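
Edge-based sharpness checks of this kind are often automated with a contrast metric; one widely used choice (a general technique, not necessarily the exact method used in these laboratories) is the variance of the Laplacian:

```python
import cv2

def sharpness_score(image_path: str) -> float:
    """Variance of the Laplacian: a common contrast-based focus metric.
    Higher values indicate a sharper image of the same scene."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(img, cv2.CV_64F).var()

# Compare captures of the same test chart at different focus settings
# (file names are placeholders):
# print(sharpness_score("chart_focus_a.png"), sharpness_score("chart_focus_b.png"))
```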

The benefits of perfectly focused lenses for industrial applications

Sharp images are essential in industry, especially for embedded systems such as autonomous mobile robots (AMRs) that use Visual Simultaneous Localization and Mapping (VSLAM) algorithms. Examples of AMRs include warehouse robots, patrol robots, and telepresence robots, all of which rely on well-focused lenses for reliable navigation and task execution. Only with good image quality can edges, textures, and other distinctive image structures be reliably detected and processed; this is especially critical in machine vision and surveillance applications, where focus and sharpness directly impact overall performance. If the image is blurred, these structures appear washed out, making it difficult to recognize so-called keypoints or features: characteristic image points that VSLAM uses for localization and mapping. Blurred images often lead to incorrect or even missing descriptors, the data structures that uniquely describe the characteristics of a feature.

Imagine a shelf in a warehouse labeled “Row 4.” If this label is no longer clearly visible due to blurring, it becomes irrelevant to the algorithm – it is not recognized as a usable feature. As a result, the robot loses potential points of orientation and navigates less precisely through its environment.

[Image: warehouse shelves captured with good vs. poor focus, showing how blur reduces the keypoints detectable for robotic navigation]
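
This effect is easy to reproduce: the following sketch (with a placeholder image file) counts ORB keypoints, a common feature type in VSLAM pipelines, on a sharp frame and on an artificially blurred copy:

```python
import cv2

def keypoint_count(gray_img) -> int:
    """Number of detected ORB keypoints: a proxy for how many features VSLAM can use."""
    orb = cv2.ORB_create(nfeatures=2000)
    return len(orb.detect(gray_img, None))

img = cv2.imread("warehouse_shelf.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
blurred = cv2.GaussianBlur(img, (15, 15), 0)  # simulate a defocused lens
print("sharp:", keypoint_count(img), "blurred:", keypoint_count(blurred))
# A blurred frame typically yields far fewer, less distinctive keypoints.
```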

In the case of barcode scanners, such as those used in logistics, blurred images can cause incorrect scans. In Germany, everyone is familiar with deposit bottles not being recognized by reverse vending machines. This is partly due to dirt on the bottles, but also to the sensors and, to a certain extent, to camera modules that were not optimally selected and are not sufficiently focused. Object recognition in general, from barcode reading to feature-based object detection, relies on sharp images. Further reasons for malfunctioning vending machines are the vibration of the machines and the heat they generate, which can cause poorly positioned optics to drift out of focus. It is therefore not only the calculation of optimal image sharpness that matters, but also the practical implementation on the lens and image sensor.

Faster to mass production with pre-tuned camera modules from the FSM:GO series

FRAMOS has launched the FSM:GO series to develop prototypes even faster and turn them into products for the mass market. FSM:GO are center-focused, pre-tuned camera modules designed for a variety of embedded vision applications, with customization options to meet specific requirements. They can be set up in advance with very little information (such as the field of view and the resolution) to achieve optimal image results, so customers benefit from a very short time between prototype and market launch and save costs. Because FSM:GO modules are focusable and can be pre-configured for each application, optimal results are achieved virtually every time, with no need for time-consuming lens evaluation or fine-tuning.

Accelerate Your Vision Project with FSM:GO

Quickly move from prototype to market launch in just a few steps with FSM:GO

Conclusion

Lens focusing is a crucial factor in achieving optimal image results. It determines whether an application works reliably or is prone to malfunction. Not only precise calculation, but also expert assembly of sensor and lens and thorough testing of the configuration are decisive for later success. In addition to the calculated depth of field and focus distance, environmental conditions such as heat generation and vibration must also be taken into account.