Market launch: New Ensenso N models for 3D and robot vision

Upgraded Ensenso 3D camera series now available at IDS

Resolution and accuracy have almost doubled, the price has remained the same – those who choose 3D cameras from the Ensenso N series can now benefit from more advanced models. The new stereo vision cameras (N31, N36, N41, N46) can now be purchased from IDS Imaging Development Systems.

The Ensenso N 3D cameras have a compact housing (made of aluminium or plastic composite, depending on the model) with an integrated pattern projector. They are suitable for capturing both static and moving objects. The integrated projector casts a high-contrast texture onto the objects in question: a pattern mask with a random dot pattern supplements surface structures that are missing or only weakly visible. This allows the cameras to deliver detailed 3D point clouds even in difficult lighting conditions.

With the Ensenso models N31, N36, N41 and N46, IDS is now launching the next generation of the previously available N30, N35, N40 and N45. Visually, the cameras do not differ from their predecessors. They do, however, use a new sensor from Sony, the IMX392. This results in a higher resolution (2.3 MP instead of 1.3 MP). All cameras are pre-calibrated and therefore easy to set up. The Ensenso selector on the IDS website helps to choose the right model.

Whether firmly installed or in mobile use on a robot arm: with Ensenso N, users opt for a 3D camera series that provides reliable 3D information for a wide range of applications. The cameras prove their worth in single-item picking, for example; they support remote-controlled industrial robots, are used in logistics and even help to automate high-volume laundries. IDS provides more in-depth insights into the versatile application possibilities with case studies on the company website.

Learn more: https://en.ids-imaging.com/ensenso-3d-camera-n-series.html

2D, 3D and AI: IDS presents numerous new products and camera developments at VISION

Today, cameras are often more than just suppliers of images: they can recognise objects, generate results or trigger follow-up processes. Visitors to VISION in Stuttgart, Germany, can find out about the possibilities offered by state-of-the-art camera technology at IDS booth 8C60. There, they will discover the next level of the all-in-one AI system IDS NXT. The company is not only expanding the machine learning methods to include anomaly detection, but is also developing a significantly faster hardware platform. IDS is also unveiling the next stage of development for its new uEye Warp10 cameras. By combining a fast 10GigE interface with a TFL mount, large-format sensors with up to 45 MP can be integrated, opening up completely new applications. The trade fair innovations also include prototypes of the smallest IDS board-level camera and a new 3D camera model in the Ensenso product line.

IDS NXT: More than artificial intelligence
IDS NXT is a holistic system with a variety of workflows and tools for realising custom AI vision applications. The intelligent IDS NXT cameras can process tasks "on device" and deliver image processing results themselves. They can also trigger subsequent processes directly. The range of tasks is determined by apps that run on the cameras. Their functionality can therefore be changed at any time. This is supported by a cloud-based AI Vision Studio, with which users can not only train neural networks, but now also create vision apps. The system offers both beginners and professionals enormous scope for designing AI vision apps. At VISION, the company shows how artificial intelligence is redefining the performance spectrum of industrial cameras and gives an outlook on further developments in the hardware and software sector.

uEye Warp10: High speed for applications
With 10 times the transmission bandwidth of 1GigE cameras and about twice the speed of cameras with USB 3.0 interfaces, the recently launched uEye Warp10 camera family with 10GigE interface is the fastest in the IDS range. At VISION, the company is demonstrating that these models set standards not only in terms of speed but also of resolution. Thanks to the TFL mount, it becomes possible to integrate much higher-resolution sensors than before. This means that even detailed inspections with high clock rates and large amounts of data will be feasible over long cable distances. The industrial lens mount allows the cameras to fully utilise the potential of large-format (larger than 1.1″) and high-resolution sensors (up to 45 MP).

uEye XLS: Smallest board-level camera with cost-optimised design
IDS is presenting prototypes of an additional member of its low-cost portfolio at the fair. The name uEye XLS indicates that it is a small variant of the popular uEye XLE series. The models will be the smallest IDS board-level cameras in the range. They are aimed at users who require particularly low-cost, extremely compact cameras with or without lens holders in large quantities, for example for embedded applications. They can look forward to Vision Standard-compliant project cameras with various global shutter sensors and trigger options.

Ensenso C: Powerful 3D camera for large-volume applications
3D camera technology is an indispensable component in many automation projects. Ensenso C is a new variant in the Ensenso 3D product line that scores with a long baseline and high resolution, while at the same time offering a cost-optimised design. Customers receive a fully integrated, pre-configured 3D camera system for large-volume applications that is quickly ready for use and provides even better 3D data thanks to RGB colour information. A prototype will be available at the fair.

Learn more: https://en.ids-imaging.com/ueye-warp10.html

Large laundry

Intelligent robotics for laundries closes automation gap

The textile and garment industry is facing major challenges with current supply chain and energy issues. The future recovery is also threatened by factors that hinder production, such as labour and equipment shortages, which put companies under additional pressure. The competitiveness of the industry, especially in a global context, depends on how affected companies respond to these framework conditions. One solution is to move the production of clothing back to Europe in an economically viable way. Shorter transport routes and the associated significant savings in transport costs and greenhouse gases speak in favour of this. On the other hand, the higher wage costs and the prevailing shortage of skilled workers there must be compensated for. The latter requires further automation of textile processing.

The German deep-tech start-up sewts GmbH from Munich has focused on the great potential that lies in this task. It develops solutions with the help of which robots, much like humans, anticipate how a textile will behave and adapt their movements accordingly. In a first step, sewts has set its sights on an application for large industrial laundries. With a system that uses both 2D and 3D cameras from IDS Imaging Development Systems GmbH, the young company is automating one of the last remaining manual steps in large-scale industrial laundries: the unfolding process. Although 90% of the process steps in industrial washing are already automated, the remaining manual operations account for 30% of labour costs. The potential savings through automation are therefore enormous at this point.

Application

It is true that industrial laundries already operate in a highly automated environment to handle the large volumes of laundry. Among other things, the folding of laundry is done by machines. However, each of these machines usually requires an employee to manually spread out the laundry and feed it in without creases. This monotonous and strenuous loading of the folding machines has a disproportionate effect on personnel costs. In addition, qualified workers are difficult to find, which often affects capacity utilisation and thus the profitability of industrial laundries. The seasonal nature of the business also requires a high degree of flexibility. sewts uses IDS cameras as the image processing components of a new type of intelligent system whose technology can now automate individual steps such as sorting dirty textiles or feeding laundry into folding machines.

"The particular challenge here is the malleability of the textiles," explains Tim Doerks, co-founder and CTO. While the automation of processing solid materials such as metals is comparatively unproblematic with the help of robotics and AI solutions, available software solutions and conventional image processing often still reach their limits when it comes to easily deformable materials. Accordingly, commercially available robots and gripping systems have so far been able to perform even simple operations, such as gripping a towel or a piece of clothing, only inadequately. But the sewts system …

Read more

Picked up and put off

Guest post by IDS Corporate Communications

Autonomously driving robotic assistance system for the automated placement of coil creels

With Industry 4.0, digitalisation, automation and the networking of systems and facilities are becoming the predominant topics in production and thus also in logistics. Industry 4.0 pursues the continuous optimisation of processes and workflows in favour of productivity and flexibility, and thus the saving of time and costs. Robotic systems have become the driving force for automating processes. Through the Internet of Things (IoT), robots are becoming increasingly sensitive, autonomous, mobile and easier to operate. More and more, they are becoming everyday helpers in factories and warehouses. Intelligent imaging techniques are playing an increasingly important role in this.

To meet the growing demands in scaling and changing production environments towards fully automated and intelligently networked production, the company ONTEC Automation GmbH from Naila in Bavaria has developed an autonomously driving robotic assistance system. The "Smart Robot Assistant" uses the synergies of mobility and automation: it consists of a powerful and efficient intralogistics platform, a flexible robot arm and a robust 3D stereo camera system from the Ensenso N series by IDS Imaging Development Systems GmbH.

The solution is versatile and takes over monotonous, heavy set-up and placement tasks, for example. The autonomous transport system is suitable for floor-level lifting of Euro pallets up to container or industrial format, as well as mesh pallets in various sizes, with a maximum load of up to 1,200 kilograms. For a customer in the textile industry, the AGV (Automated Guided Vehicle) is used for the automated loading of coil creels. For this purpose, it picks up pallets with yarn spools, transports them to the designated creel and loads the creel for further processing. Using a specially developed gripper system, up to 1,000 yarn packages per 8-hour shift are picked up and pushed onto a mandrel of the creel. The sizing scheme and the position of the coils are captured by an Ensenso 3D camera (N45 series) installed on the gripper arm.

Application

Pallets loaded with industrial yarn spools are picked up from the floor of a predefined storage place and transported to the creel location. There, the gripper positions itself vertically above the pallet. The in-house software ONTEC SPSComm then sends an image trigger to the Ensenso 3D camera from the N45 series. SPSComm networks with the vehicle's PLC and can thus read out and pass on data. In the application, it controls the communication between the software parts of the vehicle, gripper and camera, so the camera knows when the vehicle and the gripper are in position for a picture. The camera then takes an image and passes a point cloud to a software solution from ONTEC based on the standard HALCON software, which reports the coordinates of the coils on the pallet to the robot. The robot can then accurately pick up the coils and process them further. As soon as the gripper has cleared a layer of yarn spools, the Ensenso camera takes a picture of the packaging material lying between the yarn spools and provides point clouds of this as well. These point clouds are processed in the same way so that a needle gripper can remove the intermediate layers. "This approach means that the number of layers and finishing patterns of the pallets do not have to be defined in advance and even incomplete pallets can be processed without any problems," explains Tim Böckel, software developer at ONTEC. "The gripper does not have to be converted for the use of the needle gripper. For this application, it has a normal gripping component for the coils and a needle gripping component for the intermediate layers."
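The camera-side part of this step can be illustrated with the Ensenso SDK. The following is a minimal sketch, not ONTEC's actual implementation (which is driven by SPSComm and a HALCON-based localisation): it software-triggers a capture on an N-series camera via the NxLib C++ API and reads back the 3D point map. The serial number is a placeholder, and depending on the SDK version the camera node may sit under Cameras/BySerialNo instead of directly under Cameras.

```cpp
// Minimal sketch: trigger a capture on an Ensenso N-series camera and fetch
// the resulting 3D point map via the NxLib C++ API (Ensenso SDK).
// The serial number is a placeholder; error handling is reduced to a try/catch.
#include "nxLib.h"
#include <iostream>
#include <string>
#include <vector>

int main() {
    try {
        nxLibInitialize(true);                          // start NxLib

        NxLibItem root;                                 // root of the NxLib tree
        std::string serial = "XXXXXX";                  // placeholder serial number

        NxLibCommand open(cmdOpen);                     // open the camera
        open.parameters()[itmCameras] = serial;
        open.execute();

        NxLibCommand(cmdCapture).execute();             // acquire a stereo image pair
        NxLibCommand(cmdComputeDisparityMap).execute(); // stereo matching
        NxLibCommand(cmdComputePointMap).execute();     // disparity -> XYZ point map

        // Read the point map (one XYZ triple per pixel, in millimetres).
        // On older SDK versions the node is root[itmCameras][itmBySerialNo][serial].
        NxLibItem pointMapNode = root[itmCameras][serial][itmImages][itmPointMap];
        int width = 0, height = 0;
        pointMapNode.getBinaryDataInfo(&width, &height, 0, 0, 0, 0);
        std::vector<float> pointMap;
        pointMapNode.getBinaryData(pointMap, 0);

        std::cout << "Point map: " << width << " x " << height
                  << " (" << pointMap.size() / 3 << " points)" << std::endl;
        // The point cloud would now be handed over to the HALCON-based
        // localisation software, which is outside the scope of this sketch.

        NxLibCommand close(cmdClose);
        close.parameters()[itmCameras] = serial;
        close.execute();
        nxLibFinalize();
    } catch (NxLibException& e) {
        std::cerr << "NxLib error: " << e.getErrorText() << std::endl;
        return 1;
    }
    return 0;
}
```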

For this task, the mobile 3D acquisition of moving and static objects on the robot arm, the Ensenso 3D camera is well suited thanks to its compact design. The Ensenso N45's 3D stereo electronics are completely decoupled from the housing, allowing the use of a lightweight plastic composite as the housing material. The low weight facilitates use on robot arms such as the Smart Robot Assistant. The camera can also cope with demanding environmental conditions. "Challenges with this application can be found primarily in the different lighting conditions in different rooms of the hall and at different times of the day," Tim Böckel describes the situation. Even in difficult lighting conditions, the integrated projector casts a high-contrast texture onto the object to be imaged by means of a pattern mask with a random dot pattern, thus supplementing the structures on featureless, homogeneous surfaces. The camera therefore meets the requirements exactly. "By pre-configuring within NxView, the task was solved well." This sample programme with source code demonstrates the main functions of the NxLib library, which can be used to open one or more stereo and colour cameras and visualise their image and depth data. Parameters such as exposure time, binning, AOI and depth measuring range can, as in this case, be adjusted live for the matching method used.
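The live adjustment mentioned here happens through the NxLib parameter tree, which NxView exposes interactively. Below is a small sketch under the assumption that a camera node has been opened as in the previous example; the values are purely illustrative, and further settings such as the AOI and the depth measuring range are configured through additional nodes documented in the SDK manual.

```cpp
// Sketch: adjusting capture parameters of an already opened Ensenso camera at
// runtime through the NxLib parameter tree (what NxView offers interactively).
// Node names follow the NxLib constants; the values are examples only.
#include "nxLib.h"

void configureCapture(NxLibItem camera) {       // e.g. root[itmCameras][serial]
    NxLibItem capture = camera[itmParameters][itmCapture];
    capture[itmAutoExposure] = false;           // switch to manual exposure
    capture[itmExposure]     = 8.0;             // exposure time in ms (example)
    capture[itmBinning]      = 2;               // 2x2 binning for faster matching
    capture[itmProjector]    = true;            // enable the pattern projector
    capture[itmTriggerMode]  = valSoftware;     // capture on software trigger only
}
```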

The matching process enables the Ensenso 3D camera to identify a very large number of corresponding pixels, including their change of position, by means of the auxiliary structures projected onto the surface, and to create complete, homogeneous depth information of the scene from this. This in turn ensures the necessary precision with which the Smart Robot Assistant proceeds. Other selection criteria for the camera were, among others, the standard Gigabit Ethernet vision interface and the global shutter 1.3 MP sensor. "The camera only takes one image pair of the entire pallet in favour of a faster throughput time, but it has to provide the coordinates from a relatively large distance with an accuracy in the millimetre range to enable the robot arm to grip precisely," explains Matthias Hofmann, IT specialist for application development at ONTEC. "We therefore need the high resolution of the camera to be able to safely record the edges of the coils with the 3D camera." Localising the edges is important in order to pass the position of the centre of each spool to the gripper as accurately as possible.

Furthermore, the camera is specially designed for use in harsh environmental conditions. It has a screwable GPIO connector for trigger and flash and is IP65/67 protected against dirt, dust, splash water or cleaning agents.

Software

The Ensenso SDK enables hand-eye calibration of the camera to the robot arm, allowing coordinates to be easily translated or transformed using the robot pose. In addition, using the internal camera settings, a "FileCam" of the current situation is recorded at each pass, i.e. at each image trigger. This makes it possible to reproduce and fine-tune edge cases later on, in this application for example unexpected lighting conditions, obstacles in the image or an unexpected positioning of the coils. The Ensenso SDK also allows the internal camera LOG files to be stored and archived for possible evaluation.
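For orientation, the hand-eye calibration itself is driven by NxLib commands. The following condensed sketch follows the general workflow described in the Ensenso SDK documentation, not ONTEC's concrete code: calibration-pattern observations are collected at several robot poses, then cmdCalibrateHandEye estimates the camera-to-arm transformation. The poseToNxLibJson helper, its angle-axis/translation layout and the assumption of a "Moving" (arm-mounted) setup are illustrative assumptions; the exact parameter and transformation node layout should be checked against the SDK manual.

```cpp
// Condensed sketch of an NxLib hand-eye calibration sequence: the camera
// observes a calibration pattern from several robot poses, then
// cmdCalibrateHandEye estimates the camera-to-flange transformation.
#include "nxLib.h"
#include <sstream>
#include <string>
#include <vector>

// Robot pose as angle-axis rotation (radians) plus translation (millimetres),
// formatted as an NxLib transformation node (assumed layout, see SDK manual).
std::string poseToNxLibJson(double angle, double ax, double ay, double az,
                            double tx, double ty, double tz) {
    std::ostringstream json;
    json << "{\"Rotation\":{\"Angle\":" << angle
         << ",\"Axis\":[" << ax << "," << ay << "," << az << "]},"
         << "\"Translation\":[" << tx << "," << ty << "," << tz << "]}";
    return json.str();
}

void calibrateHandEye(const std::string& serial,
                      const std::vector<std::string>& robotPosesJson) {
    // Clear previously collected pattern observations.
    NxLibCommand(cmdDiscardPatterns).execute();

    // At each robot pose (the robot has already been moved there externally):
    for (size_t i = 0; i < robotPosesJson.size(); ++i) {
        NxLibCommand capture(cmdCapture);
        capture.parameters()[itmCameras] = serial;
        capture.execute();

        NxLibCommand collect(cmdCollectPattern);     // detect and buffer the pattern
        collect.parameters()[itmCameras] = serial;
        collect.parameters()[itmBuffer] = true;
        collect.execute();
    }

    // Estimate the hand-eye transformation from the buffered observations and
    // the corresponding robot poses ("Moving" = camera mounted on the arm).
    NxLibCommand calibrate(cmdCalibrateHandEye);
    calibrate.parameters()[itmSetup] = valMoving;
    for (size_t i = 0; i < robotPosesJson.size(); ++i) {
        calibrate.parameters()[itmTransformations][static_cast<int>(i)]
            .setJson(robotPosesJson[i]);
    }
    calibrate.execute();
    // The resulting link is stored with the camera calibration; 3D data can
    // subsequently be transformed into robot coordinates using the robot pose.
}
```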

ONTEC also uses these "FileCams" to automatically check test cases and thus ensure the correct functioning of all arrangements when making adjustments to the vision software. In addition, various vehicles can be coordinated and logistical bottlenecks minimised on the basis of the control system specially developed by ONTEC. Different assistants can be navigated and act simultaneously in a very confined space. By using the industrial interface tool ONTEC SPSComm, even standard industrial robots can be safely integrated into the overall application and data can be exchanged between the different systems.

Outlook

Further development of the system is planned, among other things, in terms of navigation of the autonomous vehicle. "With regard to vehicle navigation for our AGV, the use of IDS cameras is very interesting. We are currently evaluating the use of the new Ensenso S series to enable the vehicle to react even more flexibly to obstacles, for example to classify them and possibly even drive around them," says Tim Böckel, software developer at ONTEC, outlining the next development step.

ONTEC’s own interface configuration already enables the system to be integrated into a wide variety of Industry 4.0 applications, while the modular structure of the autonomously moving robot solution leaves room for adaptation to a wide variety of tasks. In this way, it not only serves to increase efficiency and flexibility in production and logistics, but in many places also literally contributes to relieving the workload of employees.

More at: https://en.ids-imaging.com/casestudies-detail/picked-up-and-put-off-ensenso.html