Given the worldwide chip shortage, anyone who needs quickly available industrial cameras for image processing projects faces no easy task. IDS Imaging Development Systems GmbH has therefore been pushing the development of alternative USB3 hardware generations based on available, advanced semiconductor technology in recent months and has consistently acquired components for this purpose. Series production of new industrial cameras with a USB3 interface and Vision Standard compatibility has recently started. In the CP and LE camera series of the uEye+ product line, customers can choose the right model for their applications from a total of six housing variants and numerous CMOS sensors.
The models of the uEye CP family are particularly suitable for space-critical applications thanks to their distinctive, compact magnesium housing with dimensions of only 29 x 29 x 29 millimetres and a weight of around 50 grams. Customers can choose from global and rolling shutter sensors from 0.5 to 20 MP in this product line. Those who prefer a board-level camera instead should take a look at the versatile uEye LE series. These cameras are available with coated plastic housings and C-/CS-mount lens flanges as well as board versions with or without C-/CS-mount or S-mount lens connections. They are therefore particularly suitable for projects in small device construction and integration in embedded vision systems. IDS initially offers the global shutter Sony sensors IMX273 (1.6 MP) and IMX265 (3.2 MP) as well as the rolling shutter sensors IMX290 (2.1 MP) and IMX178 (6.4 MP). Other sensors will follow.
The USB3 cameras are perfectly suited for use with IDS peak thanks to the vision standard transport protocol USB3 Vision®. The Software Development Kit includes programming interfaces in C, C++, C# with .NET and Python as well as tools that simplify the programming and operation of IDS cameras while optimising factors such as compatibility, reproducible behaviour and stable data transmission. Special convenience features reduce application code and provide an intuitive programming experience, enabling quick and easy commissioning of the cameras.
Today, cameras are often more than just suppliers of images – they can recognise objects, generate results or trigger follow-up processes. Visitors to VISION Stuttgart, Germany, can find out about the possibilities offered by state-of-the-art camera technology at IDS booth 8C60. There, they will discover the next level of the all-in-one AI system IDS NXT. The company is not only expanding the machine learning methods to include anomaly detection, but is also developing a significantly faster hardware platform. IDS is also unveiling the next stage of development for its new uEye Warp10 cameras. By combining a fast 10GigE interface and TFL mount, large-format sensors with up to 45 MP can be integrated, opening up completely new applications. The trade fair innovations also include prototypes of the smallest IDS board-level camera and a new 3D camera model in the Ensenso product line.
IDS NXT: More than artificial intelligence
IDS NXT is a holistic system with a variety of workflows and tools for realising custom AI vision applications. The intelligent IDS NXT cameras can process tasks “on device” and deliver image processing results themselves. They can also trigger subsequent processes directly. The range of tasks is determined by apps that run on the cameras. Their functionality can therefore be changed at any time. This is supported by a cloud-based AI Vision Studio, with which users can not only train neural networks, but now also create vision apps. The system offers both beginners and professionals enormous scope for designing AI vision apps. At VISION, the company shows how artificial intelligence is redefining the performance spectrum of industrial cameras and gives an outlook on further developments in the hardware and software sector.
uEye Warp10: High speed for applications
With 10 times the transmission bandwidth of 1GigE cameras and about twice the speed of cameras with USB 3.0 interfaces, the recently launched uEye Warp10 camera family with 10GigE interface is the fastest in the IDS range. At VISION, the company is demonstrating that these models will not only set standards in terms of speed, but also resolution. Thanks to the TFL mount, it becomes possible to integrate much higher resolution sensors than before. This means that even detailed inspections with high clock rates and large amounts of data will be feasible over long cable distances. The industrial lens mount allows the cameras to fully utilise the potential of large format (larger than 1.1″) and high resolution sensors (up to 45 MP).
uEye XLS: Smallest board-level camera with cost-optimised design
IDS is presenting prototypes of an additional member of its low-cost portfolio at the fair. The name uEye XLS indicates that it is a small variant of the popular uEye XLE series. The models will be the smallest IDS board-level cameras in the range. They are aimed at users who, e.g. for embedded applications, require particularly low-cost, extremely compact cameras with and without lens holders in large quantities. They can look forward to Vision Standard-compliant project cameras with various global shutter sensors and trigger options.
Ensenso C: Powerful 3D camera for large-volume applications
3D camera technology is an indispensable component in many automation projects. Ensenso C is a new variant in the Ensenso 3D product line that scores with a long baseline and high resolution, while at the same time offering a cost-optimised design. Customers receive a fully integrated, pre-configured 3D camera system for large-volume applications that is quickly ready for use and provides even better 3D data thanks to RGB colour information. A prototype will be available at the fair.
Faster than any other IDS industrial camera: uEye Warp10 with 10GigE
With high speed to new spheres! When fast-moving scenes need to be captured in all their details, a high-performance transmission interface is essential in addition to the right sensor. With uEye Warp10, IDS Imaging Development Systems GmbH is launching a new camera family that, thanks to 10GigE, transmits data in the Gigabit Ethernet-based network at a very high frame rate and virtually without delay. The first models with the IMX250 (5 MP), IMX253 (12 MP) and IMX255 (8.9 MP) sensors from the Sony Pregius series are now available.
Compared to 1GigE cameras, the uEye Warp10 models achieve up to 10 times the transmission bandwidth; they are also about twice as fast as cameras with USB 3.0 interfaces. The advantages become particularly apparent when scenes are to be captured, monitored and analysed in all details and without motion blur. Consequently, applications such as inspection applications on the production line with high clock speeds or image processing systems in sports analysis benefit from the fast data transfer.
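To put these bandwidth figures in perspective, the frame-rate ceiling imposed by the link can be estimated with simple arithmetic: usable bandwidth divided by the size of one frame. The sketch below is a rough model, not a GigE Vision calculation; the 90% efficiency factor for protocol overhead is an assumption.

```python
def max_fps(width, height, bits_per_pixel, link_gbit_s, efficiency=0.9):
    """Back-of-the-envelope frame-rate ceiling imposed by the link.

    link_gbit_s  -- nominal link speed in Gbit/s (10 for 10GigE, 1 for 1GigE)
    efficiency   -- assumed fraction usable after protocol overhead
    """
    frame_bits = width * height * bits_per_pixel  # size of one raw frame
    return link_gbit_s * 1e9 * efficiency / frame_bits


# A 5 MP, 8-bit stream: the 10GigE link supports ten times the frame
# rate of a 1GigE link, all else being equal.
print(max_fps(2448, 2048, 8, 10))  # 10GigE
print(max_fps(2448, 2048, 8, 1))   # 1GigE
```

Sensor readout, not the link, is often the real limit; the point of 10GigE is that the interface stops being the bottleneck for multi-megapixel streams.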
The GigE Vision standard-compliant industrial cameras enable high-speed data transfer over cable lengths of up to 100 metres without repeaters or optical extenders via standard CAT6A cables (under 40 metres also CAT5E) with RJ45 connectors. The robust uEye Warp10 cameras are initially offered with C-mount lens holders. IDS is already working on additional models. In the future, versions with TFL mount (M35 x 0.75) will also be available for use with particularly high-resolution sensors up to 45 MP. The cameras are supported by the powerful IDS peak software development kit.
In the IDS Vision Channel, the experts from IDS present the features and possible applications of the new camera family in detail. The video is available free of charge; all you need is a free IDS website user account. Learn more: https://en.ids-imaging.com/ueye-warp10.html
Unlike webcams from the consumer sector, uEye XC is specially designed for industrial requirements. Its components score with long availability – a must for industrial applications. It is used, for example, in kiosk systems, logistics, in the medical sector and in robotics applications.
Thanks to the integrated autofocus module, the Vision standard-compliant USB3 camera can easily manage changing object distances. In terms of appearance, it is distinguished by its elegant and lightweight magnesium housing. With dimensions of only 32 x 61 x 19 mm (W x H x D) and screwable USB Micro B connection, the camera can be easily integrated into image processing systems. Its 13 MP onsemi sensor delivers 20 fps at full resolution and ensures high image quality even in changing light conditions thanks to BSI (“Back Side Illumination”) pixel technology. Useful functions such as 24x digital zoom, auto white balance and colour correction ensure that all details can be captured perfectly. With the IDS peak software development kit, users can configure the camera specifically for their application if required.
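The auto white balance mentioned above can be illustrated with the classic gray-world heuristic, which scales each colour channel so that its mean matches the overall mean. This is a generic sketch; the camera's built-in algorithm is not publicly documented and is only assumed to be conceptually similar.

```python
import numpy as np


def gray_world_white_balance(img):
    """Gray-world auto white balance for an 8-bit RGB image.

    Assumes the scene averages to gray: each channel is scaled so its
    mean equals the mean over all channels, removing a colour cast.
    """
    img_f = img.astype(np.float64)
    channel_means = img_f.reshape(-1, 3).mean(axis=0)  # per-channel mean
    gray = channel_means.mean()                        # target level
    gains = gray / channel_means                       # per-channel gain
    balanced = np.clip(img_f * gains, 0, 255)          # keep 8-bit range
    return balanced.astype(np.uint8)
```

A frame with a blue cast (channel means 100/150/200, for example) comes out with all three channel means equal after balancing.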
From process and quality assurance to kiosk systems, logistics automation and medical technology: the autofocus camera can be used in a variety of ways in both industrial and non-industrial areas. With the quick-change macro attachment lens, users can easily shorten the camera’s minimum object distance. This also makes it suitable for close-up applications, such as diagnostics.
New IDS NXT software release themed “App your camera!”
The current software release 2.6 for the AI vision system IDS NXT focuses primarily on simplifying app creation. The initial phase in development is often one of the greatest challenges in the realisation of a project. With the help of the new Application Assistant in IDS NXT lighthouse, users configure a complete vision app under guidance in just a few steps, which they can then run directly on an IDS NXT camera. With the Block-based Editor, which is also new, users can configure their own program sequences with AI image processing functions, such as object recognition or classification, without any programming knowledge. Users create simple sequences in a few minutes with this visual code editor without having to know the syntax of a specific programming language.
With the Use Case Assistant, IDS supports users in creating Vision App projects. They simply select the use case that fits their project. With queries and tips, the assistant guides them through the process of creating the Vision App project and creates the code, just like in an interview. It links existing training projects with the vision app project or creates new training projects and data sets in IDS NXT lighthouse if required.
With the combinable blocks and the intuitive user interface of the Block-based Editor, anyone can realise their own projects using AI-based image processing (such as object detection or classification) as an individual vision app without having to know the syntax of a specific programming language. Using the predefined blocks of the code editor, users build their vision app graphically, including processes such as loops and conditional statements. How this works is demonstrated, for example, in the IDS Vision Channel (www.ids-vision-channel.tech). The session “Build AI vision apps without coding – xciting new easyness” is available for viewing as a recording.
IDS NXT is a comprehensive system with a wide range of workflows and tools for realising your own AI vision applications. The intelligent IDS NXT cameras can process tasks “OnDevice” and deliver image processing results themselves. The tasks of the cameras are determined by apps that are uploaded to the cameras and executed there. Their functionality can thus be changed at any time. This is supported by software such as IDS NXT lighthouse, with which users can not only train neural networks, but now also create their own vision apps. The system offers both beginners and professionals enormous scope for designing AI vision apps. Learn more: www.ids-nxt.com
Intelligent robotics for laundries closes automation gap
The textile and garment industry is facing major challenges with current supply chain and energy issues. The future recovery is also threatened by factors that hinder production, such as labour and equipment shortages, which put them under additional pressure. The competitiveness of the industry, especially in a global context, depends on how affected companies respond to these framework conditions. One solution is to move the production of clothing back to Europe in an economically viable way. Shorter transport routes and the associated significant savings in transport costs and greenhouse gases speak in favour of this. On the other hand, the related higher wage costs and the prevailing shortage of skilled workers in this country must be compensated. The latter requires further automation of textile processing. The German deep-tech start-up sewts GmbH from Munich has focused on the great potential that lies in this task. It develops solutions with the help of which robots – similar to humans – anticipate how a textile will behave and adapt their movement accordingly.
In a first step, sewts has set its sights on an application for large industrial laundries. With a system that uses both 2D and 3D cameras from IDS Imaging Development Systems GmbH, the young entrepreneurs are automating one of the last remaining manual steps in large-scale industrial laundries: the unfolding process. Although 90% of the process steps in industrial washing are already automated, the remaining manual operations account for 30% of labour costs. The potential savings through automation are therefore enormous at this point.

Application
It is true that industrial laundries already operate in a highly automated environment to handle the large volumes of laundry. Among other things, the folding of laundry is done by machines. However, each of these machines usually requires an employee to manually spread out the laundry and feed it in without creases. This monotonous and strenuous loading of the folding machines has a disproportionate effect on personnel costs. In addition, a qualified workforce is difficult to find, which often affects the capacity utilisation and thus the profitability of industrial laundries. The seasonal nature of the business also requires a high degree of flexibility. sewts makes IDS cameras the image processing components of a new type of intelligent system whose technology can now be used to automate individual steps, such as sorting dirty textiles or inserting laundry into folding machines.
“The particular challenge here is the malleability of the textiles,” explains Tim Doerks, co-founder and CTO. While the automation of processing solid materials, such as metals, is comparatively unproblematic with the help of robotics and AI solutions, available software solutions and conventional image processing often still reach their limits with easily deformable materials. Accordingly, commercially available robots and gripping systems have so far performed even simple operations, such as gripping a towel or a piece of clothing, only inadequately. But the sewts system
Lettuce is a valuable crop in Europe and the USA. But labor shortages make it difficult to harvest this valuable field vegetable, as sourcing sufficient seasonal labor to meet harvesting commitments is one of the sector’s biggest challenges. Moreover, with wage inflation rising faster than producer prices, margins are very tight. In England, agricultural technology and machinery experts are working with IDS Imaging Development Systems GmbH (Obersulm, Germany) to develop a robotic solution to automate lettuce harvesting.
The team is working on a project funded by Innovate UK and includes experts from the Grimme agricultural machinery factory, the Agri-EPI Centre (Edinburgh UK), Harper Adams University (Newport UK), the Centre for Machine Vision at the University of the West of England (Bristol) and two of the UK’s largest salad producers, G’s Fresh and PDM Produce.
Within the project, existing leek harvesting machinery is adapted to lift the lettuce clear of the ground and grip it between pinch belts. The lettuce’s outer, or ‘wrapper’, leaves will be mechanically removed to expose the stem. Machine vision and artificial intelligence are then used to identify a precise cut point on the stem to neatly separate the head of lettuce.
“The cutting process of an iceberg lettuce is the most technically complicated step in the process to automate, according to teammates from G’s subsidiary Salad Harvesting Services Ltd.,” explains IDS Product Sales Specialist Rob Webb. “The prototype harvesting robot being built incorporates a GigE Vision camera from the uEye FA family. It is considered to be particularly robust and is therefore ideally suited to demanding environments. As this is an outdoor application, a housing with IP65/67 protection is required here,” Rob Webb points out.
The choice fell on the GV-5280FA-C-HQ model with the compact 2/3″ global shutter CMOS sensor IMX264 from Sony. “The sensor was chosen mainly because of its versatility. We don’t need full resolution for AI processing, so sensitivity can be increased by binning. The larger sensor format means that wide-angle optics are not needed either”, Rob Webb summarised the requirements. In the application, the CMOS sensor delivers excellent image quality, high light sensitivity and an exceptionally high dynamic range, producing almost noise-free, very high-contrast 5 MP images in 5:4 format at 22 fps – even in applications with fluctuating light conditions. The extensive range of accessories, such as lens tubes and trailing cables, is just as tough as the camera housing and the screwable connectors (8-pin M12 connector with X-coding and 8-pin Binder connector). Another advantage: camera-internal functions such as pixel pre-processing, LUT or gamma reduce the required computer power to a minimum.
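The binning Rob Webb mentions trades resolution for sensitivity by combining neighbouring pixels: summing a 2x2 neighbourhood collects four times the signal per output pixel. On the uEye FA this happens inside the camera; the NumPy sketch below only illustrates the operation itself.

```python
import numpy as np


def bin2x2(frame):
    """2x2 binning of a monochrome frame by summing each 2x2 block.

    Halves resolution in both axes while quadrupling the collected
    signal per output pixel, which is why binning raises sensitivity.
    """
    h, w = frame.shape
    assert h % 2 == 0 and w % 2 == 0, "dimensions must be even"
    # Group rows and columns into pairs, then sum over both pair axes.
    return frame.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
```

Averaging instead of summing (divide the result by 4) gives the same sensitivity benefit in SNR terms while keeping the original intensity scale.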
The prototype of the harvesting robot will be used for field trials in England towards the end of the 2021 season.
“We are delighted to be involved in the project and look forward to seeing the results. We are convinced of its potential to automate and increase the efficiency of the lettuce harvest, not only in terms of compensating for the lack of seasonal workers”, affirms Jan Hartmann, Managing Director of IDS Imaging Development Systems GmbH.
The challenges facing the agricultural sector are indeed complex. According to a forecast by the United Nations Food and Agriculture Organization (FAO), agricultural productivity will have to increase by almost 50 percent by 2050 compared to 2012 due to the dramatic increase in population. Such a yield expectation means an enormous challenge for the agricultural industry, which is still in its infancy in terms of digitalization compared to other sectors and is already under high pressure to innovate in view of climatic changes and labor shortages. The agriculture of the future is based on networked devices and automation. Cameras are an important building block, and artificial intelligence is a central technology here. Smart applications such as harvesting robots can make a significant contribution to this.
All-in-one embedded vision platform with new tools and functions
(PresseBox) (Obersulm) At IDS, image processing with artificial intelligence does not just mean that AI runs directly on cameras and that users also gain enormous design options through vision apps. Rather, with the IDS NXT ocean embedded vision platform, customers receive all the necessary, coordinated tools and workflows to realise their own AI vision applications without prior knowledge and to run them directly on IDS NXT industrial cameras. The next free software update for the AI package is now available. In addition to user-friendliness, it focuses on making the artificial intelligence clear and comprehensible for the user.
An all-in-one system such as IDS NXT ocean, which has integrated computing power and artificial intelligence thanks to the “deep ocean core” developed by IDS, is ideally suited for entry into AI Vision. It requires no prior knowledge of deep learning or camera programming. The current software update makes setting up, deploying and controlling the intelligent cameras in the IDS NXT cockpit even easier. For this purpose, among other things, an ROI editor is integrated with which users can freely draw the image areas to be evaluated and configure, save and reuse them as custom grids with many parameters. In addition, the new tools Attention Maps and Confusion Matrix illustrate how the AI works in the cameras and what decisions it makes. This helps to clarify the process and enables the user to evaluate the quality of a trained neural network and to improve it through targeted retraining. Data security also plays an important role in the industrial use of artificial intelligence. As of the current update, communication between IDS NXT cameras and system components can therefore be encrypted via HTTPS.
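A confusion matrix of the kind the update introduces can be computed in a few lines: rows are the true classes, columns the predicted ones, so systematic mix-ups between two classes show up as off-diagonal counts. This is a generic sketch, not the IDS NXT cockpit implementation.

```python
import numpy as np


def confusion_matrix(y_true, y_pred, n_classes):
    """Build a confusion matrix from true and predicted class indices.

    Cell (i, j) counts samples of true class i that the network
    labelled as class j; a perfect classifier is purely diagonal.
    """
    m = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m
```

Dividing each row by its sum turns the counts into per-class recall, which makes it easy to see which class needs targeted retraining.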
Just get started with the IDS NXT ocean Creative Kit
Anyone who wants to test the industrial-grade embedded vision platform IDS NXT ocean and evaluate its potential for their own applications should take a look at the IDS NXT ocean Creative Kit. It provides customers with all the components they need to create, train and run a neural network. In addition to an IDS NXT industrial camera with 1.6 MP Sony sensor, lens, cable and tripod adapter, the package includes six months’ access to the AI training software IDS NXT lighthouse. Currently, IDS is offering the set in a special promotion at particularly favourable conditions. Promotion page: https://en.ids-imaging.com/ids-nxt-ocean-creative-kit.html.
Autonomously driving robotic assistance system for the automated placement of coil creels
With Industry 4.0, digitalisation, automation and networking of systems and facilities are becoming the predominant topics in production and thus also in logistics. Industry 4.0 pursues the increasing optimisation of processes and workflows in favour of productivity and flexibility, and thus the saving of time and costs. Robotic systems have become the driving force for automating processes. Through the Internet of Things (IoT), robots are becoming increasingly sensitive, autonomous, mobile and easier to operate. More and more, they are becoming everyday helpers in factories and warehouses. Intelligent imaging techniques are playing an increasingly important role in this.
To meet the growing demands in scaling and changing production environments towards fully automated and intelligently networked production, the company ONTEC Automation GmbH from Naila in Bavaria has developed an autonomously driving robotic assistance system. The “Smart Robot Assistant” uses the synergies of mobility and automation: it consists of a powerful and efficient intralogistics platform, a flexible robot arm and a robust 3D stereo camera system from the Ensenso N series by IDS Imaging Development Systems GmbH.
The solution is versatile and takes over monotonous, weighty set-up and placement tasks, for example. The autonomous transport system is suitable for floor-level lifting of Euro pallets up to container or industrial format as well as mesh pallets in various sizes with a maximum load of up to 1,200 kilograms. For a customer in the textile industry, the AGV (Automated Guided Vehicle) is used for the automated loading of coil creels. For this purpose, it picks up pallets with yarn spools, transports them to the designated creel and loads it for further processing. Using a specially developed gripper system, up to 1000 yarn packages per 8-hour shift are picked up and pushed onto a mandrel of the creel. The sizing scheme and the position of the coils are captured by an Ensenso 3D camera (N45 series) installed on the gripper arm.
Pallets loaded with industrial yarn spools are picked up from the floor of a predefined storage place and transported to the creel location. There, the gripper positions itself vertically above the pallet. An image trigger is sent to the Ensenso 3D camera from the N45 series by the in-house software ONTEC SPSComm. It networks with the vehicle’s PLC and can thus read out and pass on data. In the application, SPSComm controls the communication between the software parts of the vehicle, gripper and camera. This way, the camera knows when the vehicle and the gripper are in position to take a picture. The camera then captures an image and passes a point cloud to a software solution from ONTEC based on the standard HALCON software, which reports the coordinates of the coils on the pallet to the robot. The robot can then accurately pick up the coils and process them further.

As soon as the gripper has cleared a layer of yarn spools, the Ensenso camera takes a picture of the packaging material lying between the layers and provides point clouds of this as well. These are processed similarly to give the robot the information it needs for a needle gripper to remove the intermediate layers. “This approach means that the number of layers and finishing patterns of the pallets do not have to be defined in advance and even incomplete pallets can be processed without any problems,” explains Tim Böckel, software developer at ONTEC. “The gripper does not have to be converted for the use of the needle gripper. For this application, it has a normal gripping component for the coils and a needle gripping component for the intermediate layers.”
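The step that turns the point cloud into grip coordinates can be shown in a much-simplified form. This sketch is not ONTEC's HALCON-based solution; it only illustrates the idea of isolating the topmost layer of a downward-looking point cloud and reducing it to a single coordinate. The function name, the z-convention (larger z = higher surface) and the tolerance are assumptions.

```python
import numpy as np


def top_layer_centroid(points, z_tol=5.0):
    """Return the centroid of the topmost layer of an Nx3 point cloud.

    points -- Nx3 array of (x, y, z) in millimetres; larger z means
              closer to the downward-looking camera.
    z_tol  -- thickness of the slab counted as the 'top layer'.

    The real system segments individual spools within that layer; this
    toy version collapses the whole layer to one grip coordinate.
    """
    z_max = points[:, 2].max()
    top = points[np.abs(points[:, 2] - z_max) < z_tol]  # top slab only
    return top.mean(axis=0)
```

Once a layer is cleared, re-running the same selection on a fresh point cloud naturally picks up the next layer down, which is why incomplete pallets pose no problem.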
For this task, the mobile 3D acquisition of moving and static objects from the robot arm, the Ensenso 3D camera is well suited thanks to its compact design. The Ensenso N45’s 3D stereo electronics are completely decoupled from the housing, allowing the use of a lightweight plastic composite as the housing material. The low weight facilitates use on robot arms such as the Smart Robot Assistant. The camera can also cope with demanding environmental conditions. “Challenges with this application can be found primarily in the different lighting conditions that are evident in different rooms of the hall and at different times of the day,” Tim Böckel describes the situation. Even in difficult lighting conditions, the integrated projector projects a high-contrast texture onto the object to be imaged by means of a pattern mask with a random dot pattern, thus supplementing the structures on featureless homogeneous surfaces. This means that the integrated camera meets the requirements exactly. “By pre-configuring within NxView, the task was solved well.” This sample programme with source code demonstrates the main functions of the NxLib library, which can be used to open one or more stereo and colour cameras and visualise their image and depth data. Parameters such as exposure time, binning, AOI and depth measuring range can, as in this case, be adjusted live for the matching method used.
The matching process enables the Ensenso 3D camera to identify a very large number of pixels, including their change in position between the two stereo views, by means of the auxiliary structures projected onto the surface, and to create complete, homogeneous depth information of the scene from this. This in turn ensures the necessary precision with which the Smart Robot Assistant proceeds. Other selection criteria for the camera were, among others, the standard vision interface Gigabit Ethernet and the global shutter 1.3 MP sensor. “The camera only takes one image pair of the entire pallet in favour of a faster throughput time, but it has to provide the coordinates from a relatively large distance with an accuracy in the millimetre range to enable the robot arm to grip precisely,” explains Matthias Hofmann, IT specialist for application development at ONTEC. “We therefore need the high resolution of the camera to be able to safely record the edges of the coils with the 3D camera.” The localisation of the edges is important in order to pass on the position of the centre of each spool to the gripper as accurately as possible.
Furthermore, the camera is specially designed for use in harsh environmental conditions. It has a screwable GPIO connector for trigger and flash and is IP65/67 protected against dirt, dust, splash water or cleaning agents.
The Ensenso SDK enables hand-eye calibration of the camera to the robot arm, allowing easy translation or displacement of coordinates using the robot pose. In addition, by using the internal camera settings, a “FileCam” of the current situation is recorded at each pass, i.e. at each image trigger. This makes it possible to easily adjust any edge cases later on, in this application for example unexpected lighting conditions, obstacles in the image or also an unexpected positioning of the coils in the image. The Ensenso SDK also allows the internal camera LOG files to be stored and archived for possible evaluation.
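Hand-eye calibration ultimately yields a fixed transformation between the camera and the robot flange, so that any camera-frame point can be expressed in robot base coordinates via the current robot pose. A minimal sketch with homogeneous 4x4 matrices follows; the function and matrix names are illustrative, not Ensenso SDK identifiers.

```python
import numpy as np


def camera_to_robot(p_cam, T_hand_eye, T_robot_pose):
    """Transform a camera-frame point into robot base coordinates.

    p_cam        -- (x, y, z) point in the camera frame
    T_hand_eye   -- 4x4 camera-to-flange transform (from hand-eye
                    calibration, constant while the mounting is fixed)
    T_robot_pose -- 4x4 flange-to-base transform (the current robot
                    pose, changing with every move)
    """
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous
    return (T_robot_pose @ T_hand_eye @ p)[:3]
```

Because `T_hand_eye` stays constant, only the robot pose at the moment of the image trigger is needed to convert every detected spool centre into a grip target.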
ONTEC also uses these “FileCams” to automatically check test cases and thus ensure the correct functioning of all arrangements when making adjustments to the vision software. In addition, various vehicles can be coordinated and logistical bottlenecks minimised on the basis of the control system specially developed by ONTEC. Different assistants can be navigated and act simultaneously in a very confined space. By using the industrial interface tool ONTEC SPSComm, even standard industrial robots can be safely integrated into the overall application and data can be exchanged between the different systems.
Further development of the system is planned, among other things, in terms of navigation of the autonomous vehicle. “With regard to vehicle navigation for our AGV, the use of IDS cameras is very interesting. We are currently evaluating the use of the new Ensenso S series to enable the vehicle to react even more flexibly to obstacles, for example, classify them and possibly even drive around them,” says Tim Böckel, software developer at ONTEC, outlining the next development step.
ONTEC’s own interface configuration already enables the system to be integrated into a wide variety of Industry 4.0 applications, while the modular structure of the autonomously moving robot solution leaves room for adaptation to a wide variety of tasks. In this way, it not only serves to increase efficiency and flexibility in production and logistics, but in many places also literally contributes to relieving the workload of employees.