IDS Showcases Groundbreaking 2D, 3D, and AI-Based Vision Systems at SPS 2024 in Nuremberg, Germany

Innovative Industrial Cameras for Robotics and Process Automation to Take Center Stage at Stand 6-360

Industrial cameras are indispensable for automation – whether with 2D, 3D or AI

Detecting the smallest errors, increasing throughput rates, preventing wear – industrial cameras provide important information for automated processes. IDS will be demonstrating which technologies and products are particularly relevant at SPS / Smart Production Solutions in Nuremberg, Germany, from 12 to 14 November. If you want to experience how small cameras can achieve big things, stand 6-360 is the right place to visit.

IDS presents a wide range of applications, from visualisation to picking and OCR

Around 1,200 companies will be represented in a total of 16 exhibition halls at the trade fair for smart and digital automation. IDS will be taking part for the first time, focussing on industrial image processing for robotics, process automation and networked systems. Philipp Ohl, Head of Product Management at IDS, explains: “Automation and cameras? They go together like a lock and key. Depending on the task, very different qualities are required – such as particularly high-resolution images, remarkably fast cameras or models with integrated intelligence.” Consequently, the products and demo systems that the company will be showcasing at SPS are highly diverse.

The highlights of IDS can be divided into three categories: 2D, 3D and AI-based image processing. The company will be presenting uEye Live, a newly developed product line. These industrial-grade monitoring cameras enable live streaming and are designed for the continuous monitoring and documentation of processes. IDS will also be introducing a new event-based sensor that is recommended for motion analyses or high-speed counting. It enables the efficient detection of rapid changes through continuous pixel-by-pixel sensing instead of the usual sequential frame-by-frame analysis.
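
As a rough software illustration of the event-based principle described above – not the sensor’s native data format – the following Python sketch emits one event per pixel whose brightness changes by more than a threshold between two captures, instead of handing over complete frames:

    import numpy as np

    def events_from_frames(prev, curr, threshold=15):
        """Emit (x, y, polarity) events for pixels whose intensity changed noticeably."""
        diff = curr.astype(np.int16) - prev.astype(np.int16)
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        return [(int(x), int(y), int(np.sign(diff[y, x]))) for x, y in zip(xs, ys)]

Only the changed pixels produce data, which is why rapid motion can be analysed far more efficiently than with a full frame-by-frame readout.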

With over 25 years of experience, IDS is one of the key players in the image processing market

In the 3D cameras product segment, IDS will be demonstrating the advantages of the new stereo vision camera Ensenso B for precise close-range picking tasks as well as a prototype of the first time-of-flight camera developed entirely in-house. Anyone interested in robust character recognition will also find what they are looking for at the trade fair stand: Thanks to DENKnet OCR, texts and symbols on surfaces can be reliably identified and processed. IDS will be exhibiting at SPS, stand 6-360.

More information: https://en.ids-imaging.com/sps-2024.html

IDS NXT malibu now available with the 8 MP Sony Starvis 2 sensor IMX678

Intelligent industrial camera with 4K streaming and excellent low-light performance

IDS expands its product line for intelligent image processing and launches a new IDS NXT malibu camera. It enables AI-based image processing, video compression and streaming in full 4K sensor resolution at 30 fps – directly in and out of the camera. The 8 MP sensor IMX678 is part of the Starvis 2 series from Sony. It ensures impressive image quality even in low light conditions and twilight.

Industrial camera with live AI: IDS NXT malibu is able to independently perform AI-based image analyses and provide the results as live overlays in compressed video streams via RTSP (Real Time Streaming Protocol). Hidden inside is a special SoC (system-on-a-chip) from Ambarella, which is known from action cameras. An ISP with helpful automatic features such as brightness, noise and colour correction ensures that optimum image quality is attained at all times. The new 8 MP camera complements the recently introduced camera variant with the 5 MP onsemi sensor AR0521.
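
Since the camera delivers its results as a standard RTSP stream, any RTSP-capable client can display the AI overlays. A minimal Python sketch using OpenCV might look as follows; the stream address is hypothetical and depends on the camera’s network configuration (see the IDS NXT documentation for the actual URL):

    import cv2

    stream = cv2.VideoCapture("rtsp://192.168.0.10/stream")  # hypothetical address
    while True:
        ok, frame = stream.read()            # frame already contains the AI overlay
        if not ok:
            break
        cv2.imshow("IDS NXT malibu (AI overlay)", frame)
        if cv2.waitKey(1) == 27:             # Esc to quit
            break
    stream.release()
    cv2.destroyAllWindows()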

To coincide with the market launch of the new model, IDS Imaging Development Systems has also published a new software release. Users now also have the option of displaying live images from the IDS NXT malibu camera models via an MJPEG-compressed HTTP stream. This enables visualisation in any web browser without additional software or plug-ins. In addition, the AI vision studio IDS lighthouse can be used to train individual neural networks for the Ambarella SoC of the camera family. This simplifies the use of the camera for AI-based image analyses with classification, object recognition and anomaly detection methods.
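
Because MJPEG over HTTP is simply a sequence of JPEG images, the live view can also be consumed programmatically. The following Python sketch reads such a stream and decodes the individual frames; the endpoint URL is an assumption and must be taken from the camera documentation:

    import cv2
    import numpy as np
    import requests

    url = "http://192.168.0.10/mjpeg"              # hypothetical endpoint
    buf = b""
    with requests.get(url, stream=True, timeout=10) as resp:
        for chunk in resp.iter_content(chunk_size=4096):
            buf += chunk
            start = buf.find(b"\xff\xd8")          # JPEG start-of-image marker
            end = buf.find(b"\xff\xd9", start + 2) # JPEG end-of-image marker
            if start != -1 and end != -1:
                frame = cv2.imdecode(np.frombuffer(buf[start:end + 2], np.uint8),
                                     cv2.IMREAD_COLOR)
                buf = buf[end + 2:]
                cv2.imshow("IDS NXT malibu (MJPEG)", frame)
                if cv2.waitKey(1) == 27:           # Esc to quit
                    break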

IDS NXT malibu: Camera combines advanced consumer image processing and AI technology from Ambarella and industrial quality from IDS

New class of edge AI industrial cameras allows AI overlays in live video streams
 

IDS NXT malibu marks a new class of intelligent industrial cameras that act as edge devices and generate AI overlays in live video streams. For the new camera series, IDS Imaging Development Systems has collaborated with Ambarella, a leading developer of visual AI products, making consumer technology available for demanding applications in industrial quality. It features Ambarella’s CVflow® AI vision system on chip and takes full advantage of the SoC’s advanced image processing and on-camera AI capabilities. Consequently, image analysis can be performed at high speed (> 25 fps) and displayed as live overlays in compressed video streams via the RTSP protocol for end devices.

Thanks to the SoC’s integrated image signal processor (ISP), the information captured by the light-sensitive onsemi AR0521 image sensor is processed directly on the camera and accelerated by its integrated hardware. The camera also offers helpful automatic features, such as brightness, noise and colour correction, which significantly improve image quality.

“With IDS NXT malibu, we have developed an industrial camera that can analyse images in real time and incorporate results directly into video streams,” explained Kai Hartmann, Product Innovation Manager at IDS. “The combination of on-camera AI with compression and streaming is a novelty in the industrial setting, opening up new application scenarios for intelligent image processing.”

These on-camera capabilities were made possible through close collaboration between IDS and Ambarella, leveraging the companies’ strengths in industrial camera and consumer technology. “We are proud to work with IDS, a leading company in industrial image processing,” said Jerome Gigot, senior director of marketing at Ambarella. “The IDS NXT malibu represents a new class of industrial-grade edge AI cameras, achieving fast inference times and high image quality via our CVflow AI vision SoC.”

IDS NXT malibu has entered series production. The camera is part of the IDS NXT all-in-one AI system. Optimally coordinated components – from the camera to the AI vision studio – accompany the entire workflow. This includes the acquisition of images and their labelling, through to the training of a neural network and its execution on the IDS NXT series of cameras.

Robot plays “Rock, Paper, Scissors” – Part 1/3

Gesture recognition with intelligent camera

I am passionate about technology and robotics. Here in my own blog, I am always taking on new tasks. But I have hardly ever worked with image processing. However, a colleague’s LEGO® MINDSTORMS® robot, which can recognize the rock, paper or scissors gestures of a hand with several different sensors, gave me an idea: “The robot should be able to ‘see’.” Until now, the respective gesture had to be made at a very specific point in front of the robot in order to be reliably recognized. Several sensors were needed for this, which made the system inflexible and dampened the joy of playing. Can image processing solve this task more “elegantly”?

Rock-Paper-Scissors with Robot Inventor by the Seshan Brothers – the robot that inspired me for this project

From the idea to implementation

In my search for a suitable camera, I came across IDS NXT – a complete system for the use of intelligent image processing. It fulfilled all my requirements and, thanks to artificial intelligence, offered much more besides pure gesture recognition. My interest was piqued, especially because the evaluation of the images and the communication of the results take place directly on or through the camera – without an additional PC! In addition, the IDS NXT Experience Kit came with all the components needed to start working on the application immediately – without any prior knowledge of AI.

I took the idea further and began to develop a robot that would eventually play the game “Rock, Paper, Scissors” – following a sequence close to the classic game: the (human) player is asked to perform one of the familiar gestures (rock, paper, scissors) in front of the camera. The virtual opponent has already randomly determined its gesture at this point. The move is evaluated in real time and the winner is displayed.

The first step: Gesture recognition by means of image processing

But until then, some intermediate steps were necessary. I began by implementing gesture recognition using image processing – new territory for me as a robotics fan. However, with the help of IDS lighthouse – a cloud-based AI vision studio – this was easier to realize than expected. Here, ideas evolve into complete applications: neural networks are trained with application images that contain the necessary product knowledge – in this case the individual gestures from different perspectives – and packaged into a suitable application workflow.

The training process was super easy, and I just used IDS lighthouse’s step-by-step wizard after taking several hundred pictures of my hands making rock, scissor, or paper gestures from different angles against different backgrounds. The first trained AI was able to reliably recognize the gestures right away. This works for both left- and right-handers with a recognition rate of approx. 95%. Probabilities are returned for the labels “Rock”, “Paper”, “Scissor”, or “Nothing”. A satisfactory result. But what happens now with the data obtained?
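
The downstream logic only has to pick the most probable label and ignore low-confidence results. A small Python sketch of this step (the 0.8 threshold is my own choice, not a value prescribed by the system):

    def decide(probabilities, threshold=0.8):
        """Return the most likely gesture, or 'Nothing' if the network is unsure."""
        # probabilities, e.g. {"Rock": 0.93, "Paper": 0.04, "Scissor": 0.02, "Nothing": 0.01}
        label, p = max(probabilities.items(), key=lambda kv: kv[1])
        return label if p >= threshold else "Nothing"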

Further processing

The further processing of the recognized gestures will be handled by a specially created vision app. For this, the captured image of the respective gesture – after evaluation by the AI – must be passed on to the app. The latter “knows” the rules of the game and can thus decide which gesture beats another. It then determines the winner. In the first stage of development, the app will also simulate the opponent. All this is currently in the making and will be implemented in the next step on the way to a “Rock, Paper, Scissors”-playing robot.
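
The game logic itself is simple enough to sketch already. The snippet below is only an outline of what the vision app will have to do – a randomly chosen opponent gesture plus the “what beats what” rules – and not the actual app code:

    import random

    BEATS = {"Rock": "Scissor", "Scissor": "Paper", "Paper": "Rock"}

    def play(player_gesture):
        robot_gesture = random.choice(list(BEATS))   # simulated opponent
        if player_gesture == robot_gesture:
            result = "draw"
        elif BEATS[player_gesture] == robot_gesture:
            result = "player wins"
        else:
            result = "robot wins"
        return robot_gesture, result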

From play to everyday use

At first, the project is more of a gimmick. But what could come out of it? A gambling machine? Or maybe even an AI-based sign language translator?

To be continued…

uEye+ Warp10 cameras from IDS combine high speed and high resolution

See more, see better! New 10GigE cameras with onsemi XGS sensors up to 45 MP

In industrial automation, the optimisation of processes is often primarily about higher efficiency and accuracy. 10GigE cameras, such as those in the uEye Warp10 camera family from IDS Imaging Development Systems GmbH, set standards here. They enable high-speed image processing in Gigabit Ethernet-based networks even with large amounts of data and over long cable distances. For even more precision, the company is now introducing new models with sensors up to 45 MP that reliably capture even the smallest details.

The new industrial cameras are equipped with the onsemi global shutter sensors XGS20000 (20 MP, 1.3″), XGS30000 (30 MP, 1.5″) and XGS45000 (45 MP, 2″). They are primarily used in high-precision quality assurance tasks when motion blur needs to be minimised and data needs to be quickly available on the network. The 10GigE cameras offer up to ten times the transmission bandwidth of 1GigE cameras and are about twice as fast as cameras with USB3 interfaces.
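
A quick back-of-the-envelope calculation shows why the 10GigE interface matters at these resolutions (assuming 8-bit monochrome pixels and ignoring protocol overhead, so real frame rates will be somewhat lower):

    pixels = 45_000_000                 # 45 MP sensor
    bytes_per_frame = pixels * 1        # 1 byte per pixel at 8 bit
    link_bytes_per_s = 10e9 / 8         # 10 Gbit/s is roughly 1.25 GB/s
    max_fps = link_bytes_per_s / bytes_per_frame
    print(f"~{max_fps:.0f} fps at 45 MP over 10GigE")   # roughly 27 fps

Over a 1GigE link, the same payload would be limited to roughly 2-3 fps, which illustrates the factor of ten quoted above.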

Accuracy and speed go hand in hand when it comes to these models. This has advantages for many applications, e.g. in inspection systems for status and end checks at production lines with high cycle rates – such as semiconductor or solar panel inspection. Users also benefit from the fact that even large scenes and image sections can be precisely monitored and evaluated with these cameras. This proves its worth, for example, in logistics tasks for incoming goods and in the warehouse.

The large format onsemi XGS sensors require correspondingly large optics. Therefore, unlike the previous uEye+ Warp10 models, they are equipped with a TFL mount (M35x0.75). For secure mounting, TFL lenses can be firmly screwed to the cameras. The flange focal distance is standardised and, at 17.526 mm, the same as for the previously available cameras with C-mount. To ensure optimal image quality, IDS recommends the use of Active Heat Sinks. They can be mounted both on and under the models, reduce the operating temperature and are optionally available as accessories.

More information: https://en.ids-imaging.com/ueye-warp10.html

Free update makes third deep learning method available for IDS NXT

Update for the AI system IDS NXT: cameras can now also detect anomalies

In quality assurance, it is often necessary to reliably detect deviations from the norm. Industrial cameras have a key role in this, capturing images of products and analysing them for defects. If the error cases are not known in advance or are too diverse, however, rule-based image processing reaches its limits. By contrast, this challenge can be reliably solved with the AI method Anomaly Detection. The new, free IDS NXT 3.0 software update from IDS Imaging Development Systems makes the method available to all users of the AI vision system with immediate effect.

The intelligent IDS NXT cameras are now able to detect anomalies independently and thereby optimise quality assurance processes. For this purpose, users train a neural network that is then executed on the programmable cameras. To achieve this, IDS offers the AI Vision Studio IDS NXT lighthouse, which is characterised by easy-to-use workflows and seamless integration into the IDS NXT ecosystem. Customers can even use only “GOOD” images for training. This means that relatively little training data is required compared to the other AI methods Object Detection and Classification. This simplifies the development of an AI vision application and is well suited for evaluating the potential of AI-based image processing for projects in the company.
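
To illustrate the underlying idea of learning only from “GOOD” images – this is a generic sketch, not the IDS NXT implementation – one can model the statistics of defect-free parts and flag images that deviate strongly from them:

    import numpy as np

    def fit(good_images):
        """Learn per-pixel statistics from equally sized 'GOOD' grayscale images."""
        stack = np.stack(good_images).astype(np.float32)
        return stack.mean(axis=0), stack.std(axis=0) + 1e-6

    def anomaly_score(image, mean, std):
        """Higher scores mean the image deviates more from the learned 'GOOD' appearance."""
        z = np.abs((image.astype(np.float32) - mean) / std)
        return float(z.mean())

A threshold on the score, chosen on a small validation set, then separates normal parts from anomalous ones – without a single defect image being needed for training.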

Another highlight of the release is the code reading function in the block-based editor. This enables IDS NXT cameras to locate, identify and read out different types of code and the required parameters. Attention maps in IDS NXT lighthouse also provide more transparency in the training process. They illustrate which areas in the image have an impact on classification results. In this way, users can identify and eliminate training errors before a neural network is deployed in the cameras.

IDS NXT is a comprehensive AI-based vision system consisting of intelligent cameras plus software environment that covers the entire process from the creation to the execution of AI vision applications. The software tools make AI-based vision usable for different target groups – even without prior knowledge of artificial intelligence or application programming. In addition, expert tools enable open-platform programming, making IDS NXT cameras highly customisable and suitable for a wide range of applications.

More information: www.ids-nxt.com

Elephant Robotics launched ultraArm with various solutions for education

In the past year, Elephant Robotics has launched various products to meet more user needs and help users unlock more potential in research and education. To help users learn more about machine vision, Elephant Robotics launched the AI Kit, which can work with multiple robotic arms to perform AI recognition and grabbing. In June 2022, Elephant Robotics released the mechArm to help individual makers and students get started in industrial robotics.

Nowadays, robotic arms are used in an increasingly wide range of applications, such as industry, medicine and commercial exhibitions. At the very end of 2022, ultraArm was launched, and this time Elephant Robotics is not just releasing a new robotic arm but bringing five sets of solutions for education and R&D.

Small but powerful

As the core product of this launch, ultraArm is a small 4-axis desktop robotic arm. It is designed with a classic metal structure and occupies only the area of a sheet of A5 paper. It is the first robotic arm from Elephant Robotics equipped with high-performance stepper motors. It is stable and offers ±0.1 mm repeat positioning accuracy. What’s more, ultraArm comes with five kits, including a slide rail, a conveyor belt and cameras.

Multiple environments supported

As a preferred tool for the education field, ultraArm supports all major programming languages, including Python, Arduino, C++, etc. It can also be programmed on macOS, Windows and Linux. Individual makers who are new to robotics can learn robot programming with myBlockly, visual programming software that allows users to drag and drop code blocks.

Moreover, ultraArm supports ROS1 & ROS2. In the ROS environment, users can control ultraArm and verify algorithms in a virtual environment, improving experiment efficiency. With the support of ROS2, users can take advantage of additional functionality in their developments.
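
As an illustration of what working with the arm in ROS 2 can look like, the following Python (rclpy) sketch publishes joint targets. The topic name, joint names and message type are assumptions for illustration only; the actual interface is defined by the ultraArm ROS packages:

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import JointState

    class UltraArmCommander(Node):
        def __init__(self):
            super().__init__("ultraarm_commander")
            # hypothetical topic; check the ultraArm ROS packages for the real one
            self.pub = self.create_publisher(JointState, "/ultraarm/joint_command", 10)

        def send(self, angles_rad):
            msg = JointState()
            msg.name = ["joint1", "joint2", "joint3", "joint4"]   # 4-axis arm
            msg.position = list(angles_rad)
            self.pub.publish(msg)

    def main():
        rclpy.init()
        node = UltraArmCommander()
        node.send([0.0, 0.5, -0.5, 0.0])
        rclpy.shutdown()

    if __name__ == "__main__":
        main()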

Five robotic kits for robot vision & DIY

In the past year, Elephant Robotics found that many users have to spend a large amount of time creating accessories or kits to work with robotic arms. Therefore, to provide more solutions in different fields, ultraArm comes with five robotic kits, which are divided into two series: vision educational kits and DIY kits. These kits will help users, especially students, program easily for a better learning experience in practical exercises on AI robot vision and DIY robotic projects.

Vision educational kits

Combined with vision functions, robotic arms can be used for more applications in industry, medicine, education, etc. In robotics education, collaborative robotic arms with vision capabilities allow students to better learn about artificial intelligence. Elephant Robotics has launched three kits for machine vision education – the Vision & Picking Kit, the Vision & Conveyor Belt Kit, and the Vision & Slide Rail Kit – to provide more choices and support to the education industry. With the camera and built-in AI algorithms (YOLO, SIFT, ORB, etc.), ultraArm can implement different artificial intelligence recognition applications using different identification methods. Users can therefore select the kit based on their needs.

With the Vision & Picking Kit, users can learn about color & image recognition, intelligent grasping, robot control principles, etc. With the Vision & Conveyor Belt Kit, the robotic arm can sense the distance of materials to identify, grab and classify the objects on the belt. Users can easily create simulated industrial applications with this kit, such as color sorting. Users can go deeper into machine vision with the Vision & Slide Rail Kit, because the robot can track and grab objects through a dynamic vision algorithm in this kit. With support for multiple programming environments, the vision educational kits are well suited for production line simulation in school or STEM education. Moreover, Elephant Robotics also offers different educational programs for students and teachers to enable them to better understand the principles and algorithms of robot vision, helping them operate these robot kits more easily.
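
As a generic illustration of the color-sorting task mentioned above – not the kit’s own code – a few lines of OpenCV are enough to detect red objects in the camera image so that the arm could then pick them:

    import cv2
    import numpy as np

    def find_red_objects(frame_bgr, min_area=500):
        """Return bounding boxes of red blobs in a BGR camera frame."""
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
        mask |= cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]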

DIY kits

There are two kits in the DIY series: the Drawing Kit and the Laser Engraving Kit. Users can enjoy online production, nameplate and phone case DIY production, and AI drawing with the multiple accessories in the DIY kits.

To help users quickly achieve DIY production, Elephant Robotics created software called Elephant Luban. It is a platform that generates the G-code track and provides basic cases for users. Users can select multiple functions, such as precise writing and drawing or laser engraving, with only a few clicks. For example, users can upload images they like to the software, Elephant Luban will automatically generate the path of the images and transmit it to ultraArm, and users can then choose drawing or engraving with different accessories.
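
The kind of output such a tool produces can be sketched in a few lines: a 2D path is translated into standard G-code moves. This is a simplified illustration only; the actual format generated by Elephant Luban may differ:

    def path_to_gcode(points, feed=1000):
        """Turn a list of (x, y) points in millimetres into simple G-code moves."""
        lines = ["G21", "G90"]                      # millimetres, absolute positioning
        x0, y0 = points[0]
        lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")     # rapid move to the start point
        for x, y in points[1:]:
            lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")   # drawing/engraving move
        return "\n".join(lines)

    print(path_to_gcode([(0, 0), (40, 0), (40, 20), (0, 20), (0, 0)]))   # a small rectangle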

There is no doubt that ultraArm with its different robotics kits provides a great deal of help and support to the education field. These kits offer better operating environments and conditions for students and help them learn robot programming more effectively. Elephant Robotics will continue to launch more products and projects with the aim of helping more users enjoy the world of robots.

ultraArm can now be ordered from the Elephant Robotics shop with a 20% discount using the code Ultra20.

Market launch: New Ensenso N models for 3D and robot vision

Upgraded Ensenso 3D camera series now available at IDS
 

Resolution and accuracy have almost doubled, the price has remained the same – those who choose 3D cameras from the Ensenso N series can now benefit from more advanced models. The new stereo vision cameras (N31, N36, N41, N46) can now be purchased from IDS Imaging Development Systems.

The Ensenso N 3D cameras have a compact housing (made of aluminium or plastic composite, depending on the model) with an integrated pattern projector. They are suitable for capturing both static and moving objects. The integrated projector projects a high-contrast texture onto the objects in question. A pattern mask with a random dot pattern compensates for missing or only weakly visible surface structures. This allows the cameras to deliver detailed 3D point clouds even in difficult lighting conditions.

With the Ensenso models N31, N36, N41 and N46, IDS is now launching the next generation of the previously available N30, N35, N40 and N45. Visually, the cameras do not differ from their predecessors. They do, however, use a new sensor from Sony, the IMX392. This results in a higher resolution (2.3 MP instead of 1.3 MP). All cameras are pre-calibrated and therefore easy to set up. The Ensenso selector on the IDS website helps to choose the right model.

Whether firmly installed or in mobile use on a robot arm: with Ensenso N, users opt for a 3D camera series that provides reliable 3D information for a wide range of applications. The cameras prove their worth in single item picking, for example, support remote-controlled industrial robots, are used in logistics and even help to automate high-volume laundries. IDS provides more in-depth insights into the versatile application possibilities with case studies on the company website.

Learn more: https://en.ids-imaging.com/ensenso-3d-camera-n-series.html

Robotic Vision Platform Luxonis Announces its First Open Source Personal Robot, rae

LITTLETON, Colo., 2022 /PRNewswire/ — Luxonis, a Colorado-based robotic vision platform, is thrilled to introduce rae, its first fully-formed and high-powered personal robot. Backed by a Kickstarter campaign to help support its development, rae sets itself apart by offering a multitude of features right out of the box, along with a unique degree of experimental programming potential that far exceeds other consumer robots on the market. The most recent of a long line of Luxonis innovations, rae is designed to make robotics accessible and simple for users of any experience level.

“rae is representative of our foremost goal at Luxonis: to make robotics accessible and simple for anyone, not just the tenured engineer with years of programming experience,” said Brandon Gilles, CEO of Luxonis. “A longstanding truth about robotics is that the barrier to entry sometimes feels impossibly high, but it doesn’t have to be that way. By creating rae, we want to help demonstrate the kinds of positive impacts robotics can bring to all people’s lives, whether it’s as simple as helping you find your keys, talking with your friend who uses American Sign Language, or playing with your kids.”

At its core, rae is a world-class robot, and includes AI, computer vision and machine learning all on-device. Building upon the technology of the brand’s award-winning OAK cameras, rae offers stereo depth, object tracking, motion estimation, 9-axis IMU data, neural inference, corner detection, motion zooming, and is fully compatible with the DepthAI API. With next-generation simultaneous localization and mapping (SLAM), rae can map out and navigate through unknown environments, and is preconfigured with ROS 2 for access to its robust collection of software and applications.
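
Because rae is stated to be fully compatible with the DepthAI API, working with it should feel familiar to users of Luxonis’ OAK cameras. The sketch below uses the published DepthAI 2.x Python API for a minimal colour-preview pipeline; rae-specific device setup may differ:

    import depthai as dai

    pipeline = dai.Pipeline()
    cam = pipeline.create(dai.node.ColorCamera)
    cam.setPreviewSize(640, 400)
    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("preview")
    cam.preview.link(xout.input)

    with dai.Device(pipeline) as device:
        queue = device.getOutputQueue("preview", maxSize=4, blocking=False)
        frame = queue.get().getCvFrame()    # BGR numpy array, e.g. for OpenCV display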

Featuring a full suite of standard applications, rae offers games like follow me and hide and seek, and useful tools like barcode/QR code scanning, license plate reading, fall detection, time-lapse recording, emotion recognition, object finding, sign language interpretation, and a security alert mode. All applications are controllable through rae’s mobile app, which can be used from anywhere in the world. Users can also link with Luxonis’ cloud platform, RobotHub, to manage customizations, download and install user-created applications, and collaborate with Luxonis’ user community.

Crowdfunding campaigns and developing trailblazing products that roboticists love are part of Luxonis’ DNA, as its track record of two previously successful campaigns shows. Luxonis’ first Kickstarter for the OpenCV AI Kit in 2021 raised $1.3 million with 6,564 backers, and the second, for the OAK-D Lite, raised $1.02 million with 7,988 backers. With the support of robot hobbyists and brand loyalists, as well as new target backers such as educators, students, and parents, rae is on track to make an impact as a revolutionary personal robot that isn’t limited to niche demographics.

Pricing for rae starts at $299 and international shipping is available.

Interested backers can learn more about the campaign here.

For more information about Luxonis, visit https://www.luxonis.com/ 

About Luxonis:

The mission at Luxonis is robotic vision, made simple. They are working to improve the engineering efficiency of the world through their industry-leading and award-winning camera systems, which embed AI, CV and machine learning into a high-performing, compact package. Luxonis offers full-stack solutions spanning hardware, firmware, software and a growing cloud-based management platform, and prioritizes customer success above all else through the continued development of their DepthAI ecosystem.

Quickly available in six different housing variants | IDS adds numerous new USB3 cameras to its product range

Anyone who needs quickly available industrial cameras for image processing projects is not faced with an easy task due to the worldwide chip shortage. IDS Imaging Development Systems GmbH has therefore been pushing the development of alternative USB3 hardware generations with available, advanced semiconductor technology in recent months and has consistently acquired components for this purpose. Series production of new industrial cameras with USB3 interface and Vision Standard compatibility has recently started. In the CP and LE camera series of the uEye+ product line, customers can choose the right model for their applications from a total of six housing variants and numerous CMOS sensors.

The models of the uEye CP family are particularly suitable for space-critical applications thanks to their distinctive, compact magnesium housing with dimensions of only 29 x 29 x 29 millimetres and a weight of around 50 grams. Customers can choose from global and rolling shutter sensors from 0.5 to 20 MP in this product line. Those who prefer a board-level camera instead should take a look at the versatile uEye LE series. These cameras are available with coated plastic housings and C-/CS-mount lens flanges as well as board versions with or without C-/CS-mount or S-mount lens connections. They are therefore particularly suitable for projects in small device construction and integration in embedded vision systems. IDS initially offers the global shutter Sony sensors IMX273 (1.6 MP) and IMX265 (3.2 MP) as well as the rolling shutter sensors IMX290 (2.1 MP) and IMX178 (6.4 MP). Other sensors will follow.

The USB3 cameras are perfectly suited for use with IDS peak thanks to the vision standard transport protocol USB3 Vision®. The Software Development Kit includes programming interfaces in C, C++, C# with .NET and Python as well as tools that simplify the programming and operation of IDS cameras while optimising factors such as compatibility, reproducible behaviour and stable data transmission. Special convenience features reduce application code and provide an intuitive programming experience, enabling quick and easy commissioning of the cameras.
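
A first test of such a camera from Python typically only takes a few lines. The following sketch follows the pattern of the IDS peak Python samples; exact class and method names should be verified against the SDK documentation shipped with IDS peak:

    from ids_peak import ids_peak

    ids_peak.Library.Initialize()
    try:
        device_manager = ids_peak.DeviceManager.Instance()
        device_manager.Update()                        # enumerate connected cameras
        descriptors = device_manager.Devices()
        if descriptors:
            print("Found camera:", descriptors[0].DisplayName())
            device = descriptors[0].OpenDevice(ids_peak.DeviceAccessType_Control)
    finally:
        ids_peak.Library.Close()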

Learn more: https://en.ids-imaging.com/news-article/usb3-cameras-series-production-launched.html