IDS NXT malibu: Camera combines advanced consumer image processing and AI technology from Ambarella and industrial quality from IDS

New class of edge AI industrial cameras allows AI overlays in live video streams
 

IDS NXT malibu marks a new class of intelligent industrial cameras that act as edge devices and generate AI overlays in live video streams. For the new camera series, IDS Imaging Development Systems has collaborated with Ambarella, a leading developer of visual AI products, making consumer technology available in industrial quality for demanding applications. The camera features Ambarella's CVflow® AI vision system on chip and takes full advantage of the SoC's advanced image processing and on-camera AI capabilities. Consequently, image analysis can be performed at high speed (>25 fps) and displayed as live overlays in compressed video streams delivered to end devices via the RTSP protocol.

Thanks to the SoC’s integrated image signal processor (ISP), the information captured by the light-sensitive onsemi AR0521 image sensor is processed directly on the camera and accelerated by its integrated hardware. The camera also offers helpful automatic features, such as brightness, noise and colour correction, which significantly improve image quality.

"With IDS NXT malibu, we have developed an industrial camera that can analyse images in real time and incorporate results directly into video streams," explained Kai Hartmann, Product Innovation Manager at IDS. "The combination of on-camera AI with compression and streaming is a novelty in the industrial setting, opening up new application scenarios for intelligent image processing."

These on-camera capabilities were made possible through close collaboration between IDS and Ambarella, leveraging the companies' strengths in industrial camera and consumer technology. "We are proud to work with IDS, a leading company in industrial image processing," said Jerome Gigot, senior director of marketing at Ambarella. "The IDS NXT malibu represents a new class of industrial-grade edge AI cameras, achieving fast inference times and high image quality via our CVflow AI vision SoC."

IDS NXT malibu has entered series production. The camera is part of the IDS NXT all-in-one AI system. Optimally coordinated components – from the camera to the AI vision studio – accompany the entire workflow. This includes the acquisition of images and their labelling, through to the training of a neural network and its execution on the IDS NXT series of cameras.

Robot plays "Rock, Paper, Scissors" – Part 1/3

Gesture recognition with intelligent camera

I am passionate about technology and robotics. Here in my own blog, I am always taking on new tasks. However, I have hardly ever worked with image processing. Then a colleague's LEGO® MINDSTORMS® robot, which can recognize the rock, paper or scissors gestures of a hand using several different sensors, gave me an idea: "The robot should be able to 'see'." Until now, each gesture had to be made at a very specific point in front of the robot in order to be reliably recognized. Several sensors were needed for this, which made the system inflexible and dampened the joy of playing. Can image processing solve this task more elegantly?

Rock-Paper-Scissors with Robot Inventor by the Seshan Brothers – the robot that inspired me to start this project

From the idea to implementation

In my search for a suitable camera, I came across IDS NXT – a complete system for intelligent image processing. It fulfilled all my requirements and, thanks to artificial intelligence, offered much more besides pure gesture recognition. My interest was piqued – especially because the evaluation of the images and the communication of the results take place directly on the camera, without an additional PC. In addition, the IDS NXT Experience Kit came with all the components needed to start building the application immediately, without any prior knowledge of AI.

I took the idea further and began to develop a robot that would eventually play "Rock, Paper, Scissors" following the classic procedure: the (human) player is asked to perform one of the familiar gestures (rock, paper, scissors) in front of the camera. At this point, the virtual opponent has already determined its gesture at random. The move is evaluated in real time and the winner is displayed.

The first step: Gesture recognition by means of image processing

But until then, some intermediate steps were necessary. I began by implementing gesture recognition using image processing – new territory for me as a robotics fan. However, with the help of IDS NXT lighthouse – a cloud-based AI vision studio – this was easier to realize than expected. Here, ideas evolve into complete applications. For this purpose, neural networks are trained with application images containing the necessary product knowledge – in this case the individual gestures from different perspectives – and packaged into a suitable application workflow.

The training process was very easy: I simply followed the step-by-step wizard in IDS NXT lighthouse after taking several hundred pictures of my hands making rock, paper or scissors gestures from different angles and against different backgrounds. The first trained AI was already able to recognize the gestures reliably. This works for both left- and right-handers with a recognition rate of approx. 95%. Probabilities are returned for the labels "Rock", "Paper", "Scissor" and "Nothing". A satisfactory result. But what happens with the data obtained?
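To make the result concrete: logic along the following lines turns such per-label probabilities into a single gesture decision. This is a hypothetical sketch (the label names come from the article; the probability values, function name and threshold are my own assumptions, not part of the IDS NXT API):

```python
# Hypothetical sketch: interpreting per-label probabilities returned by a
# trained classifier. Label names follow the article; values are invented.
def best_gesture(probabilities, threshold=0.8):
    """Return the most likely gesture, or 'Nothing' if confidence is too low."""
    label, confidence = max(probabilities.items(), key=lambda item: item[1])
    return label if confidence >= threshold else "Nothing"

print(best_gesture({"Rock": 0.03, "Paper": 0.95, "Scissor": 0.01, "Nothing": 0.01}))
# -> Paper
```

A confidence threshold like this is a common way to map an uncertain network output ("no clear gesture visible") onto the explicit "Nothing" class.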

Further processing

The recognized gestures can be processed further by a specially created vision app. For this, the captured image of the respective gesture – after evaluation by the AI – is passed on to the app. The app "knows" the rules of the game and can therefore decide which gesture beats another and determine the winner. In the first stage of development, the app will also simulate the opponent. All of this is currently in the making and will be implemented in the next step on the way to a "Rock, Paper, Scissors"-playing robot.
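The rules such a vision app has to "know" are small enough to sketch in a few lines. This is only an illustrative draft of the game logic (function and variable names are my own, not from the actual vision app); "Scissor" follows the label spelling used by the trained network:

```python
import random

# Each gesture beats exactly one other gesture.
BEATS = {"Rock": "Scissor", "Scissor": "Paper", "Paper": "Rock"}

def decide(player, robot):
    """Apply the game rules and name the winner of one move."""
    if player == robot:
        return "draw"
    return "player wins" if BEATS[player] == robot else "robot wins"

def play_round(player_gesture):
    """Simulate the virtual opponent's random move, then evaluate the round."""
    robot_gesture = random.choice(sorted(BEATS))
    return robot_gesture, decide(player_gesture, robot_gesture)
```

Keeping the rule table (`BEATS`) separate from the random opponent makes the evaluation deterministic and easy to test, while the simulated opponent can later be replaced by the real robot's move.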

From play to everyday use

At first, the project is more of a gimmick. But what could come out of it? A gambling machine? Or maybe even an AI-based sign language translator?

To be continued…

uEye+ Warp10 cameras from IDS combine high speed and high resolution

See more, see better! New 10GigE cameras with onsemi XGS sensors up to 45 MP

In industrial automation, the optimisation of processes is often primarily about higher efficiency and accuracy. 10GigE cameras, such as those in the uEye Warp10 camera family from IDS Imaging Development Systems GmbH, set standards here. They enable high-speed image processing in Gigabit Ethernet-based networks even with large amounts of data and over long cable distances. For even more precision, the company is now introducing new models with sensors up to 45 MP that reliably capture even the smallest details.

The new industrial cameras are equipped with the onsemi global shutter sensors XGS20000 (20 MP, 1.3″), XGS30000 (30 MP, 1.5″) and XGS45000 (45 MP, 2″). They are primarily used in high-precision quality assurance tasks when motion blur needs to be minimised and data needs to be quickly available on the network. The 10GigE cameras offer up to ten times the transmission bandwidth of 1GigE cameras and are about twice as fast as cameras with USB3 interfaces.
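As a rough sanity check on these interface figures, one can estimate the frame-rate ceiling that raw link bandwidth alone imposes on a 45 MP sensor. This is back-of-the-envelope arithmetic (8 bits per pixel assumed, protocol overhead and sensor limits ignored), not an IDS specification:

```python
# Frame-rate ceiling from raw link bandwidth alone (illustrative only:
# assumes 8 bits per pixel and ignores protocol overhead and sensor limits).
def max_fps(link_gbit_per_s, megapixels, bits_per_pixel=8):
    bits_per_frame = megapixels * 1e6 * bits_per_pixel
    return link_gbit_per_s * 1e9 / bits_per_frame

for name, gbps in [("1GigE", 1), ("USB3 (5 Gbit/s)", 5), ("10GigE", 10)]:
    print(f"{name}: ~{max_fps(gbps, 45):.1f} fps at 45 MP")
```

The arithmetic illustrates why the 10GigE interface matters at these resolutions: at 45 MP, a 1 Gbit/s link tops out below 3 fps, while 10 Gbit/s allows roughly 27 fps before overhead.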

Accuracy and speed go hand in hand when it comes to these models. This has advantages for many applications, e.g. in inspection systems for status and end checks at production lines with high cycle rates – such as semiconductor or solar panel inspection. Users also benefit from the fact that even large scenes and image sections can be precisely monitored and evaluated with these cameras. This proves its worth, for example, in logistics tasks for incoming goods and in the warehouse.

The large format onsemi XGS sensors require correspondingly large optics. Therefore, unlike the previous uEye+ Warp10 models, they are equipped with a TFL mount (M35x0.75). For secure mounting, TFL lenses can be firmly screwed to the cameras. The flange focal distance is standardised and, at 17.526 mm, the same as for the previously available cameras with C-mount. To ensure optimal image quality, IDS recommends the use of Active Heat Sinks. They can be mounted both on and under the models, reduce the operating temperature and are optionally available as accessories.

More information: https://en.ids-imaging.com/ueye-warp10.html

Free update makes third deep learning method available for IDS NXT

Update for the AI system IDS NXT: cameras can now also detect anomalies

In quality assurance, it is often necessary to reliably detect deviations from the norm. Industrial cameras have a key role in this, capturing images of products and analysing them for defects. If the error cases are not known in advance or are too diverse, however, rule-based image processing reaches its limits. By contrast, this challenge can be reliably solved with the AI method Anomaly Detection. The new, free IDS NXT 3.0 software update from IDS Imaging Development Systems makes the method available to all users of the AI vision system with immediate effect.

The intelligent IDS NXT cameras are now able to detect anomalies independently and thereby optimise quality assurance processes. For this purpose, users train a neural network that is then executed on the programmable cameras. To achieve this, IDS offers the AI Vision Studio IDS NXT lighthouse, which is characterised by easy-to-use workflows and seamless integration into the IDS NXT ecosystem. Customers can even use only "GOOD" images for training. This means that relatively little training data is required compared to the other AI methods Object Detection and Classification. This simplifies the development of an AI vision application and is well suited for evaluating the potential of AI-based image processing for projects in the company.

Another highlight of the release is the code reading function in the block-based editor. This enables IDS NXT cameras to locate, identify and read out different types of code and the required parameters. Attention maps in IDS NXT lighthouse also provide more transparency in the training process. They illustrate which areas in the image have an impact on classification results. In this way, users can identify and eliminate training errors before a neural network is deployed in the cameras.

IDS NXT is a comprehensive AI-based vision system consisting of intelligent cameras plus software environment that covers the entire process from the creation to the execution of AI vision applications. The software tools make AI-based vision usable for different target groups – even without prior knowledge of artificial intelligence or application programming. In addition, expert tools enable open-platform programming, making IDS NXT cameras highly customisable and suitable for a wide range of applications.

More information: www.ids-nxt.com

Elephant Robotics launches ultraArm with various solutions for education

In the past year, Elephant Robotics has launched various products to meet more user needs and help users unlock more potential in research and education. To help users learn about machine vision, Elephant Robotics launched the AI Kit, which works with multiple robotic arms to perform AI recognition and grabbing. In June 2022, Elephant Robotics released the mechArm to help individual makers and students learn industrial robotics more effectively.

Nowadays, robotic arms are used in an increasingly wide range of applications, such as industry, medicine and commercial exhibitions. At the very end of 2022, ultraArm was launched – and this time, Elephant Robotics is not just releasing a new robotic arm, but bringing five sets of solutions for education and R&D.

Small but powerful

As the core product of this launch, ultraArm is a small 4-axis desktop robotic arm. It is designed with a classic metal structure and occupies only the area of a sheet of A5 paper. It is the first Elephant Robotics arm equipped with high-performance stepper motors, runs stably and offers a repeatability of ±0.1 mm. What's more, ultraArm is available with five kits, including a slide rail, a conveyor belt and cameras.

Multiple environments supported

As a preferred tool for the education field, ultraArm supports all major programming languages, including Python, Arduino and C++, and can be programmed on macOS, Windows and Linux. Individual makers who are new to robotics can learn robotic programming with myBlockly, a visualization tool that lets users drag and drop code blocks.

Moreover, ultraArm supports ROS1 & ROS2. In the ROS environment, users can control ultraArm and verify algorithms in a virtual environment, improving experiment efficiency. With the support of ROS2, users gain access to additional functions and objects during development.

Five robotic kits for robot vision & DIY

In the past year, Elephant Robotics found that many users had to spend a large amount of time creating accessories or kits to work with robotic arms. Therefore, to provide more solutions in different fields, ultraArm comes with five robotic kits, divided into two series: vision educational kits and DIY kits. These kits help users, especially students, program more easily for a better hands-on learning experience with AI robot vision and DIY robotic projects.

Vision educational kits

Combined with vision functions, robotic arms can be used for more applications in industry, medicine, education, etc. In robotics education, collaborative robotic arms with vision capabilities allow students to better learn about artificial intelligence. Elephant Robotics has launched three kits for machine vision education – the Vision & Picking Kit, the Vision & Conveyor Belt Kit and the Vision & Slide Rail Kit – to provide more choices and support to the education industry. With the camera and built-in AI algorithms (YOLO, SIFT, ORB, etc.), ultraArm can realise different artificial intelligence recognition applications using different recognition methods, so users can select the kit based on their needs.

With the Vision & Picking Kit, users can learn about colour & image recognition, intelligent grasping, robot control principles, etc. With the Vision & Conveyor Belt Kit, the robotic arm senses the distance of materials in order to identify, grab and classify the objects on the belt; users can easily create simulated industrial applications with this kit, such as colour sorting. The Vision & Slide Rail Kit allows deeper learning in machine vision, because the robot can track and grab objects using a dynamic vision algorithm. With the multiple programming environments supported, the vision educational kits are well suited for school or STEM production line simulation. Moreover, Elephant Robotics also offers educational programs for students and teachers to help them better understand the principles and algorithms of robot vision and operate these robot kits more easily.

DIY kits

There are two kits in the DIY series: the Drawing Kit and the Laser Engraving Kit. With the multiple accessories in the DIY kits, users can enjoy online production, nameplate and phone case DIY production, and AI drawing.

To help users quickly achieve DIY production, Elephant Robotics created software called Elephant Luban. It is a platform that generates the G-code track and provides basic example cases for users. Users can select functions such as precise writing and drawing or laser engraving with only a few clicks. For example, users can upload images they like to the software; Elephant Luban automatically generates the path for the images and transmits it to ultraArm, and users can then choose drawing or engraving with the different accessories.
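The core of such a tool is turning a drawing path into machine moves. The following is a hypothetical minimal sketch of that step, not the actual Elephant Luban output (the G-code commands are standard, but the function, coordinate format and comments are my own assumptions):

```python
# Hypothetical sketch: converting a polyline drawing path into simple
# absolute-coordinate G-code, roughly the kind of track a tool like
# Elephant Luban generates (format and names are assumptions).
def polyline_to_gcode(points, feed_rate=1000):
    """Turn a list of (x, y) points in mm into a list of G-code lines."""
    lines = ["G21 ; units: millimetres", "G90 ; absolute positioning"]
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f} ; rapid move to start")
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed_rate} ; linear drawing move")
    return lines

gcode = polyline_to_gcode([(0, 0), (10, 0), (10, 10)])
```

Each image would first be traced into such polylines; the rapid move (`G0`) positions the pen or laser, while the feed moves (`G1`) do the actual drawing or engraving.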

There is no doubt that ultraArm with its different robotics kits provides a great deal of help and support to the education field. These kits offer better operating environments and conditions for students and help them learn robotic programming more effectively. Elephant Robotics will continue to launch more products and projects with the aim of helping more users enjoy the world of robots.

ultraArm can now be ordered from the Elephant Robotics shop at a 20% discount with the code Ultra20.

Market launch: New Ensenso N models for 3D and robot vision

Upgraded Ensenso 3D camera series now available at IDS
 

Resolution and accuracy have almost doubled, the price has remained the same – those who choose 3D cameras from the Ensenso N series can now benefit from more advanced models. The new stereo vision cameras (N31, N36, N41, N46) can now be purchased from IDS Imaging Development Systems.

The Ensenso N 3D cameras have a compact housing (made of aluminium or plastic composite, depending on the model) with an integrated pattern projector. They are suitable for capturing both static and moving objects. The integrated projector casts a high-contrast texture onto the objects in question: a random dot pattern mask supplements missing or only weakly visible surface structures. This allows the cameras to deliver detailed 3D point clouds even in difficult lighting conditions.
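The projected texture matters because stereo cameras recover depth from the disparity between the two images, which requires matchable surface detail. The underlying relation is the textbook triangulation formula (generic stereo vision, not an Ensenso-specific equation; the numbers below are invented for illustration):

```python
# Textbook stereo triangulation (generic, not Ensenso-specific): depth Z
# follows from focal length f (in pixels), stereo baseline B and the
# pixel disparity d between the left and right image as Z = f * B / d.
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    return focal_px * baseline_mm / disparity_px

# Illustrative values: f = 1400 px, B = 100 mm, d = 20 px
z_mm = depth_from_disparity(1400, 100, 20)
print(z_mm)  # 7000.0 -> the matched point lies about 7 m from the camera
```

Because disparity appears in the denominator, surfaces without visible texture yield no matches at all, which is exactly the gap the projected random dot pattern fills.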

With the Ensenso models N31, N36, N41 and N46, IDS is now launching the next generation of the previously available N30, N35, N40 and N45. Visually, the cameras do not differ from their predecessors. They do, however, use a new sensor from Sony, the IMX392. This results in a higher resolution (2.3 MP instead of 1.3 MP). All cameras are pre-calibrated and therefore easy to set up. The Ensenso selector on the IDS website helps to choose the right model.

Whether firmly installed or in mobile use on a robot arm: with Ensenso N, users opt for a 3D camera series that provides reliable 3D information for a wide range of applications. The cameras prove their worth in single item picking, for example, support remote-controlled industrial robots, are used in logistics and even help to automate high-volume laundries. IDS provides more in-depth insights into the versatile application possibilities with case studies on the company website.

Learn more: https://en.ids-imaging.com/ensenso-3d-camera-n-series.html

Robotic Vision Platform Luxonis Announces its First Open Source Personal Robot, rae

LITTLETON, Colo., 2022 /PRNewswire/ — Luxonis, a Colorado-based robotic vision platform, is thrilled to introduce rae, its first fully-formed and high-powered personal robot. Backed by a Kickstarter campaign to help support its development, rae sets itself apart by offering a multitude of features right out of the box, along with a unique degree of experimental programming potential that far exceeds other consumer robots on the market. The most recent of a long line of Luxonis innovations, rae is designed to make robotics accessible and simple for users of any experience level.

"rae is representative of our foremost goal at Luxonis: to make robotics accessible and simple for anyone, not just the tenured engineer with years of programming experience," said Brandon Gilles, CEO of Luxonis. "A longstanding truth about robotics is that the barrier to entry sometimes feels impossibly high, but it doesn't have to be that way. By creating rae, we want to help demonstrate the kinds of positive impacts robotics can bring to all people's lives, whether it's as simple as helping you find your keys, talking with your friend who uses American Sign Language, or playing with your kids."

At its core, rae is a world-class robot, and includes AI, computer vision and machine learning all on-device. Building upon the technology of the brand’s award-winning OAK cameras, rae offers stereo depth, object tracking, motion estimation, 9-axis IMU data, neural inference, corner detection, motion zooming, and is fully compatible with the DepthAI API. With next-generation simultaneous localization and mapping (SLAM), rae can map out and navigate through unknown environments, and is preconfigured with ROS 2 for access to its robust collection of software and applications.

Featuring a full suite of standard applications, rae offers games like follow me and hide and seek, and useful tools like barcode/QR code scanning, license plate reading, fall detection, time-lapse recording, emotion recognition, object finding, sign language interpretation, and a security alert mode. All applications are controllable through rae's mobile app, which users can use from anywhere around the world. They can also link with Luxonis' cloud platform, RobotHub, to manage customizations, download and install user-created applications, and collaborate with Luxonis' user community.

Crowdfunding campaigns and developing trailblazing products that roboticists love are in Luxonis' DNA, as evidenced by its track record of two previously successful campaigns. Luxonis' first Kickstarter, for the OpenCV AI Kit in 2021, raised $1.3 million from 6,564 backers, and the second, for the OAK-D Lite, raised $1.02 million from 7,988 backers. With the support of robot hobbyists and brand loyalists, as well as new target backers such as educators, students, and parents, rae is en route to leave its mark as a revolutionary personal robot that isn't limited to niche demographics.

Pricing for rae starts at $299 and international shipping is available.

Interested backers can learn more about the campaign here.

For more information about Luxonis, visit https://www.luxonis.com/ 

About Luxonis:

The mission at Luxonis is robotic vision, made simple. They are working to improve the engineering efficiency of the world through their industry leading and award winning camera systems, which embed AI, CV, and machine learning into a high performing, compact package. Luxonis offers full-stack solutions stretching from hardware, firmware, software, and a growing cloud-based management platform, and prioritizes customer success above all else through the continued development of their DepthAI ecosystem.

Quickly available in six different housing variants | IDS adds numerous new USB3 cameras to its product range

Anyone who needs quickly available industrial cameras for image processing projects is not faced with an easy task due to the worldwide chip shortage. IDS Imaging Development Systems GmbH has therefore been pushing the development of alternative USB3 hardware generations with available, advanced semiconductor technology in recent months and has consistently acquired components for this purpose. Series production of new industrial cameras with USB3 interface and Vision Standard compatibility has recently started. In the CP and LE camera series of the uEye+ product line, customers can choose the right model for their applications from a total of six housing variants and numerous CMOS sensors.

The models of the uEye CP family are particularly suitable for space-critical applications thanks to their distinctive, compact magnesium housing with dimensions of only 29 x 29 x 29 millimetres and a weight of around 50 grams. Customers can choose from global and rolling shutter sensors from 0.5 to 20 MP in this product line. Those who prefer a board-level camera instead should take a look at the versatile uEye LE series. These cameras are available with coated plastic housings and C-/CS-mount lens flanges as well as board versions with or without C-/CS-mount or S-mount lens connections. They are therefore particularly suitable for projects in small device construction and integration in embedded vision systems. IDS initially offers the global shutter Sony sensors IMX273 (1.6 MP) and IMX265 (3.2 MP) as well as the rolling shutter sensors IMX290 (2.1 MP) and IMX178 (6.4 MP). Other sensors will follow.

The USB3 cameras are perfectly suited for use with IDS peak thanks to the vision standard transport protocol USB3 Vision®. The Software Development Kit includes programming interfaces in C, C++, C# with .NET and Python as well as tools that simplify the programming and operation of IDS cameras while optimising factors such as compatibility, reproducible behaviour and stable data transmission. Special convenience features reduce application code and provide an intuitive programming experience, enabling quick and easy commissioning of the cameras.

Learn more: https://en.ids-imaging.com/news-article/usb3-cameras-series-production-launched.html

2D, 3D and AI: IDS presents numerous new products and camera developments at VISION

Today, cameras are often more than just suppliers of images – they can recognise objects, generate results or trigger follow-up processes. Visitors to VISION Stuttgart, Germany, can find out about the possibilities offered by state-of-the-art camera technology at IDS booth 8C60. There, they will discover the next level of the all-in-one AI system IDS NXT. The company is not only expanding the machine learning methods to include anomaly detection, but is also developing a significantly faster hardware platform. IDS is also unveiling the next stage of development for its new uEye Warp10 cameras. By combining a fast 10GigE interface and TFL mount, large-format sensors with up to 45 MP can be integrated, opening up completely new applications. The trade fair innovations also include prototypes of the smallest IDS board-level camera and a new 3D camera model in the Ensenso product line.

IDS NXT: More than artificial intelligence
IDS NXT is a holistic system with a variety of workflows and tools for realising custom AI vision applications. The intelligent IDS NXT cameras can process tasks "on device" and deliver image processing results themselves. They can also trigger subsequent processes directly. The range of tasks is determined by apps that run on the cameras. Their functionality can therefore be changed at any time. This is supported by a cloud-based AI Vision Studio, with which users can not only train neural networks, but now also create vision apps. The system offers both beginners and professionals enormous scope for designing AI vision apps. At VISION, the company shows how artificial intelligence is redefining the performance spectrum of industrial cameras and gives an outlook on further developments in the hardware and software sector.

uEye Warp10: High speed for applications
With 10 times the transmission bandwidth of 1GigE cameras and about twice the speed of cameras with USB 3.0 interfaces, the recently launched uEye Warp10 camera family with 10GigE interface is the fastest in the IDS range. At VISION, the company is demonstrating that these models will not only set standards in terms of speed, but also resolution. Thanks to the TFL mount, it becomes possible to integrate much higher resolution sensors than before. This means that even detailed inspections with high clock rates and large amounts of data will be feasible over long cable distances. The industrial lens mount allows the cameras to fully utilise the potential of large format (larger than 1.1″) and high resolution sensors (up to 45 MP).

uEye XLS: Smallest board-level camera with cost-optimised design
IDS is presenting prototypes of an additional member of its low-cost portfolio at the fair. The name uEye XLS indicates that it is a small variant of the popular uEye XLE series. The models will be the smallest IDS board-level cameras in the range. They are aimed at users who require particularly low-cost, extremely compact cameras – with or without lens holders – in large quantities, for example for embedded applications. These users can look forward to Vision Standard-compliant project cameras with various global shutter sensors and trigger options.

Ensenso C: Powerful 3D camera for large-volume applications
3D camera technology is an indispensable component in many automation projects. Ensenso C is a new variant in the Ensenso 3D product line that scores with a long baseline and high resolution, while at the same time offering a cost-optimised design. Customers receive a fully integrated, pre-configured 3D camera system for large-volume applications that is quickly ready for use and provides even better 3D data thanks to RGB colour information. A prototype will be available at the fair.

Learn more: https://en.ids-imaging.com/ueye-warp10.html

Precisely capture and monitor high-speed processes

Faster than any other IDS industrial camera: uEye Warp10 with 10GigE

With high speed to new spheres! When fast-moving scenes need to be captured in all their details, a high-performance transmission interface is essential in addition to the right sensor. With uEye Warp10, IDS Imaging Development Systems GmbH is launching a new camera family that, thanks to 10GigE, transmits data in the Gigabit Ethernet-based network at a very high frame rate and virtually without delay. The first models with the IMX250 (5 MP), IMX253 (12 MP) and IMX255 (8.9 MP) sensors from the Sony Pregius series are now available.

Compared to 1GigE cameras, the uEye Warp10 models achieve up to 10 times the transmission bandwidth; they are also about twice as fast as cameras with USB 3.0 interfaces. The advantages become particularly apparent when scenes are to be captured, monitored and analysed in all details and without motion blur. Consequently, applications such as inspection applications on the production line with high clock speeds or image processing systems in sports analysis benefit from the fast data transfer.

The GigE Vision standard-compliant industrial cameras enable high-speed data transfer over cable lengths of up to 100 metres without repeaters or optical extenders via standard CAT6A cables (under 40 metres also CAT5E) with RJ45 connectors. The robust uEye Warp10 cameras are initially offered with C-mount lens holders. IDS is already working on additional models. In the future, versions with TFL mount (M35 x 0.75) will also be available for use with particularly high-resolution sensors up to 45 MP. The cameras are supported by the powerful IDS peak software development kit.

In the IDS Vision Channel, the experts from IDS present the features and possible applications of the new camera family in detail. The video is available here free of charge. All you need is a free IDS website user account.
Learn more: https://en.ids-imaging.com/ueye-warp10.html