IDS NXT malibu: Camera combines advanced consumer image processing and AI technology from Ambarella and industrial quality from IDS

New class of edge AI industrial cameras allows AI overlays in live video streams
 

IDS NXT malibu marks a new class of intelligent industrial cameras that act as edge devices and generate AI overlays in live video streams. For the new camera series, IDS Imaging Development Systems has collaborated with Ambarella, a leading developer of visual AI products, making consumer technology available in industrial quality for demanding applications. The camera features Ambarella’s CVflow® AI vision system on chip and takes full advantage of the SoC’s advanced image processing and on-camera AI capabilities. As a result, image analysis can be performed at high speed (>25 fps) and displayed as live overlays in compressed video streams delivered to end devices via the RTSP protocol.

Thanks to the SoC’s integrated image signal processor (ISP), the information captured by the light-sensitive onsemi AR0521 image sensor is processed directly on the camera and accelerated by its integrated hardware. The camera also offers helpful automatic features, such as brightness, noise and colour correction, which significantly improve image quality.

“With IDS NXT malibu, we have developed an industrial camera that can analyse images in real time and incorporate results directly into video streams,” explained Kai Hartmann, Product Innovation Manager at IDS. “The combination of on-camera AI with compression and streaming is a novelty in the industrial setting, opening up new application scenarios for intelligent image processing.”

These on-camera capabilities were made possible through close collaboration between IDS and Ambarella, leveraging the companies’ strengths in industrial camera and consumer technology. “We are proud to work with IDS, a leading company in industrial image processing,” said Jerome Gigot, senior director of marketing at Ambarella. “The IDS NXT malibu represents a new class of industrial-grade edge AI cameras, achieving fast inference times and high image quality via our CVflow AI vision SoC.”

IDS NXT malibu has entered series production. The camera is part of the IDS NXT all-in-one AI system, whose optimally coordinated components – from the camera to the AI vision studio – accompany the entire workflow: from image acquisition and labelling through to training a neural network and running it on the IDS NXT camera series.

Robot plays “Rock, Paper, Scissors” – Part 1/3

Gesture recognition with intelligent camera

I am passionate about technology and robotics, and here on my own blog I am always taking on new tasks. But I have hardly ever worked with image processing. However, a colleague’s LEGO® MINDSTORMS® robot, which can recognize the rock, paper or scissors gestures of a hand with several different sensors, gave me an idea: the robot should be able to “see”. Until now, each gesture had to be made at a very specific point in front of the robot in order to be reliably recognized. Several sensors were needed for this, which made the system inflexible and dampened the joy of playing. Could image processing solve this task more elegantly?

Rock-Paper-Scissors with Robot Inventor by the Seshan Brothers – the robot that inspired this project.

From the idea to implementation

In my search for a suitable camera, I came across IDS NXT – a complete system for intelligent image processing. It fulfilled all my requirements and, thanks to artificial intelligence, offered much more besides pure gesture recognition. My interest was piqued – especially because the evaluation of the images and the communication of the results take place directly on, or through, the camera – without an additional PC! In addition, the IDS NXT Experience Kit came with all the components needed to get the application up and running immediately – without any prior knowledge of AI.

I took the idea further and began to develop a robot that would eventually play “Rock, Paper, Scissors” – following a process much like the classic game: the (human) player is asked to perform one of the familiar gestures (rock, paper or scissors) in front of the camera. The virtual opponent has already chosen its gesture at random by this point. The move is evaluated in real time and the winner is displayed.

The first step: Gesture recognition by means of image processing

But some intermediate steps were necessary to get there. I began by implementing gesture recognition using image processing – new territory for me as a robotics fan. However, with the help of IDS Lighthouse – a cloud-based AI vision studio – this was easier to realize than expected. There, ideas evolve into complete applications: neural networks are trained with application images that carry the necessary domain knowledge – in this case the individual gestures from different perspectives – and are packaged into a suitable application workflow.

The training process was very easy: I simply used IDS Lighthouse’s step-by-step wizard after taking several hundred pictures of my hands making rock, paper or scissors gestures from different angles and against different backgrounds. The first trained AI was immediately able to recognize the gestures reliably. This works for both left- and right-handers with a recognition rate of approx. 95%, returning probabilities for the labels “Rock”, “Paper”, “Scissor” and “Nothing”. A satisfying result. But what happens with the data obtained?
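To give an idea of how such classification output might be consumed, here is a minimal Python sketch. It assumes the network returns a probability per label, as described above; the function name and the 0.6 confidence threshold are illustrative choices of mine, not values from the IDS NXT system.

```python
# Hypothetical post-processing of per-label probabilities from a gesture
# classifier. Labels follow the ones mentioned in the text; the threshold
# is an illustrative value, not one used by IDS NXT.

def classify_gesture(probabilities, threshold=0.6):
    """Return the most likely gesture, or 'Nothing' if confidence is low."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if label != "Nothing" and confidence < threshold:
        return "Nothing"
    return label

result = classify_gesture({"Rock": 0.93, "Paper": 0.04, "Scissor": 0.02, "Nothing": 0.01})
print(result)  # Rock
```

Falling back to “Nothing” below a threshold is a common way to avoid acting on uncertain predictions, e.g. while the hand is still moving into position.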

Further processing

Further processing of the recognized gestures can be done by a specially created vision app. For this, the captured image of the respective gesture – after evaluation by the AI – is passed on to the app. The app “knows” the rules of the game and can thus decide which gesture beats another, then determine the winner. In the first stage of development, the app will also simulate the opponent. All of this is currently in the making and will be implemented in the next step on the way to a “Rock, Paper, Scissors”-playing robot.
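The game rules the vision app would encode can be sketched in a few lines of Python. This is not the actual app – the function and label names are my own – just a minimal illustration of the decision logic, including the randomly simulated opponent mentioned above.

```python
import random

# A minimal sketch of the "Rock, Paper, Scissors" rules the vision app
# would encode. Names are illustrative, not taken from the actual app.

BEATS = {"Rock": "Scissor", "Scissor": "Paper", "Paper": "Rock"}

def play_round(player_gesture, opponent_gesture=None):
    """Decide one round; the opponent's move is drawn at random if not given."""
    if opponent_gesture is None:
        opponent_gesture = random.choice(list(BEATS))
    if player_gesture == opponent_gesture:
        return "Draw", opponent_gesture
    winner = "Player" if BEATS[player_gesture] == opponent_gesture else "Opponent"
    return winner, opponent_gesture

print(play_round("Rock", "Scissor"))  # ('Player', 'Scissor')
```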

From play to everyday use

At first, the project is more of a gimmick. But what could come out of it? A gambling machine? Or maybe even an AI-based sign language translator?

To be continued…

Geek Club and CircuitMess Launch a NASA-inspired DIY Perseverance Educational Space Rover Kit

After a series of successful Kickstarter campaigns, Geek Club and CircuitMess launch their most ambitious project yet – a NASA-approved, AI-powered scale-model replica of the Perseverance Space Rover

Zagreb, Croatia – October 31st, 2023. – Today, Geek Club and CircuitMess announced their Kickstarter space exploration campaign designed to teach children eleven and up about engineering, AI, and coding by assembling the iconic NASA Perseverance Space Rover, as well as a series of other NASA-inspired space vehicles.

This new space-themed line of DIY educational products was born out of both companies’ shared vision to aim for the stars and to take their fans with them. The Kickstarter campaign starts today, October 31st, and will last for 35 days.

The collaboration was a logical union of the two companies. Both companies create educational STEM DIY kits that are targeted towards kids and adults. Both share the same mission: To make learning STEM skills easy and fun.

“For decades, the team and I have been crafting gadgets for geeks always inspired by space exploration,” says Nicolas Deladerrière, co-founder of Geek Club. “Inspired by Mars exploration, we’ve studied thousands of official documents and blueprints to craft an authentic Mars exploration experience. The product comes alive thanks to microchips, electromotors, and artificial intelligence. Imagine simulating your own Mars mission right from your desk!”

Geek Club is an American company that specializes in designing and producing DIY robotics kits that educate their users on soldering and electronics. They focus primarily on space exploration and robotics, all to make learning engineering skills easy and fun for kids, adults, and everyone in between.

“We have successfully delivered seven Kickstarter campaigns, raised more than 2.5 million dollars, and made hundreds of thousands of geeks all around the world extremely happy,” says Albert Gajšak, CEO of CircuitMess. “In a universe where space and technology are constantly growing, we’re here to ensure you’re never left behind.”

The new product line consists of five unique space-themed products:

  • 1. The Perseverance Space Rover Kit

This kit is designed to be an educational journey into programming, electronics, robotics, and AI. The model comes with four electromotors, six wheels, a control system with a dual-core Espressif ESP32 processor, Wi-Fi, and Bluetooth connectivity, a sample collection arm based on the real thing with two servo motors, a Wi-Fi-connected remote controller, and support for programming in Python or via a Scratch-inspired drag-and-drop visual coding environment.
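Since the rover supports programming in Python, a tiny example can hint at the kind of logic builders might write. CircuitMess has not published this exact API, so the helper below is a generic differential-drive sketch with hypothetical names: it blends a forward speed and a turn rate into left/right wheel speeds, as one would to steer a multi-wheeled rover.

```python
# Generic differential-drive mixing, a common pattern for steering rovers.
# Function name and the -100..100 speed range are illustrative assumptions,
# not part of the actual CircuitMess API.

def mix_drive(forward, turn):
    """Blend forward speed and turn rate into (left, right) wheel speeds,
    clamped to a -100..100 range typical of hobby motor drivers."""
    clamp = lambda v: max(-100, min(100, v))
    left = clamp(forward + turn)
    right = clamp(forward - turn)
    return left, right

print(mix_drive(80, 40))  # (100, 40) – full-speed left side, slower right side
```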

Alongside the Perseverance Space Rover, you’ll be able to get more iconic space vehicles:

  • 2. The Voyager: A DIY kit made as a tribute to NASA’s longest-lasting mission, which has been beaming back data for an incredible 45 years and counting.
  • 3. Juno: A solar-powered DIY kit celebrating the mission that gave us the most detailed and breathtaking images of Jupiter.
  • 4. Discovery: A DIY kit honoring the legendary space shuttle with 39 successful orbital flights under its belt.
  • 5. The Artemis Watch: A sleek, space-themed wrist gadget inspired by NASA’s upcoming Artemis space suit design. The watch is a programmable device equipped with an LCD display, Bluetooth, and a gyroscope.

The Perseverance Educational Space Rover Kit is available for pre-order now on Kickstarter, starting at $149.

No previous experience or knowledge is needed to assemble your very own space rover. The kit is designed for anyone aged 11+ and comes with detailed video instructions.

You can visit the Kickstarter page here.

Miko 3 – AI-based intelligent robot – Review

Guest post by Markus Schmidt

Smart devices have made life easier for adults all over the world. They let us turn off things we left on and turn on things we are too lazy to get up and take care of ourselves. Understandably, most of these devices are aimed at an adult audience. But what about the kids? Over the past two weeks, my children have had a lot of fun with one of the most versatile smart robots on the market, the Miko. With plenty of great activities and built-in assistants, they now have a smart device of their own to use. Let’s see whether this kid-friendly robot is worth the money!

The Miko robot brings fun and learning together. It is powered by artificial intelligence and was designed for children aged 5 to 10. The company’s technology built into the Miko robot allows it to interact with people, recognize emotions, and even adapt its own behavior based on what it has learned. The longer you play with it, the more it learns and adapts to different activities.

The Miko is easy to set up

As with any electronic device, setting up the Miko is simple and manageable. The device walks you through the various steps, holding your hand along the way. You connect it to your local Wi-Fi network, add users, and more. Parents are even prompted to download the free Miko mobile app from the iOS App Store or the Google Play Store.

When you first take your Miko out of the box, be aware that a number of updates need to be installed. Even on a standard Wi-Fi connection, the roughly 40-megabyte download and the robot’s update process still took about 15 to 20 minutes. All of that had to be done before my kids could use it.

What can the robot companion Miko do?

Miko appears to work like a regular smart assistant, but one that can move. After Miko walked us through all the different options – including waking it up by saying “Hello Miko” – my kids had a blast choosing between different activities. It is worth noting that the selection of activities is much larger if you opt for a Miko Max subscription, which gives you access to numerous licensed products from Disney, Paramount, and others.

Games

Miko ships with a range of great games for kids. The games are relatively simple, but they are still very engaging for children up to age 10. My kids played tic-tac-toe with Miko daily, trying to beat an AI opponent that often seemed destined never to lose. We found that the games are nothing you couldn’t also find on a standard tablet in the Apple iOS Store or on Google Play for Android.

Physical activity

Our interactions with Miko’s physical activities were somewhat mixed. Having a dance party with Miko and playing a dance-and-freeze game worked very well. Miko was able to detect when my kids were dancing and when they were standing still. The stiller they stood when Miko said “freeze”, the more points they got. That worked well. Playing hide-and-seek with Miko was less impressive than the marketing suggested. Miko simply moved around our open upper floor and often mistook chairs or objects on the floor for my kids.

Educational content

Even though this is not the most exciting aspect of Miko, my kids did enjoy the educational content. This is where I found Miko most impressive. Whether reading books to them or supplying them with statistics or quiz questions, Miko impressed. While my kids can use the smart devices in our house to ask assistants questions and get answers, we found that Miko held conversations with our children across a wide range of topics.

In one exchange, Miko talked with my kids about various interesting facts and statistics about chameleons. Although it was obvious to me that these interactions were scripted based on the choices my kids made, they felt as though Miko was interacting with them directly.

Emotional support

The company behind Miko advertises that Miko is a robot that learns from its users. It promises emotional support for children. For example, Miko promises to know when to tell them a joke if they are feeling down. While I believe Miko could learn quite a bit about my kids over time, that was not noticeable in the few weeks we used the device. As mentioned, my kids always felt connected to Miko and felt it was a good AI friend. If you asked my kids, they would say Miko is their friend and gets to know them through questions and activities.

Final thoughts

Everything about the Miko was wonderful. The robot is incredibly responsive, with few delays and little waiting when asking questions or playing games. It may not be as nimble as other smart devices I have used, but it is still quite fast. Miko also does much more than other devices. Hosting dance parties was my kids’ favorite – something our stationary devices cannot do!

Build Your Own Voice Assistant with CircuitMess Spencer: Your Talkative Friend

Voice assistants have become a crucial component of our everyday lives in today’s technologically sophisticated society. They assist us with work, respond to our inquiries, and even provide entertainment. Have you ever wondered how voice assistants operate, or how to build your own? Stop searching: Spencer is here to satisfy your curiosity and provide a fun DIY activity. This blog post will introduce you to Spencer, a voice assistant that will brighten your day with jokes and provide you with all the information you need.

Meet Spencer

Spencer is more than simply a voice assistant; it is a buddy that converses with you. It can hear you well enough to comprehend all you say, and it uses its large red button as a trigger to search the internet and give you straightforward answers. Spencer’s endearing nature and capacity to make you grin make it a wonderful addition to your everyday routine.

Spencer’s Features: Your Interactive Voice Assistant Companion

1. Voice Interaction

High-quality audio communication is possible because of Spencer’s microphone. It comprehends your instructions, inquiries, and chats and offers a simple and straightforward approach for you to communicate with your voice assistant. Simply talk to Spencer, and it will answer as you would expect, giving the impression that you are conversing with a genuine friend.

2. Internet Connectivity and Information Retrieval

Spencer has internet access, allowing you to access a huge information base. You may have Spencer do a real-time internet search by pushing the huge red button on his chest. Spencer can search the web and provide you clear, succinct answers, whether you need to discover the solution to a trivia question, check the most recent news headlines, or collect information on a certain issue.
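The press-the-button, ask-a-question flow can be sketched in a few lines of Python. This is not CircuitMess’s firmware – the function names are mine, and the web search is stubbed out with a small dictionary – just an illustration of the trigger-lookup-answer pattern described above.

```python
# Sketch of a button-triggered question/answer loop. The search backend is
# stubbed with a dictionary; the real Spencer queries the internet. All
# names here are illustrative assumptions.

def lookup(question, knowledge_base):
    """Stand-in for a web search: return a short answer or a fallback."""
    return knowledge_base.get(question.lower(), "Sorry, I don't know that one.")

def on_button_press(question, knowledge_base):
    """What happens when the big red button is pushed with a spoken question."""
    answer = lookup(question, knowledge_base)
    return f"Spencer says: {answer}"

facts = {"what is the tallest mountain?": "Mount Everest, at 8,849 m."}
print(on_button_press("What is the tallest mountain?", facts))
```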

3. Personalization and Customization

Being wholly original is what Spencer is all about. You are allowed to alter its features and reactions to fit your tastes. Make Spencer reflect your style and personality by altering its external elements, such as colors, decals, or even adding accessories. To further create a genuinely customized experience, you may alter its reactions, jokes, and interactions to suit your sense of humor and personal tastes.

4. Entertainment and Engagement

Spencer is aware of how important laughing is to life. It has built-in jokes and amusing replies, so talking to your voice assistant is not only educational but also interesting and fun. Spencer’s amusing features will keep you entertained and involved whether you need a quick pick-me-up or want to have a good time with friends and family.

5. Learning and Educational STEM Experience

In particular, STEM (science, technology, engineering, and mathematics) subjects are the focus of Spencer’s educational mission. You will learn useful skills in electronics, soldering, component assembly, and circuits by making Spencer. To further develop Spencer’s talents, you may go into programming, gaining practical experience with coding and computational thinking.

6. Inspiration and Creativity

Spencer acts as a springboard to spark your imagination and motivate further investigation. You may let your creativity run wild as you put together and customize your voice assistant. This do-it-yourself project promotes critical thinking, problem-solving, and invention, developing a creative and innovative mentality that may go beyond the context of making Spencer.

Recommended Age Group

Spencer is intended for those who are at least 11 years old. While the majority of the assembly procedures are simple, some, like soldering and tightening fasteners, call for care. Never be afraid to ask an adult for help if you need it. When using certain equipment and techniques, it is usually preferable to be guided.

Assembly Time Required

The construction of Spencer should take, on average, 4 hours to finish. However, keep in mind that the timeframe may change based on your prior knowledge and expertise. Don’t worry if you’re unfamiliar with electronics! Enjoy the process, take your time, and don’t let any early difficulties get you down. You’ll grow more used to the procedures as you go along.

Skills Required

To start this DIY project, no special skills are needed. Fun and learning something new are the key goals. Building Spencer will introduce you to the field of electronics, pique your interest in STEM fields, and give you the chance to get hands-on experience. Consider this project the first step towards a rewarding engineering career.

Pros and Cons of Spencer

Pros of Spencer

  • Spencer provides an engaging and interactive experience, responding to voice commands and engaging in conversations to make you feel like you have a real companion.
  • With internet connectivity, Spencer can retrieve information in real-time, giving you quick answers to your questions and saving you time.
  • Spencer can be customized to reflect your style and preferences, allowing you to personalize its appearance, responses, and interactions.
  • Spencer comes with built-in jokes and entertaining responses, adding fun and amusement to your interactions with the voice assistant.
  • Building Spencer provides hands-on learning in electronics, soldering, circuitry, and programming, offering a valuable educational experience in STEM disciplines.

Cons of Spencer

  • The assembly process of Spencer may involve technical aspects such as soldering and component assembly, which can be challenging for beginners or individuals with limited experience.
  • Spencer heavily relies on internet connectivity to provide real-time answers and retrieve information, which means it may have limited functionality in areas with poor or no internet connection.
  • While Spencer offers basic voice assistant features, its capabilities may be more limited compared to advanced commercially available voice assistant devices.

Conclusion

Creating your own voice assistant with Spencer is a fascinating and worthwhile endeavor. You’ll learn useful skills, expand your understanding of electronics, and enjoy the thrill of putting a complicated gadget together as you go along with the assembly process. Remember that the purpose of this project is not only to produce a final product, but also to experience the thrill of learning, solving problems, and letting your imagination run free. So get ready to join Spencer on this journey and discover a world of opportunities in the exciting world of voice assistants.

Get your own Spencer Building kit here: bit.ly/RobotsBlog


fruitcore robotics unveils new operating system with integrated AI copilot: a big step in industrial automation toward more efficiency, user-friendliness and time savings

Constance, June 28th, 2023 – fruitcore robotics takes industrial automation to a new level with an innovative operating system that uses the latest AI technology. With horstOS, industrial companies have a new tool at their disposal that assists them in every step of commissioning entire applications with the intelligent industrial robot HORST while significantly reducing complexity. With an integrated AI copilot, industrial companies can increase the efficiency of their production processes and save time. “Generative AI is driving a transformation of many task areas. The new technology will transform automation and provide our customers with a whole new automation experience,” says Patrick Heimburger, Managing Director (Chief Revenue Officer) of fruitcore robotics.

The new operating system from fruitcore robotics simplifies and accelerates the configuration and management of all components involved in the process, the programming, as well as the operation of the finished application. Robots, components and existing industrial processes are all controlled via a user-friendly interface through which their interaction is realized even more easily. horstOS essentially comprises three interconnected areas: the component management area, the program creation area and the process control area. These three areas provide users with all the functions they need to set up and use their overall plants quickly, simply and efficiently.

In the component management area, users can seamlessly integrate and centrally manage all components relevant to an overall plant, such as grippers, camera systems and safety systems, thanks to standardized interfaces. Adding components takes just one click and is possible for all components that have a web app or a digital interface – regardless of the manufacturer. If status information needs to be queried for one of the connected components, settings need to be checked or changes need to be made, this can be done straightforwardly via the robot panel with horstOS as the central user interface. When users switch to the program creation area, they find themselves in the intuitive horstFX operating software, in which they create the robot’s program sequence with the integration of all components. Once the program sequence has been created, it can be started, stopped or paused in the process control area. The process control area also allows users to customize operation and process monitoring to their personal needs and specific processes. Widgets can be used to show relevant process data, display the status of connected components, and expose parameters that need to be adjusted frequently. This gives users full control over their automation process.

Ask HORST Anything – AI Copilot helps in all life phases of industrial robots  

“Our new operating system incorporates the most advanced technology and deep integration of artificial intelligence. It sets new standards for the rapid integration of industrial robots into processes such as machine loading and unloading, quality assurance, parts separation or adhesive and sealant application,” says Patrick Heimburger. With the AI Copilot in horstOS, users get an intelligent assistant that provides real-time support in natural language to successfully cope with the challenges of automation. Whether setting up the robot and other components, troubleshooting, suggesting program blocks or even writing entire programs, the AI Copilot enables users to quickly and accurately find solutions for their applications and keep operations running smoothly. For example, if the user wants to know how to pass the part position detected by the camera to the robot, he can address this question to the AI Copilot via text prompt and receive the corresponding code block within a few moments.
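To make the camera-to-robot example concrete, here is the kind of snippet such an assistant might return. This is purely illustrative and not from horstOS or horstFX: the function name and calibration values (rotation angle and offsets between the camera and robot frames) are hypothetical, but the underlying technique – a planar rotation plus translation between coordinate frames – is the standard way to hand a camera-detected part position to a robot.

```python
import math

# Illustrative sketch: transform a part position from the camera's 2D
# coordinate frame into the robot's frame. The angle and offsets below are
# hypothetical calibration values, not data from any real setup (units: mm).

def camera_to_robot(x_cam, y_cam, angle_deg=90.0, tx=250.0, ty=-120.0):
    """Rotate camera coordinates into the robot frame, then translate by the
    calibrated offset between the two origins."""
    a = math.radians(angle_deg)
    x_rob = x_cam * math.cos(a) - y_cam * math.sin(a) + tx
    y_rob = x_cam * math.sin(a) + y_cam * math.cos(a) + ty
    return x_rob, y_rob

print(camera_to_robot(10.0, 0.0))
```

In a real cell, the angle and offsets come from a hand-eye calibration rather than being hard-coded.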

The AI Copilot from fruitcore robotics is based on ChatGPT and has been specially trained for industrial conditions. It offers users comprehensive access to all relevant instructions, support content and software documentation from fruitcore robotics. To provide an optimal user experience, the Constance-based company focuses on continuously enhancing the AI Copilot's capabilities.

Future-oriented automation with horstOS  

The scope of horstOS is also expected to grow steadily in the coming years. The modular structure of the operating system already allows external software and services to be integrated without great effort. User-specific software programs and interfaces from OEMs can also be seamlessly integrated. "Through horstOS, the future of automation becomes a new reality. The system offers extensive support, even for those with little knowledge, and significantly reduces the effort required for setup, operation and after-sales," explains Jens Riegger, Managing Director (CEO) of fruitcore robotics. "Our intelligent industrial robots are not only designed to offer our customers the best return on investment in the robotics market. Especially against the backdrop of the ubiquitous shortage of skilled workers, they are also designed to help increase productivity and save valuable time."

Robohood AI-Driven Robotic Painter Introduces Stable Diffusion and Text Generation Neural Network Model

Robohood, the world's first robotics and AI-driven art & technology company, has integrated both Stable Diffusion and a text generation neural network

FORT LAUDERDALE, FLA. (PRWEB) APRIL 25, 2023

Robohood, an art & technology company specializing in AI-driven solutions for robotics-generated physical art, has integrated two forms of AI technology into its software: Stable Diffusion and ChatGPT, both originally developed and released in 2022.

Robohood's integrated Stable Diffusion technology is a deep learning, text-to-image model primarily used to generate detailed images conditioned on text descriptions, though it can also be applied to other tasks such as inpainting, outpainting, and generating image-to-image translations guided by a text prompt. The integrated ChatGPT technology, in turn, is a language model fine-tuned via transfer learning, using both supervised and reinforcement learning techniques to produce detailed and articulate responses.
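Robohood's actual integration is not public, but a text-to-image call with Stable Diffusion typically follows the pattern below, shown here with Hugging Face's diffusers library (an assumed dependency, not something the release names):

```python
def generate_image(pipeline, prompt, steps=30):
    """Run a text-to-image pipeline on a prompt and return the first image."""
    result = pipeline(prompt, num_inference_steps=steps)
    return result.images[0]

# Real usage would look like this (requires the diffusers package,
# model weights, and typically a GPU):
#
# from diffusers import StableDiffusionPipeline
# pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# image = generate_image(pipe, "a lighthouse at sunset, oil painting style")
```

Keeping the pipeline as a parameter is a common design choice: the same wrapper works whether the model runs locally or behind a hosted API.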

Through these newly integrated technologies, users now have two options when creating a painting with Robohood. The first option involves uploading an image from a device, generating a render, and then painting with a robot, which was the conventional method. Alternatively, they can use the AI-generation tool. To use this tool, users can input an image description in the "Generate by text" field or request a topic suggestion via "Can AI suggest a topic for you?", and the tool will offer suggestions in the interface. Once an idea has been selected, the user can copy and paste it into the image generation field, and Robohood technology will generate an image that it can later render onto canvas.

“These newly integrated technologies allow for artists to have multiple options when creating artwork within the Robohood interface,” stated Victor Peppi, CEO of Robohood. “Combining art with AI-driven technology is our passion at Robohood and these technologies allow artists to have more freedom and solutions throughout their creative process.”

About Robohood

Robohood is an art & technology company specializing in AI-driven solutions for robotics-generated physical art. Robohood’s groundbreaking software/hardware solution, a brainchild of a brilliant team of software engineers, robotics experts and artists, turned Robohood into the world’s first robotics art company. Producing everything from software to robot-specific painting supplies, Robohood’s solution utilizes robotic manipulators to paint with oil and acrylic paints on various surfaces. To learn more about Robohood, visit the website at https://robohood.com/, and follow us on Facebook: https://facebook.com/RobohoodArts, and Instagram: @robohood.inc.

Free update makes third deep learning method available for IDS NXT

Update for the AI system IDS NXT: cameras can now also detect anomalies

In quality assurance, it is often necessary to reliably detect deviations from the norm. Industrial cameras play a key role in this, capturing images of products and analysing them for defects. If the error cases are not known in advance or are too diverse, however, rule-based image processing reaches its limits. By contrast, this challenge can be reliably solved with the AI method Anomaly Detection. The new, free IDS NXT 3.0 software update from IDS Imaging Development Systems makes the method available to all users of the AI vision system with immediate effect.

The intelligent IDS NXT cameras are now able to detect anomalies independently and thereby optimise quality assurance processes. For this purpose, users train a neural network that is then executed on the programmable cameras. To achieve this, IDS offers the AI Vision Studio IDS NXT lighthouse, which is characterised by easy-to-use workflows and seamless integration into the IDS NXT ecosystem. Customers can even use only "GOOD" images for training. This means that relatively little training data is required compared to the other AI methods Object Detection and Classification. This simplifies the development of an AI vision application and is well suited for evaluating the potential of AI-based image processing for projects in the company.
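The idea behind training on "GOOD" images only can be sketched in a few lines: fit a model of what normal samples look like, then flag inputs whose reconstruction error is unusually high. IDS NXT uses a trained neural network for this; the PCA-based version below is only an illustrative, self-contained analogue of the same principle, not the actual method:

```python
import numpy as np

def fit_normal_model(good_samples, n_components=2):
    """Fit mean and principal components on flattened GOOD samples."""
    X = np.asarray(good_samples, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal directions.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def anomaly_score(sample, mean, components):
    """Reconstruction error: distance between a sample and its projection."""
    centered = np.asarray(sample, dtype=float) - mean
    reconstruction = components.T @ (components @ centered)
    return float(np.linalg.norm(centered - reconstruction))
```

Because only normal data is modelled, anything that deviates from it, in whatever way, produces a high score; this is why unknown or highly diverse defect types do not need to be anticipated during training.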

Another highlight of the release is the code reading function in the block-based editor. This enables IDS NXT cameras to locate, identify and read out different code types, along with their required parameters. Attention maps in IDS NXT lighthouse also provide more transparency in the training process. They illustrate which areas in the image have an impact on classification results. In this way, users can identify and eliminate training errors before a neural network is deployed in the cameras.
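Reading out a 1-D code typically ends with validating its check digit. As a concrete, camera-independent example of that last step, the sketch below implements the standard EAN-13 check-digit rule (this is generic barcode arithmetic, not IDS NXT-specific code):

```python
def ean13_is_valid(code):
    """Verify the check digit of a 13-digit EAN code given as a string."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Over the first 12 digits: odd positions weight 1, even positions
    # weight 3 (0-indexed), per the standard EAN-13 scheme.
    total = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    check = (10 - total % 10) % 10
    return check == digits[12]
```

A single flipped digit changes the weighted sum, so corrupted reads fail this test instead of silently entering the process chain.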

IDS NXT is a comprehensive AI-based vision system consisting of intelligent cameras plus software environment that covers the entire process from the creation to the execution of AI vision applications. The software tools make AI-based vision usable for different target groups – even without prior knowledge of artificial intelligence or application programming. In addition, expert tools enable open-platform programming, making IDS NXT cameras highly customisable and suitable for a wide range of applications.

More information: www.ids-nxt.com

Elephant Robotics launched ultraArm with various solutions for education

In the past year, Elephant Robotics has launched various products to meet more user needs and help users unlock more potential in research and education. To help users learn about machine vision, Elephant Robotics launched the AI Kit, which can work with multiple robotic arms to perform AI recognition and grabbing. In June 2022, Elephant Robotics released the mechArm to help individual makers and students learn industrial robotics more effectively.

Nowadays, robotic arms are used in an increasingly wide range of applications, such as industry, medicine, commercial exhibitions, etc. At the very end of 2022, ultraArm was launched, and this time Elephant Robotics didn't just release a new robotic arm, but brought five sets of solutions for education and R&D.

Small but powerful

As the core product of this launch, ultraArm is a small 4-axis desktop robotic arm. It is designed with a classic metal structure and occupies only the area of a sheet of A5 paper. It is the first Elephant Robotics arm equipped with high-performance stepper motors; it runs stably and achieves ±0.1 mm repeat positioning accuracy. What's more, ultraArm is available with five kits, including a slide rail, a conveyor belt and cameras.

Multiple environments supported

As a preferred tool for the education field, ultraArm supports all major programming languages, including Python, Arduino, C++, etc. It can also be programmed on macOS, Windows and Linux systems. Individual makers who are new to robotics can learn robotic programming with myBlockly, a visualization software that allows users to drag and drop code blocks.

Moreover, ultraArm supports ROS1 & ROS2. In the ROS environment, users can control ultraArm and verify algorithms in a virtual environment, improving experiment efficiency. With the support of ROS2, users can implement additional functions and targets in their developments.

Five robotic kits for Robot Vision & DIY

In the past year, Elephant Robotics found that many users have to spend a large amount of time creating accessories or kits to work with robotic arms. Therefore, to provide more solutions in different fields, ultraArm comes with five robotic kits, which are divided into two series: vision educational kits and DIY kits. These kits help users, especially students, program easily for a better learning experience in practical exercises on AI robot vision and DIY robotic projects.

Vision educational kits

Combined with vision functions, robotic arms can be used for more applications in industry, medicine, education, etc. In robotics education, collaborative robotic arms with vision capabilities allow students to better learn about artificial intelligence. Elephant Robotics has launched three kits for machine vision education: the Vision & Picking Kit, the Vision & Conveyor Belt Kit, and the Vision & Slide Rail Kit, to provide more choices and support to the education industry. With the camera and built-in AI algorithms (YOLO, SIFT, ORB, etc.), ultraArm can achieve different artificial intelligence recognition applications using different identification methods. Users can therefore select the kit based on their needs.

With the Vision & Picking Kit, users can learn about color & image recognition, intelligent grasping, robot control principles, etc. With the Vision & Conveyor Belt Kit, the robotic arm can sense the distance of materials to identify, grab and classify the objects on the belt; users can easily create simulated industrial applications with this kit, such as color sorting. Users can go deeper into machine vision with the Vision & Slide Rail Kit, because in this kit the robot can track and grab objects through a dynamic vision algorithm. With multiple programming environments supported, the vision educational kits are well suited to production-line simulation in school or STEM education. Moreover, Elephant Robotics also offers different educational programs for students and teachers, enabling them to better understand the principles and algorithms of robot vision and helping them operate these robot kits more easily.
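The color-sorting exercise mentioned above boils down to mapping a detected object's color to a bin. As a minimal, stdlib-only illustration of that final classification step (the kits themselves work on live camera input with algorithms such as YOLO, SIFT or ORB), a hue-based classifier might look like this:

```python
import colorsys

def classify_color(r, g, b):
    """Map an RGB value (0-255 per channel) to a coarse color bin by hue."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if s < 0.2:                 # low saturation: grey/white, not a real color
        return "neutral"
    deg = h * 360.0             # hue as degrees on the color wheel
    if deg < 30 or deg >= 330:
        return "red"
    if deg < 90:
        return "yellow"
    if deg < 180:
        return "green"
    if deg < 270:
        return "blue"
    return "magenta"
```

In a sorting program, the arm would call such a function on the average color of the detected object and then pick the matching drop-off position.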

DIY kits

There are two kits in the DIY series: the Drawing Kit and the Laser Engraving Kit. Users can enjoy online production, nameplate and phone case DIY production, and AI drawing with the multiple accessories in the DIY kits.

To help users quickly achieve DIY production, Elephant Robotics created software called Elephant Luban. It is a platform that generates the G-code track and provides basic cases for users. Users can select multiple functions, such as precise writing and drawing or laser engraving, with only a few clicks. For example, users can upload images they like to the software; Elephant Luban will automatically generate the path of the images and transmit it to ultraArm, and users can then choose drawing or engraving with different accessories.
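Generating a "G-code track" from a drawing path can be sketched as follows: lift the pen, travel to the stroke's start, lower the pen, and trace the points. The commands are standard G-code (G0 rapid move, G1 linear move); the exact dialect Elephant Luban emits for ultraArm is an assumption here:

```python
def path_to_gcode(points, draw_z=0.0, travel_z=5.0, feed=1000):
    """Convert a list of (x, y) points into G-code lines for one pen stroke."""
    if not points:
        return []
    lines = [f"G0 Z{travel_z:.1f}"]              # lift pen before travelling
    x0, y0 = points[0]
    lines.append(f"G0 X{x0:.2f} Y{y0:.2f}")      # rapid move to stroke start
    lines.append(f"G1 Z{draw_z:.1f} F{feed}")    # lower pen onto the surface
    for x, y in points[1:]:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} F{feed}")  # draw each segment
    lines.append(f"G0 Z{travel_z:.1f}")          # lift pen at the end
    return lines
```

An image-to-path tool would first vectorize the uploaded picture into many such point lists and concatenate the resulting strokes into one program.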

There is no doubt that ultraArm with its different robotics kits provides a great deal of help and support to the education field. These kits offer better operating environments and conditions for students, and help them learn robotic programming more effectively. Elephant Robotics will continue to launch more products and projects with the aim of helping more users enjoy the robot world.

Order ultraArm now in the Elephant Robotics Shop and enjoy a 20% discount with the code: Ultra20