Fraunhofer institutes, Lamarr Institute, igus and the University of Bonn develop a language model for low-cost automation solutions in order picking

Cologne, 1 October 2024 – In mid-September, a hackathon took place at igus in Cologne. Under the motto "Artificial intelligence meets robotics", 17 participants in three interdisciplinary teams worked on advanced automation solutions for logistics. The goal of the hackathon was to create an automated method for packing pizzas with the help of a large AI-based language model (large language model, LLM for short). The event was held in cooperation with the Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS, the University of Bonn, the Lamarr Institute and the Fraunhofer Institute for Material Flow and Logistics IML.

20 tonno, 10 salami and 25 margherita: that is the order the supermarket around the corner places with a large manufacturer of frozen pizzas. It sounds like a simple task, yet it is still done by hand, simple and undemanding work for which staff are often hard to find. During a five-day hackathon at igus in Cologne, 17 participants from the Fraunhofer Institutes IAIS and IML, the Lamarr Institute for Machine Learning and Artificial Intelligence and the University of Bonn developed a solution for this, using cost-efficient low-cost robotics made in Cologne.

The task of the hackathon was: develop an automation solution that packs the pizzas automatically with the help of artificial intelligence (AI). The challenge was to control a robot so that it recognizes the right products, grips them and packs them into mixed boxes based on spoken instructions given directly by employees. By combining generative AI (GenAI) and robotics, the teams were to develop a solution that makes this repetitive work more efficient and less costly.

Automating industry with AI and low-cost robotics

During the hackathon, three teams worked on a real use case in which a ReBeL robot arm was placed in front of a conveyor belt carrying various pizza products. Using a webcam and an AI-based segmentation system (Segment Anything Model, SAM), the systems recognized the different products on the conveyor belt and identified their positions. Based on the natural-language instructions, the language model matched these objects to the requested products. The robot then placed the products into the boxes, following the instructions of the language model.

Alexander Zorn from Fraunhofer IAIS was enthusiastic about the results of the hackathon: "We are very pleased to develop real proof-of-concepts for customers in industry together with the robotics know-how of igus. The combination of artificial intelligence and robotics offers enormous potential for automating work processes and making them more efficient." Alexander Mühlens, authorized officer and head of the Low Cost Automation department at igus GmbH, also emphasized the importance of the hackathon: "Working with the institutes gives us the opportunity to show our customers what is possible with AI and low-cost robotics. Our dream is to be able to control robots with a simple voice command, in any application."

The hackathon showed that the use of LLMs and robotics opens up many further automation possibilities, such as packing several boxes at the same time, sorting products according to special instructions, or checking ingredients for allergens with the help of camera systems. The successful cooperation between igus, Fraunhofer IAIS, the Lamarr Institute at the University of Bonn and Fraunhofer IML underlines the potential of these technologies for industrial automation.

**About the partners**

Fraunhofer IAIS is one of the leading research institutes in the fields of artificial intelligence (AI), machine learning and big data, while the University of Bonn is known as a center for artificial intelligence and robotics. Fraunhofer IML in Dortmund contributed its expertise in logistics automation and material flow, and the Lamarr Institute is a leader in international research on AI and machine learning.
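The processing chain described above (webcam image, SAM-based segmentation of the products on the belt, a language model that maps the spoken order onto the detected objects, and pick commands for the ReBeL arm) can be outlined in a few lines. The following is a minimal, hypothetical sketch of the order-matching step only; the DetectedItem, parse_order and pack_order names are made up for illustration, and in the real setup an LLM did the job of the toy parser shown here.

```python
from dataclasses import dataclass

@dataclass
class DetectedItem:
    label: str      # product class reported by the segmentation step, e.g. "tonno"
    x_mm: float     # position on the conveyor belt
    y_mm: float

def parse_order(order_text: str) -> dict:
    """Toy stand-in for the LLM step: map a natural-language order
    onto product counts ("2 tonno, 1 salami" -> {"tonno": 2, "salami": 1})."""
    counts = {}
    for part in order_text.replace(" and ", ", ").split(","):
        tokens = part.split()
        if len(tokens) >= 2 and tokens[0].isdigit():
            counts[tokens[-1].lower()] = int(tokens[0])
    return counts

def pack_order(order_text: str, detections: list) -> list:
    """Pick detected items from the belt until the order is fulfilled."""
    remaining = parse_order(order_text)
    picks = []
    for item in detections:
        if remaining.get(item.label, 0) > 0:
            picks.append(item)          # the robot would grip this item next
            remaining[item.label] -= 1
    return picks

if __name__ == "__main__":
    belt = [DetectedItem("tonno", 120, 40),
            DetectedItem("salami", 250, 42),
            DetectedItem("tonno", 380, 38)]
    for pick in pack_order("2 tonno, 1 salami", belt):
        print(f"pick {pick.label} at ({pick.x_mm} mm, {pick.y_mm} mm)")
```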
Roberta Robot-Arm Videos
Here are some videos from the Roberta YouTube channel. They show a new robot arm with 4 DOF, built completely out of Lego. For more information about Roberta, take a look at: http://www.roberta-home.de
Press the left or right arrow button at the side of the video frame to switch between the videos.
First Day of Safety, Security and Rescue Robotics 2010 (SSRR 2010)
I am currently attending the Safety, Security and Rescue Robotics 2010 workshop in Bremen.
The first day is now over and a lot of interesting talks were given:
Tetsuya Kinugasa presented a flexible displacement sensor in his talk "Measurement of Flexed Posture for Mono-tread Mobile Track Using New Flexible Displacement Sensor". His group develops and uses this sensor to control the posture of a robot that is a combination of snake, worm and tank.
Jimmy Tran presented his work on "Canine Assisted Robot Deployment for Urban Search and Rescue". The basic idea is as simple as it is brilliant: use a suitably equipped dog to find victims and to inform the operators about them. Dogs are already widely used in rescue operations and are highly mobile; they can easily overcome large piles of rubble and can carry video cameras or rescue material. His approach is to use the dogs to deploy a small robot next to a victim, which then allows the medical status of the person to be assessed. The idea is ingenious.
"Development of leg-track hybrid locomotion to traverse loose slopes and irregular terrain" has so far been the most interesting technical approach of this workshop. It shows how a tracked vehicle can be combined with a semi-walker.
Donny Kurnia Sutantyo presented his work on "Multi-Robot Searching Algorithm Using Levy Flight and Artificial Potential Field", while Julian de Hoog showed a solution for team exploration in "Dynamic Team Hierarchies in Communication-Limited Multi-Robot Exploration".
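As a side note, the core idea behind a Lévy-flight search is easy to illustrate: step lengths are drawn from a heavy-tailed distribution, so a searching robot mixes many short local moves with occasional long jumps. The snippet below is only a generic illustration of that sampling step, not the algorithm from the talk; the exponent and length bounds are arbitrary assumptions.

```python
import math
import random

def levy_step(alpha: float = 1.5, min_len: float = 0.1, max_len: float = 50.0):
    """Draw one 2D step whose length follows a truncated power-law (Pareto)
    distribution, the heavy tail that characterizes a Levy flight."""
    u = random.random()
    length = min(min_len * (1.0 - u) ** (-1.0 / alpha), max_len)  # inverse-CDF sampling
    heading = random.uniform(0.0, 2.0 * math.pi)                  # uniform random direction
    return length * math.cos(heading), length * math.sin(heading)

if __name__ == "__main__":
    x, y = 0.0, 0.0
    for _ in range(10):
        dx, dy = levy_step()
        x, y = x + dx, y + dy
        print(f"move to ({x:.2f}, {y:.2f})")
```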
The invited speaker Bernardo Wagner presented the work of his department: the Leibniz University of Hannover has been working intensively in the field of "Perception and Navigation with 3D Laser Range Data in Challenging Environments".
"Potential Field based Approach for Coordinate Exploration with a Multi-Robot Team" was the topic of Alessandro Renzaglia's talk.
Bin Li showed another nice approach for a shape-shifting robot. His robot can change its shape by rearranging its three motion segments: "Cooperative Reconfiguration between Two Specific Configurations for A Shape-shifting Robot".
Jorge Bruno Silva presented an approach to trajectory planning that respects time constraints in "Generating Trajectories With Temporal Constraints for an Autonomous Robot".
Noritaka Sato closed the day by presenting a novel HMI approach for teleoperation. Instead of showing only the live camera image, his group uses temporally shifted past images to generate an artificial bird's-eye view, similar to the chase view known from racing video games: "Teleoperation System Using Past Image Records Considering Moving Objects".
I am looking forward to listening to the next talks.
Interesting designs for Rescue Robots – Part 2
Professor Dr. Satoshi Tadokoro from Tohoku University presents his ASC. The ASC is a search camera for use in emergency situations and stands for Active Scope Camera. Basically, it is a flexible endoscope that is able to move by itself. With the help of vibrating inclined cilia, the endoscope can crawl like a caterpillar into the smallest voids (>30 mm). Its maximum speed is 47 mm/s and its operating range is 8 m. This allows rescue workers to search rubble for victims or to check its structure.
The following video shows Professor Dr. Satoshi Tadokoro at the Tokyo International Fire and Safety Exhibition 2008 presenting the ASC.
During the response to the collapse of the Historical Archive of the City of Cologne (March 2009), Professor Dr. Satoshi Tadokoro, Professor Dr. Robin R. Murphy (Texas A&M University), Clint Arnett (project coordinator for urban search and rescue at TEEX) and members of the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) tried to support the local fire department. This gave me the opportunity to test the ASC, which was in use during this disaster.
The ASC performs extremely well. It can crawl into the rubble at a reasonable speed and is (after a little training) easy to use. The biggest problem, however, is the user interface. The ASC camera system does not compensate for tilting or turning if the "robot" flips or turns over, which happens quite often. Hence, it is hard for the operator to keep track of the orientation. In addition, the opening angle of the camera is extremely small, which handicaps situational awareness even more.
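To make concrete what compensating for tilt could look like: a common remedy is to counter-rotate the camera image by the roll angle reported by an inertial sensor, so that the operator always sees a level horizon. The sketch below illustrates that general idea with OpenCV and is not a feature of the ASC; the roll_deg input is assumed to come from some attitude sensor.

```python
import cv2

def level_horizon(frame, roll_deg: float):
    """Counter-rotate a camera frame by the measured roll angle so the
    operator keeps an upright view even if the device flips over."""
    h, w = frame.shape[:2]
    rotation = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), -roll_deg, 1.0)
    return cv2.warpAffine(frame, rotation, (w, h))

# hypothetical usage: frame from the camera, roll angle from an attitude sensor
# upright = level_horizon(camera_frame, roll_deg=173.0)
```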
Roberta on Facebook
FAIR Library is online
As we have already reported, the "Fraunhofer Autonomous Intelligent Robotics Devices" library is now open source and available at the SourceForge project "OpenVolksBot". In addition, we can now report some more details on it:
- FAIRlib is now organized as several Eclipse projects (fairAlgorithm, fairCore, fairDevices, fairGraphics, fairTestAlgorithm, fairTestCore, fairTestDevices and fairTestGraphics). This makes it easy to extend and compile the projects and minimizes cross-dependencies.
- The dependencies are listed in the ReadMe and can be installed automatically using the script "apt-get-fair".
- An easy way to install the library is provided by the script "install-fair".
- The current version is tested on Ubuntu 9.10 (64-bit), but should also work on other operating systems (apart from the automatic dependency resolution).
- FAIR is published under the CC BY-NC-SA license.
So now we can all reuse and cooperate in creating a library, instead of reinventing the wheel again and again. 😉
Open source is FAIR – IAIS released the „Fraunhofer Autonomous Intelligent Robotics Devices Library“ as open source
Developing and programming robotic systems can sometimes be an unsatisfying task. This feeling is mostly not related to problems that occur during "high level" problem solving; it mostly appears when you try to get the system itself up and running. So tools and solutions are needed to help us overcome these initialization barriers.
The Fraunhofer Institute for Intelligent Analysis and Information Systems, or Fraunhofer IAIS for short, now offers a library that helps developers get a wide range of sensors and actuators up and running. In addition, it includes various algorithms for everyday robotics problems such as Simultaneous Localization And Mapping (SLAM) or image processing. The so-called "Fraunhofer Autonomous Intelligent Robotics Devices Library", or FAIR library for short, is a C/C++ development library that is actively used in the VolksBot® projects and is released as an open source project under a Creative Commons license.
FAIRlib will soon be available at the SourceForge project "OpenVolksBot".
Update: The initial version is now available (see also here) and is published under CC BY-NC-SA.
RobotsBlog is alive
RobotsBlog is a new blog focusing on robot topics. It will include news, discussions, articles, and links covering the wide field of robotics and AI.
We, the authors, are an international team of junior researchers who are actively involved in robotics. Every day we see challenging problems and extremely nice solutions, and we will try to share as much of this as possible with you. Our hope is that this platform will help all of us to sort and structure the wide field of robotics a little, so that other researchers, students and every interested person can maximize the personal benefit achievable through robotics.
And now it is time to present some of our previous work to you.
The first video shows some test runs of an Unmanned Aerial Vehicle (UAV for short), which is currently being tested by Fraunhofer IAIS.
The second video shows an autonomous ground-based robot using its docking station. The robot is based on a ProfiBot system and autonomously searches for its docking station when it needs to recharge. After finding it, the robot performs a docking maneuver and charges its batteries.
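Such a charging behavior is typically structured as a small state machine: roam, search for the station when the battery runs low, dock, charge, and resume. The following sketch is a generic illustration under that assumption, not the actual ProfiBot code; the state names and battery thresholds are invented.

```python
from enum import Enum, auto

class DockState(Enum):
    ROAM = auto()            # normal operation
    SEARCH_STATION = auto()  # battery low, look for the docking station
    DOCK = auto()            # station found, perform the docking maneuver
    CHARGE = auto()          # contact established, charge the batteries

def next_state(state: DockState, battery: float, station_visible: bool,
               docked: bool) -> DockState:
    """Tiny state machine for an autonomous charging behavior."""
    if state is DockState.ROAM and battery < 0.2:
        return DockState.SEARCH_STATION
    if state is DockState.SEARCH_STATION and station_visible:
        return DockState.DOCK
    if state is DockState.DOCK and docked:
        return DockState.CHARGE
    if state is DockState.CHARGE and battery > 0.95:
        return DockState.ROAM
    return state
```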
The next video shows some test results from a computer vision system that is used to detect character-based landmarks in the environment. This capability was needed to participate in the SICK Robot Day 2009, which we did successfully.
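Detecting character-based landmarks of a known appearance can be done with plain template matching. The sketch below shows that general approach with OpenCV; it is not the system we used for the SICK Robot Day, and the template file names and threshold are assumptions.

```python
import cv2

def find_landmarks(image_path: str, templates: dict, threshold: float = 0.8):
    """Locate character landmarks in a camera image via template matching."""
    scene = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    hits = []
    for label, template_path in templates.items():
        template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
        result = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= threshold:
            hits.append((label, max_loc[0], max_loc[1]))   # (character, x, y)
    return hits

# hypothetical usage with one template image per landmark character:
# print(find_landmarks("frame.png", {"A": "template_A.png", "B": "template_B.png"}))
```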
So that's all for the start. New updates are coming, and we hope to see you again soon.