Enhancing Drone Navigation with AI and IDS uEye Camera Technology

AI-driven drone from University of Klagenfurt uses IDS uEye camera for real-time, object-relative navigation—enabling safer, more efficient, and precise inspections.

High-voltage power lines and a distribution substation with transformers (symbolic image).

The inspection of critical infrastructure such as energy plants, bridges or industrial complexes is essential to ensure its safety, reliability and long-term functionality. Traditional inspection methods often require sending people into hard-to-reach or hazardous areas. Autonomous mobile robots offer great potential for making inspections more efficient, safer and more accurate. Uncrewed aerial vehicles (UAVs) such as drones in particular have become established as promising platforms, as they can be deployed flexibly and can reach even hard-to-access areas from the air. One of the biggest challenges is navigating the drone precisely relative to the objects under inspection in order to reliably capture high-resolution image data or other sensor data.

A research group at the University of Klagenfurt has designed a real-time capable drone based on object-relative navigation using artificial intelligence. Also on board: a USB3 Vision industrial camera from the uEye LE family from IDS Imaging Development Systems GmbH.

As part of the research project, which was funded by the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), the drone must autonomously recognise what is a power pole and what is an insulator on the power pole. It then flies around the insulator at a distance of three meters and takes pictures. "Precise localisation is important so that the camera recordings can also be compared across multiple inspection flights," explains Thomas Georg Jantos, PhD student and member of the Control of Networked Systems research group at the University of Klagenfurt. The prerequisite for this is that object-relative navigation is able to extract so-called semantic information about the objects in question from the raw sensor data captured by the camera. Semantic information makes the raw data, in this case the camera images, "understandable" and makes it possible not only to capture the environment, but also to correctly identify and localise relevant objects.

In this case, this means that an image pixel is not only understood as an independent colour value (e.g. an RGB value), but as part of an object, e.g. an insulator. In contrast to classic GNSS (Global Navigation Satellite System) positioning, this approach not only provides a position in space, but also a precise relative position and orientation with respect to the object to be inspected (e.g. "the drone is located 1.5 m to the left of the upper insulator").
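To make this concrete, the following minimal Python sketch (an illustration only, not the Klagenfurt group's code) shows how semantic detections can be turned into an object-relative pose: assuming a detector has already matched 2D image keypoints to known 3D points on an insulator model, OpenCV's standard PnP solver recovers the object's position and orientation relative to the camera. All model points, pixel coordinates and intrinsics are made-up illustration values.

    # Hypothetical example: object-relative pose from semantic keypoints.
    import cv2
    import numpy as np

    # 3D keypoints of the insulator in its own frame (metres) -- assumed values.
    object_points = np.array([[0.0, 0.0, 0.0],
                              [0.0, 0.0, 0.6],
                              [0.1, 0.0, 0.3],
                              [-0.1, 0.0, 0.3]], dtype=np.float64)

    # Matching 2D detections in the image (pixels), e.g. from a keypoint network.
    image_points = np.array([[1032.0, 1100.0],
                             [1030.0, 420.0],
                             [1180.0, 760.0],
                             [880.0, 770.0]], dtype=np.float64)

    # Pinhole camera intrinsics -- assumed calibration values.
    K = np.array([[1400.0, 0.0, 1032.0],
                  [0.0, 1400.0, 772.0],
                  [0.0, 0.0, 1.0]])

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
    if ok:
        # tvec is the insulator's position in the camera frame -- exactly the
        # kind of "1.5 m to the left of the upper insulator" statement.
        print("object position in camera frame [m]:", tvec.ravel())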

The key requirement is that image processing and data interpretation run with minimal latency, so that the drone can adapt its navigation and interaction to the specific conditions and requirements of the inspection task in real time.

Thomas Jantos with the inspection drone – Photo: aau/Müller

Semantic information through intelligent image processing
Object recognition, object classification and object pose estimation are performed using artificial intelligence in image processing. "In contrast to GNSS-based inspection approaches using drones, our AI with its semantic information enables the infrastructure to be inspected from specific, reproducible viewpoints," explains Thomas Jantos. "In addition, the chosen approach does not suffer from the usual GNSS problems such as multi-pathing and shadowing caused by large infrastructures or valleys, which can lead to signal degradation and thus to safety risks."

A USB3 uEye LE serves as the quadcopter’s navigation camera

How much AI fits into a small quadcopter?
The hardware setup consists of a TWINs Science Copter platform equipped with a Pixhawk PX4 autopilot, an NVIDIA Jetson Orin AGX 64GB DevKit as on-board computer and a USB3 Vision industrial camera from IDS. "The challenge is to get the artificial intelligence onto the small helicopters. The computers on the drone are still too slow compared to the computers used to train the AI. Despite the first successful tests, this is still the subject of current research," says Thomas Jantos, describing the problem of further optimising the high-performance AI model for use on the on-board computer.

The camera, on the other hand, delivers perfect basic data straight away, as tests in the university's own drone hall show. When selecting a suitable camera model, it was not just a question of meeting the requirements for speed, size, protection class and, not least, price. "The camera's capabilities are essential for the inspection system's innovative AI-based navigation algorithm," says Thomas Jantos. He opted for the U3-3276LE C-HQ model, a space-saving and cost-effective project camera from the uEye LE family. The integrated Sony Pregius IMX265 sensor is regarded as one of the best CMOS image sensors in the 3 MP class and delivers a resolution of 3.19 megapixels (2064 x 1544 px) at frame rates of up to 58.0 fps. Decisive for the sensor's performance is its 1/1.8" global shutter, which, unlike a rolling shutter, produces no 'distorted' images at short exposure times. "To ensure a safe and robust inspection flight, high image quality and frame rates are essential," Thomas Jantos emphasises. As navigation camera, the uEye LE provides the embedded AI with the comprehensive image data that the on-board computer needs to calculate the relative position and orientation with respect to the object under inspection. Based on this information, the drone is able to correct its pose in real time.

The IDS camera is connected to the on-board computer via a USB3 interface. "With the help of the IDS peak SDK, we can integrate the camera and its functionalities very easily into ROS (Robot Operating System) and thus into our drone," explains Thomas Jantos. IDS peak also enables efficient raw image processing and simple adjustment of recording parameters such as auto exposure, auto white balance, auto gain and image downsampling.
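Integration along these lines can be quite compact. The sketch below is an assumption-laden illustration, not the project's actual node: it shows how camera frames might be published into ROS 2 with rclpy and cv_bridge, with a generic OpenCV capture standing in for the IDS peak acquisition calls, and an invented topic name.

    # Minimal ROS 2 camera publisher sketch; the OpenCV capture stands in for
    # the ids_peak acquisition pipeline used in the real system.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge
    import cv2

    class NavCameraNode(Node):
        def __init__(self):
            super().__init__('nav_camera')
            self.pub = self.create_publisher(Image, 'camera/image_raw', 10)
            self.bridge = CvBridge()
            self.cap = cv2.VideoCapture(0)  # stand-in for IDS peak acquisition
            self.timer = self.create_timer(1.0 / 50.0, self.grab)  # ~50 fps

        def grab(self):
            ok, frame = self.cap.read()
            if ok:
                msg = self.bridge.cv2_to_imgmsg(frame, encoding='bgr8')
                msg.header.stamp = self.get_clock().now().to_msg()
                self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(NavCameraNode())

    if __name__ == '__main__':
        main()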

To ensure a high level of autonomy, control, mission management, safety monitoring and data recording, the researchers use the source-available CNS Flight Stack on the on-board computer. The CNS Flight Stack includes software modules for navigation, sensor fusion and control algorithms and enables the autonomous execution of reproducible and customisable missions. "The modularity of the CNS Flight Stack and the ROS interfaces enable us to seamlessly integrate our sensors and the AI-based 'state estimator' for position detection into the entire stack and thus realise autonomous UAV flights. The functionality of our approach is being analysed and developed using the example of an inspection flight around a power pole in the drone hall at the University of Klagenfurt," explains Thomas Jantos.

Visualisation of the flight path of an inspection flight around an electricity pole model with three insulators in the research laboratory at the University of Klagenfurt

Precise, autonomous alignment through sensor fusion
The high-frequency motion data used to control the drone comes from the IMU (Inertial Measurement Unit). Sensor fusion with camera data, LIDAR or GNSS (Global Navigation Satellite System) enables real-time navigation and stabilisation of the drone, for example for position corrections or precise alignment with inspection objects. For the Klagenfurt drone, the IMU of the PX4 serves as the dynamic model in an EKF (Extended Kalman Filter). The EKF estimates where the drone should now be based on the last known position, velocity and attitude. New data (e.g. from IMU, GNSS or camera) is then recorded at up to 200 Hz and incorporated into the state estimation process.

The camera captures raw images at 50 fps with an image size of 1280 x 960 px. "This is the maximum frame rate that we can achieve with our AI model on the drone's on-board computer," explains Thomas Jantos. When the camera is started, automatic white balance and gain adjustment are carried out once, while automatic exposure control remains switched off. The EKF compares the prediction and the measurement and corrects the estimate accordingly. This ensures that the drone remains stable and can maintain its position autonomously with high precision.
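The predict-correct loop described here can be illustrated in a few lines of NumPy. This is a deliberately simplified one-dimensional constant-velocity EKF, not the CNS Flight Stack implementation; the noise values are arbitrary tuning assumptions, and only the 200 Hz prediction rate and the roughly 50 Hz camera correction rate are taken from the article.

    # Toy 1-D constant-velocity EKF: IMU-driven prediction, camera correction.
    import numpy as np

    x = np.zeros(2)               # state: [position, velocity]
    P = np.eye(2)                 # state covariance
    Q = np.diag([1e-4, 1e-2])     # process noise (assumed tuning)
    R = np.array([[1e-2]])        # camera measurement noise (assumed tuning)
    H = np.array([[1.0, 0.0]])    # the camera measures position only

    def predict(x, P, accel, dt):
        F = np.array([[1.0, dt], [0.0, 1.0]])
        x = F @ x + np.array([0.5 * dt**2, dt]) * accel
        return x, F @ P @ F.T + Q

    def update(x, P, z):
        y = z - H @ x                                  # innovation
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        return x + (K @ y).ravel(), (np.eye(2) - K @ H) @ P

    dt = 1.0 / 200.0                                   # 200 Hz IMU prediction
    for k in range(200):
        x, P = predict(x, P, accel=0.1, dt=dt)
        if k % 4 == 0:                                 # every 4th step ~ 50 Hz
            x, P = update(x, P, z=np.array([x[0]]))    # placeholder measurement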

An electricity pole with insulators in the drone hall at the University of Klagenfurt is used for test flights

Outlook
"With regard to research in the field of mobile robots, industrial cameras are necessary for a variety of applications and algorithms. It is important that these cameras are robust, compact, lightweight, fast and have a high resolution. On-device pre-processing (e.g. binning) is also very important, as it saves valuable computing time and resources on the mobile robot," emphasises Thomas Jantos.

With features like these, IDS cameras are helping to set a new standard in the autonomous inspection of critical infrastructure in this promising research approach, significantly increasing safety, efficiency and data quality.

The Control of Networked Systems (CNS) research group is part of the Institute for Intelligent System Technologies. It is involved in teaching in the English-language Bachelor's and Master's programs "Robotics and AI" and "Information and Communications Engineering (ICE)" at the University of Klagenfurt. The group's research focuses on control engineering, state estimation, path and motion planning, modeling of dynamic systems, numerical simulations and the automation of mobile robots in a swarm.

uEye LE – the cost-effective, space-saving project camera
Model used: USB3 Vision industrial camera U3-3276LE Rev.1.2
Camera family: uEye LE

Image rights: Alpen-Adria-Universität (aau) Klagenfurt
© 2025 IDS Imaging Development Systems GmbH

Successful Kickstarter campaign: MD Robot Kit promotes creativity and education in robotics

The Kickstarter project "MD Robot Kit: Unlock your AI Robot Engineering Dream Job" by MangDang aims to foster the creativity and technical know-how of robotics enthusiasts. It offers two main robots, each addressing different needs and interests.

The first robot, the Mini Pupper, is available in several versions, including Mini Pupper 1, Mini Pupper 2 and the 2G and 2GA models. It is a low-cost personal quadruped kit that ships with open-source software. The Mini Pupper supports multimodal generative AI platforms such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude (via AWS). It is also compatible with ROS1 and ROS2, which extends its capabilities in SLAM (Simultaneous Localization and Mapping) and navigation, and its OpenCV integration enables deep learning with cameras. Thanks to the use of Raspberry Pi and Arduino, the Mini Pupper offers a high degree of customization and extensibility, making it ideal for developers who want to realize their own projects.
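As a flavor of what "deep learning with cameras" via OpenCV can look like in practice, the following sketch runs a camera frame through OpenCV's DNN module. The model file name is purely hypothetical; the Mini Pupper's own pipeline may differ.

    # Illustrative OpenCV DNN inference on one camera frame.
    import cv2

    net = cv2.dnn.readNetFromONNX('detector.onnx')  # hypothetical model file
    cap = cv2.VideoCapture(0)

    ok, frame = cap.read()
    if ok:
        blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0,
                                     size=(640, 640))
        net.setInput(blob)
        detections = net.forward()
        print('raw detection tensor shape:', detections.shape)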

The second robot presented is the Turtle Robot, which will be available soon. It is aimed specifically at schools, homeschooling families and robot enthusiasts. While detailed specifications have not yet been fully published, it is clear that the Turtle Robot likewise aims to support learning and creativity in robotics.

The campaign set a funding goal of €9,000 and clearly exceeded it, raising almost €19,000, more than 200% of the original target. The campaign runs from September 5, 2024 to October 5, 2024 and has attracted 80 backers so far. A standout feature of the MD Robot Kits is their open-source nature, which allows users to assemble the robots in less than an hour. This makes them particularly accessible to a broad audience ranging from educational institutions to DIY enthusiasts. The project is designed not only to impart technical knowledge, but also to foster the joy of building and customizing robots.

Robots

1. Mini Pupper:

  • Versions: Mini Pupper 1 (2021), Mini Pupper 2 (2022), Mini Pupper 2G & 2GA.
  • Design: A low-cost personal quadruped kit with open-source software.
  • Features: Supports multimodal generative AI such as OpenAI's ChatGPT, Google's Gemini and Anthropic's Claude (via AWS). It is compatible with ROS1 and ROS2 for SLAM & navigation and builds on OpenCV for deep learning with cameras.
  • Extensibility: Uses Raspberry Pi and Arduino, which enables a high degree of customization.
  • Open-source platform: The Mini Pupper supports the Robot Operating System (ROS) and offers functions such as SLAM (Simultaneous Localization and Mapping) and navigation. It is equipped with lidar and camera sensors that allow it to map its surroundings and move autonomously.
  • Technical specifications: The robot has 12 degrees of freedom, enabled by advanced servo motors. These motors provide feedback on acceleration and force, allowing precise control.
  • Hardware and extensibility: The Mini Pupper uses the Raspberry Pi 4B or the Raspberry Pi Compute Module 4 as its central computing unit and is equipped with an ESP32 microcontroller. It features an IPS display with a resolution of 240 x 320 pixels, a microphone, speakers and a touch sensor.
  • Customizability: Thanks to its open-source nature, the Mini Pupper can be modified extensively. Users can add their own modules and adapt the robot for a variety of projects, such as tracking objects in a room.
  • Education and community: The Mini Pupper is ideally suited for schools and homeschooling families. It ships with comprehensive guides and resources that ease the entry into robotics. Users can join a global community to exchange ideas and get support.
  • Price and availability: As part of the Kickstarter campaign, the Mini Pupper is offered in several versions, with the base model costing around €479. Delivery is scheduled to begin in February, though backers should keep in mind the financial risk inherent in crowdfunding campaigns.

2. Turtle Robot:

  • Open-source project: The Turtle Robot is based on Arduino and is an open-source project that supports the integration of generative AI. This allows users to customize and extend the robot individually.
  • Affordability: The Turtle Robot is a low-cost learning platform for multimodal generative AI and is offered at an introductory price of US$99, a 60% discount. This makes it particularly attractive for educational institutions and DIY enthusiasts.
  • Ease of use: The robot is designed to be assembled and up and running within a week, making it easy for beginners to start learning and experimenting quickly.
  • Support and resources: MangDang provides comprehensive support through various channels as well as access to all code and design files via a GitHub repository. Users can print the STL design files and contribute their own ideas.
  • Multimodal generative AI: The Turtle Robot uses advanced AI technologies that enable continuous voice interaction. The AI can remember previous conversations and give personalized answers based on them.
  • Application examples: There are two Arduino projects that can be used with the Turtle Robot: one for testing individual functions and another that executes all functions via voice control.
  • Availability: The Turtle Robot was available through the Kickstarter campaign until October 2024 and will afterwards be sold in an online shop.

https://www.kickstarter.com/projects/mdrobotkits/md-robot-kits-open-source-support-your-genai-creativity?ref=txm87x

In Celebration of National Robotics Week, iRobot® Launches the Create® 3 Educational Robot

iRobot's Smartest Developer Platform, Now with ROS 2 and Python Support

BEDFORD, Mass., April 5, 2022 /PRNewswire/ — iRobot Corp. (NASDAQ: IRBT), a leader in consumer robots, today is expanding its educational product lineup with the launch of the Create® 3 educational robot – the company's most capable developer platform to date. Based on the Roomba® i3 Series robot vacuum platform, Create 3 provides educators and advanced makers with a reliable, out-of-the-box alternative to costly and labor-intensive robotics kits that require assembly and testing. Instead of cleaning people's homes, the robot is designed to promote higher-level exploration for those seeking to advance their education or career in robotics.


The launch of Create 3 coincides with National Robotics Week, which began April 2 and runs through April 10, 2022. National Robotics Week, founded and organized by iRobot, is a time to inspire students about robotics and STEM-related fields, and to share the excitement of robotics with audiences of all ages through a range of in-person and virtual events.

"iRobot is committed to delivering STEM tools to all levels of the educational community, empowering the next generation of engineers, scientists and enthusiasts to do more," said Colin Angle, chairman and CEO of iRobot. "The advanced capabilities we've made available on Create 3 enable higher-level students, educators and developers to be in the driver's seat of robotics exploration, allowing them to one day discover new ways for robots to benefit society."

With ROS 2 support, forget about building the platform, and focus on your application: 
The next generation of iRobot's affordable and trusted all-in-one mobile robot development platform, Create 3 brings a variety of new functionalities to users, including compatibility with ROS 2, industry-standard software for roboticists worldwide. Robots require many different components, such as actuators, sensors and control systems, to communicate with each other in order to work. ROS 2 enables this communication, allowing students to speed up the development of their project by focusing more on their core application rather than the platform itself. Learning ROS 2 also gives students valuable experience that many companies are seeking from robotics developers.
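What "focus on your application" means in practice: with ROS 2, driving the robot reduces to publishing standard messages. The sketch below is a generic rclpy example, not official iRobot sample code; the geometry_msgs/Twist command topic is assumed to be /cmd_vel.

    # Minimal rclpy sketch: send velocity commands to a Create 3-style base.
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist

    class Driver(Node):
        def __init__(self):
            super().__init__('create3_driver')
            self.pub = self.create_publisher(Twist, '/cmd_vel', 10)  # assumed topic
            self.timer = self.create_timer(0.1, self.step)

        def step(self):
            cmd = Twist()
            cmd.linear.x = 0.1    # m/s forward
            cmd.angular.z = 0.2   # rad/s turn
            self.pub.publish(cmd)

    def main():
        rclpy.init()
        rclpy.spin(Driver())

    if __name__ == '__main__':
        main()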

Expand your coding skills even further with Python support:
iRobot also released a Python Web Playground for its iRobot Root® and Create 3 educational robots, providing a bridge for beginners to learn more advanced programming skills outside of the iRobot Coding App. Python, a commonly used coding language, enables users to broaden the complexity of projects they work on. The iRobot Education Python Web Playground allows advanced learners and educators to program the iRobot Root and Create 3 educational robots with a common library written in Python. This gives users a pathway to learn a new coding language, opening the door to further innovation and career development.

With more smarts, Create 3 lets you do more:
As a connected robot, Create 3 comes equipped with Wi-Fi, Ethernet-over-USB host, and Bluetooth. Create 3 is also equipped with a suite of intelligent technology, including an inertial measurement unit (IMU), optical floor tracking sensor, wheel encoders, and infrared sensors for autonomous localization, navigation, and telepresence applications. Additionally, the robot includes cliff, bump and slip detection, along with LED lights and a speaker.

A 3D simulation of Create 3 is also available using Ignition Gazebo for increased access to robotics education and research.

Create 3 Pricing and Availability
Create 3 is available immediately in the US and Canada for $299 USD and $399 CAD. It will be available in EMEA through authorized distributors in the coming months. Additional details can be found at https://edu.irobot.com/what-we-offer/create3.

iRobot Education Python Web Playground Availability
The iRobot Education Python Web Playground can be accessed in-browser at python.irobot.com.

AgileX Robotics Announces Launch of LIMO, the World's First Multi-modal Mobile Robot with AI Modules

Mobile robot experts AgileX just announced the launch of LIMO – a ROS-based multi-modal car with 4 steering modes and open-source software that is perfect for ROS beginners as well as advanced programmers. This exciting new robotics platform has virtually unlimited applications for education, business and industry and is available now here

LIMO is an incredibly versatile and multifunctional robotic platform for designing and programming robot AI. It can be programmed with the modular ROS 1 or ROS 2 frameworks to achieve many functional purposes including mapping, navigation, obstacle avoidance, path planning, and more for educational, commercial and industrial applications.

"At AgileX Robotics our vision is to enable all industries and individuals with the ability to improve productivity and efficiency through robot technology. Our latest product, LIMO is a powerful yet easy to use mobile robotic platform that is perfect for learning ROS, completing tasks for business and education, and beginning a journey in the exciting world of robotics. We designed LIMO to be easy to use with an intuitive programming method and open source capabilities. It's the best way to get started with robotic AI." – CEO, AgileX Robotics.

Four steering modes make LIMO substantially superior to other robots in its class. The available modes are: Omni Wheel Steering, Tracked Steering, Four-Wheel Differential Steering and Ackermann Steering. These advanced steering modes plus built-in 360° scanning LiDAR and RealSense infrared camera make the platform perfect for industrial and commercial tasks. 

Equipped with four additional USB ports and powered by the Nvidia Jetson Nano, LIMO can be fully customized with other hardware according to one's needs. LIMO works with open-source ROS 1 & ROS 2; programming demos, ROS packages and simulation powered by Gazebo are supported as well. With these features, LIMO can achieve precise self-localization, SLAM & V-SLAM mapping, route planning and autonomous obstacle avoidance, reverse parking, traffic light recognition, and more.

LIMO, the world's first multi-modal mobile robot with AI modules is currently being launched via a Kickstarter campaign to reward early adopters with special deals and pricing. Learn more here: [LINK]

QUADRUPED A1 – Four-legged robot combines artificial intelligence and sophisticated motion sequences

The newly founded company QUADRUPED Robotics is the first and currently the only German company to introduce fully modifiable multi-legged robots to the European market. This form of robot represents a novelty: the four-legged robots combine artificial intelligence with new motion sequences and individually customizable equipment.
The A1 robot in the QUADRUPED line is based on the Robot Operating System (ROS.org) and can thus be adapted to its environment and requirements. Even the basic configuration, however, enables a wide range of applications.

By means of an AI-controlled, depth-sensing smart camera, HD recordings can be transmitted in real time to a terminal device. At the same time, the integrated multi-eye camera offers real-time tracking of objects in sight, gesture recognition and target tracking that follows specific movement patterns.

The basis for building an environment map is visual SLAM. QUADRUPED A1 calculates paths, obstacles, routes and navigation points, enabling vision-based autonomous obstacle avoidance. In addition, QUADRUPED A1 recognizes obstacle shapes and adjusts its body position accordingly. If an impact or fall does occur, the advanced dynamic balancing algorithm allows balance to be quickly restored. Additional measurement data and more dynamic behavior can be achieved by integrating further sensor technology, such as a 3D LiDAR or additional camera modules.

The QUADRUPED A1 incorporates a unique, patented sensitive foot contact. Each of the four feet can be controlled individually. The smart actuators provide precise footing as well as different gaits. The system is based on a low-level controller developed by QUADRUPED Robotics, which can read out position, torque and current consumption at any time. The foot tip is waterproof and dustproof and can easily be replaced after wear.
The A1's most recently measured top speed of 11.8 km/h (3.3 m/s) is unique for a robot of this type. It can also carry payloads of up to 5 kg.

For simplified maintenance, the robot was designed with a stable yet lightweight body structure. The A1 has an external 24 V power input and 5 V/12 V/19 V power supplies, which enable the use of additional external devices. Other external interfaces include 4 USB, 2 HDMI and 2 Ethernet ports.

It is equipped with a powerful redundant control system: a low-level controller for CAN communication with the smart actuators and an NVIDIA Xavier for computation and evaluation of measurement data. The current runtime of approx. 1.5 hours varies depending on the application.

Additional equipment is available from QUADRUPED Robotics and can be delivered with implemented software packages on request. Thanks to in-house research and development, the end customer can order a finished, tested product. Another service is the provision of complete documentation on the website www.docs.quadruped.de. Complete simulation environments based on Webots & Gazebo are also available for download there and can be used for application testing.

QUADRUPED Robotics is a spin-off of MYBOTSHOP uG, an established sales and development partner in the fields of robotics, sensor technology and automation technology. Company founder Daniel Kottlarz sees in four-legged autonomous robots the opportunity to relieve humans in particularly dangerous areas of operation and to avert hazardous situations by means of autonomous robots.


ArcBotics Launches Hubert the Humanoid on Kickstarter, Funded In 2 Hours

HAYWARD – ArcBotics, a leading educational robotics company based in California, is pleased to announce the launch of Hubert the Humanoid: Your Advanced Robotics Study Buddy, a research-grade open source humanoid robot, on Kickstarter.

ArcBotics' mission is to help anyone learn robotics, no matter their background or current skill level. It is undeniable that robots will play a role in every part of our collective futures, and in many ways they already do. ArcBotics believes that only by understanding how robots work can we control our own futures, rather than allowing technology to control us.

Hubert is designed for anyone pursuing robotics who wants the most affordable, top-to-bottom college-level robotics education available, while getting to use their own humanoid robot. Hubert is designed for educators, roboticists who want to compete in robotics competitions, researchers, pro users, and hobbyists new to robotics who are looking for a humanoid robot that is ready to go.

They have created Hubert to make a full suite of college-level robotics lessons available for less than the cost of a single robotics class. Hubert runs the same software that today's leading robotics companies and universities are running. Similar robots have been used at leading universities – but starting at 10x the price. Hubert is starting at $599 USD on Kickstarter, retailing for $1,199 USD, and is 100% Open Source Hardware.

ArcBotics will be releasing in-depth, free web tutorials to help train anyone to become a robotics engineer in the latest topics such as: ROS, Arduino, OpenCV, Object Recognition, TensorFlow, Inverse Kinematics, Control Theory, MoveIt!, Power Management, Path Planning, Legged Mechanics, Python, and so much more.
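To give a flavor of the advertised lesson topics, here is the kind of exercise an inverse kinematics module typically covers: closed-form IK for a planar two-link limb. This is a generic textbook sketch, not ArcBotics material.

    # Closed-form inverse kinematics for a planar two-link limb.
    import math

    def two_link_ik(x, y, l1, l2):
        """Return joint angles (q1, q2) that place the end effector at (x, y)."""
        cos_q2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if abs(cos_q2) > 1.0:
            raise ValueError('target out of reach')
        q2 = math.acos(cos_q2)  # elbow angle
        q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                           l1 + l2 * math.cos(q2))
        return q1, q2

    # Example: reach (0.10, 0.05) m with two 8 cm links.
    print(two_link_ik(0.10, 0.05, 0.08, 0.08))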

Hubert’s core features:

  • Dual-camera stereo HD vision
  • On-board Raspberry Pi 3, preloaded with all necessary software
  • Custom smart servo – incredibly high torque, voltage independent, embedded sensors with serial communication
  • Custom Arduino-compatible, Python-powered servo controller, with on-board 9-axis motion sensing and Bluetooth 4 LE connectivity
  • Rigid aluminum frame
  • Removable outer sheet metal shell – easily remove, design, and attach your own shell or parts
  • Functional grippers
  • Speaker and microphone
  • Touch-screen LCD head
  • Independent emotive ears
  • 100% Open Source Hardware
  • Future-proof with Raspberry Pi 3, C.H.I.P., and ODROID-XU4

About ArcBotics
Since 2012, ArcBotics has been making robotics accessible by creating full-feature robots designed for different age groups and skill levels, with extensive, step-by-step documentation and open sourcing the hardware and software. They previously launched two successful Kickstarter campaigns for Hexy the Hexapod and Sparki the Easy Robot for Everyone, raising $360,000 and shipping to over 2,500 backers. Since then, they have grown to ship tens of thousands of robots to homes, STEM programs, and universities around the world like Stanford, MIT, and Northwestern. Their robots can be found at global retailers like Barnes and Noble, Adafruit, RobotShop, DFRobot, and more.

PLEN Cube: The Portable Personal Assistant Robot, from PLENGoer Robotics Inc., launched on Kickstarter

PLENGoer Robotics launched a Kickstarter campaign to bring PLEN Cube: The Portable Personal Assistant Robot, to the world. PLEN Cube can be your customizable, palm-sized companion featuring a smart camera and automation skills. Visit PLEN Cube's Kickstarter page to pre-order it, meet the developers, and learn more about the product that is changing our lives.


PLEN Cube is a portable robot that can consolidate your devices and favorite web services; capture moments with a smart camera that tracks your face and motions; and complement your life with hands-free voice activation and customization options. Natsuo Akazawa, CEO at PLENGoer Robotics, explains: "Think of PLEN Cube as your right-hand man (in robot form). We've packed a lot into this playful 3-inch powerhouse: a powerful processor, Full-HD camera, display, microphone array and speakers, along with cutting-edge software in facial recognition, speech recognition, and more."

 

A HANDS-FREE CAMERA THAT FOLLOWS YOUR MOVEMENTS

No need for a cameraperson. Simply use voice commands to take a photo or video. PLEN Cube also uses computer vision technology, and can rotate 360 degrees to follow you wherever you go. Because PLEN Cube is Wi-Fi and Bluetooth-enabled, you can instantly share photos or videos on social media.

KEEPS YOU INFORMED

PLEN Cube aggregates all your day's essential info, including the weather, schedule, reminders, social media and more. Use the PLEN Cube mobile app to connect your services. PLEN Cube will keep you informed through its display, movements, sound, or notifications to your mobile device.

ALL YOUR SERVICES AND DEVICES IN ONE PLACE

With PLEN Cube, you can control all your favorite devices via Wi-Fi, Bluetooth and infrared. You can also connect and consolidate your social media, including Facebook, Instagram, and Twitter.

CUSTOMIZE TO YOUR LIFE – FROM MORNING TO NIGHT

Anyone can customize the PLEN Cube using IFTTT, a web-based automation service that allows you to create chains of actions with your favorite web services. PLEN Cube can do everything else a personal assistant bot can do, and also expands the number of triggers (inputs) and actions (outputs) you can use. Advanced developers can access our open API and take customization to the next level. We'll also launch a community forum where users can share ideas and collaborate, and PLENGoer can provide further developer information.
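For orientation, triggering an IFTTT chain from code is a single web request against IFTTT's public Webhooks service. The event name and key below are placeholders, and this is a generic illustration rather than PLEN Cube's own integration.

    # Fire a hypothetical IFTTT Webhooks event (placeholder event name and key).
    import requests

    IFTTT_KEY = 'YOUR_WEBHOOKS_KEY'         # placeholder
    EVENT = 'plen_cube_wave_detected'       # hypothetical trigger name

    resp = requests.post(
        f'https://maker.ifttt.com/trigger/{EVENT}/with/key/{IFTTT_KEY}',
        json={'value1': 'hello from PLEN Cube'},
    )
    print(resp.status_code, resp.text)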

Specs:

  • Size: 75 mm x 75 mm x 75 mm / Weight: 500 g (1.1 lbs).
  • Intel Joule 570x
  • WiFi, Bluetooth
  • 320×240 LCD – full color
  • Full HD Camera – with stop motion, panoramic, and face and action tracking
  • Microphone x2 – to locate sound source
  • Electromagnetic field-based sensor – detects hand presence and motion; program different gestures to activate certain application actions
  • Speakers – full range drivers, woofers and tweeters from GoerTek, a leader in the global miniature electro-acoustic industry
  • Infrared home appliance control – through a partnership with Crossdoor
  • Motors – for PLEN Cube’s expressive movement
  • Linux OS
  • Robot Operating System (ROS)
  • Voice recognition software (we’re currently evaluating the best option for this)

About PLENGoer Robotics Inc.

Founded in 2016, PLENGoer Robotics Inc. develops personal service robots to achieve a richer quality of life. Combining the robot development technology that our parent PLEN Project Company has cultivated thus far with the production technology of the GoerTek group, we aim to develop unprecedented and truly functional service robots. PLENGoer Robotics will be an innovation platform for service robots.
Facebook:  http://www.facebook.com/plengoer/