Robots as helpers in the lettuce harvest

Robot solution for automating the lettuce harvest

Lettuce is a valuable crop in Europe and the USA, but labor shortages make this field vegetable difficult to harvest: sourcing sufficient seasonal labor to meet harvesting commitments is one of the sector’s biggest challenges. Moreover, with wage inflation rising faster than producer prices, margins are very tight. In England, agricultural technology and machinery experts are working with IDS Imaging Development Systems GmbH (Obersulm, Germany) to develop a robotic solution that automates lettuce harvesting.

The team is working on a project funded by Innovate UK and includes experts from the Grimme agricultural machinery factory, the Agri-EPI Centre (Edinburgh, UK), Harper Adams University (Newport, UK), the Centre for Machine Vision at the University of the West of England (Bristol) and two of the UK’s largest salad producers, G’s Fresh and PDM Produce.

Within the project, existing leek-harvesting machinery is being adapted to lift the lettuce clear of the ground and grip it between pinch belts. The lettuce’s outer, or ‘wrapper’, leaves are mechanically removed to expose the stem. Machine vision and artificial intelligence are then used to identify a precise cut point on the stem to neatly separate the head of lettuce.

“The cutting process for an iceberg lettuce is the most technically complicated step to automate, according to colleagues at G’s subsidiary Salad Harvesting Services Ltd.,” explains IDS Product Sales Specialist Rob Webb. “The prototype harvesting robot being built incorporates a GigE Vision camera from the uEye FA family. It is considered particularly robust and is therefore ideally suited to demanding environments. As this is an outdoor application, a housing with IP65/67 protection is required,” Rob Webb points out.

GV-5280FA

The choice fell on the GV-5280FA-C-HQ model with Sony’s compact 2/3″ global shutter CMOS sensor IMX264. “The sensor was chosen mainly for its versatility. We don’t need full resolution for AI processing, so sensitivity can be increased by binning. The larger sensor format also means that wide-angle optics are not needed,” Rob Webb summarizes the requirements. In the application, the CMOS sensor stands out with excellent image quality, light sensitivity and exceptionally high dynamic range, delivering almost noise-free, high-contrast 5 MP images in 5:4 format at 22 fps, even under fluctuating lighting conditions. The extensive range of accessories, such as lens tubes and trailing cables, is just as rugged as the camera housing and the screwable connectors (8-pin M12 connector with X-coding and 8-pin Binder connector). Another advantage: in-camera functions such as pixel pre-processing, LUT or gamma correction reduce the required computing power to a minimum.
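The binning trade-off Webb describes can be sketched in a few lines: summing 2x2 pixel blocks quarters the resolution while roughly quadrupling the signal collected per output pixel. This is a generic illustration on a synthetic frame, not IDS-specific code:

```python
import numpy as np

def bin2x2(frame: np.ndarray) -> np.ndarray:
    """Sum 2x2 pixel blocks: quarter the resolution, ~4x signal per pixel."""
    h, w = frame.shape
    return frame[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Synthetic full-resolution frame in the IMX264's 5:4 format (2456 x 2054)
rng = np.random.default_rng(0)
frame = rng.poisson(lam=50, size=(2054, 2456)).astype(np.uint32)

binned = bin2x2(frame)
print(binned.shape)                  # (1027, 1228)
print(frame.mean(), binned.mean())   # mean signal per pixel quadruples
```

The same total light is spread over a quarter as many output pixels, which is why binning raises sensitivity at the cost of resolution.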

The prototype harvesting robot will undergo field trials in England towards the end of the 2021 season.

“We are delighted to be involved in the project and look forward to seeing the results. We are convinced of its potential to automate the lettuce harvest and make it more efficient, not least by compensating for the shortage of seasonal workers,” affirms Jan Hartmann, Managing Director of IDS Imaging Development Systems GmbH.

Prototype lettuce harvesting robot of the Agri-EPI Centre (UK)

The challenges facing the agricultural sector are indeed complex. According to a forecast by the Food and Agriculture Organization of the United Nations (FAO), agricultural productivity will have to increase by almost 50 percent by 2050 compared to 2012 due to dramatic population growth. Such a yield expectation poses an enormous challenge for the agricultural industry, which is still in its infancy in terms of digitalization compared to other sectors and is already under high pressure to innovate in view of climatic changes and labor shortages. The agriculture of the future will be based on networked devices and automation: cameras are an important building block, and artificial intelligence is a central technology. Smart applications such as harvesting robots can make a significant contribution here.

The ReBeL of automation: smart igus cobot for EUR 4,970

With the world’s first cobot plastic gearbox and a digital ecosystem, igus is accelerating low-cost automation – already 20 projects per week today

At its heart, a gearbox made of plastic: the ReBeL cobot is now available for EUR 4,970, together with a digital universe for low-cost integration within a few days. (Source: igus GmbH)

Cologne, March 16, 2022 – igus is now shipping the ReBeL service robot, also in a smart version. At a price of EUR 4,970 for the plug-and-play variant and weighing only around 8 kilograms, customers receive one of the lightest cobots on the market. Digital services such as RBTXpert and new online offerings enable customers to implement complete automation solutions within a few days and at low cost.

With the ReBeL, igus draws fully on its motion plastics expertise: the use of plastic makes the robot, at 8.2 kilograms, the lightest service robot with cobot functionality in its class. All mechanical components of the ReBeL are, without exception, developed and manufactured by igus. Its payload is 2 kilograms, its reach 664 millimetres, and its repeatability +/- 1 millimetre at 7 picks per minute. At its heart is the world’s first industrial-grade cobot gearbox made of plastic. “Behind these figures are 1,041 tests in our in-house laboratory since 2019, in which we carried out tribological and thermodynamic tests on 15 material pairings and tolerance chains. A particularly big challenge was the heat generated in the fully integrated strain wave gears, which are thermally affected by the motor. During development we therefore also focused on larger motors and better efficiency in order to significantly reduce heat generation,” says Alexander Mühlens, head of the Low-Cost Automation business unit at igus. “As a result we achieved continuous improvements and in the end even quintupled the cycle count to two million, which corresponds to a typical service life of two years.”

Smart plastics – full operational transparency for preventive maintenance
igus has also brought its motion plastics know-how into the power electronics and, for the first time, developed an encoder based on conductive-plastic tracks. Rotation and cycle counts, runs, temperature and current can thus be measured precisely. Thanks to a cloud connection with webcam, a dashboard presents all generated data clearly and live. Customers thus gain full transparency over their ReBeL in operation, via key figures such as wear, cycle time and part counts.

Affordable complete solution, quickly integrated
The smart ReBeL is available in two variants: as an open-source version without robot controller, power supply and software for EUR 3,900 at quantity 1, or as a plug-and-play variant with robot, control software and power supply for EUR 4,970 at quantity 1. In line with the igus “build or buy” approach, customers can also obtain the individual ReBeL strain wave gears, in diameters of 80 and 105 millimetres, in addition to the complete system. The torque is 3 Nm (80) or 25 Nm (105) at 6 rpm, with a gear ratio of 50:1. The ReBeL is available on the RBTX online marketplace. There, users will find individual components, integration support, and hardware and software from now more than 40 partners, knowing that everything is 100 percent compatible; this includes a wide range of robot kinematics, cameras, software, grippers, power electronics, motors, sensors and controllers. For integration via online consultation with a fixed-price guarantee, the RBTXpert is available: on a 400-square-metre customer testing area, experts advise customers daily via live video and send solution proposals within hours. Typical hardware costs without integration start at EUR 8,500, and complete solutions at EUR 12,500. “We can tell we are making automation even more accessible: with our RBTXpert service we advise on more than 20 customer projects per week in Germany alone. We are therefore expanding the service by ten additional online consultants by the end of March. Internationally, the offering is already available in seven countries, with another 14 in preparation,” says Alexander Mühlens. “Out of this positive experience, the many completed projects and the numerous customer discussions, an exciting ecosystem of further services is currently developing.”

A universe for low-cost automation
In this low-cost automation universe, everything revolves around the individual customer application. The goal is to further simplify integration with new offerings and business models. “We will provide an app store in which low-cost automation vendors and independent software developers can contribute their software ideas. By using existing software, users can implement their automation even faster. This makes it possible to connect the robots to digital services such as IFTTT or smart assistants such as Alexa or Siri. A visitor to a coffee bar, for example, could then order their favourite coffee by voice, and the robot pours it. This opens up entirely new business models such as pay-per-pick, in which users pay not for the robot but only for its task. These new possibilities will lastingly change the robotics market as well as everyday life,” says Mühlens. “We want to give them a home with the low-cost automation universe.”

https://www.igus.de/info/build-or-buy-serviceroboter-rebel

Draper Teaches Robots to Build Trust with Humans – new research

New study shows methods robots can use to self-assess their own performance

CAMBRIDGE, MASS. (PRWEB) MARCH 08, 2022

Establishing human-robot trust isn’t always easy. Beyond the fear of automation going rogue, robots simply don’t communicate how they are doing, which can make it difficult for humans to establish a basis for trusting them.

Now, research is shedding light on how autonomous systems can foster human confidence in robots. Largely, the research suggests that humans have an easier time trusting a robot that offers some kind of self-assessment as it goes about its tasks, according to Aastha Acharya, a Draper Scholar and Ph.D. candidate at the University of Colorado Boulder.

Acharya said we need to start considering what communications are useful, particularly if we want to have humans trust and rely on their automated co-workers. “We can take cues from any effective workplace relationship, where the key to establishing trust is understanding co-workers’ capabilities and limitations,” she said. A gap in understanding can lead to improper tasking of the robot, and subsequent misuse, abuse or disuse of its autonomy.

To understand the problem, Acharya joined researchers from Draper and the University of Colorado Boulder to study how autonomous robots that use learned probabilistic world models can compute and express self-assessed competencies in the form of machine self-confidence. Probabilistic world models take into account the impact of uncertainties in events or actions in predicting the potential occurrence of future outcomes.

In the study, the world models were designed to enable the robots to forecast their behavior and report their own perspective about their tasking prior to task execution. With this information, a human can better judge whether a robot is sufficiently capable of completing a task, and adjust expectations to suit the situation.

To demonstrate their method, researchers developed and tested a probabilistic world model on a simulated intelligence, surveillance and reconnaissance mission for an autonomous uncrewed aerial vehicle (UAV). The UAV flew over a field populated by a radio tower, an airstrip and mountains. The mission was designed to collect data from the tower while avoiding detection by an adversary. The UAV was asked to consider factors such as detections, collections, battery life and environmental conditions to understand its task competency.
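The idea of computing competency from a probabilistic world model can be illustrated with a deliberately simplified Monte Carlo sketch (not the authors’ actual model): many rollouts of a stochastic mission simulation are run, and the fraction that succeed is reported as the robot’s self-assessed confidence. The battery and detection parameters below are invented for illustration:

```python
import random

def rollout(battery: float, detect_risk: float, steps: int = 20) -> bool:
    """One simulated mission: drain battery and risk detection each step.
    Succeeds if the UAV finishes all steps undetected with battery left."""
    for _ in range(steps):
        battery -= random.uniform(0.5, 2.0)   # stochastic energy use
        if battery <= 0:
            return False                       # dead battery: failure
        if random.random() < detect_risk:
            return False                       # detected by adversary
    return True

def self_confidence(battery: float, detect_risk: float, n: int = 5000) -> float:
    """Self-assessed competency: estimated probability of mission success."""
    return sum(rollout(battery, detect_risk) for _ in range(n)) / n

random.seed(1)
easy = self_confidence(battery=60.0, detect_risk=0.001)
hard = self_confidence(battery=20.0, detect_risk=0.05)
print(f"easy mission: {easy:.2f}, hard mission: {hard:.2f}")
```

A human operator seeing a low score before task execution could then re-task the robot, mirroring the expectation-adjustment the study describes.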

Findings were reported in the article “Generalizing Competency Self-Assessment for Autonomous Vehicles Using Deep Reinforcement Learning,” where the team addressed several important questions. How do we encourage appropriate human trust in an autonomous system? How do we know that self-assessed capabilities of the autonomous system are accurate?

Human-machine collaboration lies at the core of a wide spectrum of algorithmic strategies for generating soft assurances, which are collectively aimed at trust management, according to the paper. “Humans must be able to establish a basis for correctly using and relying on robotic autonomy for success,” the authors said. The team behind the paper includes Acharya’s advisors Rebecca Russell, Ph.D., from Draper and Nisar Ahmed, Ph.D., from the University of Colorado Boulder.

The research into autonomous self-assessment is based upon work supported by DARPA’s Competency-Aware Machine Learning (CAML) program.

In addition, funds for this study were provided by the Draper Scholar Program. The program gives graduate students the opportunity to conduct their thesis research under the supervision of both a faculty adviser and a member of Draper’s technical staff, in an area of mutual interest. Draper Scholars’ graduate degree tuition and stipends are funded by Draper.

Since 1973, the Draper Scholar Program, formerly known as the Draper Fellow Program, has supported more than 1,000 graduate students pursuing advanced degrees in engineering and the sciences. Draper Scholars are from both civilian and military backgrounds, and Draper Scholar alumni excel worldwide in the technical, corporate, government, academic, and entrepreneurship sectors.

Draper

At Draper, we believe exciting things happen when new capabilities are imagined and created. Whether formulating a concept and developing each component to achieve a field-ready prototype, or combining existing technologies in new ways, Draper engineers apply multidisciplinary approaches that deliver new capabilities to customers. As a nonprofit engineering innovation company, Draper focuses on the design, development and deployment of advanced technological solutions for the world’s most challenging and important problems. We provide engineering solutions directly to government, industry and academia; work on teams as prime contractor or subcontractor; and participate as a collaborator in consortia. We provide unbiased assessments of technology or systems designed or recommended by other organizations—custom designed, as well as commercial-off-the-shelf. Visit Draper at http://www.draper.com.

Robothon® – The Grand Challenge 2022 // Call for Teams

Dear Robothon® Community!

We, the Munich Institute of Robotics and Machine Intelligence (MIRMI) of the Technical University of Munich (TUM), in collaboration with Messe München and automatica, successfully launched a new high-tech platform called munich_i in 2021, an event bringing together the world’s leading thought leaders and personalities from AI and robotics.

munich_i will take place again at the next automatica, June 21-24, 2022, in Munich. Robothon®, the international competition for developing robot manipulation skills, will therefore go into its second round!

The Robothon® Grand Challenge series focuses on pressing, unsolved challenges of our time. In 2021 it was held digitally in the run-up to the automatica sprint, with 9 international teams and a renowned Grand Challenge jury. As a highlight, it ended with the award ceremony on June 22, 2021, with 4 winning teams, total prize money of € 22,500, great recognition and a growing community.

Are you a motivated robotics enthusiast looking for new challenges?

The CALL FOR TEAMS is open until March 31, 2022!

Apply HERE!

KEY FACTS:

  • Robothon® will once again be held digitally, from April 29 to June 1, 2022
  • Special highlight: the Award Ceremony will take place on-site on June 21, 2022, during automatica at the Messe München!

HOW IT WORKS: 

  • Robothon® will again focus on single-arm robot manipulation
  • The Grand Challenge 2022: disassembly and sorting of e-waste
  • The competition is free of charge 
  • Up to 20 selected teams can participate (2-4 members) 
  • All roboticists (academic and young professionals) are encouraged to apply
  • Teams will need to provide their own robot to complete the challenge remotely
  • Each team will receive an internet connected competition task board by mail
  • The one-month processing period starts upon receipt of the competition scorecard
  • Team performances will be evaluated by the Grand Challenge Jury 
  • Prize money awaits the finalists!

HAVEN’T SIGNED UP YET? Apply as a team by March 31, 2022, and visit our website www.robothon-grand-challenge.com to learn more.

Know someone who should participate? Please help spread the word!

Feel free to email us with any questions at [email protected].

With kind regards,

The Robothon® Team

Barbara Schilling & Peter So (Technical Leader)

Further development of IDS NXT ocean: focus on user-friendliness and AI transparency

All-in-one embedded vision platform with new tools and functions

(PresseBox) (Obersulm) At IDS, image processing with artificial intelligence does not just mean that AI runs directly on cameras and that users gain enormous design options through vision apps. Rather, with the IDS NXT ocean embedded vision platform, customers receive all the necessary, coordinated tools and workflows to realise their own AI vision applications without prior knowledge and to run them directly on IDS NXT industrial cameras. Now the next free software update for the AI package follows. In addition to user-friendliness, the focus is on making the artificial intelligence clear and comprehensible for the user.

An all-in-one system such as IDS NXT ocean, which has integrated computing power and artificial intelligence thanks to the “deep ocean core” developed by IDS, is ideally suited for getting started with AI vision. It requires no prior knowledge of deep learning or camera programming. The current software update makes setting up, deploying and controlling the intelligent cameras in the IDS NXT cockpit even easier. Among other things, it integrates an ROI editor with which users can freely draw the image areas to be evaluated and configure, save and reuse them as custom grids with many parameters. In addition, the new Attention Maps and Confusion Matrix tools illustrate how the AI in the cameras works and what decisions it makes. This clarifies the process and enables the user to evaluate the quality of a trained neural network and to improve it through targeted retraining. Data security also plays an important role in the industrial use of artificial intelligence. As of the current update, communication between IDS NXT cameras and system components can therefore be encrypted via HTTPS.
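A confusion matrix of the kind mentioned above simply counts, for each actual class, how often the network predicted each class; diagonal entries are correct classifications. A minimal, library-free sketch (the class names and inspection results are hypothetical):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    """Rows: actual class, columns: predicted class."""
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]

# Hypothetical inspection results: 'ok' vs 'defect'
actual    = ["ok", "ok", "defect", "ok", "defect", "defect"]
predicted = ["ok", "defect", "defect", "ok", "ok", "defect"]

for row in confusion_matrix(actual, predicted, labels=["ok", "defect"]):
    print(row)
# [2, 1]   two 'ok' parts classified correctly, one wrongly flagged as defect
# [1, 2]   one missed defect, two caught
```

Off-diagonal counts point directly at the class pairs where targeted retraining would help most.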

Just get started with the IDS NXT ocean Creative Kit

Anyone who wants to test the industrial-grade embedded vision platform IDS NXT ocean and evaluate its potential for their own applications should take a look at the IDS NXT ocean Creative Kit. It provides customers with all the components they need to create, train and run a neural network. In addition to an IDS NXT industrial camera with 1.6 MP Sony sensor, lens, cable and tripod adapter, the package includes six months’ access to the AI training software IDS NXT lighthouse. IDS is currently offering the set in a special promotion on particularly favourable terms. Promotion page: https://en.ids-imaging.com/ids-nxt-ocean-creative-kit.html.

Learn more: www.ids-nxt.com

Picked up and put off

Guest post by IDS Corporate Communications

Autonomously driving robotic assistance system for the automated placement of coil creels

With Industry 4.0, digitalisation, automation and the networking of systems and facilities are becoming the predominant topics in production and thus also in logistics. Industry 4.0 pursues the ongoing optimisation of processes and workflows in favour of productivity and flexibility, and thus the saving of time and costs. Robotic systems have become the driving force for automating processes. Through the Internet of Things (IoT), robots are becoming increasingly sensitive, autonomous, mobile and easier to operate, and more and more they are becoming everyday helpers in factories and warehouses. Intelligent imaging techniques play an increasingly important role in this.

To meet the growing demands of scaling and changing production environments on the way to fully automated and intelligently networked production, ONTEC Automation GmbH from Naila in Bavaria has developed an autonomously driving robotic assistance system. The “Smart Robot Assistant” uses the synergies of mobility and automation: it consists of a powerful and efficient intralogistics platform, a flexible robot arm and a robust 3D stereo camera system from the Ensenso N series by IDS Imaging Development Systems GmbH.

The solution is versatile and takes over monotonous, heavy set-up and placement tasks, for example. The autonomous transport system is suitable for floor-level lifting of Euro pallets up to container or industrial format, as well as mesh pallets in various sizes, with a maximum load of up to 1,200 kilograms. For a customer in the textile industry, the AGV (Automated Guided Vehicle) is used for the automated loading of coil creels. For this purpose, it picks up pallets with yarn spools, transports them to the designated creel and loads it for further processing. Using a specially developed gripper system, up to 1,000 yarn packages per 8-hour shift are picked up and pushed onto the mandrels of the creel. The size scheme and position of the coils are captured by an Ensenso 3D camera (N45 series) installed on the gripper arm.

Application

Pallets loaded with industrial yarn spools are picked up from the floor of a predefined storage place and transported to the creel location. There, the gripper positions itself vertically above the pallet. An image trigger is sent to the Ensenso 3D camera from the N45 series by the in-house software ONTEC SPSComm, which networks with the vehicle’s PLC and can thus read out and pass on data. In the application, SPSComm controls the communication between the software components of the vehicle, gripper and camera; this way, the camera knows when the vehicle and the gripper are in position for a shot. The camera then takes an image and passes a point cloud to an ONTEC software solution based on the standard HALCON software, which reports the coordinates of the coils on the pallet to the robot. The robot can then accurately pick up the coils and process them further. As soon as the gripper has cleared a layer of yarn spools, the Ensenso camera takes a picture of the packaging material lying between the layers and provides point clouds of this as well. These are processed in a similar way to give the robot the information it needs to remove the intermediate layers with a needle gripper. “This approach means that the number of layers and finishing patterns of the pallets do not have to be defined in advance, and even incomplete pallets can be processed without any problems,” explains Tim Böckel, software developer at ONTEC. “The gripper does not have to be converted for the use of the needle gripper. For this application, it has a normal gripping component for the coils and a needle gripping component for the intermediate layers.”
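The coil-localisation step could look roughly like the following sketch (a generic NumPy stand-in, not ONTEC’s HALCON-based solution): the top layer of the point cloud is isolated with a height threshold, nearby points are grouped greedily, and each group’s centroid is reported as a grip coordinate. All dimensions and the synthetic cloud are invented for illustration:

```python
import numpy as np

def coil_centres(points: np.ndarray, z_band: float = 0.03, radius: float = 0.15):
    """Estimate grip coordinates from an N x 3 point cloud (metres).

    Keep only points near the highest layer, then greedily group points
    within `radius` of a seed point and return each group's centroid."""
    top = points[points[:, 2] > points[:, 2].max() - z_band]
    centres = []
    while len(top):
        near = np.linalg.norm(top[:, :2] - top[0, :2], axis=1) < radius
        centres.append(top[near].mean(axis=0))
        top = top[~near]
    return centres

# Two synthetic spool tops at x = 0.2 and x = 0.6, plus pallet-floor points
rng = np.random.default_rng(0)
spool_a = rng.normal([0.2, 0.2, 0.50], [0.02, 0.02, 0.005], size=(200, 3))
spool_b = rng.normal([0.6, 0.2, 0.50], [0.02, 0.02, 0.005], size=(200, 3))
floor = rng.uniform([0.0, 0.0, 0.0], [0.8, 0.6, 0.02], size=(500, 3))
cloud = np.vstack([spool_a, spool_b, floor])

for c in coil_centres(cloud):
    print(np.round(c, 2))
```

Because only the highest layer is kept on each pass, the same routine works for full, partial and mixed pallets, which matches the flexibility Böckel describes.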

For this task, the mobile 3D acquisition of moving and static objects from the robot arm, the Ensenso 3D camera is well suited thanks to its compact design. The Ensenso N45’s 3D stereo electronics are completely decoupled from the housing, allowing a lightweight plastic composite to be used as the housing material; the low weight facilitates use on robot arms such as the Smart Robot Assistant. The camera also copes with demanding environmental conditions. “Challenges with this application lie primarily in the different lighting conditions found in different rooms of the hall and at different times of day,” Tim Böckel describes the situation. Even in difficult lighting, the integrated projector casts a high-contrast texture onto the object to be imaged by means of a pattern mask with a random dot pattern, supplementing the structures on featureless, homogeneous surfaces. The integrated camera thus meets the requirements exactly. “By pre-configuring within NxView, the task was solved well.” This sample program with source code demonstrates the main functions of the NxLib library, which can be used to open one or more stereo and colour cameras and visualise their image and depth data. Parameters such as exposure time, binning, AOI and depth measuring range can, as in this case, be adjusted live for the matching method used.

The matching process enables the Ensenso 3D camera to recognise a very large number of pixels, including their change in position, by means of the auxiliary structures projected onto the surface, and to create complete, homogeneous depth information of the scene from them. This in turn ensures the precision with which the Smart Robot Assistant proceeds. Other selection criteria for the camera included the standard Gigabit Ethernet vision interface and the global shutter 1.3 MP sensor. “The camera takes only one image pair of the entire pallet in favour of a faster throughput time, but it has to provide the coordinates from a relatively large distance with millimetre accuracy to enable the robot arm to grip precisely,” explains Matthias Hofmann, IT specialist for application development at ONTEC. “We therefore need the camera’s high resolution to reliably capture the edges of the coils with the 3D camera.” Locating the edges is important in order to pass the position of the spool centre to the gripper as accurately as possible.

Furthermore, the camera is specially designed for use in harsh environmental conditions. It has a screwable GPIO connector for trigger and flash and is IP65/67 protected against dirt, dust, splash water or cleaning agents.

Software

The Ensenso SDK enables hand-eye calibration of the camera to the robot arm, allowing coordinates to be translated easily using the robot pose. In addition, using the internal camera settings, a “FileCam” of the current situation is recorded at each pass, i.e. at each image trigger. This makes it possible to reproduce and analyse edge cases later, in this application for example unexpected lighting conditions, obstacles in the image or unexpected positioning of the coils. The Ensenso SDK also allows the camera’s internal log files to be stored and archived for later evaluation.
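What the hand-eye calibration result is used for can be shown in a small sketch: a fixed tool-to-camera transform, combined with the current robot pose, maps a point measured in camera coordinates into the robot base frame. The 4x4 matrices below are illustrative placeholders, not Ensenso SDK calls:

```python
import numpy as np

def to_base(p_cam: np.ndarray, T_base_tool: np.ndarray, T_tool_cam: np.ndarray) -> np.ndarray:
    """Map a camera-frame point into robot base coordinates.

    T_base_tool: current robot pose (4x4 homogeneous transform),
    T_tool_cam: fixed hand-eye calibration result (4x4)."""
    p = np.append(p_cam, 1.0)                # homogeneous coordinates
    return (T_base_tool @ T_tool_cam @ p)[:3]

# Example: camera mounted 10 cm in front of the tool flange, no rotation
T_tool_cam = np.eye(4)
T_tool_cam[2, 3] = 0.10
# Robot pose: tool at (0.5, 0.0, 0.8) in the base frame, no rotation
T_base_tool = np.eye(4)
T_base_tool[:3, 3] = [0.5, 0.0, 0.8]

print(to_base(np.array([0.0, 0.0, 0.45]), T_base_tool, T_tool_cam))
```

Because the tool-to-camera transform is constant, a single calibration lets every subsequent measurement be re-expressed through whatever pose the arm happens to be in.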

ONTEC also uses these FileCams to automatically check test cases and thus ensure that everything still works correctly when adjustments are made to the vision software. In addition, various vehicles can be coordinated and logistical bottlenecks minimised on the basis of a control system specially developed by ONTEC: different assistants can navigate and act simultaneously in a very confined space. Using the industrial interface tool ONTEC SPSComm, even standard industrial robots can be safely integrated into the overall application and data exchanged between the different systems.

Outlook

Further development of the system is planned, among other things, in terms of navigation of the autonomous vehicle. “With regard to vehicle navigation for our AGV, the use of IDS cameras is very interesting. We are currently evaluating the use of the new Ensenso S series to enable the vehicle to react even more flexibly to obstacles, for example to classify them and possibly even drive around them,” says Tim Böckel, software developer at ONTEC, outlining the next development step.

ONTEC’s own interface configuration already enables the system to be integrated into a wide variety of Industry 4.0 applications, while the modular structure of the autonomously moving robot solution leaves room for adaptation to a wide variety of tasks. In this way, it not only serves to increase efficiency and flexibility in production and logistics, but in many places also literally contributes to relieving the workload of employees.

More at: https://en.ids-imaging.com/casestudies-detail/picked-up-and-put-off-ensenso.html

Low-cost automation in XXL: large DIY palletiser from igus at a small price

The drylin XXL gantry robot is up to 60 percent cheaper than comparable solutions and particularly easy to commission

Cologne, 8 February 2022 – igus is expanding its broad low-cost automation range with a new drylin XXL gantry robot. The gantry has a working envelope of 2,000 x 2,000 x 1,500 millimetres and is particularly suitable for palletising applications up to 10 kilograms. The robot is available from 7,000 euros including the control system and, following the do-it-yourself principle, can be assembled and programmed by the user without the help of a system integrator.

The lubrication-free and maintenance-free drylin XXL gantry robot from igus lifts up to 10 kilograms and costs up to 60 percent less than comparable solutions. (Source: igus GmbH)

Too expensive to buy, too complex to program, too complicated to maintain: many small and medium-sized companies shy away from automation, and in the long term this puts their competitiveness at risk. Yet getting started can be quite easy, as the drylin XXL gantry robot from igus proves. The DIY kit gives companies a quick and straightforward way to commission a pick-and-place linear robot for palletising, sorting, labelling and quality-inspection tasks. "Palletising robots built together with external service providers quickly cost between 85,000 and 120,000 euros. That blows the budget of many small businesses," says Alexander Mühlens, Head of the Low-Cost Automation business unit at igus. "We have therefore developed a solution that is many times cheaper thanks to the use of high-performance plastics and lightweight materials such as aluminium. Depending on the configuration, the drylin XXL gantry robot costs between 7,000 and 10,000 euros: a low-risk investment that usually pays for itself within a few weeks."

The DIY kit can be assembled quickly and without prior knowledge

The customer receives the gantry as a DIY kit. It comprises two toothed-belt axes and one rack-and-pinion cantilever axis with stepper motors, spanning a working envelope of 2,000 x 2,000 x 1,500 millimetres; at maximum length, up to 6,000 x 6,000 x 1,500 millimetres is possible. The package also includes a control cabinet, cables and energy chains as well as the free igus Robot Control (iRC) software. Users can assemble the components into a ready-to-operate linear robot in a few hours, without external help, prior knowledge or a long familiarisation period. And if additional components such as camera systems or grippers are needed, users will quickly find them on the RBTX robotics marketplace.

Automation relieves employees

The Cartesian robot is used, for example, on conveyor belts that carry products away from injection-moulding machines. There, the robot picks items weighing up to 10 kilograms from the belt, transports them at speeds of up to 500 mm/s and positions them on a pallet with a repeatability of 0.8 millimetres. "Thanks to this automation, companies can relieve their employees of physically strenuous and time-consuming palletising tasks and free up resources for more important work." The system itself requires no maintenance: the linear axes are made of corrosion-free aluminium, and the carriages run on plain bearings made of high-performance plastic whose integrated solid lubricants enable low-friction dry running for many years without external lubricants, even in dusty and dirty environments.
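A rough palletising cycle-time estimate can be derived from the stated axis speed of 500 mm/s. The acceleration, gripper timing and travel distance below are illustrative assumptions, not igus specifications:

```python
# Rough cycle-time estimate for one pick-and-place move.
# The 500 mm/s axis speed is from the article; acceleration and
# gripper times are illustrative assumptions.

AXIS_SPEED_MM_S = 500.0
ACCEL_MM_S2 = 1000.0   # assumed axis acceleration
GRIP_RELEASE_S = 1.0   # assumed time to grip plus release

def move_time(distance_mm: float) -> float:
    """Time for one move with a symmetric trapezoidal velocity profile."""
    accel_dist = AXIS_SPEED_MM_S ** 2 / ACCEL_MM_S2  # distance spent ramping up and down
    if distance_mm < accel_dist:
        # Triangular profile: the axis never reaches full speed.
        return 2.0 * (distance_mm / ACCEL_MM_S2) ** 0.5
    cruise = (distance_mm - accel_dist) / AXIS_SPEED_MM_S
    return 2.0 * AXIS_SPEED_MM_S / ACCEL_MM_S2 + cruise

# Assumed 1,500 mm from conveyor to pallet, there and back:
cycle_s = 2 * move_time(1500.0) + GRIP_RELEASE_S
print(round(cycle_s, 2))  # 8.0 seconds per pick-and-place cycle
```

Estimates of this kind are how a payback period of "a few weeks" can be sanity-checked against the number of parts a shift actually needs palletised.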

A digital 3D twin of the robot makes programming child's play

Not only assembly but also the programming of motion sequences poses no barrier to entry. "For many companies without their own IT specialists, programming robots is often fraught with problems," says Mühlens. "That is why we developed iRC, free software whose look and feel recalls widely used office software and which enables intuitive programming of movements. The special feature: the software is free of charge, and the resulting low-code program can then be used 1:1 on the real robot." At the heart of the software is a digital twin of the gantry, which can be used to define movements with just a few clicks, even before the robot is in operation. "Before buying, prospective customers can use the 3D model to check whether the desired movements are actually feasible. We also invite anyone interested to try out our robots free of charge, live or over the internet. We support them during commissioning and show what is possible with low-cost robots. This makes the investment almost risk-free."

The Evolution of Robo-Dogs

Sophie writes on behalf of Panda Security covering cybersecurity and online safety best practices for consumers and families. Specifically, she is interested in removing the barriers of complicated cybersecurity topics and teaching data security in a way that is accessible to all. Her most recent piece is on the evolution of robotic dogs and where they're headed next.

Robots have been a point of fascination and study for centuries as researchers and inventors have sought to explore the potential for automated technology. While there’s a long history of the development and creation of autonomous machines, mobile, quadrupedal robots — or four-legged robotic dogs — have seen a significant boom in the last few decades. 

The development of quadrupedal robots stems from the need for mobile robots that can explore dangerous or unstructured terrain. Compared to other mobile robots (such as wheeled or bipedal/two-legged robots), quadrupedal robots offer superior locomotion in terms of stability, control and speed.

The capabilities of quadrupedal robots are being explored in a variety of fields, from construction and entertainment to space exploration and military operations. Today, modern robotic dogs can be purchased by businesses and developers to complete tasks and explore environments deemed too dangerous for humans. Read on for the evolution of robotic dogs and where they might be headed in the future. 

1966: Phony Pony

Although it technically mirrored the form of a horse, the Phony Pony was the first autonomous quadrupedal robot to emerge in the U.S., and it set the precedent for the robotic dogs that followed. Equipped with electrical motors, the Phony Pony had two degrees of freedom, or joints, in each leg (the hip and the knee) and one adaptive joint in the frontal plane. The hip and knee joints were identical, allowing for both forward and backward walking movements.

The Phony Pony was capable of crawling, walking and trotting, albeit at a very slow speed. Thanks to its spring-restrained “pelvic” structure, it was able to maintain static vertical stability during movement. Since the Phony Pony was developed before the advent of microprocessors, it could only be controlled through cables connected to a remote computer in an adjacent building.  

Developer: Frank and McGhee

Use: Initial research and development of autonomous quadrupeds 

1999: AIBO

In the late 1990s, Sony’s AIBO  — one of the most iconic and advanced entertainment robotic dogs — hit the market. While the AIBO (Artificial Intelligence RoBOt) was constructed for entertainment purposes, its machinery is still highly complex. 

Developed with touch, hearing, sight and balancing capabilities, it can respond to voice commands, shake hands, walk and chase a ball. It can also express six “emotions”: happiness, sadness, fear, anger, dislike and surprise. Its emotional state is expressed through tail wagging, eye color changes and body movements, as well as through a series of sounds including barks, whines and growls. Today, the AIBO has been used across many research groups for the purpose of testing artificial intelligence and sensory integration techniques.

Developer: Sony

Use: Toys and entertainment

2005: BigDog

Boston Dynamics has become a leader in the world of robotics, specifically in their development of canine-inspired quadrupeds. Their first robotic dog, dubbed BigDog, arrived in 2005. Measuring three by two feet and weighing in at 240 pounds, BigDog was designed to support soldiers in the military. It can carry 340 pounds, climb up and down 35-degree inclines and successfully hike over rough terrain.

Each of BigDog’s legs has a passive linear pneumatic compliance — a system that controls contact forces between a robot and a rigid environment — and three active joints in the knees and hips. The robot is powered by a one-cylinder go-kart engine, and its dynamic regulating system allows it to maintain balance. Its movement sensors cover joint position, joint force, ground contact and ground load, complemented by a stereo vision system.

In 2012, developers were still working to refine BigDog’s capabilities before plans to officially deploy it to military squads. However, the project was discontinued in 2015 after concluding its gas-powered engine was too noisy to be used in combat. 

Developer: Boston Dynamics

Use: Assist soldiers in unsafe terrains 

2009: LittleDog 

Four years after BigDog came LittleDog, Boston Dynamics’ smallest quadrupedal robot to date. LittleDog was developed specifically for research purposes to be used by third parties investigating quadrupedal locomotion. 

Each of LittleDog’s legs is powered by three electric motors fed by lithium polymer batteries, giving a maximum operating time of thirty minutes. LittleDog maintains a large range of motion and is capable of climbing, crawling and walking across rocky terrain. A PC-level computer mounted on top of LittleDog handles its movement sensors, controls and communications. It can be controlled remotely and includes data-logging support for data analysis.

Developer: Boston Dynamics

Use: Research on locomotion in quadrupeds 

2011: AlphaDog Proto

Continuing their efforts to develop military-grade robots, Boston Dynamics released AlphaDog Proto in 2011. Powered by a hydraulic actuation system, AlphaDog Proto is designed to support soldiers in carrying heavy gear across rocky terrains. It’s capable of carrying up to 400 pounds for as far as 20 miles, all within the span of 24 hours, without needing to refuel. 

AlphaDog Proto is equipped with a GPS navigation and computer vision system that allows it to follow soldiers while carrying their gear. Thanks to an internal combustion engine, AlphaDog Proto proved to be quieter than its predecessor BigDog, making it more suitable for field missions. 

Developer: Boston Dynamics

Use: Assist soldiers in carrying heavy gear over unsafe terrains

2012: Legged Squad Support System (LS3)

Boston Dynamics’ development of the Legged Squad Support System (LS3) came soon after the creation of BigDog in their efforts to continue refining their quadrupedal robots for soldiers and Marines. LS3 was capable of operating in hot, cold, wet and otherwise unfavorable conditions. It contained a stereo vision system with a pair of stereo cameras, which were mounted inside the robot’s head. This operated in conjunction with a light-detecting and ranging unit that allowed it to follow a soldier’s lead and record feedback obtained from the camera. 

Compared to BigDog, LS3 was around ten times quieter in some operating modes, walked faster at one to three miles per hour, jogged at five miles per hour and could run across flat surfaces at seven miles per hour. It could also respond to ten voice commands, a practical feature for soldiers too preoccupied with a mission to use manual controls.

Five years into development, LS3 had successfully been refined enough to be able to operate with Marines in a realistic combat exercise and was used to resupply combat squads in locations that were difficult for squad vehicles to reach. By 2015, however, the LS3 was shelved due to noise and repair limitations. While the Marines were ultimately unable to use the LS3 in service, it provided valuable research insights in the field of autonomous technology. 

Developer: Boston Dynamics

Use: Assist soldiers in carrying heavy gear over unsafe terrains

2016: Spot 

Spot is the next creation in Boston Dynamics’ line of quadrupedal robots, marking a shift away from strictly military quadrupeds toward commercial use. Spot is significantly smaller than their previous models, weighing just 160 pounds. It is capable of exploring rocky terrain, avoiding objects in its path during travel and climbing stairs and hills.

Spot’s hardware is equipped with powerful control boards and five sensor units on all sides of its body that allow it to navigate an area autonomously from any angle. Twelve custom motors power Spot’s legs, reaching speeds of up to five feet per second and operating for up to 90 minutes. Its sensors can capture spherical images and also enable mobile manipulation for tasks such as opening doors and grasping objects. Spot’s control methods are far more advanced than those of Boston Dynamics’ earlier robots, allowing for autonomous control in a wider variety of situations.

Developer: Boston Dynamics

Use: Documenting construction process and monitoring remote high-risk environments 

2016: ANYmal

While Boston Dynamics had been the main leader in quadrupedal robots since the early 2000s, Swiss robotics company ANYbotics came out with its own iteration of the robotic dog in 2016. Positioned as an end-to-end robotic inspection solution, ANYmal was developed for industrial use, specifically the inspection of unsafe environments like energy and industrial plants. 

ANYmal is mounted with a variety of laser inspection sensors to provide visual, thermal and acoustic readings. Equipped with an on-board camera, it’s capable of remote panning and tilting settings to adjust views of the inspection site. ANYmal is capable of autonomously perceiving its environment, planning its navigation path and selecting proper footholds during travel. It can even walk up stairs and fit into difficult-to-reach areas that traditional wheeled robots can’t.

ANYmal has undergone a handful of development iterations since 2016 and is available for purchase as of 2021. ANYbotics is currently working on an upgraded version of the robot suitable for potentially explosive environments. 

Developer: ETH Zurich and ANYbotics

Use: Remote inspection of unsafe environments

2021: Vision 60 

One of the latest developments in quadrupedal robots is Ghost Robotics’ Vision 60 robotic dog, which has recently been tested at the U.S. Air Force’s Scott Air Force Base in Illinois as part of its one-year pilot testing program. Built to mitigate risks faced by Air Force pilots, Vision 60 features a rifle mounted on its back contained in a gun pod and is equipped with sensors that allow it to operate in a wide variety of unstable terrains. It’s also capable of thermal imaging, infrared configuration and high-definition video streaming. 

Vision 60 can carry a maximum of 31 pounds and can travel at up to 5.24 feet per second. It’s considered a semi-autonomous robot due to its accompanying rifle; while it can accurately line up with a target on its own, it can’t open fire without a human operator (in accordance with the U.S. military’s autonomous systems policy prohibiting automatic target engagement).

Developer: Ghost Robotics

Use: Military and Homeland Security operations

2021: CyberDog

With more companies embracing the development of quadrupeds, Xiaomi Global followed suit and released its own version, named CyberDog. CyberDog is an experimental, open-source robot promoted both as a human-friendly companion and as an asset for law enforcement and the military. CyberDog is sleeker and smaller than its robotic dog predecessors, carrying a payload of just 6.6 pounds and running at over 10 feet per second.

CyberDog is equipped with multiple cameras and image sensors located across its body, including touch sensors and an ultra-wide fisheye lens. CyberDog can hold 128 gigabytes of storage and is powered by Nvidia’s Jetson Xavier AI platform to perform real-time analyses of its surroundings, create navigation paths, plot its destination and avoid obstacles. CyberDog can also perform backflips and respond to voice commands thanks to its six microphones. 

By making CyberDog an open-source project, Xiaomi hopes to expand its reach into the future of robot development and innovation. Its open-source nature is meant to encourage robotics enthusiasts to try their hand at writing code for CyberDog, giving the project more exposure and bolstering Xiaomi’s reputation in the robotics community. 

Developer: Xiaomi Global

Use: An open-source platform for developers to build upon 

While the market for quadrupedal robots is still in its early stages, interest is steadily growing across a wide range of industries. As for fears of robots displacing traditionally human-led jobs, these machines are intended to support humans in their work rather than replace them outright.

On the other hand, the privacy concerns associated with robots should not be ignored. As with any tech-enabled device, hacking is always possible, especially for open-source robotic models, and can put users’ personal information at risk. This applies not only to the quadrupeds discussed above, but also to more common commercial systems like baby monitors, security systems and other WiFi-connected devices. It’s important to ensure your home network is as strong and secure as possible, for example with a home antivirus platform.

Crawling through danger: robot-controlled maintenance operations with igus e-chains on board

The "Crawler" takes on life-threatening maintenance work in pipelines, equipped with energy chains from igus

Cologne, 11 January 2022 – It does work that would put human lives at risk: the remote-controlled robot "Crawler" refurbishes the inner walls of pipelines in power plants. This automation succeeds reliably not least because the power and data cables are protected from the harsh ambient conditions. For this, the US manufacturer Remote Orbital Installations LLC relies on its collaboration with igus.

How dangerous work in pipelines can be is shown by a tragic accident that occurred in 2007 at a power plant in Colorado. As workers applied an epoxy coating inside a penstock that carries water to the turbines, a flammable solvent ignited; five people lost their lives. It is a fate that Remote Orbital Installations LLC and Big Sky Engineering want to spare other workers. The two US companies therefore developed the Crawler, a remote-controlled robot platform on four wheels that drives through pipelines, cleans the inner walls with a blasting tool and relines them with epoxy resin. Meanwhile, the workers remain safe at the other end of the control line. Before the Crawler was ready for use, however, the engineers had to overcome several challenges. Among other things, they had to find components that could withstand the dirty, damp environment inside the pipeline; otherwise, failures could force workers to enter the pipe and put themselves in danger. These components included guides to protect the robot's power and data cables from damage caused by uncontrolled movements.

"The longevity of the igus products was crucial to the success"

The US engineers found what they were looking for on the other side of the Atlantic, at igus. Energy chains from the plastics-in-motion specialist guide power and data cables at several points. They guide, for example, the cables of the mechanism that adjusts the Crawler's wheel width to the pipe diameter, as well as the cables of the boom responsible for the height adjustment of the tools. Hanging cables are secured by enclosed energy chains of the triflex series, whose twistable chain links permit controlled, three-dimensional movements. All energy chains in the Crawler are made of a wear-resistant, robust, chemical-resistant and corrosion-free high-performance plastic that reliably withstands even the most adverse environments for years. "The longevity of the igus products was crucial to the Crawler's success," confirms engineer Mike Kronz. "We have completed several projects without a single failure."

Bold energy chain solutions: registration for the vector award 2022

The Crawler robot is one example of an energy chain application that combines economic efficiency with creative courage. igus is looking for comparable applications for the vector award 2022, a competition that honours unique energy supply solutions. A jury of experts from science, industry and the trade media will select the winners at Hannover Messe 2022. The prize money for the golden vector award is 5,000 euros. The closing date for entries is 11 February 2022.