A total of 35 teams will face each other in two competitions
The winners can look forward to a spot at the VEX Robotics World Championship in Dallas
Hamburg, February 2024: The German finals of the VEX robotics competitions take place next week. Around 150 students from general and vocational schools across Germany will meet at the Hochschule für Angewandte Wissenschaften Hamburg (HAW Hamburg) to find out which of the robots they have built solves the given tasks best. The worldwide competitions of the US-based Robotics Education & Competition (REC) Foundation are organized in Germany by the Hamburg association roboMINT.
The competition categories
The VEX Robotics Competition (VRC) is open to students aged eleven and up. A team consists of at least two students and competes in alliances against other teams. The aim of a match is, among other things, to get as many Triballs as possible into the team's own goal or its own Offensive Zone.
The VEX IQ Challenge is open to students aged eight to 15. A team consists of at least two students and competes together with another team. The aim of the game is, among other things, to move as many blocks as possible into goals. Points are also awarded if the robot is parked in the "Supply Zone" at the end of a match.
Through the German Masters, participants can qualify for the VEX Worlds, held from April 25 to May 3 in Dallas (Texas, USA) with 1,000 teams from 50 countries.
German Masters
Venue: HAW Hamburg
Berliner Tor 21, Aula (auditorium)
Wednesday, March 6: VRC, Qualification 1 starts at 12:30 p.m.
It all began in the 2017/2018 season, when roboMINT, together with dEin Labor, the student lab at TU Berlin, held the first VEX Robotics student competitions in Germany. The first team to qualify for the annual "World Championship" in the USA came from the Heinitz-Gymnasium in Rüdersdorf. There are now several regional qualifiers across Germany as well as two "Nationals" (VIQC and VRC). Currently, a total of seven teams from Germany can qualify for the "World Championship" in Dallas each season.
roboMINT supports and coordinates the VEX robotics competitions held throughout Germany. The association informs and supports the participating teams, their coaches and the regional organizers. Its goal is to promote STEM education in Germany.
I am passionate about technology and robotics. Here in my own blog, I am always taking on new tasks. But I have hardly ever worked with image processing. However, a colleague's LEGO® MINDSTORMS® robot, which can recognize the rock, paper or scissors gestures of a hand with several different sensors, gave me an idea: "The robot should be able to 'see'." Until now, the respective gesture had to be made at a very specific point in front of the robot in order to be reliably recognized. Several sensors were needed for this, which made the system inflexible and dampened the joy of playing. Can image processing solve this task more "elegantly"?
From the idea to implementation
In my search for a suitable camera, I came across IDS NXT – a complete system for intelligent image processing. It fulfilled all my requirements and, thanks to artificial intelligence, could do much more than pure gesture recognition. My interest was piqued, especially because the evaluation of the images and the communication of the results took place directly on or through the camera – without an additional PC! In addition, the IDS NXT Experience Kit came with all the components needed to start using the application immediately – without any prior knowledge of AI.
I took the idea further and began to develop a robot that would eventually play the game "Rock, Paper, Scissors" – with a flow similar to the classic game: the (human) player is asked to perform one of the familiar gestures (scissors, rock, paper) in front of the camera. At this point, the virtual opponent has already chosen its gesture at random. The move is evaluated in real time and the winner is displayed.
The first step: Gesture recognition by means of image processing
But a few intermediate steps were necessary to get there. I began by implementing gesture recognition using image processing – new territory for me as a robotics fan. However, with the help of IDS Lighthouse – a cloud-based AI vision studio – this was easier to realize than expected. There, ideas evolve into complete applications: neural networks are trained with application images that carry the necessary product knowledge – in this case, the individual gestures from different perspectives – and are packaged into a suitable application workflow.
The training process was straightforward: after taking several hundred pictures of my hands showing rock, scissors or paper gestures from different angles and against different backgrounds, I simply used IDS Lighthouse's step-by-step wizard. The first trained AI was already able to recognize the gestures reliably, for both left- and right-handers, with a recognition rate of approx. 95%. Probabilities are returned for the labels "Rock", "Paper", "Scissor" and "Nothing". A satisfactory result. But what happens now with the data obtained?
Further processing
The further processing of the recognized gestures will be done by a specially created vision app. For this, the captured image of the respective gesture – after evaluation by the AI – is passed on to the app. The app "knows" the rules of the game and can thus decide which gesture beats which; it then determines the winner. In the first stage of development, the app will also simulate the opponent. All this is currently in the making and will be implemented in the next step on the way to a "Rock, Paper, Scissors"-playing robot.
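To make that decision step concrete, here is a minimal Python sketch of the game logic. It is not the actual vision app: the camera and AI side is replaced by a hypothetical dictionary of label probabilities (as returned for "Rock", "Paper", "Scissor" and "Nothing" above), the confidence threshold is an assumed value, and the opponent is simulated with a random choice, just as planned for the first development stage.

```python
import random

# Which gesture beats which: each key beats its value.
BEATS = {"Rock": "Scissor", "Scissor": "Paper", "Paper": "Rock"}

# Assumed cut-off below which a detection is treated as "Nothing".
CONFIDENCE_THRESHOLD = 0.8


def pick_gesture(probabilities: dict) -> str:
    """Reduce a classification result to a single gesture.

    `probabilities` maps each label to its probability, e.g. as produced by
    the trained network. Returns "Nothing" if the most likely label is
    "Nothing" or the classifier is not confident enough.
    """
    label, confidence = max(probabilities.items(), key=lambda item: item[1])
    if label == "Nothing" or confidence < CONFIDENCE_THRESHOLD:
        return "Nothing"
    return label


def play_round(player_probabilities: dict) -> str:
    """Evaluate one round: recognized player gesture vs. a simulated opponent."""
    player = pick_gesture(player_probabilities)
    if player == "Nothing":
        return "No valid gesture recognized, please try again."

    opponent = random.choice(("Rock", "Paper", "Scissor"))  # simulated opponent
    if player == opponent:
        return f"Draw: both played {player}."
    winner = "Player" if BEATS[player] == opponent else "Robot"
    return f"Player: {player}, Robot: {opponent} -> {winner} wins."


if __name__ == "__main__":
    # Hypothetical classifier output for a clearly shown "Paper" gesture.
    example = {"Rock": 0.02, "Paper": 0.95, "Scissor": 0.02, "Nothing": 0.01}
    print(play_round(example))
```

Because the three gestures beat each other in a cycle, a small lookup table is enough to decide each round; the real vision app would only need to feed the classifier's probabilities into a function like this.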
From play to everyday use
At first, the project is more of a gimmick. But what could come out of it? A gambling machine? Or maybe even an AI-based sign language translator?
Qviro, one of the leading robotics platforms, introduces a groundbreaking marketplace, offering unparalleled transparency and choice. Users can effortlessly compare the full robotics market and access a vast selection of 211 cobots.
The platform ensures transparent pricing, allowing buyers access to all cobot prices on Qviro. For added assistance, it provides an average cobot price of €27,158. Additionally, Qviro includes 400+ user reviews for informed decisions.
In the cobot category, Universal Robots leads with a 4.6 rating from over 41 user reviews. Their products excel in ease of use and integration, favored by engineers and enthusiasts.
For budget-conscious buyers, Elephant Robotics and Wlkata offer educational robots starting at $599. They provide cost-effective solutions for educational and hobbyist projects. Find Elephant Robotics' products at Elephant Robotics Products and Wlkata's at Wlkata Products.
Sven De Donder, Co-CEO of Qviro, said, "Our user base in Europe and North America is growing exponentially due to unmatched transparency."
Qviro transforms the robotics buying experience, offering an all-in-one solution for enthusiasts and professionals. With diverse options, transparent pricing, and a supportive user community, Qviro meets all your robotics needs.
About Qviro:
Qviro is a Belgium-based startup that is revolutionising the procurement process of industrial technology such as robots and machines through digitization. The company’s review platform, Qviro.com, provides factories and engineers with valuable insights and customer feedback to make confident purchasing decisions. At the same time, it offers vendors market intelligence and data to help them better understand their potential customers. As a SaaS platform, Qviro is dedicated to providing exceptional customer experiences and innovative solutions that drive growth and progress in the industry. To learn more about Qviro, visit www.Qviro.com.
Soft robotics represents a groundbreaking advancement in the field, standing apart from the rigid structures people usually associate with traditional robotic systems. Learn more about recent advances in this field and its many benefits.
The Era of Soft Robots
Nature and biology heavily influence soft robots, giving them the flexibility and ability to adapt to their surroundings. For example, some commercially available soft robotic designs mimic fish, octopi and worms.
Innovative materials such as shape-memory alloys, dielectric elastomers and liquid crystal elastomers are critical to soft robotics. These materials change their properties in response to various stimuli. Grippers on soft robots, made of high-tech elastomers, mold to the target object’s shape. This flexibility ensures a gentler and more adaptable grip than rigid robots, making them ideal for tasks like fruit picking.
Soft robots also use self-healing materials made from shape-memory alloys. These alloys allow the robots to repair themselves after damage, increasing their operational life span and reducing maintenance needs.
As technology progresses, scientists outfit soft robots with sensory systems, enhancing their ability to understand their surroundings. For example, soft pressure sensors can tell a robot if it’s gripping too hard. Some researchers are even developing soft robots capable of working in swarms, emulating the behavior of fish, bees and birds.
3D printing, a form of advanced manufacturing, has revolutionized how scientists design and produce intricate soft robotic parts, driving innovation and accessibility in this sector. Some robots incorporate the strengths of both rigid and soft systems, resulting in hybrids that offer improved strength, precision and flexibility. Instead of traditional motors, there is a growing trend toward fluidic actuation: robots use liquids or air to move, making their motions more natural.
Soft Robotics in Medicine
Robotics is revolutionizing various aspects of modern medicine. In rehabilitation and physiotherapy, soft robotic exosuits or exoskeletons support patients recovering from strokes, spinal cord injuries or surgeries. These devices gently guide and assist patients, helping them regain motor functions, relearn movements and restore strength.
In assistive medical devices, soft wearable robots are emerging to help those with mobility issues. The Wyss Institute at Harvard University developed a soft, wearable robotic glove that assists individuals with hand disabilities in performing day-to-day activities. This glove, made from soft elastomers, can assist in gripping objects, potentially improving rehabilitation outcomes.
Scientists at the City University of Hong Kong developed a soft robot capable of maneuvering inside the stomach and intestine. The robot can change shape and size, facilitating better imaging and allowing localized drug delivery or biopsies.
A collaboration between Boston Children’s Hospital and Harvard University resulted in a soft robotic sleeve that surgeons can place around the heart. This device helps the heart pump more efficiently in patients with heart failure, providing a potential alternative to organ transplants.
In diagnostics, soft robots simplify procedures like endoscopy, making them less invasive and more patient-friendly. Patients can now swallow endoscopy capsules equipped with a camera and a tissue collection mechanism to get the same results traditionally obtained by putting patients under general anesthesia.
Research teams at institutes like the Sant’Anna School of Advanced Studies in Italy have been working on developing soft robotic arms that can assist surgeons. Due to their soft and pliant design, these arms can navigate the body with minimal risk of damaging tissues or organs.
Soft Robotics in Marine Conservation
Equipped with sensors, soft robots can monitor water quality, track marine species and evaluate the health of habitats over prolonged periods. Their non-intrusive nature and versatility enable them to probe areas inaccessible to traditional robots. MIT's Computer Science and Artificial Intelligence Laboratory developed a soft robotic fish named "SoFi" that can swim naturally in the ocean, recording close-up videos of marine life and providing insights without alarming or disturbing the aquatic life.
Soft robots also offer the potential for marine clean-up efforts, such as removing pollutants like microplastics and oil spills. The WasteShark, developed by RanMarine Technology, is an autonomous surface vessel (ASV) designed to "eat" or collect trash in harbors and other waters close to the shore. This drone skims the water's surface, collecting waste in its path, thereby aiding in marine clean-up.
The Ocean Exploration Trust's E/V Nautilus expeditions have been using remotely operated vehicles (ROVs) to explore and map uncharted coral reefs, helping scientists understand their structures, the species they harbor and their overall health. Similar soft robots can be deployed to plant sea grass and maintain coral reefs.
ROVs like the Hercules, also from the E/V Nautilus expedition, have robotic arms that can collect geological and biological samples from the deep sea that can help scientists study ecosystems in abyssal regions, leading to new species discoveries and insights into deep-sea conservation needs.
The Challenges Ahead
Soft robotics faces challenges, but its vast potential is undeniable. A primary focus lies in developing innovative materials that combine durability, flexibility and responsiveness. While traditional actuators, like motors, aren’t suitable for soft robots, alternatives like pneumatic and hydraulic systems are on the rise, promising unparalleled autonomy.
Manufacturing these robots at scale is now more feasible due to advanced construction techniques and materials. Even as these robots retain flexibility, integrating crucial rigid components, like batteries, is becoming smoother. The scientific community aims to enhance the response times of soft actuation mechanisms to match or exceed traditional systems.
Safety remains a top priority in soft robotics, especially in applications involving humans or medical scenarios. Although researchers in the field recognize the higher initial research and production costs, they believe ongoing advancements will reduce expenses.
Guest article by Ellie Gabel. Ellie is a writer living in Raleigh, NC. She's passionate about keeping up with the latest innovations in tech and science. She also works as an associate editor for Revolutionized.
With the development of AI, robots have become a lot smarter. A quick Google or YouTube search will reveal many cases of people using advanced robots. For example, there are videos of robots stocking shelves in factories or, even more impressive, of the Ocean One robot, an advanced humanoid that explores shipwrecks and plane crashes.
These videos make many wonder how far we are from using such robotics in everyday life. Learn what today’s robots are capable of, what potential challenges need to be solved and if humanoids are ready for daily life.
3 Humanoid Robots Helping Humans Today
One reason advanced humanoid robots are in demand is their ability to handle dangerous and repetitive operations. This frees up humans to focus on other essential, safer tasks. Current AI robots such as humanoids and cobots are already assisting humans by completing various tasks — bomb disposal, surgery, packing items in grocery stores, self-driving vehicles and much more.
One industry that frequently utilizes AI robots is the manufacturing sector. They mostly complete repetitive assignments such as packing items, material handling, assembly and welding. This speeds up production time and allows humans to tackle more complex or demanding tasks. Here are three different humanoid robots helping people.
Digit
Agility Robotics has developed a humanoid robot well-suited for many tedious operations. The humanoid, called Digit, has fully functional limbs, making it excellent at unloading packages from trailers and delivering them. Digit is equipped with sensors in its torso that help it navigate complex environments with ease.
Nadine
Nadine is a realistic-looking social humanoid robot with various facial expressions and movements. She was developed in Singapore by researchers from the Nanyang Technological University. Nadine can recognize different gestures, faces and objects, and is able to perform various social tasks associated with customer service.
Promobot
Promobot is a humanoid that is suitable for many different service-oriented roles. In hotels, Promobot can recognize guests, print receipts, issue keycards and check guests in. This humanoid is customizable and can even work as a medical assistant, measuring blood oxygen and blood sugar levels.
Are Humanoids Ready for Daily Life?
Today’s humanoids are undoubtedly impressive, but AI robots have yet to reach the level of generative artificial intelligence — an advanced form of AI capable of holding detailed conversations when prompted. Many companies aim to combine generative AI with advanced robotics to make it more applicable for a wider variety of use cases.
Since most AI machines are developed for single tasks, they tend to struggle when taking on multiple operations simultaneously. In other words, they aren't very good at multitasking. This complex aspect would need to be addressed for AI robots to become a reality in daily life. The most advanced form of AI robots available today are self-driving cars, which have a long way to go before they are truly self-driving.
It is the same with humanoid robots. Although many of the AI robots available are amazing, it is clear there are still advancements needed, especially in the case of processing abilities. AI robots will need to understand a wide variety of interactions no matter how they are carried out — voice, keyboard commands, hand gestures and sometimes even facial expressions.
For AI humanoids to be applicable in daily life, humans need a deeper understanding of how they operate — training might be required.
Potential Challenges to Overcome With Future Humanoids
One of the biggest problems with AI humanoids today is their battery life. They can usually only work for an hour or two and then require charging. While the goal would be to use them for multiple hours on end, another approach might be to increase the battery life by a few hours and add fast charging.
In terms of complex and challenging tasks, many humanoids and cobots are quite advanced and can solve them with relative ease. However, this usually means they fall short in other areas, such as movement. In most cases, a humanoid has either advanced movement or impressive processing abilities, but not both.
In addition, the technology today's humanoids use will also need further improvements. Better sensing capabilities, such as depth cameras and voice and visual sensors, are necessary to make them more applicable in modern life. For humanoids to become more widely used, their movement and processing abilities require further refinement.
Humanoids also need to operate safely and effectively while working with multiple humans at the same time. The robot will need to comprehend numerous interactions with different people simultaneously to react appropriately. The current training methods used with humanoids today are slow and would need further refinements to make them available for daily life.
Humanoid Robots Still Have a Long Way to Go
The advance of technology and AI is astounding, especially when combined to create robots that assist humans with numerous tasks. However, there are still a few areas where humanoids need refinement to become suitable for everyday use. Undoubtedly, humans will benefit significantly from utilizing advanced AI robotics in their daily lives, but for this to become a reality, humanoids still have a long way to go.
Guest article by Ellie Gabel. Ellie is a writer living in Raleigh, NC. She's passionate about keeping up with the latest innovations in tech and science. She also works as an associate editor for Revolutionized.
Sebastian Trella from Robots-Blog had the opportunity to conduct a short interview with Roy Barazani from "the OffBits".
Roy Barazani from "the OffBits"
· Robots-Blog: Who are you and what is your job at the OffBits?
Roy: I am Roy Barazani – The OffBits' founding father.
· Robots-Blog: What inspired you to create the OffBits toy?
Roy: I guess I’ve always been fascinated by the idea of re-design. Even as a child I used to break my toys apart and try to rebuild them in new and different ways. Looking back, this allowed me to develop my imagination and ability to fantasize.
· Robots-Blog: How did you come up with the name "the OffBits"?
Roy: The OFFBITS started out with me playing with random parts I found in my toolbox while working on an exercise before starting my design studies in college. Suddenly I found I had made a little robot! I kept this idea with me during my studies, and for my final project I built a whole city of them made from recycled parts. It was really well received and I knew I had a great idea in my hands. It took a few years of development and testing at Maker Faire events and in boutique design stores, and now I finally feel they are ready for the world.
· Robots-Blog: Do you have a lot to do with robots in your job/everyday life?
Roy: Yes, I love robots and I have collected robots for years!
· Robots-Blog: How did sustainability factor into the creation of the OffBits?
Roy: I see The OFFBITS as a new way of playing with toys, an “open-source” platform with no rules, so each creation can be as unique as the one who built it. I love the way people can take the original design and then make something completely different. Also, the fact it is made of components we all have around us means there are no cost or availability limits to what people can do with them.
Offbits are a sustainable and eco-friendly educational toy that encourages zero-waste principles and practices through a fun and interactive experience. It’s a great way to bring awareness and inspire positive actions in preserving the environment.
We don’t think you need many toys, but what you do choose should be made to last.
The possibilities are endless with The Offbits: you can create anything from a simple robot to a complex model while using sustainable materials and reducing waste.
· Robots-Blog: And which is your favorite robot from the OffBits?
Roy: I love my whole robot family!
· Robots-Blog: Can you tell us about any challenges you faced while designing the OffBits?
Roy: From a design perspective, working with standard hardware components brought many unique challenges. For example, choosing the right mix of bits in the kits (both functionally and aesthetically) required a lot of trial-and-error type work, and then of course there is the issue of connecting parts that weren’t designed to fit together.
· Robots-Blog: How do you envision the OffBits evolving in education fields?
Roy: This is a S.T.E.A.M. toy helping young minds grow in the fields of Science, Technology, Engineering, Art and Math. The OffBits toys encourage the development of fine motor skills, spatial reasoning, problem-solving, and imaginative play.
The hands-on building process allows users to learn about robotics and engineering concepts in a fun and interactive way. The step-by-step instructions for building and programming the Offbits kits make it easy for users of all ages to understand the principles behind the technology.
The OffBits offer educational resources such as workshops, online tutorials, and educational material that could help kids to learn about sustainable practices and technologies.
· Robots-Blog: What can you tell us about your community?
Roy: The Offbits community feature allows users to share their creations and ideas, providing inspiration and a sense of collaboration. Offbits users can take on challenges at every level; when they create, share and complete a challenge, they earn tokens toward future purchases.