The Deutsches Museum Bonn is currently reinventing itself and is rapidly developing into an innovative, lively information forum on the topic of artificial intelligence. The goal of its “Mission KI”: to actively involve visitors in communicating and discussing artificial intelligence. This happens not only in the completely renovated exhibition with its many hands-on stations, but also regularly in the form of lectures and panel discussions with renowned experts.
On 9 November 2023, the Deutsches Museum Bonn once again hosted its “KI-Talk”, this time under the motto “Cybersecurity in the Age of AI”. The panellists, Prof. Ulrich Kelber (Federal Commissioner for Data Protection and Freedom of Information), Thomas Tschersich (Chief Security Officer, Deutsche Telekom AG), journalist and author Eva Wolfangel (“Ein falscher Klick”) and Colonel Guido Schulte of the Bundeswehr’s Cyber and Information Domain Service, discussed common online threats, the protection of our data, the use of artificial intelligence in cybersecurity and much more. The event, organised in cooperation with the Cyber Security Cluster Bonn, was moderated by “Quarks” presenter Florence Randrianarisoa.
The lively and highly engaging panel discussion, held before an interested audience in the packed museum, was streamed live on the YouTube channel of the friends’ association WISSENschaf(f)t SPASS and can still be watched there as a recording.
IDS NXT malibu marks a new class of intelligent industrial cameras that act as edge devices and generate AI overlays in live video streams. For the new camera series, IDS Imaging Development Systems has collaborated with Ambarella, a leading developer of visual AI products, bringing consumer technology to demanding applications at industrial quality. The camera features Ambarella’s CVflow® AI vision system on chip (SoC) and takes full advantage of the SoC’s advanced image processing and on-camera AI capabilities. As a result, image analysis can be performed at high speed (over 25 fps) and displayed as live overlays in compressed video streams that end devices receive via the RTSP protocol.
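To give a sense of what the receiving side looks like, here is a minimal Python sketch that uses OpenCV to display such an RTSP stream on a client PC. The stream URL is a placeholder; the actual address and path depend on how the camera is configured in a given network.

```python
# Minimal sketch: display an RTSP stream (with AI overlays rendered on-camera) on a client PC.
# The URL below is a placeholder; the real address depends on the camera's network setup.
import cv2

RTSP_URL = "rtsp://192.168.0.10:8554/stream"  # hypothetical camera address

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError("Could not open RTSP stream - check URL and network")

while True:
    ok, frame = cap.read()  # frames already contain the overlays burned in by the camera
    if not ok:
        break               # stream ended or connection lost
    cv2.imshow("IDS NXT malibu stream", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```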
Thanks to the SoC’s integrated image signal processor (ISP), the information captured by the light-sensitive onsemi AR0521 image sensor is processed directly on the camera and accelerated by its integrated hardware. The camera also offers helpful automatic features, such as brightness, noise and colour correction, which significantly improve image quality.
“With IDS NXT malibu, we have developed an industrial camera that can analyse images in real time and incorporate results directly into video streams,” explained Kai Hartmann, Product Innovation Manager at IDS. “The combination of on-camera AI with compression and streaming is a novelty in the industrial setting, opening up new application scenarios for intelligent image processing.”
These on-camera capabilities were made possible through close collaboration between IDS and Ambarella, leveraging the companies’ strengths in industrial camera and consumer technology. “We are proud to work with IDS, a leading company in industrial image processing,” said Jerome Gigot, senior director of marketing at Ambarella. “The IDS NXT malibu represents a new class of industrial-grade edge AI cameras, achieving fast inference times and high image quality via our CVflow AI vision SoC.”
IDS NXT malibu has entered series production. The camera is part of the IDS NXT all-in-one AI system. Optimally coordinated components – from the camera to the AI vision studio – accompany the entire workflow. This includes the acquisition of images and their labelling, through to the training of a neural network and its execution on the IDS NXT series of cameras.
I am passionate about technology and robotics. Here in my own blog, I am always taking on new tasks. But I have hardly ever worked with image processing. However, a colleague’s LEGO® MINDSTORMS® robot, which can recognize the rock, paper or scissors gestures of a hand with several different sensors, gave me an idea: “The robot should be able to ‘see’.” Until now, the respective gesture had to be made at a very specific point in front of the robot in order to be reliably recognized. Several sensors were needed for this, which made the system inflexible and dampened the joy of playing. Can image processing solve this task more “elegantly”?
From the idea to implementation
In my search for a suitable camera, I came across IDS NXT – a complete system for intelligent image processing. It met all my requirements and, thanks to artificial intelligence, offered much more than pure gesture recognition. My interest was piqued, especially because the evaluation of the images and the communication of the results take place directly on or through the camera – without an additional PC! In addition, the IDS NXT Experience Kit came with all the components needed to start using the application immediately – without any prior knowledge of AI.
I took the idea further and began to develop a robot that would eventually play “Rock, Paper, Scissors” following the classic rules: the (human) player is asked to perform one of the familiar gestures (rock, paper or scissors) in front of the camera. At this point, the virtual opponent has already chosen its gesture at random. The move is evaluated in real time and the winner is displayed.
The first step: Gesture recognition by means of image processing
But a few intermediate steps were necessary to get there. I began by implementing gesture recognition using image processing – new territory for me as a robotics fan. However, with the help of IDS Lighthouse – a cloud-based AI vision studio – this was easier to realize than expected. Here, ideas evolve into complete applications: neural networks are trained with application images that carry the necessary domain knowledge – in this case the individual gestures from different perspectives – and are then packaged into a suitable application workflow.
The training process was straightforward: after taking several hundred pictures of my hands making rock, scissors or paper gestures from different angles and against different backgrounds, I simply used IDS Lighthouse’s step-by-step wizard. The first trained AI was already able to recognize the gestures reliably. This works for both left- and right-handers with a recognition rate of approx. 95%. Probabilities are returned for the labels “Rock”, “Paper”, “Scissor” or “Nothing”. A satisfying result. But what happens now with the data obtained?
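To illustrate how such a result can be consumed downstream, here is a small Python sketch that turns per-label probabilities into a single gesture decision with a confidence threshold. The dictionary format is an assumption made for illustration, not the camera’s actual output format.

```python
# Sketch: turn per-label probabilities from the trained network into a single gesture decision.
# The dict format and the 0.8 threshold are assumptions made for illustration.
CONFIDENCE_THRESHOLD = 0.8

def decide_gesture(probabilities: dict) -> str:
    """Return the most likely label, or 'Nothing' if confidence is too low."""
    label, confidence = max(probabilities.items(), key=lambda item: item[1])
    if confidence < CONFIDENCE_THRESHOLD:
        return "Nothing"
    return label

# Example result as it might look for a clearly shown fist:
print(decide_gesture({"Rock": 0.93, "Paper": 0.04, "Scissor": 0.02, "Nothing": 0.01}))  # -> "Rock"
```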
Further processing
The recognized gestures are processed further by a purpose-built vision app. For this, the captured image of the respective gesture – after evaluation by the AI – is passed on to the app. The app “knows” the rules of the game and can therefore decide which gesture beats which and determine the winner. In the first stage of development, the app will also simulate the opponent. All of this is currently in the making and will be implemented in the next step on the way to a “Rock, Paper, Scissors”-playing robot.
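Since the vision app essentially has to encode which gesture beats which, its core logic is compact. The following Python sketch shows one possible way to express the rules and simulate the opponent; it illustrates the game logic described above and is not the actual vision app code.

```python
# Sketch of the game rules the vision app has to "know":
# each gesture beats exactly one other gesture.
import random

BEATS = {"Rock": "Scissor", "Scissor": "Paper", "Paper": "Rock"}

def play_round(player_gesture: str) -> str:
    """Simulate the opponent and return the result of one round."""
    opponent = random.choice(list(BEATS))  # opponent is chosen before the player moves
    if player_gesture == "Nothing":
        return f"No gesture recognized (opponent had {opponent})."
    if player_gesture == opponent:
        return f"Draw - both played {player_gesture}."
    if BEATS[player_gesture] == opponent:
        return f"You win: {player_gesture} beats {opponent}."
    return f"You lose: {opponent} beats {player_gesture}."

print(play_round("Rock"))
```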
From play to everyday use
At first, the project is more of a gimmick. But what could come out of it? A gambling machine? Or maybe even an AI-based sign language translator?
After a series of successful Kickstarter campaigns, Geek Club and CircuitMess launch their most ambitious project yet – a NASA-approved, AI-powered scale-model replica of the Perseverance Space Rover.
Zagreb, Croatia – October 31st, 2023 – Today, Geek Club and CircuitMess announced their Kickstarter space exploration campaign designed to teach children aged eleven and up about engineering, AI, and coding by assembling the iconic NASA Perseverance Space Rover, as well as a series of other NASA-inspired space vehicles.
This new space-themed line of DIY educational products was born out of both companies’ shared vision to aim for the stars and to take their fans with them. The Kickstarter campaign starts today, October 31st, and will last for 35 days.
The collaboration was a logical union of the two companies. Both companies create educational STEM DIY kits that are targeted towards kids and adults. Both share the same mission: To make learning STEM skills easy and fun.
“For decades, the team and I have been crafting gadgets for geeks always inspired by space exploration,” says Nicolas Deladerrière, co-founder of Geek Club. “Inspired by Mars exploration, we’ve studied thousands of official documents and blueprints to craft an authentic Mars exploration experience. The product comes alive thanks to microchips, electromotors, and artificial intelligence. Imagine simulating your own Mars mission right from your desk!”
Geek Club is an American company that specializes in designing and producing DIY robotics kits that educate their users on soldering and electronics. They focus primarily on space exploration and robotics, all to make learning engineering skills easy and fun for kids, adults, and everyone in between.
“We have successfully delivered seven Kickstarter campaigns, raised more than 2.5 million dollars, and made hundreds of thousands of geeks all around the world extremely happy,” says Albert Gajšak, CEO of CircuitMess. “In a universe where space and technology are constantly growing, we’re here to ensure you’re never left behind.”
The new product line consists of five unique space-themed products:
1. The Perseverance Space Rover Kit
This kit is designed to be an educational journey into programming, electronics, robotics, and AI. The model comes with four electromotors, six wheels, a control system with a dual-core Espressif ESP32 processor, Wi-Fi, and Bluetooth connectivity, a sample collection arm based on the real thing with two servo motors, a Wi-Fi-connected remote controller, and support for programming in Python or via a Scratch-inspired drag-and-drop visual coding environment.
Alongside the Perseverance Space Rover, you’ll be able to get more iconic space vehicles:
2. The Voyager: A DIY kit made as a tribute to NASA’s longest-lasting mission, which has been beaming back data for an incredible 45 years and counting.
3. Juno: A solar-powered DIY kit celebrating the mission that gave us the most detailed and breathtaking images of Jupiter.
4. Discovery: A DIY kit honoring the legendary space shuttle with 39 successful orbital flights under its belt.
5. The Artemis Watch: A sleek, space-themed wrist gadget inspired by NASA’s upcoming Artemis space suit design. The watch is a programmable device equipped with an LCD display, Bluetooth, and a gyroscope.
The Perseverance Educational Space Rover Kit is available for pre-order now on Kickstarter, starting at $149.
No previous experience or knowledge is needed for assembling your very own space rover. The kit is designed for anyone aged 11+ and comes with detailed video instructions.
Roboverse Reply, the company in the globally operating Reply Group that specializes in integrating robotics solutions, is leading the EU-funded project “Fluently”. The project aims to create a platform that enables genuine social collaboration between humans and robots in industrial environments by leveraging the latest advances in AI-based decision-making.
The goal of this three-year project is to develop a platform as well as a wearable device for industrial workers and robots that allows machines to interpret speech, content and tone of voice more accurately and to automatically translate gestures into robot instructions. Another part of the project is the construction of the “Fluently RoboGym” training center, where factory workers and robots can train to interact smoothly in industrial processes.
Practical use cases for human-robot collaboration relate to value chains that are important to the European economy and that involve heavy physical strain as well as high demands on human experience and expertise. Examples include the dismantling and recycling of lithium-cell batteries, inspection and assembly processes in the aerospace industry, and the refurbishment of complex industrial parts using additive manufacturing.
Twenty-two partners are involved in the project, including the Swiss university SUPSI. Anna Valente, head of SUPSI’s Laboratory for Automation, Robotics and Machines and a member of the Swiss Science Council, adds: “The Fluently project aims to train robots to become team players that support human workers in the best possible way. As scientific and technical coordinators, we wanted Fluently to make an important contribution to advancing human-robot collaboration while at the same time creating a best practice and a proof of concept (PoC) for more inclusive and interactive ecosystems.”
The project has successfully completed its first year of development and reached its first milestones. The team is currently focusing on three main work packages:
Design of the Fluently interface, consisting of the design of the Fluently device, software testing and integration into wearable control and robot systems;
Development of AI models, including architecture design, edge computing, training of RoboGym models and support for human-robot teamwork;
RoboGym design and implementation, i.e. defining the RoboGym specifications and objectives as well as developing and setting up three training areas.
The Fluently system relies on innovative technologies to ensure seamless communication between humans and robots. Natural language processing, hardware for hands-free remote control of robots, monitoring of physiological signals and eye tracking are being researched and integrated as part of this project.
“We are proud to coordinate the innovative Fluently project, which brings together partners from research and industry to develop an empathetic robot platform that can interpret speech content, tone of voice and gestures and makes industrial robots usable for every skill profile,” comments Filippo Rizzante, CTO of Reply. “Robots equipped with Fluently will support people with physical as well as cognitive tasks, learning and gathering experience alongside their human teammates.”
Smart devices have made life easier for adults all over the world. They let us switch off things we left on and switch on things we are too lazy to get up and deal with ourselves. Understandably, most of these devices are aimed at an adult audience. But what about the kids? Over the past two weeks, my kids have had a lot of fun with one of the most versatile smart robots on the market, the Miko. With plenty of great activities and built-in assistants, they now have their own smart device to use. Let’s see whether this kid-friendly robot is worth the money!
The Miko robot combines fun and learning. Powered by artificial intelligence, it was designed for children aged 5 to 10. The company’s technology built into the Miko robot enables it to interact with people, recognize emotions and even adapt its own behavior based on what it has learned. The longer you play with it, the more it learns and the better it adapts to different activities.
The Miko is easy to set up
As with any electronic device, setting up the Miko is simple and straightforward. The device guides you through the various steps and holds your hand along the way. You connect it to your local Wi-Fi network, add users and more. Parents are even prompted to download the free Miko mobile app from the iOS App Store or the Google Play Store.
When you first take your Miko out of the box, be aware that a whole series of updates needs to be installed. Even on a standard Wi-Fi connection, the roughly 40-megabyte download and the subsequent robot update still took about 15 to 20 minutes. All of this had to be done before my kids could use it.
What can the Miko robot companion do?
Miko appears to work like a normal smart assistant, except that it can move around. After Miko had walked us through all the different options, including waking it up by saying “Hello Miko”, my kids had a great time picking different activities. It should be noted that the selection of activities is much larger if you opt for a Miko Max subscription, which gives you access to numerous licensed products from Disney, Paramount and others.
Games
Miko comes with a range of great games for kids. The games are relatively simple, but still very engaging for children up to the age of 10. My kids played tic-tac-toe with Miko every day, trying to beat an AI opponent that often seemed destined never to lose. We found that the games are nothing you couldn’t also find on a standard tablet in the Apple iOS Store or on Google Play for Android.
Physical activity
Our interactions with Miko’s physical activities were somewhat mixed. Throwing a dance party with Miko and playing a dance-and-freeze game worked very well. Miko was able to tell when my kids were dancing and when they were standing still. The stiller they stood when Miko said “Freeze”, the more points they got. That worked well. Playing hide-and-seek with Miko was less impressive than the advertising suggested. Miko simply roamed around our open upper floor, often mistaking chairs or objects on the floor for my kids.
Educational content
Even though this is not the most exciting aspect of Miko, my kids did enjoy the educational content. This is where I found Miko most impressive. Whether it was reading books to them or serving up facts, statistics or quiz questions, Miko impressed. While my kids can use the smart devices around our house to ask assistants questions and get answers, we found that Miko actually held conversations with our kids about a wide variety of topics.
In one exchange, Miko talked to my kids about all kinds of interesting facts and statistics about chameleons. Although it was obvious to me that these interactions were scripted based on the choices my kids made, to them it felt as if Miko was interacting with them directly.
Emotional support
The company behind Miko advertises it as a robot that learns from its users and promises emotional support for children. For example, Miko is supposed to know when to tell them a joke if they are feeling down. While I do believe Miko could learn quite a bit about my kids over time, this was not apparent in the few weeks we used the device. As I mentioned before, my kids always felt connected to Miko and felt it was a good AI friend. If you asked my kids, they would say that Miko is their friend and gets to know them through questions and activities.
Final thoughts
Everything about the Miko was wonderful. The robot is incredibly responsive, with few delays and little waiting when it asks questions or plays games. It is not quite as snappy as other smart devices I have used, but it is still pretty fast. Miko also does a lot more than other devices. Throwing dance parties was my kids’ favorite, something our stationary devices cannot do!
Robots are becoming an increasingly common sight in our modern environment. They are progressing rapidly in terms of both their capabilities and their potential uses; examples include self-driving cars and drones. The VariAnt, a robot created by Variobot, is another impressive example.
VariAnt: At First Glance
VariAnt, a robot ant, moves and acts almost exactly like its biological model. It independently explores its environment, using a sensor system to detect obstacles or markers. The programmable Variobot kit is suitable for researchers who are passionate and young at heart.
Advanced Autonomy
Like most living things, the variAnt adapts to its surroundings by detecting relative brightness. This is made possible by a network of patented sensors. The autonomous robot ant has light sensors attached to its body, legs, antennae, and jaw claws that can be positioned as needed.
A processor housed on an Arduino-compatible nano board serves as the ant robot’s central processing unit (CPU). The small control unit provides connections for two motors, 12 analog sensors, 8 digital I/Os, 2 programmable buttons, 2 freely usable reed switches for step counting, and 15 status LEDs that can be plugged in and switched as needed.
The state of the sensors, motors, and reed switches may all be indicated by the LEDs. Inside the ant’s head is a tiny circuit board that is equipped with plug-in ports, which enables the flexible combination and extension of environmental sensors.
The lithium-ion battery that comes standard with the variAnt has a run time of around 3 hours and can be recharged using the provided USB cord.
The Walking Mechanism
The robotic ant uses its sensors to identify objects, lines, light sources, or shadows in its surroundings and then deliberately follows or avoids them.
The walking mechanism, developed and patented by Variobot, is designed to mimic the natural gait of an ant as closely as possible. This is achieved with just 24 different acrylic components.
VariAnt: Best for
For individuals of all ages, the robot ant is also an engaging and entertaining toy. You can use this kit to build your own robot that behaves, moves, and looks like a real, though much larger, ant. Its distinct motions and behaviors make the robot fascinating to watch, and thanks to its size it can be used in a number of scenarios. The variAnt kit costs around €199.
Conclusion
The VariAnt might revolutionize robotics and our understanding of nature. Since it mimics ants, the VariAnt can perform many tasks that conventional robots cannot. Whether employed for research, environmental monitoring, or as a toy, the VariAnt is a groundbreaking robotics innovation that will captivate people worldwide.
With the development of AI, robots have become a lot smarter. A quick Google or YouTube search will reveal many cases of people using advanced robots, for example, videos of robots packing shelves in factories or, even more impressive, the Ocean One Robot, an advanced humanoid that explores shipwrecks and plane crash sites.
These videos make many wonder how far we are from using such robotics in everyday life. Learn what today’s robots are capable of, what potential challenges need to be solved and whether humanoids are ready for daily life.
HD photo by Possessed Photography (@possessedphotography), https://unsplash.com/
3 Humanoid Robots Helping Humans Today
One reason advanced humanoid robots are in demand is their ability to handle dangerous and repetitive operations. This frees up humans to focus on other essential, safer tasks. Current AI robots such as humanoids and cobots are already assisting humans by completing various tasks — bomb disposal, surgery, packing items in grocery stores, self-driving vehicles and much more.
One industry that frequently utilizes AI robots is the manufacturing sector. They mostly complete repetitive assignments such as packing items, material handling, assembly and welding. This speeds up production time and allows humans to tackle more complex or demanding tasks. Here are three different humanoid robots helping people.
Digit
Agility Robotics has developed a humanoid robot well-suited for many tedious operations. The humanoid is called Digit and has fully functional limbs, making it excellent at unloading packages from trailers and delivering them. Digit is equipped with sensors in its torso to help it easily navigate complex environments.
Nadine
Nadine is a realistic-looking social humanoid robot with a range of facial expressions and movements. She was developed in Singapore by researchers from Nanyang Technological University. Nadine can recognize gestures, faces and objects, and is able to perform various social tasks associated with customer service.
Promobot
Promobot is a humanoid that is suitable for many different service-oriented roles. In hotels, Promobot can recognize guests, print receipts, issue keycards and check guests in. This humanoid is customizable and can even work as a medical assistant, measuring blood oxygen and blood sugar levels.
Are Humanoids Ready for Daily Life?
Today’s humanoids are undoubtedly impressive, but AI robots have yet to reach the level of generative artificial intelligence — an advanced form of AI capable of holding detailed conversations when prompted. Many companies aim to combine generative AI with advanced robotics to make it more applicable for a wider variety of use cases.
Since most AI machines are developed for a single task, they tend to struggle when taking on multiple operations simultaneously. In other words, they aren’t very good at multitasking. This would need to be addressed for AI robots to become a reality in daily life. The most advanced AI robots available today are self-driving cars, which still have a long way to go before they are truly self-driving.
It is the same with humanoid robots. Although many of the AI robots available are amazing, it is clear there are still advancements needed, especially in the case of processing abilities. AI robots will need to understand a wide variety of interactions no matter how they are carried out — voice, keyboard commands, hand gestures and sometimes even facial expressions.
For AI humanoids to be applicable in daily life, humans need a deeper understanding of how they operate — training might be required.
Potential Challenges to Overcome With Future Humanoids
One of the biggest problems with AI humanoids today is their battery life. They can usually only work for an hour or two and then require charging. While the goal would be to use them for multiple hours on end, another approach might be to increase the battery life by a few hours and add fast charging.
In terms of complex and challenging tasks, many humanoids and cobots are quite advanced and can solve them with relative ease. However, this usually means they lack in other areas, such as movement. In most cases, a humanoid has either advanced movement or impressive processing abilities, but not both.
In addition, the technology today’s humanoids use will also need further improvements. Better sensing capabilities are necessary, including depth cameras and voice and visual sensors, to make them more applicable in modern life. For humanoids to become more widely used, their movement and processing abilities require further refinement.
Humanoids also need to operate safely and effectively while working with multiple humans at the same time. The robot will need to comprehend numerous interactions with different people simultaneously to react appropriately. The current training methods used with humanoids today are slow and would need further refinements to make them available for daily life.
Humanoid Robots Still Have a Long Way to Go
The advance of technology and AI is astounding, especially when combined to create robots that assist humans with numerous tasks. However, there are still a few areas where humanoids need refinement to become suitable for everyday use. Undoubtedly, humans will benefit significantly from utilizing advanced AI robotics in their daily life, but for this to become a reality, humanoids still have a long way to go.
Guest article by Ellie Gabel. Ellie is a writer living in Raleigh, NC. She's passionate about keeping up with the latest innovations in tech and science. She also works as an associate editor for Revolutionized.
Voice assistants have become a crucial component of our everyday lives in today’s technologically sophisticated society. They assist us with work, respond to our inquiries, and even provide entertainment. Have you ever wondered how voice assistants operate or how to build your own? Spencer is here to satisfy your curiosity with a fun DIY activity, so look no further. This blog post will introduce you to Spencer, a voice assistant that will brighten your day with jokes and provide you with all the information you need.
Meet Spencer
Spencer is more than simply a voice assistant; it is a buddy that converses with you. It can hear you well enough to understand everything you say, and it uses its large red button as a trigger to search the internet and give you straightforward answers. Spencer’s endearing nature and its knack for making you grin make it a wonderful addition to your everyday routine.
Spencer’s Features: Your Interactive Voice Assistant Companion
1. Voice Interaction
High-quality audio communication is possible because of Spencer’s microphone. It comprehends your instructions, inquiries, and chats and offers a simple and straightforward approach for you to communicate with your voice assistant. Simply talk to Spencer, and it will answer as you would expect, giving the impression that you are conversing with a genuine friend.
2. Internet Connectivity and Information Retrieval
Spencer has internet access, giving you access to a huge base of information. You can have Spencer do a real-time internet search by pushing the huge red button on its chest. Spencer can search the web and provide you with clear, succinct answers, whether you need the solution to a trivia question, the most recent news headlines, or information on a certain topic.
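To illustrate the general flow described here (button press, speech capture, web lookup, spoken answer), the following Python sketch mocks up such a loop. All helper functions are hypothetical placeholders and do not correspond to Spencer’s actual firmware, which runs on a microcontroller.

```python
# Rough sketch of a button-triggered ask-the-internet loop, as described above.
# All helpers are hypothetical placeholders, not Spencer's real firmware API.

def wait_for_button_press() -> None:
    """Block until the big red button is pressed (simulated here via the keyboard)."""
    input("Press Enter to simulate the red button... ")

def record_and_transcribe() -> str:
    """Capture audio from the microphone and return the recognized text (simulated)."""
    return input("You say: ")

def search_web(query: str) -> str:
    """Look up a short answer online (placeholder for a real search/answer service)."""
    return f"Here is what I found about '{query}'."

def speak(text: str) -> None:
    """Send text to the text-to-speech output (simulated with print)."""
    print(f"Spencer says: {text}")

if __name__ == "__main__":
    while True:
        wait_for_button_press()
        question = record_and_transcribe()
        if not question:
            speak("I didn't catch that.")
            continue
        speak(search_web(question))
```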
3. Personalization and Customization
Spencer is all about being wholly original. You can alter its features and reactions to fit your tastes. Make Spencer reflect your style and personality by altering its external elements, such as colors and decals, or even by adding accessories. To create a genuinely customized experience, you can also tailor its reactions, jokes, and interactions to suit your sense of humor and personal tastes.
4. Entertainment and Engagement
Spencer knows how important laughter is in life. It has built-in jokes and amusing replies, so talking to your voice assistant is not only informative but also engaging and fun. Spencer’s amusing features will keep you entertained whether you need a quick pick-me-up or want to have a good time with friends and family.
5. Learning and Educational STEM Experience
In particular, STEM (science, technology, engineering, and mathematics) subjects are the focus of Spencer’s educational mission. You will learn useful skills in electronics, soldering, component assembly, and circuits by making Spencer. To further develop Spencer’s talents, you may go into programming, gaining practical experience with coding and computational thinking.
6. Inspiration and Creativity
Spencer acts as a springboard to spark your imagination and motivate further investigation. You may let your creativity run wild as you put together and customize your voice assistant. This do-it-yourself project promotes critical thinking, problem-solving, and invention, developing a creative and innovative mentality that may go beyond the context of making Spencer.
Recommended Age Group
Spencer is intended for those who are at least 11 years old. While most of the assembly steps are simple, some, like soldering and tightening fasteners, call for caution. Don’t be afraid to ask an adult for help if you need it; it is usually better to have guidance when using certain tools and techniques.
Assembly Time Required
The construction of Spencer should take, on average, 4 hours to finish. However, keep in mind that the timeframe may vary based on your prior knowledge and expertise. Don’t worry if you’re unfamiliar with electronics! Enjoy the process, take your time, and don’t let any early difficulties get you down. You’ll become more comfortable with the procedures as you go along.
Skills Required
No special skills are needed to start this DIY project; having fun and learning something new are the key goals. Building Spencer will introduce you to the field of electronics, pique your interest in STEM fields and give you the chance to gain hands-on experience. Consider this project the first step towards a lucrative engineering career.
Pros and Cons of Spencer
Pros of Spencer
Spencer provides an engaging and interactive experience, responding to voice commands and engaging in conversations to make you feel like you have a real companion.
With internet connectivity, Spencer can retrieve information in real-time, giving you quick answers to your questions and saving you time.
Spencer can be customized to reflect your style and preferences, allowing you to personalize its appearance, responses, and interactions.
Spencer comes with built-in jokes and entertaining responses, adding fun and amusement to your interactions with the voice assistant.
Building Spencer provides hands-on learning in electronics, soldering, circuitry, and programming, offering a valuable educational experience in STEM disciplines.
Cons of Spencer
The assembly process of Spencer may involve technical aspects such as soldering and component assembly, which can be challenging for beginners or individuals with limited experience.
Spencer heavily relies on internet connectivity to provide real-time answers and retrieve information, which means it may have limited functionality in areas with poor or no internet connection.
While Spencer offers basic voice assistant features, its capabilities may be more limited compared to advanced commercially available voice assistant devices.
Conclusion
Creating your own voice assistant like Spencer is a fascinating and worthwhile endeavor. As you go along with the assembly process, you’ll learn useful skills, expand your understanding of electronics, and enjoy the thrill of putting a complicated gadget together. Remember that the purpose of this project is not only to produce a final product, but also to experience the thrill of learning, solving problems, and letting your imagination run free. So get ready to join Spencer on this journey and discover a world of opportunities in the exciting world of voice assistants.