Robotistan, a pioneer in innovative educational solutions, has introduced a new STEM robot called BerryBot. Designed to help children develop coding and robotics skills, this creative and educational robot offers practical experiences in science, technology, engineering, and mathematics (STEM).
A New Era in Robotics Education
BerryBot is more than just a robot; it’s a learning platform that nurtures creativity and problem-solving skills. With its compact and customizable design, BerryBot allows users to paint and personalize their robot, thanks to its wooden body.
The wooden structure not only reflects an eco-friendly design approach but also allows children to shape their robots according to their own style. BerryBot is designed to be flexible enough for both classroom and home use.
Key Features
BerryBot’s innovative features support the development of both technical and creative skills:
Customizable Wooden Body:
Kids can paint their robots to reflect their unique style.
Durable and sustainable construction.
Versatile Coding Options:
Block-based programming for beginners.
Advanced programming with Python and Arduino IDE.
User-friendly interface for an easy learning process.
Multi-Mode Movement:
Line-following, sumo, and free movement modes (see the line-following sketch after this feature list).
Real-time visual feedback via the LED matrix display.
Wireless control through Bluetooth connectivity.
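To give a sense of what the line-following mode involves, here is a minimal sketch of the standard two-sensor line-following loop. Robotistan has not published BerryBot's programming interface in this announcement, so the sensor and motor functions below are hypothetical placeholders; only the control pattern itself is the common textbook approach.

```python
# Minimal two-sensor line-following loop (illustrative sketch only).
# read_left_ir(), read_right_ir() and set_motors() are hypothetical
# placeholders; BerryBot's real API is not described in this announcement.
import time

BASE_SPEED = 60   # straight-line motor power, percent
TURN_SPEED = 25   # inner-wheel power while correcting course

def read_left_ir() -> bool:
    return False  # placeholder: True when the left sensor sees the line

def read_right_ir() -> bool:
    return False  # placeholder: True when the right sensor sees the line

def set_motors(left: int, right: int) -> None:
    pass          # placeholder: set wheel power in percent

while True:
    on_left, on_right = read_left_ir(), read_right_ir()
    if on_left and on_right:       # centred on the line: go straight
        set_motors(BASE_SPEED, BASE_SPEED)
    elif on_left:                  # drifting right: slow the left wheel
        set_motors(TURN_SPEED, BASE_SPEED)
    elif on_right:                 # drifting left: slow the right wheel
        set_motors(BASE_SPEED, TURN_SPEED)
    else:                          # line lost: stop
        set_motors(0, 0)
    time.sleep(0.01)               # ~100 Hz control loop
```

In block-based form, the same logic maps onto an if/else-if block inside a forever loop, which is what makes line following a popular first project.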
An Educational Tool Combining Coding Skills with Creativity
BerryBot offers young aspiring engineers a fun and educational experience. While creating different movement scenarios, children not only enhance their problem-solving skills but also develop creative projects.
Thanks to BerryBot’s multi-mode structure, users can program the robot’s movement capabilities in various ways. This versatility elevates BerryBot from a simple toy to a valuable educational tool that encourages creative thinking.
Mehmet Akçalı, Marketing and Product Director at Robotistan, highlighted the educational features of BerryBot:
“BerryBot offers a practical and fun way for children to step into the world of STEM. Combining coding skills with creative projects, this robot will become an indispensable educational tool for both teachers and parents.”
He also mentioned a special opportunity for early supporters:
“We’re thrilled to announce our Kickstarter campaign, where backers can take advantage of the Early Bird offer. Additionally, as part of our commitment to supporting STEM education, for every 10 pledges, we will gift one BerryBot to a school or an educational organization in need. This way, our supporters not only get an amazing educational robot but also help inspire the next generation of young innovators.”
Shaping the Engineers of the Future
As the importance of STEM education continues to grow, innovative tools like BerryBot are preparing young minds for the future. With its fun yet educational nature, BerryBot strengthens children’s coding, algorithmic thinking, and problem-solving skills while also inspiring creative projects.
Robotistan emphasizes that BerryBot is not just an educational robot but also a platform where children can express themselves. Produced with this vision in mind, BerryBot is ready to make a difference both at home and in educational environments.
To learn more about BerryBot and to be among the first to own it and take advantage of the Early Bird offer, visit the Kickstarter page!
AI-driven drone from University of Klagenfurt uses IDS uEye camera for real-time, object-relative navigation—enabling safer, more efficient, and precise inspections.
Photo: high-voltage transmission lines and a distribution substation.
The inspection of critical infrastructure such as energy plants, bridges or industrial complexes is essential to ensure its safety, reliability and long-term functionality. Traditional inspection methods often require sending people into areas that are difficult to access or hazardous. Autonomous mobile robots offer great potential for making inspections more efficient, safer and more accurate. Uncrewed aerial vehicles (UAVs) such as drones in particular have become established as promising platforms, as they can be deployed flexibly and can reach hard-to-access areas from the air. One of the biggest challenges is navigating the drone precisely relative to the objects under inspection in order to reliably capture high-resolution images or other sensor data.
A research group at the University of Klagenfurt has designed a real-time capable drone based on object-relative navigation using artificial intelligence. Also on board: a USB3 Vision industrial camera from the uEye LE family from IDS Imaging Development Systems GmbH.
As part of the research project, which was funded by the Austrian Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology (BMK), the drone must autonomously recognise what is a power pole and what is an insulator on the power pole. It will fly around the insulator at a distance of three meters and take pictures. “Precise localisation is important such that the camera recordings can also be compared across multiple inspection flights,” explains Thomas Georg Jantos, PhD student and member of the Control of Networked Systems research group at the University of Klagenfurt. The prerequisite for this is that object-relative navigation must be able to extract so-called semantic information about the objects in question from the raw sensory data captured by the camera. Semantic information makes raw data, in this case the camera images, “understandable” and makes it possible not only to capture the environment, but also to correctly identify and localise relevant objects.
In this case, this means that an image pixel is not only understood as an independent colour value (e.g. RGB value), but as part of an object, e.g. an insulator. In contrast to classic GNSS (Global Navigation Satellite System), this approach provides not only a position in space, but also a precise relative position and orientation with respect to the object to be inspected (e.g. “the drone is located 1.5 m to the left of the upper insulator”).
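To make the idea concrete, here is a minimal sketch of how such an object-relative estimate could be represented in code. The class and field names are illustrative inventions, not taken from the Klagenfurt group's software.

```python
# Illustrative data structure for an object-relative pose estimate;
# the names are hypothetical, not the CNS group's actual types.
from dataclasses import dataclass

@dataclass
class ObjectRelativePose:
    object_class: str        # semantic label, e.g. "insulator"
    object_id: int           # which insulator on the pole
    position_m: tuple        # (x, y, z) offset from the object, in metres
    orientation_wxyz: tuple  # orientation quaternion relative to the object
    confidence: float        # detector confidence in [0, 1]

# Encodes "the drone is located 1.5 m to the left of the upper insulator":
estimate = ObjectRelativePose(
    object_class="insulator",
    object_id=0,
    position_m=(-1.5, 0.0, 0.0),
    orientation_wxyz=(1.0, 0.0, 0.0, 0.0),
    confidence=0.93,
)
```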
The key requirement is that image processing and data interpretation run with minimal latency, so that the drone can adapt its navigation and interaction to the specific conditions and requirements of the inspection task in real time.
Thomas Jantos with the inspection drone – Photo: aau/Müller
Semantic information through intelligent image processing
Object recognition, object classification and object pose estimation are performed using artificial intelligence in image processing. “In contrast to GNSS-based inspection approaches using drones, our AI with its semantic information enables the inspection of the infrastructure from specific, reproducible viewpoints,” explains Thomas Jantos. “In addition, the chosen approach does not suffer from the usual GNSS problems such as multi-pathing and shadowing caused by large infrastructures or valleys, which can lead to signal degradation and thus to safety risks.”
A USB3 uEye LE serves as the quadcopter’s navigation camera
How much AI fits into a small quadcopter? The hardware setup consists of a TWINs Science Copter platform equipped with a Pixhawk PX4 autopilot, an NVIDIA Jetson Orin AGX 64GB DevKit as the on-board computer, and a USB3 Vision industrial camera from IDS. “The challenge is to get the artificial intelligence onto the small helicopters. The computers on the drone are still too slow compared to the computers used to train the AI. Despite the first successful tests, this remains a subject of ongoing research,” says Thomas Jantos, describing the problem of further optimising the high-performance AI model for use on the on-board computer.
The camera, on the other hand, delivers reliable image data straight away, as tests in the university’s own drone hall show. When selecting a suitable camera model, it was not just a question of meeting the requirements in terms of speed, size, protection class and, last but not least, price. “The camera’s capabilities are essential for the inspection system’s innovative AI-based navigation algorithm,” says Thomas Jantos. He opted for the U3-3276LE C-HQ model, a space-saving and cost-effective project camera from the uEye LE family. The integrated Sony Pregius IMX265 sensor, regarded as one of the best CMOS image sensors in the 3 MP class, delivers a resolution of 3.19 megapixels (2064 x 1544 px) at frame rates of up to 58.0 fps. Decisive for the sensor’s performance is its integrated 1/1.8″ global shutter, which, unlike a rolling shutter, produces no ‘distorted’ images at short exposure times. “To ensure a safe and robust inspection flight, high image quality and frame rates are essential,” Thomas Jantos emphasises. As a navigation camera, the uEye LE provides the embedded AI with the comprehensive image data that the on-board computer needs to calculate the relative position and orientation with respect to the object to be inspected. Based on this information, the drone is able to correct its pose in real time.
The IDS camera is connected to the on-board computer via a USB3 interface. “With the help of the IDS peak SDK, we can integrate the camera and its functionalities very easily into the ROS (Robot Operating System) and thus into our drone,” explains Thomas Jantos. IDS peak also enables efficient raw image processing and simple adjustment of recording parameters such as auto exposure, auto white balance, auto gain and image downsampling.
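The article does not show the group's integration code, but a minimal ROS 2 sketch of the publishing side might look as follows. The frame acquisition is stubbed with OpenCV's generic VideoCapture so the snippet stays self-contained; in the real system the frames would come from the uEye LE through the IDS peak SDK.

```python
# Minimal ROS 2 node publishing camera frames for downstream AI nodes.
# Illustrative sketch only: the capture source is a generic OpenCV
# device, standing in for the IDS peak acquisition pipeline.
import cv2
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class CameraPublisher(Node):
    def __init__(self):
        super().__init__("camera_publisher")
        self.pub = self.create_publisher(Image, "camera/image_raw", 10)
        self.bridge = CvBridge()
        self.cap = cv2.VideoCapture(0)   # stand-in for the IDS camera
        self.timer = self.create_timer(1.0 / 50.0, self.tick)  # 50 fps

    def tick(self):
        ok, frame = self.cap.read()
        if not ok:
            return  # no frame available this cycle
        msg = self.bridge.cv2_to_imgmsg(frame, encoding="bgr8")
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "camera"
        self.pub.publish(msg)

def main():
    rclpy.init()
    rclpy.spin(CameraPublisher())

if __name__ == "__main__":
    main()
```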
To ensure a high level of autonomy, control, mission management, safety monitoring and data recording, the researchers use the source-available CNS Flight Stack on the on-board computer. The CNS Flight Stack includes software modules for navigation, sensor fusion and control algorithms and enables the autonomous execution of reproducible and customisable missions. “The modularity of the CNS Flight Stack and the ROS interfaces enable us to seamlessly integrate our sensors and the AI-based ‘state estimator’ for position detection into the entire stack and thus realise autonomous UAV flights. The functionality of our approach is being analysed and developed using the example of an inspection flight around a power pole in the drone hall at the University of Klagenfurt,” explains Thomas Jantos.
Visualisation of the flight path of an inspection flight around an electricity pole model with three insulators in the research laboratory at the University of Klagenfurt
Precise, autonomous alignment through sensor fusion
The high-frequency control signals for the drone are generated by the IMU (Inertial Measurement Unit). Sensor fusion with camera data, LIDAR or GNSS (Global Navigation Satellite System) enables real-time navigation and stabilisation of the drone – for example for position corrections or precise alignment with inspection objects. For the Klagenfurt drone, the IMU of the PX4 is used as a dynamic model in an EKF (Extended Kalman Filter). The EKF estimates where the drone should be now based on the last known position, speed and attitude. New data (e.g. from IMU, GNSS or camera) is then recorded at up to 200 Hz and incorporated into the state estimation process.
The camera captures raw images at 50 fps and an image size of 1280 x 960 px. “This is the maximum frame rate that we can achieve with our AI model on the drone’s onboard computer,” explains Thomas Jantos. When the camera is started, an automatic white balance and gain adjustment are carried out once, while the automatic exposure control remains switched off. The EKF compares the prediction and measurement and corrects the estimate accordingly. This ensures that the drone remains stable and can maintain its position autonomously with high precision.
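As a rough illustration of the predict/correct cycle described above, the following is a generic extended Kalman filter step in Python with NumPy. The state layout and noise values are invented for the example and are not taken from the CNS Flight Stack.

```python
# Generic EKF predict/correct step (illustrative; numbers invented,
# not the CNS Flight Stack).
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through motion model f (Jacobian F)."""
    return f(x), F @ P @ F.T + Q

def ekf_correct(x_pred, P_pred, z, h, H, R):
    """Fuse a measurement z (e.g. a camera-based pose) into the prediction."""
    y = z - h(x_pred)                     # innovation
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x_pred + K @ y
    P = (np.eye(len(x)) - K @ H) @ P_pred
    return x, P

# Example: 1-D constant-velocity model at 200 Hz; the "camera" measures position.
dt = 1.0 / 200.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [position, velocity]
x, P = np.array([0.0, 1.0]), np.eye(2)
x, P = ekf_predict(x, P, lambda s: F @ s, F, Q=1e-4 * np.eye(2))
x, P = ekf_correct(x, P, z=np.array([0.006]),
                   h=lambda s: s[:1], H=np.array([[1.0, 0.0]]),
                   R=np.array([[1e-3]]))
print(x)  # state nudged toward the measured position
```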
An electricity pole with insulators in the drone hall at the University of Klagenfurt is used for test flights
Outlook
“With regard to research in the field of mobile robots, industrial cameras are necessary for a variety of applications and algorithms. It is important that these cameras are robust, compact, lightweight, fast and have a high resolution. On-device pre-processing (e.g. binning) is also very important, as it saves valuable computing time and resources on the mobile robot,” emphasises Thomas Jantos.
With corresponding features, IDS cameras are helping to set a new standard in the autonomous inspection of critical infrastructures in this promising research approach, which significantly increases safety, efficiency and data quality.
The Control of Networked Systems (CNS) research group is part of the Institute for Intelligent System Technologies. It is involved in teaching in the English-language Bachelor’s and Master’s programs “Robotics and AI” and “Information and Communications Engineering (ICE)” at the University of Klagenfurt. The group’s research focuses on control engineering, state estimation, path and motion planning, modeling of dynamic systems, numerical simulations and the automation of mobile robots in a swarm.
uEye LE – the cost-effective, space-saving project camera
Model used: USB3 Vision industrial camera U3-3276LE Rev. 1.2
Camera family: uEye LE
The HP Robots Otto is a versatile, modular robot designed specifically for educational purposes. It offers students and teachers an exciting opportunity to immerse themselves in the world of robotics, 3D printing, electronics and programming. The robot was developed by HP as part of their robotics initiative and is particularly suitable for use in science, technology, engineering and mathematics (STEM) classes.
Key features of Otto:
Modular design: Otto is a modular robot that allows students to build, program and customize it through extensions, promoting both technical understanding and creativity. The modular structure allows various components such as motors, sensors and LEDs to be added or replaced, which deepens the learning experience.
Programmability: The robot can be programmed with various programming languages, including block-based programming for beginners and Python and C++ for advanced programmers. This diversity allows students to continuously improve their coding skills and adapt to the complexity of the tasks.
Sensors and functions: Equipped with ultrasonic sensors for obstacle detection, line tracking sensors and RGB LEDs, Otto offers numerous interactive possibilities. These features allow students to program complex tasks such as navigating courses or tracing lines. The sensors help to detect the environment and react accordingly.
3D printing and customizability: Students can design Otto’s outer parts themselves and produce them with a 3D printer, allowing further personalization and customization of the robot. This creative freedom fosters not only technical understanding but also artistic skills: students can design their own parts and attach sensors wherever they like.
Educational approach:
Otto is ideal for use in schools and is aimed at students from the age of 8. Younger students can work under supervision, while older students from the age of 14 can also use and expand the robot independently. The kit contains all the necessary components to build a functioning robot, including motors, sensors, and a rechargeable battery.
Programming environments:
Otto is programmed via a web-based platform that runs on all operating systems. This platform offers different modes:
Block-based programming: Similar to Scratch Jr., ideal for beginners. This visual programming makes it easier to get started in the world of programming and helps students understand basic concepts such as loops and conditions.
Python: A Python editor is available for advanced users. Python is a popular language that works well for teaching because it is easy to read and write. Students can use Python to develop more complex algorithms and expand their programming skills.
C++: Compatible with the Arduino IDE for users who have deeper programming knowledge. C++ offers a high degree of flexibility and direct hardware access, enabling advanced projects of their own (a short illustrative sketch follows this list).
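To give a feel for what a first text-based Otto program might look like, here is a small sketch combining the ultrasonic sensor with a simple obstacle-avoidance loop. HP's actual Python bindings are not documented in this article, so the sensor and drive functions are hypothetical placeholders.

```python
# Illustrative obstacle-avoidance loop for an Otto-style robot.
# distance_cm() and drive() are hypothetical placeholders; consult
# HP's web-based platform for the real API.
import time

def distance_cm() -> float:
    return 100.0   # placeholder: read the ultrasonic sensor (cm)

def drive(left: int, right: int) -> None:
    pass           # placeholder: set wheel speeds in percent (-100..100)

while True:
    if distance_cm() < 15.0:   # obstacle closer than 15 cm ahead
        drive(-50, 50)         # spin in place to search for a free path
    else:
        drive(60, 60)          # path is clear: drive straight
    time.sleep(0.05)           # 20 Hz sense-and-act loop
```

The same program can be built from blocks first and then re-implemented in Python or C++, which is exactly the progression the platform is designed around.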
Expansion Kits:
In addition to the Starter Kit, there are several expansion kits. All expansion kits require the starter kit, as they are built on top of it.
Emote Expansion Kit:
It includes components such as an LED matrix display, OLED display, and an MP3 player that allow the robot to display visual and acoustic responses.
This kit is particularly suitable for creative projects where Otto should act as an interactive companion.
The emote kit allows Otto to show emotions, mirror human interactions, and develop different personalities.
Sense Expansion Kit:
With the Sense Kit, Otto can perceive its surroundings through various sensors.
Included are sensors for temperature, humidity, light and noise as well as an inclination sensor. These enable a wide range of interactions with the environment.
The kit is ideal for projects that focus on environmental detection and data analysis.
Interact Expansion Kit:
The Interact kit expands Otto’s tactile interaction capability through modules such as push buttons, rotary knobs and accelerometers.
It enables precise inputs and reactions, as well as measurement of acceleration.
This kit is great for playful activities and interactive games.
Invent Expansion Kit:
The Invent kit is specifically designed to encourage users’ creativity. It allows individual adaptation of Otto’s functionality and design through 3D printing, additional modules and compatible interlocking building blocks.
Users can design and print new accessories to make the robot unique.
Equip Otto with legs and teach it to walk, or fit it with tracks to make it ready for off-road use outdoors.
Use in the classroom:
Otto comes with extensive resources developed by teachers. These materials help teachers design effective STEM lessons without the need for prior knowledge. The robot can be used both in the classroom and at home. The didactic materials include:
Curricula: Structured lesson plans that help teachers plan and execute lessons.
Project ideas and worksheets: A variety of projects that encourage students to think creatively and expand their skills.
Tutorials and videos: Additional learning materials to help students better understand complex concepts.
Conclusion:
The HP Robots Otto is an excellent tool for fostering technical understanding and creativity in students. Thanks to its modular design and diverse programming options, it offers a hands-on learning experience in robotics and electronics. Ideal for use in schools, Otto provides teachers with a comprehensive platform to accompany students on an exciting journey into the world of technology. In particular, Otto’s versatility through 3D-printed parts and expansion packs makes it possible to build a truly personal learning robot.
The world of robotics is evolving – and right in the middle of it: pib. This humanoid robot, entirely 3D-printable, has received a prestigious award. The German Design Award 2025 has been granted to the printable intelligent bot, recognizing not only its technological sophistication but also its innovative design. But what makes pib so special?
A Robot for Everyone – and by Everyone
Imagine a robot that anyone can build and program themselves. A robot that isn’t just a technical gadget but an inspiration to create, research, and explore new paths in robotics. That’s exactly what pib is. Its open-source approach has a central goal: to make robotics and AI more accessible while breaking down technological barriers. Whether you’re a tinkerer, a student, or simply a technology enthusiast, pib invites everyone to be part of its ever-growing community.
German Design Award 2025: Recognition for Visionary Product Design
The German Design Award is one of the most prestigious awards for outstanding design. Every year, an international panel of experts honors innovative concepts in product design, communication, and architecture. This year, pib impressed the jury with its “Excellent Product Design” in the category “AI in Product Design Processes” – a testament to how technology and aesthetics can go hand in hand.
Technology Meets Creativity
pib is more than just a robot – it is a platform for innovative technologies. The project enables curious minds to experiment with 3D printing, robotics, and artificial intelligence in a playful and hands-on way. No prior knowledge is required; anyone can contribute, co-create, and learn. The community plays a crucial role: the newly designed, human-like body that won the German Design Award was developed by a community member using CAD software.
Jürgen Baier, founder of pib, is thrilled about the recognition: “We are proud that pib has won the German Design Award! For us, this confirms that we are on the right path to making robotics and AI more accessible and tangible for everyone. It’s great to see that our vision of inspiring people to create and explore resonates so well.”
Learning with pib: Schools and Media Centers Adopt the Humanoid Robot
But it’s not just the maker community that’s excited about pib. More than 35 schools and media centers are already using it as an innovative learning platform. Students and teachers alike are leveraging this humanoid robot to explore future technologies in an interactive, hands-on manner. The focus goes beyond technical skills to include creative problem-solving and teamwork. By bringing knowledge to life, pib makes robotics and AI tangible – opening doors to the careers of tomorrow.
Open Source and Limitless Possibilities
Behind pib stands isento GmbH, a Nuremberg-based company specializing in software development and AI solutions. However, the robot thrives not only due to the work of isento employees but also through the contributions of its community. 3D printing files, detailed assembly instructions, programming code, AI skills, and a knowledge database are all freely available online – an open invitation to help shape the future of robotics.
Winning the German Design Award is a well-deserved honor for pib. But for this project, the award means much more: it is motivation to push the boundaries of what’s possible with open-source robotics even further. So if you’ve ever wondered how to create your own humanoid robot – pib has the answer.
Embodied, the creators of Moxie, have announced significant updates for the Moxie community as they transition away from cloud-based services. Below is a summary of the key points:
1. OpenMoxie Release
Embodied has launched OpenMoxie, a locally hosted solution that allows users to operate Moxie independently of Embodied’s cloud servers. This ensures that Moxie can continue functioning in home environments even after Embodied’s cloud operations cease by January 30, 2025.
Action Required: To use OpenMoxie, users must update their Moxie with the latest Over-The-Air (OTA) update and follow the instructions in the OpenMoxie Setup Guide.
2. Critical OTA Update (v.24.10.803)
The latest OTA update, version 24.10.803, is essential to ensure compatibility with OpenMoxie. Users who haven’t already updated their Moxie should do so immediately.
Steps to update:
1. Power on Moxie and connect it to local Wi-Fi.
2. Allow up to one hour for the download and installation process.
3. Reboot Moxie after installation is complete.
3. Embodied’s Ongoing Mission
While Embodied continues to search for a long-term solution for Moxie, they have no new updates regarding its future. However, OpenMoxie provides a way for users to maintain their connection with Moxie in a cloud-free environment.
The company expressed gratitude to its community for their unwavering support and emphasized pride in creating a path forward for Moxie’s continued use.
Legal and Regulatory Disclaimer
Embodied has outlined several disclaimers regarding the use of OpenMoxie:
– No Guarantees or Warranties: The functionality, security, or reliability of OpenMoxie is not guaranteed. Users assume full responsibility for its implementation and operation.
– Community-Driven and Unsupported: OpenMoxie is provided “as is,” with no technical support, updates, or maintenance from Embodied.
– Regulatory Compliance: Users are responsible for ensuring compliance with applicable laws and regulations when using or modifying OpenMoxie.
– No Ongoing Obligations: Embodied has no obligation to continue supporting or maintaining Moxie or its services; all future efforts will be community-led.
Conclusion
The release of OpenMoxie marks an important step in preserving the functionality of Moxie as Embodied winds down its cloud operations. While this transition places responsibility on users and the community, it offers a way to keep Moxie’s unique companionship alive in a self-hosted environment.
UPDATE YOUR MOXIE NOW (before the update servers are shut down on January 30, 2025)
Ho-ho-ho! 🎅 The holiday season is here, and even the robots are taking a break from their 24/7 work shifts! 🤖💼 (Well, at least the ones who don’t live in the cloud.) May your Christmas be as glitch-free as a perfectly coded algorithm and your New Year as smooth as a freshly updated firmware! Please keep following Robots-Blog in 2025!
Helping Students Learn Python within a Familiar Coding Environment and at Their Own Pace
GREENVILLE, Texas, Dec. 9, 2024 /PRNewswire-PRWeb/ — VEX Robotics, a leader in K-12 STEM education, announces the launch of “Switch,” a revolutionary method for learning Computer Science. Switch is a research-based, patented feature within VEXcode, VEX Robotics’ coding platform for all its products. To date, VEXcode has offered students both block-based and Python coding languages. With the introduction of Switch inside VEXcode, students can simplify their transition between these two languages by integrating Python commands directly within their block-based code.
Research has consistently shown that block-based coding is best for novice learners to begin programming. However, as students progress, they are motivated by the authenticity and power of text-based coding. Research also shows that this transition, from block-based to text-based coding, is not trivial, and is often the reason students do not continue to study Computer Science. Switch provides educators with a new tool that fosters a deeper understanding of programming concepts.
Students can now learn Python syntax, editing, and writing at their own pace—all within the familiar block-based environment. Switch offers several key features to facilitate this learning process (a short example of the resulting Python code follows the list):
Convert: Instantly convert one or more normal blocks into a Switch block with a single click, allowing you to see the underlying Python code.
Edit: Within a Switch block, you can edit the Python code directly, just as you would with regular text editing.
Write: Add new blank Switch blocks to write Python code from scratch, complete with auto-complete suggestions to assist you.
Drag and Drop: Rearrange and move Switch blocks just like normal blocks, enabling you to edit the program’s structure through drag-and-drop actions.
Syntax: Begin with converting a single block to a Switch block to see and learn the Python syntax, and progress to more complex code when you’re ready.
Learn More: Advance to writing multi-line Python code with proper indentations to deepen your understanding, all within a Switch block.
Familiar: All of this is done within the comfort of the block-based environment you’re already familiar with, making the transition to text-based programming smoother and more intuitive.
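As a taste of what students see when blocks become text, the snippet below imitates VEXcode-style Python for driving a square. The drivetrain commands follow the general style of VEXcode VR but are stubbed locally so the example runs anywhere; the exact command set varies by VEX platform.

```python
# Sketch of VEXcode-style Python, roughly what converted Switch blocks
# might contain. The drivetrain API imitates VEXcode VR and is stubbed
# here so the snippet is self-contained; in VEXcode the platform
# provides these names.
FORWARD, RIGHT, MM, DEGREES = "forward", "right", "mm", "degrees"

class Drivetrain:
    def drive_for(self, direction, distance, unit):
        print(f"drive {direction} {distance} {unit}")
    def turn_for(self, direction, angle, unit):
        print(f"turn {direction} {angle} {unit}")

drivetrain = Drivetrain()

# Trace a square: drive 200 mm forward, turn 90 degrees right, four times.
for _ in range(4):
    drivetrain.drive_for(FORWARD, 200, MM)
    drivetrain.turn_for(RIGHT, 90, DEGREES)
```

With Switch, a student might convert just the two drivetrain calls inside the loop to text first, then graduate to writing the loop itself in Python.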
Switch’s scaffolded approach supports learners transitioning from block-based to text-based coding, building confidence and proficiency in a single, supportive environment. The development of Switch demonstrates VEX Robotics’ commitment to providing schools with programs that strengthen STEM education for students of all skill levels.
“Teaching Computer Science is important but also challenging,” said Jason McKenna, Vice President of Global Education Strategy. “Educators are seeking ways to teach programming in an approachable manner that allows students to transition from block-based to text-based coding. Switch is an innovative solution in our ongoing efforts to make STEM and Computer Science Education accessible to all students.”
In addition to facilitating a seamless transition from blocks to text-based coding, Switch assists students in the following key areas:
Enhanced Differentiated Learning: Switch enables students to progress at their own pace by only converting specific parts of their code to Python when they are ready. This adaptability supports differentiated learning, allowing educators to personalize instruction for students who may excel or need additional support.
Syntax Guidance and Error Reduction: With built-in autocomplete functionality and automated indentation, Switch helps users learn Python syntax with fewer errors. This guidance allows students to focus on understanding programming concepts rather than being hindered by syntax errors, thereby reducing frustration and fostering confidence.
Integrated Learning Pathway within VEXcode: Switch is an integral feature of VEXcode, allowing students to start with block-based coding, incorporate Python using Switch, and eventually move to full text-based coding—all in one platform. This structured pathway supports students’ progression from novice to advanced levels in a cohesive environment, reinforcing continuity in their programming journey.
“Research conducted by our team offers empirical evidence for the effectiveness of Switch,” said Dr. Jimmy Lin, Director of Computer Science Education. “The findings contributed to our understanding of how to design environments that support students of varying experience levels and confidence in transitioning from block-based modalities to Python.”
VEXcode with Switch is free and compatible with the following VEX Robotics platforms: IQ, EXP, V5, and CTE Workcell. Additionally, VEXcode with Switch is available with a subscription in VEXcode VR, an online platform that enables users to learn programming by coding Virtual Robots (VR) in interactive, video game-like environments. VEXcode with Switch is accessible on Chromebooks, Windows, and Mac computers.
“Throughout December, in celebration of Computer Science Education Week, we’re inviting everyone to try Switch with VEXcode VR or with their VEX hardware,” said Tim Friez, Vice President of Educational Technology. “Our new Hour of Code activities and resources enable students to explore Switch coding across both hardware and virtual platforms.”
Transitioning from blocks to text can be challenging, but with the patented Switch features, it doesn’t have to be.
Discover how Switch and VEXcode can empower your students to master Python at their own pace. Visit switch.vex.com to learn more.