At The Bleeding Edge Of Robotics: 2 Year Milestone For pib

Two years ago, the open source robotics project pib was launched. The goal of pib, the printable intelligent bot anyone can build themselves, is to lower the barriers and make robotics and AI accessible to anyone who is interested. Over the past two years, pib has built an active and dedicated community that keeps the project moving forward. A lot has happened since the launch – time to look back on how far pib has come.

Milestones, Challenges and What Comes Next

It’s not every day that a robot turns two years old, so the team celebrated with a big party. The all-new pib documentary was streamed to kick off the event, followed by various stations where guests could experience pib’s newest features hands-on.

pib started out as an idea that slowly took shape in the form of a master’s thesis and a robotic arm. From there, a humanoid robot was created that can easily be 3D printed using the free print files on the website and then assembled with the help of the online building manuals. pib offers many ways to experiment with AI training, such as voice assistant technology, object detection, imitation and more.

For starters, the pib team and the community have optimized pib’s mobility in a joint effort. The result is impressive: in its newest version, pib can move its arms at virtually any angle. Another rapidly progressing topic is pib’s digital twin, which received a birthday present from the community members who took on this project: the camera now works in the virtual environment, so the camera stream can be transmitted to the outside world, analyzed there, and used as the basis for control processes.

Talk To Me, pib!

Aside from that, there has been significant progress in human-machine interaction, particularly in enabling voice-based communication with pib through advanced voice assistant technology. Exploring natural speech interaction has become a major focus of the team’s current efforts, and the project is committed to advancing pib’s capabilities in this direction.

One of the newest features revealed at the pib party is communication in a multimodal world: the robot captures an image, analyzes it, and then answers questions about it. For example, when asked “where are we right now?”, pib interprets the room and its setting and answers something like “we are in an office space”.

With this new feature, pib was also able to play its first round of Tic Tac Toe. The team drew the game board on a whiteboard so that pib could analyze the current state of the game and determine the next move, responding with commands such as “place the next X in the top right corner”.
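
Conceptually, such a multimodal loop is simple: capture a frame, send it together with the question to a vision-language model, and speak the reply. The sketch below is a loose illustration using OpenCV and the openai Python client, not pib’s actual implementation; the model name and camera index are assumptions:

import base64
import cv2
from openai import OpenAI

# Capture a single frame from the robot's camera (device index assumed)
ok, frame = cv2.VideoCapture(0).read()
image_b64 = base64.b64encode(cv2.imencode(".jpg", frame)[1].tobytes()).decode()

client = OpenAI()  # expects OPENAI_API_KEY in the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any vision-capable model would do
    messages=[{"role": "user", "content": [
        {"type": "text", "text": "Where are we right now?"},
        {"type": "image_url",
         "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
    ]}],
)
print(response.choices[0].message.content)  # e.g. "We are in an office space."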

Join The Community

The pib community is rapidly growing and consists of 3D printing, robotics and AI enthusiasts. Whether you’re a rookie or an expert, anyone is invited to join, share their ideas and work on exciting projects together.

Exploring Elephant Robotics LIMO Cobot

1. Introduction:

This article introduces a practical application of the LIMO Cobot by Elephant Robotics in a simulated scenario. You may have seen previous posts about LIMO Cobot’s technical cases, A[LINK]B[LINK]. The reason for writing another related article is that the original testing environment, while sufficient to demonstrate basic functionality, was overly idealized and simplified compared with real-world applications. We therefore set out to use the robot in an environment closer to real operation and to share some of the issues that arose along the way.

2. Comparing the Old and New Scenarios:

First, let’s look at what the old and new scenarios are like.

Old Scenario: A simple setup with a few obstacles, relatively regular objects, and a field enclosed by barriers, approximately 1.5 m × 2 m in size.

New Scenario: The new scenario contains a wider variety of obstacles of different shapes, including a hollowed-out object in the middle, and simulates a real environment with road guidance markers, parking spaces, and more. The size of the field is 3 m × 3 m.

The change in environment is significant for testing and demonstrating the comprehensiveness and applicability of our product.

3. Analysis of Practical Cases:

Next, let’s briefly introduce the overall process.

The process is mainly divided into three modules: one is the functionality of LIMO PRO, the second is machine vision processing, and the third is the functionality of the robotic arm. (For a more detailed introduction, please see the previous article [link].)

LIMO PRO is mainly responsible for SLAM: it uses the gmapping algorithm to map the terrain, then navigates on that map and ultimately performs fixed-point patrols.

myCobot 280 M5 is primarily responsible for the task of grasping objects. A camera and a suction pump actuator are installed at the end of the robotic arm. The camera captures the real scene, the image is processed with OpenCV to find the coordinates of the target object, and the arm then performs the grasping operation.

Overall process:

1. LIMO performs mapping.

2. Run the fixed-point cruising program.

3. LIMO goes to point A → myCobot 280 performs the grasping operation → LIMO goes to point B → myCobot 280 performs the placing operation.

4. Repeat step 3 until there are no target objects left, then terminate the program.

Next, let’s follow the practical execution process.

Mapping:

First, you need to start the LiDAR by opening a new terminal and entering the following command:

roslaunch limo_bringup limo_start.launch pub_odom_tf:=false

Then, start the gmapping mapping algorithm by opening another new terminal and entering the command:

roslaunch limo_bringup limo_gmapping.launch

After successful startup, the rviz visualization tool will open, and you will see the interface as shown in the figure.

At this point, you can switch the controller to remote control mode to control the LIMO for mapping.

After constructing the map, you need to run the following commands to save the map to a specified directory:

1. Switch to the directory where you want to save the map. Here, save the map to `~/agilex_ws/src/limo_ros/limo_bringup/maps/`. Enter the command in the terminal:

cd ~/agilex_ws/src/limo_ros/limo_bringup/maps/

2. After switching to `~/agilex_ws/src/limo_ros/limo_bringup/maps/`, continue by entering the command in the terminal:

rosrun map_server map_saver -f map1

This process went very smoothly. Let’s continue by testing the navigation function from point A to point B.

Navigation:

1. First, start the LiDAR by entering the following command in the terminal:

roslaunch limo_bringup limo_start.launch pub_odom_tf:=false

2. Start the navigation function by entering the following command in the terminal:

roslaunch limo_bringup limo_navigation_diff.launch

Upon success, the rviz interface will open, displaying the map we just created.

Click on “2D Pose Estimate”, then click on LIMO’s location on the map. After starting navigation, you will find that the laser scan does not overlap with the map. You need to correct this manually: use the rviz tools to publish an approximate pose for LIMO that matches the chassis’s actual position in the scene, then use the controller to rotate LIMO so it can auto-correct. When the laser scan overlaps with the shapes in the map, the correction is complete, as shown in the figure.

Click on “2D Nav Goal” and select the destination on the map for navigation.

The navigation test also proceeds smoothly.

Next, we will move on to the part about the static robotic arm’s grasping function.

Identifying and Acquiring the Pose of ArUco Markers

To precisely identify objects and obtain the position of the target object, we use ArUco markers. Before starting, make sure the camera’s specific parameters are set.

Initialize the camera parameters based on the camera being used.

import cv2
import numpy as np

class ArucoDetector:  # class name assumed; it is not shown in the original article
    def __init__(self, mtx: np.ndarray, dist: np.ndarray, marker_size: int):
        self.mtx = mtx                    # camera intrinsic matrix
        self.dist = dist                  # lens distortion coefficients
        self.marker_size = marker_size    # marker edge length
        self.aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_6X6_250)
        self.parameters = cv2.aruco.DetectorParameters_create()
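
For illustration, the class might be instantiated as follows; the calibration values below are placeholders, not the calibration of the camera actually used in the project:

# Placeholder intrinsics for a 640x480 camera -- substitute the values
# from your own calibration (e.g. obtained with cv2.calibrateCamera)
mtx = np.array([[600.0,   0.0, 320.0],
                [  0.0, 600.0, 240.0],
                [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume negligible lens distortion for this sketch

detector = ArucoDetector(mtx=mtx, dist=dist, marker_size=30)  # edge length, e.g. in mm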

Then, identify the object and estimate its pose to obtain the 3D position of the object and output the position information.

    # (continuing the ArucoDetector class from above)
    def estimatePoseSingleMarkers(self, corners):
        """
        Estimate the rvec and tvec for each of the marker corners detected by:
        corners, ids, rejectedImgPoints = cv2.aruco.detectMarkers(image, ...)
        corners     - array of detected corners for each marker in the image
        marker_size - size of the detected markers (set in __init__)
        mtx         - camera matrix (set in __init__)
        dist        - camera distortion coefficients (set in __init__)
        RETURNS arrays of rvecs and tvecs (mirroring the output of the old
        cv2.aruco.estimatePoseSingleMarkers())
        """
        # 3D corner coordinates of a square marker centred on the origin
        marker_points = np.array([[-self.marker_size / 2, self.marker_size / 2, 0],
                                  [self.marker_size / 2, self.marker_size / 2, 0],
                                  [self.marker_size / 2, -self.marker_size / 2, 0],
                                  [-self.marker_size / 2, -self.marker_size / 2, 0]],
                                 dtype=np.float32)
        rvecs = []
        tvecs = []
        for corner in corners:
            # Solve the perspective-n-point problem for each marker
            retval, rvec, tvec = cv2.solvePnP(marker_points, corner, self.mtx, self.dist,
                                              False, cv2.SOLVEPNP_IPPE_SQUARE)
            if retval:
                rvecs.append(rvec)
                tvecs.append(tvec)

        return np.array(rvecs), np.array(tvecs)

The steps above complete the identification and acquisition of the object’s information, and finally, the object’s coordinates are returned to the robotic arm to execute the grasping.
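
Putting the two methods together, a minimal detection loop could look like the sketch below. It assumes the ArucoDetector instance from above and a generic USB camera; on the real robot, the frame comes from the arm-mounted camera:

cap = cv2.VideoCapture(0)  # camera device index is an assumption
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detect markers with the dictionary and parameters set up in __init__
    corners, ids, rejected = cv2.aruco.detectMarkers(
        gray, detector.aruco_dict, parameters=detector.parameters)
    if ids is not None:
        rvecs, tvecs = detector.estimatePoseSingleMarkers(corners)
        # tvec gives the marker's position in the camera frame
        print("first marker position (camera frame):", tvecs[0].ravel())
cap.release()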

Robotic Arm Movement and Grasping Operation

Based on the position of the ArUco marker, calculate the target coordinates the robotic arm needs to move to, and convert the position into the robotic arm’s coordinate system.

def homo_transform_matrix(x, y, z, rx, ry, rz, order="ZYX"):
    # Assemble a 4x4 homogeneous transform from a rotation and a translation
    rot_mat = rotation_matrix(rx, ry, rz, order=order)  # 3x3 rotation part
    trans_vec = np.array([[x, y, z, 1]]).T              # translation column (homogeneous)
    mat = np.vstack([rot_mat, np.zeros((1, 3))])        # pad rotation to 4x3
    mat = np.hstack([mat, trans_vec])                   # append translation -> 4x4
    return mat
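
The helper rotation_matrix is not shown in the article. A plausible stand-in, assuming rx, ry and rz are Euler angles in radians applied along the axes named in order, could be built on SciPy:

import numpy as np
from scipy.spatial.transform import Rotation

def rotation_matrix(rx, ry, rz, order="ZYX"):
    # Map each axis letter to its angle, then build the 3x3 matrix
    # using intrinsic rotations in the given axis order.
    angle_for_axis = {"X": rx, "Y": ry, "Z": rz}
    angles = [angle_for_axis[axis] for axis in order]
    return Rotation.from_euler(order, angles).as_matrix()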

If the Z-axis position is detected as too high, it will be corrected:

if end_effector_z_height is not None:
    p_base[2] = end_effector_z_height

After the coordinate correction is completed, the robotic arm will move to the target position.

# Concatenate x, y, z, and the current posture into a new array
new_coords = np.concatenate([p_base, curr_rotation[3:]])
xy_coords = new_coords.copy()

Then, the suction pump on the end effector is controlled through its API to pick up the object.
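
In the article this happens inside the project’s own VisualGrasping wrapper over a socket connection. As a rough standalone sketch, the pymycobot library can switch a suction pump through the arm’s I/O pins; the pin number and polarity follow Elephant Robotics’ suction-pump examples and may differ depending on how the pump is wired:

import time
from pymycobot.mycobot import MyCobot

mc = MyCobot("/dev/ttyUSB0", 115200)  # serial port and baud rate are assumptions

mc.set_basic_output(5, 0)  # drive the control pin low: pump on
time.sleep(2)              # hold the object while the arm moves
mc.set_basic_output(5, 1)  # drive the pin high: release the object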

The above completes the respective functions of the two robots. Next, they will be integrated into the ROS environment.

import time
# MapNavigation and VisualGrasping are helper classes from the project's
# own ROS package (their implementation is not shown in this article)

# Initialize the coordinates of points A and B
goal_1 = [(2.060220241546631, -2.2297520637512207, 0.009794792000444471, 0.9999520298742676)]  # B
goal_2 = [(1.1215190887451172, -0.002757132053375244, -0.7129997613218174, 0.7011642748707548)]  # A

# Start navigation and connect the robotic arm
map_navigation = MapNavigation()
arm = VisualGrasping("10.42.0.203", 9000)
print("connect successful")

# Grasp at the current position, then navigate and unload
arm.perform_visual_grasp(1, -89)
for goal in goal_1:
    x_goal, y_goal, orientation_z, orientation_w = goal
    flag_feed_goalReached = map_navigation.moveToGoal(x_goal, y_goal, orientation_z, orientation_w)
    if flag_feed_goalReached:
        time.sleep(1)
        # executing 1 grab and setting the end effector's Z-axis height to -93
        arm.unload()
        print("command completed")
    else:
        print("failed")

4. Problems Encountered

Mapping Situation:

When we initially tried mapping without enclosing the field, frequent errors occurred during navigation and localization, and it failed to meet our requirements for a simulated scenario.

Navigation Situation:

In the new scenario, one of the obstacles has a hollow structure.

During navigation from point A to point B, LIMO may fail to detect this obstacle, assume it can pass through, and damage it. This happens because LIMO’s LiDAR is mounted low and scans only the empty space underneath the obstacle. Possible solutions include adjusting the LiDAR’s scanning range, which requires extensive testing to fine-tune, or raising the LiDAR so that the obstacle is recognized as impassable.

Robotic Arm Grasping Situation:

In the video, it’s evident that our target object is placed on a flat surface. The grasping routine did not consider obstacle avoidance around the object. In the future, when defining special grasping positions, this situation needs to be taken into account.

5. Conclusion

Overall, LIMO Cobot performed excellently in this scenario, successfully meeting the requirements. The entire simulated scenario covered multiple core areas of robotics, including motion control of the robotic arm, path planning, machine vision recognition and grasping, and radar mapping navigation and fixed-point cruising functions of the mobile chassis. By integrating these functional modules in ROS, we built an efficient automated process, showcasing LIMO Cobot’s broad adaptability and advanced capabilities in complex environments.

Credits

Elephant Robotics

Elephant Robotics Unveils myArm M&C Series Robots to Advance Embodied Intelligence

Explore myArm M&C series robots for versatile, high-performing solutions in robotics, offering precise control and diverse applications.


SHENZHEN, GUANGDONG, CHINA, May 10, 2024 /EINPresswire.com/ — Embodied intelligence research, as a critical branch of artificial intelligence, is striving to endow robots with new capabilities in precise motion control, high-level autonomous decision-making, and seamless human-machine interaction.

Against this backdrop, Elephant Robotics has recently unveiled the myArm M&C series robots. These powerful and cost-effective lightweight robots empower researchers and developers in both data collection and execution, driving forward advancements in embodied intelligence technology and its practical applications.

The myArm M&C series robots are meticulously designed to meet the diverse needs of users, prioritizing flexibility and adaptability. They play a pivotal role in various research and application scenarios, making them the ideal robotics solution for education and research purposes.

myArm C650

The myArm C650 is a universal 6 DOF robot motion information collection device designed to meet the diverse needs of education, research, and industry in robot motion data collection and analysis. Weighing only 1.8 kg, the myArm C650 boasts a horizontal working radius of 650 mm, minimizing inertial forces during operation for enhanced response speed and precision.

Equipped with high-precision digital servo motors and 4096-bit encoders on all 6 joints, the myArm C650 mimics human arm motion with remarkable accuracy, enabling a wide range of tasks. Its intuitive control method, featuring dual-finger remote control and dual customizable buttons, supports recording functions for precise command execution and immediate feedback on robot behavior. This flexibility makes the myArm C650 an ideal choice for precise motion tracking and data collection in various experimental and educational settings. With an impressive information acquisition speed of up to 50Hz, it has become indispensable for robot algorithm development and higher education institutions, offering real-time data support for complex control systems.

In remote control applications, the myArm C650 excels, delivering outstanding performance regardless of the robot’s configuration complexity. Moreover, its compatibility with Python and ROS, coupled with open-source remote control demonstration files, expands its application scope, enabling seamless integration with advanced robot platforms like the myArm M750, myCobot Pro 630, and Mercury B1.

The myArm C650 sets a new standard for versatility and performance in robot motion data collection, empowering users to explore the full potential of advanced robotics across diverse fields.

myArm M750

The myArm M750 is a universal intelligent 6 DOF robotic arm. It not only meets the demand for high-precision robot motion control but is particularly suitable for entry-level robot motion algorithm verification and practical teaching scenarios. Its standardized mechanical arm structure provides an ideal learning platform for students and beginners to grasp the basic principles and applications of robot kinematics.

Dedicated to achieving precise motion control and verification, the myArm M750 excels in applications requiring strict operational accuracy, such as precision assembly, fine manipulation, and quality monitoring. Equipped with industrial-grade high-precision digital servo motors and advanced control algorithms, the myArm M750 delivers exceptional torque control and positional accuracy, supporting a rated load capacity of 500g and a peak load of up to 1kg.

The myArm M750’s versatility extends to its end effector design, featuring a standard parallel gripper and vision module that empower users with basic grasping and recognition capabilities. Furthermore, the myArm M750 offers compatibility with a range of optional accessories, significantly expanding its application scenarios and adaptability to diverse tasks.

myArm M&C Teleoperation Robotic Arm Kit

The Teleoperation Robotic Arm Kit represents a leap forward in robotics innovation, offering an advanced solution tailored for remote control and real-time interaction through cutting-edge teleoperation technology. By seamlessly integrating the versatility of the myArm C650 with the precise control capabilities of the myArm M750, this kit forms a dynamic and adaptable platform suitable for a myriad of research, educational, and commercial applications.

Engineered to mimic human behavior, the kit enables researchers and developers to validate and test remote control systems and robot motion planning models akin to the ALOHA robot. Empowered by millisecond-level data acquisition and control capability, real-time drag control functionality, and multi-robot collaborative operation capabilities, the myArm M&C Kit facilitates the execution of complex tasks, including advanced simulations of human behavior. This technology not only showcases the precision and efficiency of robots in mimicking human actions but also propels research and development in robot technology for simulating human behavior and performing everyday tasks.

Moreover, integrated AI technology equips robots with learning and adaptability, enabling autonomous navigation, object recognition, and complex decision-making capabilities, thereby unlocking vast application potential across diverse research fields.

myArm M&C Embodied Humanoid Robot Compound Kit

Stanford University’s Mobile ALOHA project has garnered global attention for its groundbreaking advancements in robotics technology. It has developed an advanced system that allows users to execute complex dual-arm tasks through human demonstrations, thereby enhancing imitation learning algorithms‘ efficiency through data accumulation and collaborative training. The Mobile ALOHA system showcases its versatility by seamlessly executing various real-world tasks, from cleaning spilled drinks to cooking shrimp and washing frying pans. This innovation not only marks a significant milestone in robotics but also paves the way for a future where humans and robots coexist harmoniously.

Drawing inspiration from Stanford’s Mobile ALOHA project, this kit adopts the same Tracer mobile chassis. With an open-source philosophy, minimalist design, modular construction, and robust local community support, this kit serves as a cost-effective solution for real-time robot teleoperation and control, mirroring the capabilities of Mobile ALOHA with a more accessible price.

Designed to cater to the needs of small and medium-sized enterprises, as well as educational and research institutions, this kit offers a more accessible price, user-friendly features, and easy accessibility to cutting-edge robot technology.

The myArm M&C series robots are a versatile robotics solution catering to diverse needs from fundamental research to intricate task execution. In combination with optional kits, they seamlessly adapt to various application scenarios, from precision manufacturing to medical assistance, education, training, and household support. The myArm M&C series robots stand out as dependable and high-performing solutions, promising reliability and excellence. The inclusion of the Embodied Humanoid Robot Compound Kit and Quadruped Bionic Robot Compound Kit further expands the possibilities in robotics, encouraging interdisciplinary exploration and fostering innovation.

Festo at Hannover Fair unveils Bionic Honeybees that fly in swarms

For more than 15 years, the Bionic Learning Network has been focusing on the fascination of flying. In addition to the technical decoding of bird flight, the team has researched and technologically implemented numerous other flying objects and their natural principles. With the BionicBee, the Bionic Learning Network has now for the first time developed a flying object that can fly in large numbers and completely autonomously in a swarm. The BionicBee will present its first flight show at the Hannover Messe 2024.

At around 34 grams, a length of 220 millimetres and a wingspan of 240 millimetres, the BionicBee is the smallest flying object created by the Bionic Learning Network to date. For the first time, the developers used the method of generative design: after entering just a few parameters, a software application uses defined design principles to find the optimal structure to use as little material as necessary while maintaining the most stable construction possible. This consistent lightweight construction is essential for good manoeuvrability and flying time.

Autonomous flying in a swarm

The autonomous behavior of the bee swarm is achieved with the help of an indoor locating system with ultra-wideband (UWB) technology. For this purpose, eight UWB anchors are installed in the space on two levels. This enables an accurate time measurement and allows the bees to locate themselves in the space. The UWB anchors send signals to the individual bees, which can independently measure the distances to the respective transmitting elements and calculate their own position in the space using the time stamps.
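
The underlying principle is multilateration: with the anchor positions known and the distances measured, each bee can solve for its own position. A minimal numerical sketch (not Festo’s implementation) linearizes the range equations and solves them by least squares:

import numpy as np

def locate(anchors: np.ndarray, distances: np.ndarray) -> np.ndarray:
    # Subtract the first anchor's range equation from the others to get a
    # linear system: 2*(a_i - a_0) . x = d_0^2 - d_i^2 + |a_i|^2 - |a_0|^2
    a0, d0 = anchors[0], distances[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - distances[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Eight anchors on two levels, as in the installation described above
anchors = np.array([[0, 0, 0], [5, 0, 0], [5, 5, 0], [0, 5, 0],
                    [0, 0, 3], [5, 0, 3], [5, 5, 3], [0, 5, 3]], dtype=float)
true_pos = np.array([2.0, 3.0, 1.5])
distances = np.linalg.norm(anchors - true_pos, axis=1)
print(locate(anchors, distances))  # ~ [2. 3. 1.5]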

To fly in a swarm, the bees follow paths specified by a central computer. To ensure safe and collision-free flight in close formation, a high degree of spatial and temporal accuracy is required. When planning the paths, the possible mutual interaction through air turbulence (“downwash”) must also be taken into account.

As every bee is handmade and even the smallest manufacturing differences can influence its flight behavior, the bees additionally have an automatic calibration function: after a short test flight, each bee determines its individually optimized controller parameters. The intelligent algorithm can thus compensate for the hardware differences between the individual bees, allowing the entire swarm to be controlled from outside as if all bees were identical.

“ReBeLs on Wheels” make driverless transport systems affordable through modern plastic technology

Cologne/Hanover, April 24, 2024 – Mobile robotics systems are being used in more and more work areas, in e-commerce warehouses as well as in modern restaurants. Conventional models on the market start at around 25,000 euros, while solutions with an integrated robot arm start at around 70,000 euros. However, widespread use in the market is often unaffordable for small and medium-sized enterprises due to the high prices. igus wants to change this with new low-cost robotics offerings and is presenting a series of low-cost mobile plastic robots at the Hannover Messe.

The market for Automated Guided Vehicles (AGV) and Autonomous Mobile Robots (AMR) is booming: the global market for mobile robotics, including service robotics, is currently worth around 20.3 billion US dollars, and experts expect it to almost double by 2028. Mobile robots are particularly common in intralogistics and industrial applications, and even in the catering industry or in hospitals, the smart helpers are increasingly making their rounds. This is also the case at motion plastics specialist igus: for four years now, the plastics experts have been successfully testing AGVs in-house – driverless racks that deliver mail and deliveries to offices, as well as mobile robots in production that move transports and stack-and-turn containers. The experience gained flows directly into the development of a new low-cost automation product line, the “ReBeL on Wheels”. Their goal: to pave the way for small and medium-sized enterprises (SMEs) to use cost-effective mobile robotics.

Mobile ReBeL solutions for education, logistics and service
The basis of any mobile robotics system is the ReBeL. The use of plastic makes the robot particularly affordable at 4,970 euros and, with a dead weight of 8.2 kilograms, the lightest service robot with cobot function in its class. All mechanical components that make up the ReBeL are developed and manufactured by igus without exception. It has a load capacity of 2 kilograms and a reach of 664 millimetres. Various mobile systems are planned in which the ReBeL is centrally integrated: igus is launching an affordable version for the education sector at 14,699 euros – including the robot arm. The ReBeL EduMove, equipped with a gripper, serves as an autonomous learning platform for educational institutions thanks to open source. It has a modular design and can be flexibly expanded with additional functions such as lidar, camera technology or a SLAM algorithm. Another variant is an automated guided vehicle system for SMEs. It can carry up to 30 kilograms, and with the optional ReBeL, simple A-to-B positioning can be performed. It dispenses with expensive sensor technology and instead relies on 3D sensor technology developed in-house. The price is 17,999 euros. In addition, igus will be showcasing a study of a low-priced service robot in Hanover: the ReBeL Butler is suitable for simple but time-consuming pick-up and drop-off services, for example in the hotel and catering industry.

A lighthouse project on wheels
The goal of all these developments is the lighthouse project, a mobile robot with integrated HMI and vision that could even tidy up an office on its own. “With this project, we are pursuing a bottom-to-top strategy, in which certain components such as safety laser scanners are not included in the basic package in order to keep the price low,” explains Alexander Mühlens, authorized signatory and head of the low-cost automation business unit at igus. “Nevertheless, it ensures that the solution can be retrofitted for industrial requirements.” Among other things, igus is presenting an affordable gripper with a large stroke and travel this year, which offers a high degree of flexibility when gripping different geometries. Alexander Mühlens: “The areas of application for this targeted low-cost AMR are extremely diverse and go far beyond simple transport tasks. They encompass a huge range of applications in various areas of life, such as cleaning tasks or serving coffee directly at the workplace.”

IDS NXT malibu now available with the 8 MP Sony Starvis 2 sensor IMX678

Intelligent industrial camera with 4K streaming and excellent low-light performance

IDS expands its product line for intelligent image processing and launches a new IDS NXT malibu camera. It enables AI-based image processing, video compression and streaming in full 4K sensor resolution at 30 fps – directly in and out of the camera. The 8 MP sensor IMX678 is part of the Starvis 2 series from Sony. It ensures impressive image quality even in low light conditions and twilight.

Industrial camera with live AI: IDS NXT malibu is able to independently perform AI-based image analyses and provide the results as live overlays in compressed video streams via RTSP (Real Time Streaming Protocol). Hidden inside is a special SoC (system-on-a-chip) from Ambarella, which is known from action cameras. An ISP with helpful automatic features such as brightness, noise and colour correction ensures that optimum image quality is attained at all times. The new 8 MP camera complements the recently introduced camera variant with the 5 MP onsemi sensor AR0521.

To coincide with the market launch of the new model, IDS Imaging Development Systems has also published a new software release. Users now also have the option of displaying live images from the IDS NXT malibu camera models via an MJPEG-compressed HTTP stream. This enables visualisation in any web browser without additional software or plug-ins. In addition, the AI vision studio IDS lighthouse can be used to train individual neural networks for the Ambarella SoC of the camera family. This simplifies the use of the camera for AI-based image analyses with classification, object recognition and anomaly detection methods.
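
Because the camera publishes standard RTSP and MJPEG streams, any generic client can display them. For example, with OpenCV in Python (the stream URL below is a placeholder; the camera’s actual address is in its documentation):

import cv2

cap = cv2.VideoCapture("rtsp://192.168.0.42/stream")  # placeholder URL
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("IDS NXT malibu stream", frame)  # AI overlays are part of the stream
    if cv2.waitKey(1) == 27:  # press Esc to quit
        break
cap.release()
cv2.destroyAllWindows()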

PiCockpit: Innovative Web Solution for Managing Multiple Raspberry Pis

pi3g Unveils Groundbreaking New Features for Business Users

Leipzig, March 27, 2024 – pi3g GmbH & Co. KG introduces new web-based Terminal and File Editor Apps to its PiCockpit.com platform, enhancing Raspberry Pi remote management with an updated Script Scheduler, Video Streaming App, and the PiCockpit Pro Plus plan for custom software needs. These innovations reflect pi3g’s dedication to simplifying and boosting productivity for global business users without the need for in-depth Linux expertise. In particular, teams with a Windows® background require less training and specialized skills when using PiCockpit to manage the company’s Raspberry Pi fleet, resulting in substantial time and cost savings for the business.

The PiCockpit Terminal App offers a seamless web-based terminal interface, eliminating the need for complex setup or additional software like PuTTY. Leveraging WebRTC technology, it ensures a secure, encrypted connection for managing all Raspberry Pi devices from any location. This app simplifies remote device management for businesses, enabling straightforward web-based remote access beyond the limitations of traditional methods.

The File Editor App, enhanced with RaspiGPT technology powered by OpenAI’s GPT-4, simplifies file and directory management on Raspberry Pis. By accessing PiCockpit.com, users can remotely edit files from any web browser. This app provides smart assistance for script writing, log analysis, and file content explanation, streamlining file management and speeding up business development processes.

PiCockpit: Simplified management, reduced costs

Maximilian Batz, founder of pi3g, stated, “Our goal has always been to make Raspberry Pi management as accessible and efficient as possible for our users. The launch of our new PiCockpit apps, along with enhancements to our existing services, represents a significant step forward in achieving that goal. We’re particularly excited about the possibilities that RaspiGPT opens up, streamlining tasks that previously required extensive technical knowledge.”

PiCockpit’s Pro Plan introduces a comprehensive solution for businesses looking to scale their operations beyond the first five free Pis. With Two Factor Authentication, PiCockpit ensures secure access to user accounts and their Raspberry Pis. PiCockpit Pro Plus takes customization to the next level, offering bespoke software development and system integration services. This plan is not just an offering but a partnership, providing businesses with a tailored solution that meshes seamlessly with their existing systems, paving the way for significant cost savings and operational efficiencies.

About PiCockpit

PiCockpit.com is a comprehensive web interface designed to simplify the management and operation of Raspberry Pi devices. Offering a range of applications including PiStats, Video Streaming, Script Scheduler, Terminal, and File Editor, PiCockpit enables users to monitor performance, schedule scripts, stream Raspberry Pi cameras, and manage files from anywhere in the world.

About pi3g

Based in Leipzig, Germany, pi3g has been involved in the Raspberry Pi ecosystem since its very beginning in 2012. As an official Raspberry Pi approved reseller, pi3g offers a comprehensive suite of services for businesses, including hardware sourcing, software development, hardware development and consulting.

Robotics competitions in Hamburg: Winners are alliances from Berlin and Brandenburg as well as Rockenhausen and Berlin

VRC and VIQC German Masters Winners:

▪ Winners of the VEX Robotics Competition: Alexander-von-Humboldt-Gymnasium (Berlin) and Heinitz-Gymnasium (Rüdersdorf)
▪ Winners of the VEX IQ Challenge: IGS Rockenhausen (Rhineland-Palatinate) and BEST-Sabel (Berlin)
▪ Almost 35 teams met at the German finals from 6 to 8 March
▪ Students from IGS Rockenhausen (Rhineland-Palatinate) and Ernst-Abbe Gymnasium in Oberkochen secured tickets for the VEX Robotics World Championship in Dallas

Hamburg, March 8, 2024. Hectic activity has reigned over the past three days at the Hamburg University of Applied Sciences (HAW Hamburg). Around 150 pupils from general education schools and vocational schools from all over Germany worked on robots that they had designed themselves over the past few months. Their goal: For the final rounds of the German VEX robot competitions, they wanted to get the best out of their babies. A total of 14 trophies were up for grabs, which were ultimately awarded to twelve different teams. 

Winners of the cooperative tournament competitions at the German Masters

In the VEX Robotics Competition (VRC), the Alexander-von-Humboldt-Gymnasium (Berlin) and the Heinitz-Gymnasium (Rüdersdorf) prevailed. The VEX IQ Challenge (VIQC) was won by an alliance of IGS Rockenhausen (Rhineland-Palatinate) and the BEST-Sabel educational institutions (Berlin).

Luca Eckert (from left) and Jonas Köhler (IGS) as well as Tim Heintze and Konrad Möhring (BEST-Sabel) won the VEX IQ Teamwork Challenge

The German Masters gives teams the opportunity to qualify for the VEX Worlds. These “World Championships” will take place from April 25 to May 3 in Dallas, Texas, with 1,000 teams from 50 countries. The prerequisite for flying overseas: winning an Excellence Award. A jury awards them on the basis of performance in the competition and other criteria, such as how a robot’s capabilities compare. Students from IGS Rockenhausen (High and Middle School) and the Ernst-Abbe-Gymnasium in Oberkochen (Middle and Elementary School) will travel to Dallas.

Tobit Gries (from left), Sebastian Gasior and Jakob Bachmann from IGS Rockenhausen snatched the Excellence Award/High School

The worldwide competitions of the Robotics Education & Competition (REC) Foundation, which is based in the USA, are organized in Germany by the Hamburg-based association roboMINT.

The VEX Robotics Competition (VRC) is open to students from the age of eleven. A team consists of at least two students, and it competes in alliances against other teams. The aim of a game in autonomous and remote-controlled driving modes is, among other things, to get as many tripballs as possible into your own goal or into your own offensive zone.

Till Schneider (l.) and Vincent Fratzscher (Heinitz-Gymnasium) won the trophy in the VRC team competition

The VEX IQ Challenge (VIQC) is open to students between the ages of eight and 15. A team consists of at least two students, and it competes together with another team. One of the goals of the game is to convert as many blocks as possible into goals. Points are also awarded if the robot is parked in the “Supply Zone” at the end of a match.

Anes Rebahi (from left), Nico Menge, Karl Steinbach, Maximilian Marschner and Erik Tunsch (Alexander-von-Humboldt-Gymnasium) won the VRC team competition

Introducing BlockBot: an Innovative STEAM Toy in Robotics Education

In the rapidly evolving landscape of educational toys, one product stands out for its innovative approach to STEAM (Science, Technology, Engineering, Arts, and Mathematics) education: BlockBot. Developed by 130T Inc., BlockBot is a groundbreaking smart block system that combines the familiarity of LEGO with cutting-edge robotics technology, offering children a hands-on learning experience like never before.

At the heart of BlockBot lies its modular design, reminiscent of traditional LEGO blocks. Each block is equipped with different robotics functionalities, allowing users to assemble them into various configurations, from simple structures to complex robots. What sets BlockBot apart is its patented connector technology, which enables seamless power supply and communication between blocks without the need for wires, making it not only convenient but also aesthetically pleasing, particularly for young learners.

One of the key advantages of BlockBot is its versatility in STEAM education. Through Bluetooth communication, BlockBot can interface with any computing device, enabling students to engage in programming and coding activities that promote critical thinking and problem-solving skills. Whether it’s programming a robot to navigate a maze or designing an automated task, BlockBot offers endless opportunities for creative exploration and learning.

BlockBot has already made waves on the international stage, receiving acclaim at exhibitions in Vietnam, Thailand, and Hong Kong. Its recent showcase at the Spielwarenmesse 2024 in Germany further solidified its reputation as a game-changer in the world of educational toys. With interest from overseas buyers continuing to grow, BlockBot is poised to revolutionize robotics education worldwide.

In addition to its educational benefits, BlockBot also promotes inclusivity and accessibility in STEAM learning. Its intuitive design and user-friendly interface make it accessible to children of all ages and abilities, fostering a collaborative learning environment where creativity knows no bounds.

In conclusion, BlockBot represents the next generation of STEAM toys, combining the timeless appeal of building blocks with the limitless possibilities of robotics technology. As it continues to gain traction in the global market, BlockBot is poised to inspire the next generation of innovators, engineers, and problem solvers.

VRC and VIQC German Masters in Hamburg: German finals of robotics competitions

Hamburg, February 2024: Next week, the final rounds of the VEX robot competitions will take place in Germany. Around 150 students from general education and vocational schools from all over Germany will meet at the Hamburg University of Applied Sciences (HAW Hamburg) to find out which of their self-designed robots best solves the given tasks. The worldwide competitions of the Robotics Education & Competition (REC) Foundation, which is based in the USA, are organized in Germany by the Hamburg-based association roboMINT.

The Competition Categories

The VEX Robotics Competition (VRC) is open to students from the age of eleven. A team consists of at least two students, and it competes in alliances against other teams. One of the goals of a game is to get as many tripballs as possible into your own goal or into your own offensive zone.

As part of the VEX IQ Challenge, students between the ages of eight and 15 can participate. A team consists of at least two students, and it competes together with another team. One of the goals of the game is to convert as many blocks as possible into goals. Points are also awarded if the robot is parked in the “Supply Zone” at the end of a match.

Through the German Masters, participants can qualify for the VEX Worlds, taking place from April 25 to May 3 in Dallas (US state of Texas) with 1,000 teams from 50 countries.

German Masters 

Venue: HAW Hamburg 

Berliner Tor 21, Aula 

Wednesday, 06.03.: VRC, start qualification 1 at 12.30 p.m. 

Thursday, 07.03.: VRC, start qualification 2 at 9.30 a.m., final: 1.00 p.m.

Friday, 08.03.: VIQC, start qualification at 11.00 a.m., final: 3.45 p.m.

Contact person:

Ralph Schanz
Chairman of roboMINT e.V.

About the roboMINT e.V.:

It all started in the 2017/2018 season. Together with the student campus dEin Labor of the TU Berlin, roboMINT conducted the first VEX Robotics student competitions in Germany. The first team to qualify for the annual “World Championships” in the USA was the Heinitz-Gymnasium Rüdersdorf. In the meantime, there are various regional preliminaries and two nationwide “Nationals” (VIQC and VRC). Currently, a total of seven teams from Germany can qualify for the “World Championships” in Dallas each season.

roboMINT supports and coordinates the nationwide VEX robotics competitions. The association informs and supports the participating teams, the supervisors and the regional organizers. The aim of the association is to promote STEM education in Germany.