Start-up LoCo CORP. wants to get students across Europe excited about STEM subjects with its robot kits

Zaragoza, 21 April 2022 – LoCo CORP. was founded at the University of Zaragoza in response to the lack of advanced yet affordable robotics solutions for education. In 2021, the start-up began developing its first robot kits for the classroom. Core learning content includes topics such as programming, mechanics, logic and design. A few weeks ago, the company presented its first educational robots: NOCTIS and AUREL.

With NOCTIS and AUREL, technology enthusiasts can gain their first experience of building, programming and controlling robots. © LoCo CORP.

LoCo CORP. is already working with educational and cultural centres in Zaragoza. With the kits, these institutions have been able to offer their students a new kind of learning experience, as the robots introduce them to technical topics in a playful way. In parallel, LoCo CORP. has invested a large part of its capital in developing, testing and validating its own series of robots.

"Developing educational robots is no easy task: it is about far more than just the technology. Acquiring technical knowledge can be very frustrating. When a project works, it is very rewarding, but you usually have to invest a lot of time and effort in the process. We want to stand out by giving our robots an imaginative backstory that gives young robot fans extra motivation," says Manuel Bernal Lecina, founder of LoCo CORP.

Many self-taught tinkerers (also known as makers) reference popular strands of pop culture in their projects. This is one reason why LoCo CORP. decided not to limit itself to developing robot kits, but to embed them in a fictional universe of its own – the "LoCoVerse". This is an educational ecosystem full of stories, courses, tutorials, tips and entertainment.

Manuel Bernal Lecina, founder of LoCo CORP., adds: "We want to provide high-quality content that gets young people excited about STEM subjects. In this way, we support the training of future engineers and scientists."

The Spanish company aims to become a fixture in the education of STEM-minded people in Europe, where it sees great growth potential, thus supporting the educational goals of families and schools.

Further links:

Web: http://www.lococorp.org 

Kickstarter: https://bit.ly/3uUdOla 

Instagram: https://www.instagram.com/lococorp/ 

TikTok: https://www.tiktok.com/@thisrobotisdancing

In Celebration of National Robotics Week, iRobot® Launches the Create® 3 Educational Robot

iRobot's Smartest Developer Platform, Now with ROS 2 and Python Support

BEDFORD, Mass., April 5, 2022 /PRNewswire/ — iRobot Corp. (NASDAQ: IRBT), a leader in consumer robots, today is expanding its educational product lineup with the launch of the Create® 3 educational robot – the company's most capable developer platform to date. Based on the Roomba® i3 Series robot vacuum platform, Create 3 provides educators and advanced makers with a reliable, out-of-the-box alternative to costly and labor-intensive robotics kits that require assembly and testing. Instead of cleaning people's homes, the robot is designed to promote higher-level exploration for those seeking to advance their education or career in robotics.

The launch of Create 3 coincides with National Robotics Week, which began April 2 and runs through April 10, 2022. National Robotics Week, founded and organized by iRobot, is a time to inspire students about robotics and STEM-related fields, and to share the excitement of robotics with audiences of all ages through a range of in-person and virtual events.

"iRobot is committed to delivering STEM tools to all levels of the educational community, empowering the next generation of engineers, scientists and enthusiasts to do more," said Colin Angle, chairman and CEO of iRobot. "The advanced capabilities we've made available on Create 3 enable higher-level students, educators and developers to be in the driver's seat of robotics exploration, allowing them to one day discover new ways for robots to benefit society."

With ROS 2 support, forget about building the platform, and focus on your application: 
The next generation of iRobot’s affordable and trusted all-in-one mobile robot development platform, Create 3 brings a variety of new functionalities to users, including compatibility with ROS 2, an industry-standard software for roboticists worldwide. Robots require many different components, such as actuators, sensors and control systems, to communicate with each other in order to work. ROS 2 enables this communication, allowing students to speed up the development of their project by focusing more on their core application rather than the platform itself. Learning ROS 2 also gives students valuable experience that many companies are seeking from robotics developers.
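The decoupling that ROS 2's publish/subscribe messaging provides can be illustrated with a pure-Python analogy (a conceptual sketch, not the actual rclpy API; the `TopicBus` class and topic names are invented for illustration):

```python
# Illustrative pure-Python analogy of ROS 2's publish/subscribe messaging.
# Sensors publish to named topics, control code subscribes, and neither
# component needs to know about the other -- the middleware routes messages.

from collections import defaultdict
from typing import Any, Callable


class TopicBus:
    """Minimal stand-in for the messaging layer ROS 2 supplies."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(message)


bus = TopicBus()
readings = []

# A "control node" subscribes to bump-sensor events...
bus.subscribe("/bumper", lambda msg: readings.append(msg))

# ...and a "sensor node" publishes without knowing who is listening.
bus.publish("/bumper", {"side": "left", "pressed": True})

print(readings[0]["side"])  # -> left
```

In real ROS 2 the same pattern appears as nodes, topics, and typed messages; the point is that application code talks to topics, not directly to other components.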

Expand your coding skills even further with Python support:
iRobot also released a Python Web Playground for its iRobot Root® and Create 3 educational robots, providing a bridge for beginners to begin learning more advanced programming skills outside of the iRobot Coding App. Python, a commonly used coding language, enables users to broaden the complexity of projects that they work on. The iRobot Education Python Web Playground allows advanced learners and educators to program the iRobot Root and Create 3 educational robots with a common library written in Python. This provides users with a pathway to learn a new coding language, opening the door to further innovation and career development.
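As a taste of the kind of program such a playground enables, here is a sketch in plain Python; the `Robot` class below is a hypothetical stand-in written for this article, not the actual iRobot Education library API:

```python
# Hypothetical stand-in for a drivable educational robot, illustrating the
# style of program a learner might write. NOT the real iRobot Python library.

class Robot:
    def __init__(self):
        self.x, self.y, self.heading = 0, 0, 0  # heading in degrees

    def move_forward(self, distance_cm):
        # Simplified: only the four cardinal headings are handled.
        step = {0: (0, 1), 90: (1, 0), 180: (0, -1), 270: (-1, 0)}[self.heading]
        self.x += step[0] * distance_cm
        self.y += step[1] * distance_cm

    def turn_right(self, degrees):
        self.heading = (self.heading + degrees) % 360


# Drive a square: four sides of 10 cm with right turns in between.
robot = Robot()
for _ in range(4):
    robot.move_forward(10)
    robot.turn_right(90)

print((robot.x, robot.y))  # back at the start -> (0, 0)
```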

With more smarts, Create 3 lets you do more:
As a connected robot, Create 3 comes equipped with Wi-Fi, Ethernet-over-USB host, and Bluetooth. Create 3 is also equipped with a suite of intelligent technology, including an inertial measurement unit (IMU), optical floor tracking sensor, wheel encoders, and infrared sensors for autonomous localization, navigation, and telepresence applications. Additionally, the robot includes cliff, bump and slip detection, along with LED lights and a speaker.

A 3D simulation of Create 3 is also available using Ignition Gazebo for increased access to robotics education and research.

Create 3 Pricing and Availability
Create 3 is available immediately in the US and Canada for $299 USD and $399 CAD. It will be available in EMEA through authorized distributors in the coming months. Additional details can be found at https://edu.irobot.com/what-we-offer/create3.

iRobot Education Python Web Playground Availability
The iRobot Education Python Web Playground can be accessed in-browser at python.irobot.com.

Robots as helpers in the lettuce harvest

Robot solution for automating the lettuce harvest

Lettuce is a valuable crop in Europe and the USA. But labor shortages make this field vegetable difficult to harvest, as sourcing sufficient seasonal labor to meet harvesting commitments is one of the sector's biggest challenges. Moreover, with wage inflation rising faster than producer prices, margins are very tight. In England, agricultural technology and machinery experts are working with IDS Imaging Development Systems GmbH (Obersulm, Germany) to develop a robotic solution to automate lettuce harvesting.

The team is working on a project funded by Innovate UK and includes experts from the Grimme agricultural machinery factory, the Agri-EPI Centre (Edinburgh UK), Harper Adams University (Newport UK), the Centre for Machine Vision at the University of the West of England (Bristol) and two of the UK’s largest salad producers, G’s Fresh and PDM Produce.

Within the project, existing leek harvesting machinery is being adapted to lift the lettuce clear of the ground and grip it between pinch belts. The lettuce's outer, or 'wrapper', leaves will be mechanically removed to expose the stem. Machine vision and artificial intelligence are then used to identify a precise cut point on the stem to neatly separate the head of lettuce.

"The cutting process of an iceberg lettuce is the most technically complicated step to automate, according to teammates from G's subsidiary Salad Harvesting Services Ltd.," explains IDS Product Sales Specialist Rob Webb. "The prototype harvesting robot being built incorporates a GigE Vision camera from the uEye FA family. It is considered to be particularly robust and is therefore ideally suited to demanding environments. As this is an outdoor application, a housing with IP65/67 protection is required here," Rob Webb points out.

GV-5280FA

The choice fell on the GV-5280FA-C-HQ model with the compact 2/3" global shutter CMOS sensor IMX264 from Sony. "The sensor was chosen mainly because of its versatility. We don't need full resolution for AI processing, so sensitivity can be increased by binning. The larger sensor format also means that wide-angle optics are not needed," Rob Webb summarizes the requirements. In the application, the CMOS sensor stands out with excellent image quality, light sensitivity and an exceptionally high dynamic range, delivering almost noise-free, high-contrast 5 MP images in 5:4 format at 22 fps – even under fluctuating lighting conditions. The extensive range of accessories, such as lens tubes and trailing cables, is just as rugged as the camera housing and the screwable connectors (8-pin M12 connector with X-coding and 8-pin Binder connector). Another advantage: in-camera functions such as pixel pre-processing, LUT or gamma reduce the required computing power to a minimum.
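Binning, as mentioned by Rob Webb, combines neighbouring pixels into one to gain sensitivity at the cost of resolution. A minimal sketch of 2x2 binning in plain Python (illustrative only, not camera firmware):

```python
# Sketch of 2x2 pixel binning: each 2x2 block of pixels is summed into a
# single output pixel, so the binned image collects four times the signal
# per pixel at half the resolution in each dimension.

def bin_2x2(image):
    """Sum each 2x2 block of a grayscale image (list of equal-length rows)."""
    binned = []
    for r in range(0, len(image) - 1, 2):
        row = []
        for c in range(0, len(image[0]) - 1, 2):
            row.append(image[r][c] + image[r][c + 1]
                       + image[r + 1][c] + image[r + 1][c + 1])
        binned.append(row)
    return binned


frame = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]

print(bin_2x2(frame))  # -> [[14, 22], [46, 54]]
```

In a real camera this summation happens on or near the sensor, which is why binning raises usable sensitivity rather than just averaging noise in software.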

The prototype of the harvesting robot will be used for field trials in England towards the end of the 2021 season.

"We are delighted to be involved in the project and look forward to seeing the results. We are convinced of its potential to automate and increase the efficiency of the lettuce harvest, not only in terms of compensating for the lack of seasonal workers," affirms Jan Hartmann, Managing Director of IDS Imaging Development Systems GmbH.

Prototype lettuce harvesting robot of the Agri-EPI Centre (UK)

The challenges facing the agricultural sector are indeed complex. According to a forecast by the United Nations Food and Agriculture Organization (FAO), agricultural productivity will have to increase by almost 50 percent by 2050 compared to 2012 due to the dramatic increase in population. Such a yield expectation means an enormous challenge for the agricultural industry, which is still in its infancy in terms of digitalization compared to other sectors and is already under high pressure to innovate in view of climatic changes and labor shortages. The agriculture of the future is based on networked devices and automation. Cameras are an important building block, and artificial intelligence is a central technology here. Smart applications such as harvesting robots can make a significant contribution to this.

Web-based VEXcode EXP

VEXcode EXP is now available in a web-based version for Chrome browsers. The web-based version can be reached by navigating to codeexp.vex.com and contains all of the features and functionality of VEXcode EXP, but without the need to download or install anything! The new web-based version of VEXcode makes it easier for teachers and students to access projects from anywhere, at any time, on any device – including Chromebooks!

In addition to the built-in Help and Tutorials, the STEM Library contains additional resources and support for using web-based VEXcode EXP. Within the STEM Library you can find device-specific articles for connecting to web-based VEXcode EXP, loading and saving projects, updating firmware, and more. View the VEXcode EXP section of the STEM Library to learn more.

Web-based versions of VEXcode IQ and VEXcode V5 are in the works and will be available soon.

Maicat, the Cybernetic Companion Cat

Macroact, the personal robotics development lab operating out of South Korea, has released its first AI-based companion pet. Designed for education and entertainment, Maicat is now live on Kickstarter after years of design and testing.

CAPABLE – Ready to use straight out of the box, Maicat is an autonomous robot pet. Using its sensors, Maicat is capable of detecting obstacles and walking around the house on its own. With its laser range finder and gyroscope, it is able to adjust for thick carpets and door frames.

CARING – Maicat has facial, voice pattern and emotional recognition software. When paired with the AI learning algorithm, Maicat is able to identify its owners and react to their moods.

CONNECTED – Integrated IoT connectivity allows you to add Maicat's sensors and capabilities to your existing home network. The Maicat SDK will allow the creation of apps that let Maicat talk to most modern IoT devices.

CREATIVE – Maicat is an excellent platform for getting students interested in STEM topics. With an app and the Maicat SDK, students can study AI, programming, robotics, facial recognition…the list goes on and on.

CELEBRATED – Maicat was a CES 2022 Innovation Award nominee for its IoT integration and support. That's more than you can say for most other pets.

CUDDLY – Maicat is small and light enough to pick up and pet. Sensors within its body let Maicat know it's being petted, and Maicat will respond lovingly.

To learn more about the Maicat project, check out the promotional links below.

Meet Maicat 

Maicat Kickstarter 

About Macroact Inc. 

Macroact is an AI and robotics startup that develops machine learning solutions for adaptive robots. The company focuses on implementing artificial intelligence throughout the whole robot development process to reduce the time and cost of robot development and to enhance robots' learning ability. Its core technology is Maidynamics, an autonomous robot control solution. Maicat is its first adaptive robot.

Further development of IDS NXT ocean: focus on user-friendliness and AI transparency

All-in-one embedded vision platform with new tools and functions

(PresseBox) (Obersulm) At IDS, image processing with artificial intelligence does not just mean that AI runs directly on cameras and that users have enormous design options through vision apps. Rather, with the IDS NXT ocean embedded vision platform, customers receive all the necessary, coordinated tools and workflows to realise their own AI vision applications without prior knowledge and to run them directly on IDS NXT industrial cameras. Now comes the next free software update for the AI package. Alongside user-friendliness, the focus is on making the artificial intelligence clear and comprehensible for the user.

An all-in-one system such as IDS NXT ocean, which has integrated computing power and artificial intelligence thanks to the „deep ocean core“ developed by IDS, is ideally suited for entry into AI Vision. It requires no prior knowledge of deep learning or camera programming. The current software update makes setting up, deploying and controlling the intelligent cameras in the IDS NXT cockpit even easier. For this purpose, among other things, an ROI editor is integrated with which users can freely draw the image areas to be evaluated and configure, save and reuse them as custom grids with many parameters. In addition, the new tools Attention Maps and Confusion Matrix illustrate how the AI works in the cameras and what decisions it makes. This helps to clarify the process and enables the user to evaluate the quality of a trained neural network and to improve it through targeted retraining. Data security also plays an important role in the industrial use of artificial intelligence. As of the current update, communication between IDS NXT cameras and system components can therefore be encrypted via HTTPS.
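A confusion matrix, one of the new tools mentioned above, tabulates which class a network predicted for each true class, making misclassifications visible at a glance. A minimal sketch of the underlying computation (the defect labels here are invented for illustration):

```python
# Sketch of the computation behind a confusion matrix: count, for every
# true class, how often each class was predicted. Rows = true classes,
# columns = predicted classes; off-diagonal entries are misclassifications.

from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]


labels = ["good", "scratch", "dent"]          # invented inspection classes
y_true = ["good", "good", "scratch", "dent", "scratch"]
y_pred = ["good", "scratch", "scratch", "dent", "scratch"]

for label, row in zip(labels, confusion_matrix(y_true, y_pred, labels)):
    print(label, row)
# good [1, 1, 0]   <- one "good" part was misclassified as "scratch"
# scratch [0, 2, 0]
# dent [0, 0, 1]
```

Reading the off-diagonal cells tells the user where targeted retraining, as described above, is most likely to pay off.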

Just get started with the IDS NXT ocean Creative Kit

Anyone who wants to test the industrial-grade embedded vision platform IDS NXT ocean and evaluate its potential for their own applications should take a look at the IDS NXT ocean Creative Kit. It provides customers with all the components they need to create, train and run a neural network. In addition to an IDS NXT industrial camera with 1.6 MP Sony sensor, lens, cable and tripod adapter, the package includes six months' access to the AI training software IDS NXT lighthouse. Currently, IDS is offering the set in a special promotion at particularly favourable conditions. Promotion page: https://en.ids-imaging.com/ids-nxt-ocean-creative-kit.html.

Learn more: www.ids-nxt.com

Picked up and put off

Guest post by IDS Corporate Communications

Autonomously driving robotic assistance system for the automated placement of coil creels

With Industry 4.0, digitalisation, automation and networking of systems and facilities are becoming the predominant topics in production and thus also in logistics. Industry 4.0 pursues the increasing optimisation of processes and workflows in favour of productivity and flexibility, and thus the saving of time and costs. Robotic systems have become the driving force for automating processes. Through the Internet of Things (IoT), robots are becoming increasingly sensitive, autonomous, mobile and easier to operate. More and more, they are becoming everyday helpers in factories and warehouses. Intelligent imaging techniques are playing an increasingly important role in this.

To meet the growing demands in scaling and changing production environments towards fully automated and intelligently networked production, the company ONTEC Automation GmbH from Naila in Bavaria has developed an autonomously driving robotic assistance system. The „Smart Robot Assistant“ uses the synergies of mobility and automation: it consists of a powerful and efficient intralogistics platform, a flexible robot arm and a robust 3D stereo camera system from the Ensenso N series by IDS Imaging Development Systems GmbH.

The solution is versatile and takes over monotonous, weighty set-up and placement tasks, for example. The autonomous transport system is suitable for floor-level lifting of Euro pallets up to container or industrial format as well as mesh pallets in various sizes with a maximum load of up to 1,200 kilograms. For a customer in the textile industry, the AGV (Automated Guided Vehicle) is used for the automated loading of coil creels. For this purpose, it picks up pallets with yarn spools, transports them to the designated creel and loads it for further processing. Using a specially developed gripper system, up to 1000 yarn packages per 8-hour shift are picked up and pushed onto a mandrel of the creel. The sizing scheme and the position of the coils are captured by an Ensenso 3D camera (N45 series) installed on the gripper arm.

Application

Pallets loaded with industrial yarn spools are picked up from the floor of a predefined storage place and transported to the creel location. There, the gripper positions itself vertically above the pallet. An image trigger is sent to the Ensenso 3D camera from the N45 series by the in-house software ONTEC SPSComm, which networks with the vehicle's PLC and can thus read out and pass on data. In the application, SPSComm controls the communication between the software parts of the vehicle, gripper and camera. This way, the camera knows when the vehicle and the gripper are in position to take a picture. The camera then takes an image and passes a point cloud to a software solution from ONTEC based on the standard HALCON software, which reports the coordinates of the coils on the pallet to the robot. The robot can then accurately pick up the coils and process them further. As soon as the gripper has cleared a layer of yarn spools, the Ensenso camera takes a picture of the packaging material lying between the yarn spools and provides point clouds of this as well. These point clouds are processed similarly to provide the robot with the information with which a needle gripper removes the intermediate layers. "This approach means that the number of layers and finishing patterns of the pallets do not have to be defined in advance, and even incomplete pallets can be processed without any problems," explains Tim Böckel, software developer at ONTEC. "The gripper does not have to be converted for the use of the needle gripper. For this application, it has a normal gripping component for the coils and a needle gripping component for the intermediate layers."
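The final step of that pipeline, turning a segmented point cloud into grip coordinates, can be sketched as computing the centroid of the 3D points belonging to one detected coil (a simplified illustration; the actual HALCON-based software is more involved):

```python
# Simplified sketch of deriving a grip point from a point cloud: once the
# points of one coil have been segmented, their centroid gives the robot
# a 3D coordinate to aim for. Coordinates here are invented sample values.

def centroid(points):
    """Centroid of a list of (x, y, z) points, e.g. one segmented coil."""
    n = len(points)
    return tuple(round(sum(p[i] for p in points) / n, 3) for i in range(3))


coil_points = [(0.10, 0.20, 0.50), (0.12, 0.22, 0.50), (0.11, 0.21, 0.52)]
print(centroid(coil_points))  # -> (0.11, 0.21, 0.507)
```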

For this task – the mobile 3D acquisition of moving and static objects on the robot arm – the Ensenso 3D camera is well suited thanks to its compact design. The Ensenso N45's 3D stereo electronics are completely decoupled from the housing, allowing the use of a lightweight plastic composite as the housing material. The low weight facilitates use on robot arms such as the Smart Robot Assistant. The camera can also cope with demanding environmental conditions. "Challenges with this application can be found primarily in the different lighting conditions in different rooms of the hall and at different times of the day," Tim Böckel describes the situation. Even in difficult lighting conditions, the integrated projector projects a high-contrast texture onto the object to be imaged by means of a pattern mask with a random dot pattern, thus supplementing the structures on featureless, homogeneous surfaces. This means that the integrated camera meets the requirements exactly. "By pre-configuring within NxView, the task was solved well." This sample programme with source code demonstrates the main functions of the NxLib library, which can be used to open one or more stereo and colour cameras and visualise their image and depth data. Parameters such as exposure time, binning, AOI and depth measuring range can – as in this case – be adjusted live for the matching method used.

The matching process enables the Ensenso 3D camera to recognise a very high number of pixels, including their change in position, by means of the auxiliary structures projected onto the surface, and to create complete, homogeneous depth information of the scene from this. This in turn ensures the necessary precision with which the Smart Robot Assistant proceeds. Other selection criteria for the camera were, among others, the standard Gigabit Ethernet vision interface and the global shutter 1.3 MP sensor. "The camera only takes one image pair of the entire pallet in favour of a faster throughput time, but it has to provide the coordinates from a relatively large distance with an accuracy in the millimetre range to enable the robot arm to grip precisely," explains Matthias Hofmann, IT specialist for application development at ONTEC. "We therefore need the high resolution of the camera to be able to safely record the edges of the coils with the 3D camera." The localisation of the edges is important in order to pass on the position from the centre of the spool to the gripper as accurately as possible.

Furthermore, the camera is specially designed for use in harsh environmental conditions. It has a screwable GPIO connector for trigger and flash and is IP65/67 protected against dirt, dust, splash water or cleaning agents.

Software

The Ensenso SDK enables hand-eye calibration of the camera to the robot arm, allowing easy translation or displacement of coordinates using the robot pose. In addition, by using the internal camera settings, a „FileCam“ of the current situation is recorded at each pass, i.e. at each image trigger. This makes it possible to easily adjust any edge cases later on, in this application for example unexpected lighting conditions, obstacles in the image or also an unexpected positioning of the coils in the image. The Ensenso SDK also allows the internal camera LOG files to be stored and archived for possible evaluation.

ONTEC also uses these „FileCams“ to automatically check test cases and thus ensure the correct functioning of all arrangements when making adjustments to the vision software. In addition, various vehicles can be coordinated and logistical bottlenecks minimised on the basis of the control system specially developed by ONTEC. Different assistants can be navigated and act simultaneously in a very confined space. By using the industrial interface tool ONTEC SPSComm, even standard industrial robots can be safely integrated into the overall application and data can be exchanged between the different systems.

Outlook

Further development of the system is planned, among other things, in terms of navigation of the autonomous vehicle. „With regard to vehicle navigation for our AGV, the use of IDS cameras is very interesting. We are currently evaluating the use of the new Ensenso S series to enable the vehicle to react even more flexibly to obstacles, for example, classify them and possibly even drive around them,“ says Tim Böckel, software developer at ONTEC, outlining the next development step.

ONTEC’s own interface configuration already enables the system to be integrated into a wide variety of Industry 4.0 applications, while the modular structure of the autonomously moving robot solution leaves room for adaptation to a wide variety of tasks. In this way, it not only serves to increase efficiency and flexibility in production and logistics, but in many places also literally contributes to relieving the workload of employees.

More at: https://en.ids-imaging.com/casestudies-detail/picked-up-and-put-off-ensenso.html

QUBS – The toymaker merging traditional designs and screen-free technology in early years learning

QUBS (www.qubs.toys) is a Swiss company producing traditionally-designed wooden toys with hidden high-tech magic: liberating children to explore their imagination, safely learn future skills and engage in educational, screen-free fun.

Inspired by the Montessori method, QUBS STEM toys educate as well as entertain. Playing with QUBS toys provides children, through play, with developmental skills in science, technology, engineering, and mathematics.

Loved by parents, teachers and, most importantly, young users (3 to 12 years), QUBS’ intuitive, gender neutral toys – made from responsibly sourced and long lasting beechwood – contain patented technology which brings them to life. Unlike other tech-enabled STEM children’s toys, QUBS’ toys have an eternal shelf life, do not require updates nor access to the internet, and are completely screen-less, empowering children to become creators, rather than passive users of laptop or smartphone screens.

Each block and toy component contains a QUBS-developed and patented version of RFID (Radio Frequency Identification) technology (the innovation most commonly used in contactless payments and key fobs). RFID technology is 100% safe and secure for children and grown-ups, allowing the individual tiles and blocks to interact, all within their own secure universe.

Cody Block

QUBS' first product, Cody Block – to be showcased at the Nuremberg Toy Fair, Spielwarenmesse Digital (where it has been shortlisted for the prestigious annual 'Toy Award') – features an independently moving car (Cody), whose journey changes in response to a child's placement and arrangement of wooden blocks within its environment. Encouraging creativity and teamwork, Cody Block introduces children to computer programming concepts, robotics, and the Internet of Things through fun and accessible play.

Learning computational skills in early years is essential. Cody the car, and the wooden toy blocks which shape his journey, teach kids to think like a programmer: being introduced to principles of debugging (the process of identifying a problem and correcting it) and sequencing (the specific order in which instructions are performed in an algorithm) through physical play.

The task is to plan a path that leads Cody through the city and back home, his movements changing in response to the child’s arrangement and rearrangement of the wooden blocks (each containing RFID tech). Each block denotes a different directional command (e.g. ‘turn left’, ‘turn right’, ‘u-turn’ etc.), creating a sequence of instructions. This allows children to improve their motor skills, critical thinking, creativity and spatial awareness.
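The sequencing idea can be sketched in a few lines of Python (an illustrative model written for this article, not QUBS code; the grid, headings and command set are simplifications):

```python
# Toy model of Cody Block's sequencing: each wooden block is a directional
# command, and the car executes the blocks in order, moving one grid cell
# after each command. Children "debug" by rearranging the blocks.

HEADINGS = ["N", "E", "S", "W"]
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}
TURNS = {"forward": 0, "turn left": -1, "turn right": 1, "u-turn": 2}

def run_program(blocks, start=(0, 0), heading="N"):
    """Execute a sequence of command blocks and return final position/heading."""
    x, y = start
    for block in blocks:
        heading = HEADINGS[(HEADINGS.index(heading) + TURNS[block]) % 4]
        dx, dy = MOVES[heading]
        x, y = x + dx, y + dy
    return (x, y), heading


# Guide the car around a corner and back north.
print(run_program(["forward", "turn right", "turn right", "u-turn"]))
# -> ((1, 1), 'N')
```

If the car ends up in the wrong place, the child inspects the block sequence and corrects it, which is exactly the debugging loop described above.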

Cody Block is designed for kids aged 3-12 and will be available to ship in Q2 2022.

Matty Block

QUBS' second product, Matty Block, is designed for ages 3-9. It helps children develop self-confidence in mathematics by introducing the concepts of addition, subtraction and multiplication.

Children place Matty the farmer on a board above a sum of their own creation, formed by numbered tiles (representing seeds). With a nod or shake of his head, Matty guides young users to the right answer. Matty Block features voice feedback in six languages (English, German, French, Spanish, Italian and Mandarin), making it the perfect tool for children to play and learn autonomously. Its story setting provides a fun and comprehensive introduction to numbers and equations, while exploring the delicate and ever-changing world of nature.

Matty Block will be available in 2023.

About QUBS

Based in Zurich, Paris and London, QUBS Toys was founded in 2019 by Hayri Bulman, a Swiss entrepreneur with over 30 years of IT expertise working for GE (General Electric) and Xerox. Hayri's own fatherhood, his passion for wooden toys and his firm grasp of technology motivated him to create QUBS to better equip future generations for the digital world. Inspired by the toy company TEGU in 2015, Hayri set out to merge classic wooden toys with modern technology and soon started working on concepts that combined RFID technology with wooden blocks. Since then, QUBS has expanded into a vast team of designers, engineers and creatives from across Europe.

In April 2020, at the very beginning of the global pandemic, QUBS raised CHF 88,887 (~£70,000) from 503 backers during a Kickstarter campaign.

QUBS Toys will be available for purchase online from www.qubs.toys, as well as from major stockists.