Creality has lifted the veil on its latest 3D scanner. In an effort to further diversify its 3D ecosystem, Creality, the well-known manufacturer of 3D printing community favorites such as the Ender 3, has announced its new and improved 3D scanner: the CR-Scan Lizard.
This entry-level consumer 3D scanner follows the company’s CR-Scan 01, which was released fairly recently as an affordable option for users to digitize objects. The new Lizard is smaller for better portability and handling, yet promises improved features such as accuracy up to 0.05 mm and better performance in bright environments and with dark objects. All that for less money than its predecessor, even.
With the Lizard, you can scan small or large objects with ease. The CR Studio software does the heavy lifting of optimizing models and even sends those files via the Creality Cloud directly to your 3D printer. The applications seem almost endless.
With some early bird specials, the CR-Scan Lizard made its debut on Kickstarter in February 2022 and, unsurprisingly, smashed its campaign goal in next to no time.
We have gathered all the information revealed so far about this new consumer-grade 3D scanner to give you an overview of what the Lizard has in store. Creality has also already sent us a scanner to try for ourselves, so keep an eye out for our upcoming hands-on experience.
Features
HIGH ACCURACY
With the CR-Scan Lizard, Creality wants to bring professional-grade accuracy to the budget market. According to its spec sheet, the scanner offers accuracy of up to 0.05 mm, allowing it to capture small parts and intricate details with high precision. Thanks to the scanner’s binocular (dual-camera) design and improved precision calibration, Creality says it can pick up rich detail from objects as small as 15 x 15 x 15 mm, or as large as car doors, engines, and rear bumpers.
SCAN MODES
The CR-Scan Lizard comes with three different scanning modes. You can either use it in turntable mode, handheld mode, or a mixture of the two to scan an object.
Turntable mode is suitable for objects between 15 and 300 mm and scans automatically. The combination mode handles larger objects up to 500 mm: you place the object on the rotary table but hold the scanner in hand to scan. Lastly, handheld mode is suitable for scanning large objects up to 2 meters in size, such as the car parts mentioned above.
Plus, thanks to its visual tracking, the Lizard doesn’t need markers to work. You can scan objects without having to pin a bunch of stickers to them first — its software’s tracking algorithm will take care of that for you.
LIGHT OR DARK
Besides its scan modes, the Lizard also offers some improved scanning functions that should make it easier for users to achieve good results with minimal effort.
For one, Creality states the Lizard can scan accurately in sunlight. 3D scanners typically struggle with too much direct light, forcing users to scan in a darkened room for best results. However, Creality claims that, thanks to its multi-spectral optical technology, the Lizard maintains excellent performance even in bright sunlight, which would vastly expand its field of application. The scanner can also be powered by a portable charger, so, in theory, you could head outside and scan the woods to your heart’s content.
What’s more, the CR-Scan Lizard promises better material adaptability when scanning black and dark objects. Sounds like it’s got it all.
COLOR MAPPING
Creality has stated that it is planning to release a fully automated color mapping texture suite in March 2022 that promises true color fidelity for your scanned objects, but it’s currently still in development. Once released, you will be able to use the mapping process, in which high-definition color pictures of the model taken with a phone or DSLR camera are automatically mapped onto the 3D model, allowing you to create high-quality, vivid color scans.
CR STUDIO
The Lizard’s accompanying software, CR Studio, promises many features that should help achieve clean scans. For example, the software offers one-click model optimization, multi-positional auto alignment, automatic noise removal, topology simplification, texture mapping, and much more.
You can also upload and share models via the Creality Cloud, allowing you to slice your scanned objects and even send them to a 3D printer — all with the click of a button.
Release Date & Availability
Creality has set up a limited pre-order via Kickstarter. The scanner has been available for backing since February 10, 2022, alongside some early bird batch sales. According to the Kickstarter campaign, shipping will take place in April.
Over the past days and weeks, Creality has already released a couple of videos on its YouTube channel showing off the scanner’s features in greater detail. Be sure to check those out if the Lizard tickles your fancy.
Creality has also already sent All3DP a CR-Scan Lizard to try out, so we are looking forward to giving it a spin in the next few days. Stay tuned for a full review of our hands-on experience.
At the time of writing, the CR-Scan Lizard is available via Kickstarter with super early bird pledges, priced from $300 for the most basic Lizard package up to $400 for the premium version, which comes with a color kit.
According to the campaign, the off-the-shelf price of the Lizard’s base version will be $599, so there is potentially some money to be saved by getting in early. However, it wouldn’t be the first time that announced prices changed down the line.
Here are the technical specifications for the Creality CR-Scan Lizard 3D scanner:
GENERAL SPECIFICATIONS
Precision: 0.05 mm
Resolution: 0.1 – 0.2 mm
Single capture range: 200 x 100 mm
Operating Distance: 150 – 400 mm
Scanning Speed: 10 fps
Tracking mode: Visual tracking
Light: LED+NIR (Near-infrared mode)
Stitching mode: Fully automatic geometry and visual tracking (markerless)
OUTPUT
Output Format: STL, OBJ, PLY
Compatible systems: Windows 10 64-bit (macOS version to be released in March 2022)
On-demand robot delivery now available in Pleasanton, CA at Lucky California flagship store
SAN FRANCISCO (February 2022) – Starship Technologies, the world’s leading provider of autonomous delivery services, is now delivering groceries in the San Francisco Bay Area. Starship is expanding its partnership with The Save Mart Companies for the exclusive launch of an on-demand grocery delivery service at its Lucky California flagship store in Pleasanton, CA. Lucky is the first grocery store in the San Francisco Bay Area to partner with Starship.
Starship and The Save Mart Companies first partnered in September 2020, when the Save Mart flagship store in Modesto became the first grocery store in the U.S. to offer Starship robot delivery service. Since its launch, that store has expanded its delivery area to serve over 55,000 households. In Pleasanton, the service is launching to thousands of residents, with the delivery area expected to grow rapidly in the coming months, similar to Modesto.
“We are very pleased to bring the benefits of autonomous delivery to Pleasanton, in partnership with Lucky California,” said Ryan Tuohy, SVP of Sales and Business Development at Starship Technologies. “Since launching our service in Modesto in 2020, we’ve been excited to see the extremely positive reaction to the robots and how they were embraced as part of the local community. We think the residents of Pleasanton will appreciate the convenience and positive environmental impact of autonomous delivery and we fully expect the service area to quickly expand to more households.”
The robots, each of which can carry up to 20 pounds of groceries – the equivalent of about three shopping bags – provide a convenient, energy-efficient, and low-cost delivery alternative to driving to the Lucky California store, allowing shoppers to browse thousands of items via the secure Starship app for on-demand delivery straight to their home.
The robots travel autonomously – crossing streets, climbing curbs and traversing sidewalks – to provide on-demand delivery to shoppers. They often become local celebrities as community members share their robot selfies and “love notes” on social media.
“Since the debut of our contactless delivery service at the Save Mart flagship store, feedback from the Modesto community has been incredibly positive,” said Barbara Walker, senior vice president and chief marketing officer for The Save Mart Companies. “We are thrilled to expand this service to Lucky California in Pleasanton and offer a safe and efficient grocery delivery solution, along with some joyful entertainment, especially as the service area progressively expands over time.”
The Starship Food Delivery app is available for download on iOS and Android. To get started, customers choose from a range of their favorite groceries and drop a pin where they want their delivery to be sent. When an order is submitted, Lucky California team members gather the delivery items and carefully place them in a clean robot. Every robot’s interior and exterior is sanitized before each order. The customer can then watch as the robot makes its journey to them, via an interactive map. Once the robot arrives, the customer receives an alert, and can then meet the robot and unlock it through the app.
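For illustration, the ordering flow above maps naturally onto a simple state machine. The Python sketch below is purely ours: the class and state names are invented for the example and have no relation to Starship’s actual app or backend.

```python
# Illustrative model of the delivery flow described above.
# All names are hypothetical; this is not Starship code.

ORDER_STATES = [
    "submitted",     # customer picks groceries and drops a pin
    "being_packed",  # store staff load a sanitized robot
    "in_transit",    # customer follows the robot on an interactive map
    "arrived",       # customer receives an alert
    "unlocked",      # customer unlocks the robot through the app
]

class Order:
    def __init__(self, items, delivery_pin):
        self.items = items
        self.delivery_pin = delivery_pin  # (latitude, longitude)
        self.state = ORDER_STATES[0]

    def advance(self):
        """Move the order to the next stage of the delivery flow."""
        i = ORDER_STATES.index(self.state)
        if i < len(ORDER_STATES) - 1:
            self.state = ORDER_STATES[i + 1]
        return self.state

order = Order(["milk", "bread"], (37.6624, -121.8747))
while order.state != "unlocked":
    print(order.advance())
```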
Starship already offers its services in many parts of the EU, UK, and US, in cities as well as on university and industrial campuses, with further expansion planned in the near future. Starship is able to perform Level 4 (fully autonomous) deliveries everywhere it operates, across entire cities and campuses, and its robots have been operating at Level 4 since 2018. On a daily basis, Starship robots complete numerous deliveries in a row 100% autonomously, including road crossings. This is why the cost of a Starship delivery is now lower than the human equivalent, which is believed to be a world first for any robot delivery company; most others are still majority human-controlled and in pilot mode.
Starship Technologies operates commercially on a daily basis around the world. Its zero-emission robots make more than 100,000 road crossings every day and have completed more than 2.5 million commercial deliveries and travelled more than 3 million miles (5 million+ kms) globally, more than any other autonomous delivery provider.
All-in-one embedded vision platform with new tools and functions
(PresseBox) (Obersulm) At IDS, image processing with artificial intelligence does not just mean that AI runs directly on cameras and that users gain enormous design freedom through vision apps. Rather, with the IDS NXT ocean embedded vision platform, customers receive all the necessary, coordinated tools and workflows to realise their own AI vision applications without prior knowledge and to run them directly on IDS NXT industrial cameras. Now comes the next free software update for the AI package. Besides user-friendliness, the focus is on making the artificial intelligence transparent and comprehensible for the user.
An all-in-one system such as IDS NXT ocean, which has integrated computing power and artificial intelligence thanks to the “deep ocean core” developed by IDS, is ideally suited for getting started with AI vision. It requires no prior knowledge of deep learning or camera programming. The current software update makes setting up, deploying and controlling the intelligent cameras in the IDS NXT cockpit even easier. Among other things, it integrates an ROI editor with which users can freely draw the image areas to be evaluated, then configure, save and reuse them as custom grids with many parameters. In addition, the new Attention Maps and Confusion Matrix tools illustrate how the AI in the cameras works and what decisions it makes. This makes the process more transparent and enables users to evaluate the quality of a trained neural network and improve it through targeted retraining. Data security also plays an important role in the industrial use of artificial intelligence; as of the current update, communication between IDS NXT cameras and system components can be encrypted via HTTPS.
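Of the two new visualisation tools, the confusion matrix is the more quantitative: it cross-tabulates predicted classes against true labels, so off-diagonal cells point directly at the classes that need targeted retraining. A minimal illustration of the concept in plain NumPy, unrelated to IDS’s own implementation:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true class, columns = predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy example: 3-class inspection task (0 = OK, 1 = scratch, 2 = dent)
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 0, 1, 2, 2, 2, 1, 0]
print(confusion_matrix(y_true, y_pred, n_classes=3))
# Off-diagonal entries (e.g. scratches classified as dents) show where
# the trained network is weakest and where retraining data should focus.
```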
Just get started with the IDS NXT ocean Creative Kit
Anyone who wants to test the industrial-grade IDS NXT ocean embedded vision platform and evaluate its potential for their own applications should take a look at the IDS NXT ocean Creative Kit. It provides customers with all the components they need to create, train and run a neural network. In addition to an IDS NXT industrial camera with a 1.6 MP Sony sensor, lens, cable and tripod adapter, the package includes six months’ access to the AI training software IDS NXT lighthouse. Currently, IDS is offering the kit in a special promotion at particularly favourable conditions. Promotion page: https://en.ids-imaging.com/ids-nxt-ocean-creative-kit.html.
Autonomously driving robotic assistance system for the automated loading of coil creels
With Industry 4.0, digitalisation, automation and networking of systems and facilities are becoming the predominant topics in production and, by extension, in logistics. Industry 4.0 pursues the ever-greater optimisation of processes and workflows in favour of productivity and flexibility, and thus savings in time and cost. Robotic systems have become the driving force for automating processes, and through the Internet of Things (IoT), robots are becoming increasingly sensitive, autonomous, mobile and easier to operate. More and more, they are becoming everyday helpers in factories and warehouses, and intelligent imaging techniques are playing an increasingly important role in this.
To meet the growing demands of scaling and changing production environments on the way to fully automated and intelligently networked production, ONTEC Automation GmbH from Naila in Bavaria has developed an autonomously driving robotic assistance system. The “Smart Robot Assistant” uses the synergies of mobility and automation: it consists of a powerful and efficient intralogistics platform, a flexible robot arm and a robust 3D stereo camera system from the Ensenso N series by IDS Imaging Development Systems GmbH.
The solution is versatile and takes over monotonous, heavy set-up and placement tasks, for example. The autonomous transport system can lift Euro pallets up to container or industrial format, as well as mesh pallets of various sizes, from floor level with a maximum load of up to 1,200 kilograms. For a customer in the textile industry, the AGV (Automated Guided Vehicle) is used for the automated loading of coil creels: it picks up pallets with yarn spools, transports them to the designated creel and loads it for further processing. Using a specially developed gripper system, up to 1,000 yarn packages per 8-hour shift are picked up and pushed onto a mandrel of the creel. The size and position of the coils are captured by an Ensenso 3D camera (N45 series) installed on the gripper arm.
Application
Pallets loaded with industrial yarn spools are picked up from the floor of a predefined storage place and transported to the creel location. There, the gripper positions itself vertically above the pallet. An image trigger is sent to the Ensenso N45-series 3D camera by ONTEC’s in-house software SPSComm, which networks with the vehicle’s PLC and can thus read out and pass on data. In this application, SPSComm controls the communication between the software components of the vehicle, gripper and camera, so the camera knows when the vehicle and gripper are in position to take a picture. The camera then captures an image and passes a point cloud to an ONTEC software solution based on the standard HALCON software, which reports the coordinates of the coils on the pallet to the robot. The robot can then accurately pick up the coils and process them further. As soon as the gripper has cleared a layer of yarn spools, the Ensenso camera takes a picture of the packaging material lying between the layers and provides point clouds of this as well. These are processed in a similar way to give the robot the information it needs to remove the intermediate layers with a needle gripper.

“This approach means that the number of layers and finishing patterns of the pallets do not have to be defined in advance, and even incomplete pallets can be processed without any problems,” explains Tim Böckel, software developer at ONTEC. “The gripper does not have to be converted for the use of the needle gripper. For this application, it has a normal gripping component for the coils and a needle gripping component for the intermediate layers.”
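The cycle described above can be summarised in a few lines of code. The sketch below is purely illustrative: every name in it is hypothetical, standing in for the SPSComm-triggered capture, the HALCON-based point-cloud fitting and the robot commands of the real system.

```python
# Illustrative sketch of the capture-and-pick cycle described above.
# All names are hypothetical; the real system uses ONTEC SPSComm,
# the Ensenso SDK and HALCON-based point-cloud processing.

def fit_spool_coordinates(point_cloud):
    """Stand-in for the HALCON-based fitting step: returns one (x, y, z)
    grip point per detected spool, empty once the layer is cleared."""
    return [p for p in point_cloud if p[2] > 0.05]  # toy height threshold

def process_layer(point_cloud, robot_queue):
    spools = fit_spool_coordinates(point_cloud)
    if spools:
        for xyz in spools:
            robot_queue.append(("pick_spool", xyz))      # onto a creel mandrel
    else:
        robot_queue.append(("remove_interlayer", None))  # needle gripper step

robot_queue = []
cloud = [(0.1, 0.2, 0.30), (0.4, 0.2, 0.31), (0.7, 0.2, 0.00)]
process_layer(cloud, robot_queue)
print(robot_queue)
```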
For this task, the mobile 3D acquisition of moving and static objects from the robot arm, the Ensenso 3D camera is well suited thanks to its compact design. The Ensenso N45’s 3D stereo electronics are completely decoupled from the housing, allowing a lightweight plastic composite to be used as the housing material. The low weight facilitates use on robot arms such as the Smart Robot Assistant. The camera also copes with demanding environmental conditions. “Challenges with this application lie primarily in the different lighting conditions found in different rooms of the hall and at different times of day,” says Tim Böckel. Even in difficult lighting, the integrated projector casts a high-contrast texture onto the object by means of a pattern mask with a random dot pattern, supplementing the structures on featureless, homogeneous surfaces. The camera thus meets the requirements exactly. “By pre-configuring within NxView, the task was solved well.” NxView is a sample program with source code that demonstrates the main functions of the NxLib library; it can open one or more stereo and colour cameras and visualise their image and depth data. Parameters such as exposure time, binning, AOI and depth measuring range can, as in this case, be adjusted live for the matching method used.
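NxLib keeps camera settings in a tree of named nodes, which is what makes this kind of live parameter adjustment possible. The sketch below only illustrates that idea: the node paths mimic the pattern documented for NxLib, while `set_node` is a hypothetical helper, not an actual NxLib function.

```python
# Illustrative only: NxLib exposes settings as nodes in a parameter tree.
# The paths follow NxLib's documented layout; set_node is a hypothetical
# stand-in for the SDK's real binding calls.

CAPTURE_PATH = "/Cameras/BySerialNo/{serial}/Parameters/Capture"

def set_node(tree, path, value):
    """Hypothetical stand-in for an NxLib 'set item value' call."""
    tree[path] = value

tree = {}
base = CAPTURE_PATH.format(serial="123456")  # placeholder serial number
set_node(tree, base + "/Exposure", 12.5)     # exposure time in ms
set_node(tree, base + "/Binning", 2)         # 2x2 pixel binning
for path, value in tree.items():
    print(path, "=", value)
```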
The matching process enables the Ensenso 3D camera to identify a very large number of pixels, including their change in position, by means of the auxiliary structures projected onto the surface, and to create complete, homogeneous depth information of the scene from this. This in turn ensures the precision with which the Smart Robot Assistant proceeds. Other selection criteria for the camera included the standard Gigabit Ethernet vision interface and the global-shutter 1.3 MP sensor. “The camera only takes one image pair of the entire pallet in favour of a faster throughput time, but it has to provide the coordinates from a relatively large distance with millimetre accuracy to enable the robot arm to grip precisely,” explains Matthias Hofmann, IT specialist for application development at ONTEC. “We therefore need the high resolution of the camera to be able to reliably capture the edges of the coils with the 3D camera.” Localising the edges is important in order to pass the position of the centre of each spool to the gripper as accurately as possible.
Furthermore, the camera is specially designed for use in harsh environmental conditions. It has a screwable GPIO connector for trigger and flash and is IP65/67 protected against dirt, dust, splash water or cleaning agents.
Software
The Ensenso SDK enables hand-eye calibration of the camera to the robot arm, allowing coordinates to be easily translated or displaced using the robot pose. In addition, using the internal camera settings, a “FileCam” of the current situation is recorded at each pass, i.e. at each image trigger. This makes it possible to reproduce edge cases later on, in this application for example unexpected lighting conditions, obstacles in the image or an unexpected positioning of the coils. The Ensenso SDK also allows the camera’s internal log files to be stored and archived for later evaluation.
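Conceptually, hand-eye calibration yields a rigid transform between the camera frame and the robot frame; once it is known, any point from the point cloud can be moved into robot coordinates with a single matrix multiplication. A generic NumPy illustration (the numbers are made up, and this is not Ensenso SDK code):

```python
import numpy as np

# 4x4 homogeneous transform from camera frame to robot base frame,
# as produced by a hand-eye calibration (values here are invented).
T_base_cam = np.array([
    [0.0, -1.0, 0.0, 0.40],
    [1.0,  0.0, 0.0, 0.10],
    [0.0,  0.0, 1.0, 0.95],
    [0.0,  0.0, 0.0, 1.00],
])

p_cam = np.array([0.12, 0.03, 0.60, 1.0])  # spool centre in camera frame
p_base = T_base_cam @ p_cam                # same point in robot coordinates
print(p_base[:3])
```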
ONTEC also uses these “FileCams” to automatically check test cases and thus ensure that all arrangements still function correctly when adjustments are made to the vision software. In addition, the control system specially developed by ONTEC allows various vehicles to be coordinated and logistical bottlenecks to be minimised; different assistants can navigate and act simultaneously in a very confined space. Using the industrial interface tool ONTEC SPSComm, even standard industrial robots can be safely integrated into the overall application, with data exchanged between the different systems.
Outlook
Further development of the system is planned, among other things, with regard to the navigation of the autonomous vehicle. “For vehicle navigation on our AGV, the use of IDS cameras is very interesting. We are currently evaluating the new Ensenso S series to enable the vehicle to react even more flexibly to obstacles, for example by classifying them and possibly even driving around them,” says Tim Böckel, software developer at ONTEC, outlining the next development step.
ONTEC’s own interface configuration already enables the system to be integrated into a wide variety of Industry 4.0 applications, while the modular structure of the autonomously moving robot solution leaves room for adaptation to a wide variety of tasks. In this way, it not only serves to increase efficiency and flexibility in production and logistics, but in many places also literally contributes to relieving the workload of employees.
drylin XXL room gantry robot costs up to 60 percent less than comparable solutions and is particularly easy to commission
Cologne, February 8, 2022 – igus is expanding its broad low-cost automation range with a new drylin XXL room gantry robot. The gantry has a working envelope of 2,000 x 2,000 x 1,500 millimetres and is particularly suitable for palletising applications up to 10 kilograms. The robot is available from 7,000 euros including control system and, following the do-it-yourself principle, can be assembled and programmed by the user, without the help of a system integrator.
Too expensive to buy, too time-consuming to program, too complicated to maintain: many small and medium-sized companies shy away from getting started with automation, and in the long term jeopardise their competitiveness as a result. Yet getting started can be quite straightforward, as the drylin XXL gantry robot from igus proves. The DIY kit offers companies a quick and uncomplicated way to commission a pick-and-place linear robot for tasks such as palletising, sorting, labelling and quality inspection. “Palletising robots built in cooperation with external service providers quickly cost between 85,000 and 120,000 euros. That exceeds the budget of many small businesses,” says Alexander Mühlens, head of the low-cost automation business unit at igus. “We have therefore developed a solution that is many times cheaper thanks to the use of high-performance plastics and lightweight materials such as aluminium. Depending on the configuration, the drylin XXL room gantry robot costs between 7,000 and 10,000 euros. It is a low-risk investment that usually pays for itself within a few weeks.”
DIY kit can be assembled quickly without prior knowledge
The buyer receives the room gantry as a DIY kit. It consists of two toothed-belt axes and a rack-and-pinion cantilever axis with stepper motors, offering a working envelope of 2,000 x 2,000 x 1,500 millimetres; at maximum length, up to 6,000 x 6,000 x 1,500 millimetres is possible. The package also includes a control cabinet, cables and energy chains, as well as the free igus Robot Control (iRC) software. Users can assemble the components into a ready-to-operate linear robot in a few hours, without external help, prior knowledge or a long familiarisation period. And if additional components such as camera systems or grippers are required, users will quickly find them on the RBTX robotics marketplace.
Automation relieves employees
The Cartesian robot is used, for example, on conveyor belts that carry products away from injection moulding machines. Here, the robot picks items weighing up to 10 kilograms off the belt, moves them at speeds of up to 500 mm/s and positions them on a pallet with a repeatability of 0.8 millimetres. “Thanks to this automation, companies can relieve their employees of physically strenuous and time-consuming palletising tasks and free up resources for more important work.” The system itself requires no maintenance: the linear axes are made of corrosion-free aluminium, and the carriages run on plain bearings made of high-performance plastics which, thanks to integrated solid lubricants, ensure low-friction dry running for many years without external lubricants, even in dusty and dirty environments.
Not only assembly but also the programming of motion sequences presents no barrier to entry. “For many companies that do not have their own IT specialists, programming robots is often fraught with problems,” says Mühlens. “That is why we developed iRC, a free software package that visually resembles widely used office software and enables intuitive programming of movements. The special feature: the software is free of charge, and the resulting low-code program can then be used 1:1 on the real robot.” At the heart of the software is a digital twin of the room gantry, which allows movements to be defined with a few clicks, even in advance, before the robot is in operation. “Prospective customers can use the 3D model to check before buying whether the desired movements are actually feasible. In addition, we invite anyone interested to try out our robots free of charge, live or over the internet. We support them during commissioning and show what is possible with low-cost robots. This makes the investment almost risk-free.”
QUBS (www.qubs.toys) is a Swiss company producing traditionally-designed wooden toys with hidden high-tech magic: liberating children to explore their imagination, safely learn future skills and engage in educational, screen-free fun.
Inspired by the Montessori method, QUBS STEM toys educate as well as entertain, helping children develop skills in science, technology, engineering, and mathematics through play.
Loved by parents, teachers and, most importantly, young users (3 to 12 years), QUBS’ intuitive, gender-neutral toys – made from responsibly sourced, long-lasting beechwood – contain patented technology which brings them to life. Unlike other tech-enabled STEM children’s toys, QUBS toys have an indefinite shelf life, require neither updates nor internet access, and are completely screen-free, empowering children to become creators rather than passive users of laptop or smartphone screens.
Each block and toy component contains a QUBS-developed and patented version of RFID (Radio Frequency Identification) technology, the technology most commonly used in contactless payments and key fobs. RFID technology is 100% safe and secure for children and grown-ups, allowing the individual tiles and blocks to interact within their own secure universe.
Cody Block
QUBS’ first product, Cody Block – to be showcased at the Nuremberg Toy Fair (Spielwarenmesse Digital), where it has been shortlisted for the prestigious annual ‘Toy Award’ – features an independently moving car (Cody), whose journey changes in response to a child’s placement and arrangement of wooden blocks within its environment. Encouraging creativity and teamwork, Cody Block introduces children to computer programming concepts, robotics, and the Internet of Things through fun and accessible play.
Learning computational skills in the early years is essential. Cody the car, and the wooden toy blocks that shape his journey, teach kids to think like a programmer: children are introduced to the principles of debugging (identifying a problem and correcting it) and sequencing (the specific order in which instructions are performed in an algorithm) through physical play.
The task is to plan a path that leads Cody through the city and back home, his movements changing in response to the child’s arrangement and rearrangement of the wooden blocks (each containing RFID tech). Each block denotes a different directional command (e.g. ‘turn left’, ‘turn right’, ‘u-turn’ etc.), creating a sequence of instructions. This allows children to improve their motor skills, critical thinking, creativity and spatial awareness.
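That sequence-of-instructions idea is easy to express in code. The following Python sketch is our illustration, not QUBS firmware: each block the car drives over rewrites Cody’s heading, just as the physical blocks do.

```python
# Illustrative model of Cody's block sequencing (not QUBS firmware).
# The car starts facing north; each block it drives over changes its heading.

TURNS = {"left": -1, "right": 1, "u-turn": 2, "straight": 0}
HEADINGS = ["north", "east", "south", "west"]

def run_sequence(blocks, heading="north"):
    """Apply each directional block to the car's heading, in order."""
    h = HEADINGS.index(heading)
    for block in blocks:
        h = (h + TURNS[block]) % 4
        print(f"{block:>8} -> now facing {HEADINGS[h]}")
    return HEADINGS[h]

# 'Debugging' means spotting that this sequence doesn't lead home
# and rearranging the blocks until it does.
run_sequence(["right", "straight", "left", "u-turn"])
```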
Cody Block is designed for kids aged 3-12 and will be available to ship in Q2 2022.
Matty Block
QUBS’ second product, Matty Block, is designed for ages 3-9. It helps children develop self-confidence in mathematics by introducing the concepts of addition, subtraction and multiplication.
Children place Matty the farmer on a board above a sum of their own creation, formed by numbered tiles (representing seeds). With a nod or a shake of his head, Matty guides young users to the right answer. Matty Block features voice feedback in six languages (English, German, French, Spanish, Italian and Mandarin), making it the perfect tool for children to play and learn autonomously. Its story setting provides a fun and comprehensive introduction to numbers and equations, while exploring the delicate and ever-changing world of nature.
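Matty’s nod-or-shake feedback amounts to checking the child’s proposed answer against the arithmetic on the tiles, as in this toy illustration (the logic is ours, not QUBS’):

```python
# Toy model of Matty's feedback loop (illustrative only).
OPS = {"+": lambda a, b: a + b,
       "-": lambda a, b: a - b,
       "x": lambda a, b: a * b}

def matty_feedback(a, op, b, proposed_answer):
    """Nod if the child's answer matches the sum formed by the tiles."""
    return "nod" if OPS[op](a, b) == proposed_answer else "shake"

print(matty_feedback(3, "+", 4, 7))   # nod
print(matty_feedback(3, "x", 4, 11))  # shake
```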
Matty Block will be available in 2023.
About QUBS
Based in Zurich, Paris and London, QUBS Toys was founded in 2019 by Hayri Bulman, a Swiss entrepreneur with over 30 years of IT expertise working for GE (General Electric) and Xerox. Hayri’s own fatherhood, passion for wooden toys and firm grasp of technology motivated him to create QUBS to better equip future generations for the digital world. Inspired by the toy company TEGU in 2015, Hayri set out to merge classic wooden toys with modern technology and soon started working on concepts that combined RFID technology with wooden blocks. Since then, QUBS has grown into a team of designers, engineers and creatives from across Europe.
In April 2020, at the very beginning of the global pandemic, QUBS raised CHF 88,887 (~£70,000) from 503 backers in a Kickstarter campaign.
QUBS Toys will be available for purchase online from www.qubs.toys, as well as from major stockists.
Sophie writes on behalf of Panda Security, covering cybersecurity and online safety best practices for consumers and families. Specifically, she is interested in breaking down complicated cybersecurity topics and teaching data security in a way that is accessible to all. Her most recent piece is on the evolution of robotic dogs and where they’re headed next.
Robots have been a point of fascination and study for centuries as researchers and inventors have sought to explore the potential for automated technology. While there’s a long history of the development and creation of autonomous machines, mobile, quadrupedal robots — or four-legged robotic dogs — have seen a significant boom in the last few decades.
The development of quadrupedal robots stems from the necessity of mobile robots in exploring dangerous or unstructured terrains. Compared to other mobile robots (like wheeled or bipedal/two-legged robots), quadrupedal robots are a superior locomotion system in terms of stability, control and speed.
The capabilities of quadrupedal robots are being explored in a variety of fields, from construction and entertainment to space exploration and military operations. Today, modern robotic dogs can be purchased by businesses and developers to complete tasks and explore environments deemed too dangerous for humans. Read on for the evolution of robotic dogs and where they might be headed in the future.
1966: Phony Pony
Although it technically mirrored the form of a horse, the Phony Pony was the first autonomous quadrupedal robot to emerge in the U.S., and it set the precedent for robotic dogs of the future. Equipped with electric motors, the Phony Pony had two degrees of freedom, or joints, in each leg (the hip and the knee) and one adaptive joint in the frontal plane. The hip and knee joints were identical, allowing for both forward and backward walking movements.
The Phony Pony was capable of crawling, walking and trotting, albeit at a very slow speed. Thanks to its spring-restrained “pelvic” structure, it was able to maintain static vertical stability during movement. Since the Phony Pony was developed before the advent of microprocessors, it could only be controlled through cables connected to a remote computer in an adjacent building.
Developers: Frank and McGhee
Use: Initial research and development of autonomous quadrupeds
1999: AIBO
In the late 1990s, Sony’s AIBO — one of the most iconic and advanced entertainment robotic dogs — hit the market. While the AIBO (Artificial Intelligence RoBOt) was constructed for entertainment purposes, its machinery is still highly complex.
Developed with touch, hearing, sight and balancing capabilities, it can respond to voice commands, shake hands, walk and chase a ball. It can also express six “emotions”: happiness, sadness, fear, anger, dislike and surprise. Its emotional state is expressed through tail wagging, eye color changes and body movements, as well as through a series of sounds including barks, whines and growls. The AIBO has since been used by many research groups to test artificial intelligence and sensory integration techniques.
Developer: Sony
Use: Toys and entertainment
2005: BigDog
Boston Dynamics has become a leader in the world of robotics, specifically in its development of canine-inspired quadrupeds. Its first robotic dog, dubbed BigDog, arrived in 2005. Measuring three by two feet and weighing in at 240 pounds, BigDog was designed to support soldiers in the military. It can carry 340 pounds, climb up and down 35-degree inclines and hike over rough terrain.
Each of BigDog’s legs has passive linear pneumatic compliance, a system that controls the contact forces between the robot and a rigid environment, plus three active joints in the knees and hips. The robot is powered by a one-cylinder go-kart engine, and its dynamic regulating system allows it to maintain balance. Its movement sensors include joint position, joint force, ground contact, ground load and a stereo vision system.
In 2012, developers were still working to refine BigDog’s capabilities ahead of plans to officially deploy it to military squads. However, the project was discontinued in 2015 after developers concluded that its gas-powered engine was too noisy to be used in combat.
Developer: Boston Dynamics
Use: Assist soldiers in unsafe terrains
2009: LittleDog
Four years after BigDog came LittleDog, Boston Dynamics’ smallest quadrupedal robot to date. LittleDog was developed specifically for research purposes to be used by third parties investigating quadrupedal locomotion.
Each of LittleDog’s legs is powered by three electric motors fueled by lithium polymer batteries, giving the robot a maximum operating time of thirty minutes. LittleDog has a large range of motion and is capable of climbing, crawling and walking across rocky terrain. A PC-level computer mounted on top of LittleDog handles its movement sensors, controls and communications. It can be controlled remotely and includes data-logging support for data analysis purposes.
Developer: Boston Dynamics
Use: Research on locomotion in quadrupeds
2011: AlphaDog Proto
Continuing their efforts to develop military-grade robots, Boston Dynamics released AlphaDog Proto in 2011. Powered by a hydraulic actuation system, AlphaDog Proto is designed to support soldiers in carrying heavy gear across rocky terrains. It’s capable of carrying up to 400 pounds for as far as 20 miles, all within the span of 24 hours, without needing to refuel.
AlphaDog Proto is equipped with a GPS navigation and computer vision system that allows it to follow soldiers while carrying their gear. Thanks to an internal combustion engine, AlphaDog Proto proved to be quieter than its predecessor BigDog, making it more suitable for field missions.
Developer: Boston Dynamics
Use: Assist soldiers in carrying heavy gear over unsafe terrains
2012: Legged Squad Support System (LS3)
Boston Dynamics’ development of the Legged Squad Support System (LS3) came soon after the creation of BigDog in their efforts to continue refining their quadrupedal robots for soldiers and Marines. LS3 was capable of operating in hot, cold, wet and otherwise unfavorable conditions. It contained a stereo vision system with a pair of stereo cameras, which were mounted inside the robot’s head. This operated in conjunction with a light-detecting and ranging unit that allowed it to follow a soldier’s lead and record feedback obtained from the camera.
Compared to BigDog, LS3 was around 10 times quieter at certain times and had an increased walking speed of one to three miles per hour, a jogging speed of five miles per hour and the ability to run across flat surfaces at seven miles per hour. It could also respond to ten voice commands, a more practical interface for soldiers too preoccupied with a mission to use manual controls.
Five years into development, LS3 had successfully been refined enough to be able to operate with Marines in a realistic combat exercise and was used to resupply combat squads in locations that were difficult for squad vehicles to reach. By 2015, however, the LS3 was shelved due to noise and repair limitations. While the Marines were ultimately unable to use the LS3 in service, it provided valuable research insights in the field of autonomous technology.
Developer: Boston Dynamics
Use: Assist soldiers in carrying heavy gear over unsafe terrains
2016: Spot
Spot is the next creation in Boston Dynamics’ line of quadrupedal robots, designed in an effort to move away from strictly military quadrupeds and into more commercial use. Spot is significantly smaller than the company’s previous models, weighing just 160 pounds. It is capable of exploring rocky terrain, avoiding objects in its path during travel and climbing stairs and hills.
Spot’s hardware is equipped with powerful control boards and five sensor units on all sides of its body that allow it to navigate an area autonomously from any angle. Twelve custom motors power Spot’s legs, reaching speeds of up to five feet per second and operating for up to 90 minutes. Its sensors capture spherical images and also allow for mobile manipulation tasks such as opening doors and grasping objects. Spot’s control methods are far more advanced than those of Boston Dynamics’ earlier robots, allowing for autonomous control in a wider variety of situations.
Developer: Boston Dynamics
Use: Documenting construction processes and monitoring remote high-risk environments
2016: ANYmal
While Boston Dynamics had been the main leader in quadrupedal robots since the early 2000s, Swiss robotics company ANYbotics came out with its own iteration of the robotic dog in 2016. Positioned as an end-to-end robotic inspection solution, ANYmal was developed for industrial use, specifically the inspection of unsafe environments like energy and industrial plants.
ANYmal is mounted with a variety of laser inspection sensors to provide visual, thermal and acoustic readings. Equipped with an on-board camera, it supports remote pan and tilt settings to adjust views of the inspection site. ANYmal is capable of autonomously perceiving its environment, planning its navigation path and selecting proper footholds during travel. It can even walk up stairs and fit into difficult-to-reach areas that traditional wheeled robots can’t.
ANYmal has undergone a handful of development iterations since 2016 and is available for purchase as of 2021. ANYbotics is currently working on an upgraded version of the robot suitable for potentially explosive environments.
Developer: ETH Zurich and ANYbotics
Use: Remote inspection of unsafe environments
2021: Vision 60
One of the latest developments in quadrupedal robots is Ghost Robotics’ Vision 60 robotic dog, which has recently been tested at the U.S. Air Force’s Scott Air Force Base in Illinois as part of a one-year pilot program. Built to mitigate risks faced by Air Force pilots, Vision 60 features a rifle mounted on its back in a gun pod and is equipped with sensors that allow it to operate across a wide variety of unstable terrains. It’s also capable of thermal imaging, infrared configuration and high-definition video streaming.
Vision 60 can carry a maximum of 31 pounds and can travel at up to 5.24 feet per second. It’s considered a semi-autonomous robot due to its accompanying rifle; while it can accurately line up with a target on its own, it can’t open fire without a human operator (in accordance with the U.S. military’s autonomous systems policy prohibiting automatic target engagement).
Developer: Ghost Robotics
Use: Military and Homeland Security operations
2021: CyberDog
With more companies embracing the development of quadrupeds, Xiaomi Global followed suit and released its own version, named CyberDog. CyberDog is an experimental, open-source robot promoted both as a human-friendly companion and as an asset for law enforcement and the military. CyberDog is sleeker and smaller than its robotic dog predecessors, carrying a payload of just 6.6 pounds and running at over 10 feet per second.
CyberDog is equipped with multiple cameras and image sensors located across its body, including touch sensors and an ultra-wide fisheye lens. CyberDog can hold 128 gigabytes of storage and is powered by Nvidia’s Jetson Xavier AI platform to perform real-time analyses of its surroundings, create navigation paths, plot its destination and avoid obstacles. CyberDog can also perform backflips and respond to voice commands thanks to its six microphones.
By making CyberDog an open-source project, Xiaomi hopes to expand its reach into the future of robot development and innovation. Its open-source nature is meant to encourage robotics enthusiasts to try their hand at writing code for CyberDog, giving the project more exposure and bolstering Xiaomi’s reputation in the robotics community.
Developer: Xiaomi Global
Use: An open-source platform for developers to build upon
While the market for quadrupedal robots is still in its early stages, interest is steadily growing across a wide range of industries. As for fears of robots pushing out traditionally human-led jobs, these machines are intended to support humans in their work rather than replace them outright.
On the other hand, privacy concerns associated with robots aren’t to be ignored. As with any tech-enabled device, hacking is always possible, especially for open-source robotic models that can put users’ personal information at risk. This applies not only to the quadrupeds discussed above, but to more common commercial robotic systems like baby monitors, security systems and other Wi-Fi-connected devices. It’s important to ensure your home network is as strong and secure as possible with a home antivirus platform.
The true AI vision robotic arm powered by Jetson Nano is affordable and open-source, turning your AI creativity into reality.
In recent years, more and more makers, students, enthusiasts, and engineers have been learning artificial intelligence technology, and many interesting AI projects are being developed as well. Hiwonder brings the power of AI to robots with JetMax, a true AI robotic arm built to enhance the AI and robotics learning experience for everyone.
JetMax features deep learning and computer vision abilities. It is equipped with a Jetson Nano and an HD wide-angle camera, which enable it to perceive and interact with its environment efficiently. It empowers you to skillfully turn your AI creativity into reality.
As an AI vision robotic arm, JetMax not only features AI vision but has a clever brain as well, supporting you in learning coding, researching AI robotics applications, and bringing your AI ideas to life. It can be your helping hand in a lab, university, or workshop.
Powered by NVIDIA Jetson Nano
The open-source JetMax robot arm is powered by the Jetson Nano, featuring deep learning, computer vision and more. The Jetson Nano has the performance needed to run modern AI workloads, giving the JetMax arm advanced AI capabilities.
Supports multiple types of EoAT (End-of-Arm Tooling)
Supporting multiple types of end-of-arm tooling such as grippers, suction cups, pen holders, and electromagnets, JetMax gives you many options for creative applications.
Open-Source
JetMax is an open-platform hardware product. We contribute numerous open-source projects and AI tutorials. Additionally, the API is completely open for customization and supports languages such as Python, C++, and Java.
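The campaign copy does not spell out the API itself, so the following Python sketch is entirely hypothetical: every class and method name is invented to suggest what a pick-and-place sequence on an open arm platform could look like. Consult Hiwonder’s published documentation for the real interface.

```python
# Hypothetical JetMax control sketch -- all names are invented for
# illustration; see Hiwonder's published API for the real interface.

class JetMax:
    """Minimal stand-in for an open-source arm client."""
    def move_to(self, x, y, z):
        print(f"moving end effector to ({x}, {y}, {z}) mm")

    def set_suction(self, on):
        print("suction on" if on else "suction off")

arm = JetMax()
arm.move_to(150, 0, 50)    # hover above the object seen by the camera
arm.move_to(150, 0, 10)    # descend
arm.set_suction(True)      # grab with the suction-cup EoAT
arm.move_to(0, 150, 50)    # carry to the drop zone
arm.set_suction(False)     # release
```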