















































Lettuce is a valuable crop in Europe and the USA, but labor shortages make it difficult to harvest: sourcing sufficient seasonal labor to meet harvesting commitments is one of the sector’s biggest challenges, and with wage inflation rising faster than producer prices, margins are tight. In England, agricultural technology and machinery experts are working with IDS Imaging Development Systems GmbH (Obersulm, Germany) to develop a robotic solution that automates lettuce harvesting.
The team is working on a project funded by Innovate UK that brings together experts from agricultural machinery manufacturer Grimme, the Agri-EPI Centre (Edinburgh, UK), Harper Adams University (Newport, UK), the Centre for Machine Vision at the University of the West of England (Bristol) and two of the UK’s largest salad producers, G’s Fresh and PDM Produce.
Within the project, existing leek-harvesting machinery is being adapted to lift the lettuce clear of the ground and grip it between pinch belts. The lettuce’s outer, or ‘wrapper’, leaves are then mechanically removed to expose the stem, and machine vision and artificial intelligence identify a precise cut point on the stem to neatly separate the head of lettuce.
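How the cut point is actually computed is not disclosed in the article; as a rough sketch of the idea, assuming the vision stage produces a binary mask of the exposed stem, that mask could be reduced to a single cut coordinate roughly like this (the mask source, image orientation and offset are assumptions, not the project’s method):

```python
import numpy as np

def find_cut_point(stem_mask: np.ndarray, offset_px: int = 10):
    """Reduce a binary stem mask (1 = stem pixel) to one cut coordinate.

    Assumes the head of lettuce sits at the top of the image and the
    exposed stem extends downwards; the cut row is placed a small
    offset below the topmost stem pixels so the head separates cleanly.
    """
    rows, cols = np.nonzero(stem_mask)
    if rows.size == 0:
        return None                          # no stem detected in this frame
    cut_row = rows.min() + offset_px         # slightly below the head
    on_row = cols[rows == cut_row]
    cut_col = int(np.median(on_row if on_row.size else cols))
    return cut_row, cut_col
```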
"The cutting process for an iceberg lettuce is the most technically complicated step to automate, according to the team at G’s subsidiary Salad Harvesting Services Ltd.," explains IDS Product Sales Specialist Rob Webb. The prototype harvesting robot incorporates a GigE Vision camera from the uEye FA family, which is considered particularly robust and therefore well suited to demanding environments. "As this is an outdoor application, a housing with IP65/67 protection is required here," Rob Webb points out.
The choice fell on the GV-5280FA-C-HQ model with the compact 2/3″ global shutter CMOS sensor IMX264 from Sony. "The sensor was chosen mainly because of its versatility. We don’t need full resolution for AI processing, so sensitivity can be increased by binning. The larger sensor format also means that wide-angle optics are not needed," Rob Webb summarizes the requirements. In the application, the CMOS sensor stands out for its excellent image quality, light sensitivity and exceptionally high dynamic range, delivering almost noise-free, very high-contrast 5 MP images in 5:4 format at 22 fps – even under fluctuating lighting conditions. The extensive range of accessories, such as lens tubes and trailing cables, is just as rugged as the camera housing and the screwable connectors (8-pin M12 connector with X-coding and 8-pin Binder connector). Another advantage: camera-internal functions such as pixel pre-processing, LUT or gamma reduce the required computing power to a minimum.
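Binning trades resolution for sensitivity by combining neighbouring pixels, which the camera can do on-sensor. Purely to illustrate the arithmetic, a summing-type 2x2 binning pass in software might look like this (the frame is simulated and the exact sensor dimensions are an assumption):

```python
import numpy as np

def bin_2x2(frame: np.ndarray) -> np.ndarray:
    """Sum 2x2 pixel blocks: a quarter of the resolution, but roughly
    four times the collected signal per output pixel."""
    h, w = frame.shape
    h, w = h - h % 2, w - w % 2                      # crop to even dimensions
    blocks = frame[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3), dtype=np.uint32)

# Simulated 12-bit frame at roughly the sensor's 5 MP resolution
# (the exact dimensions here are an assumption).
raw = np.random.randint(0, 4096, size=(2054, 2456), dtype=np.uint16)
print(bin_2x2(raw).shape)                            # (1027, 1228)
```

Doing this on the camera, like the LUT and gamma functions mentioned above, keeps the host computer free for the AI processing itself.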
A prototype of the harvesting robot will be used for field trials in England towards the end of the 2021 season.
"We are delighted to be involved in the project and look forward to seeing the results. We are convinced of its potential to automate and increase the efficiency of the lettuce harvest, not only in terms of compensating for the lack of seasonal workers," affirms Jan Hartmann, Managing Director of IDS Imaging Development Systems GmbH.
The challenges facing the agricultural sector are indeed complex. According to a forecast by the United Nations Food and Agriculture Organization (FAO), agricultural productivity will have to increase by almost 50 percent by 2050 compared to 2012 because of dramatic population growth. Such a yield expectation poses an enormous challenge for an industry that is still in its infancy in terms of digitalization compared to other sectors, and that is already under high pressure to innovate in view of climate change and labor shortages. The agriculture of the future will be built on networked devices and automation: cameras are an important building block, artificial intelligence is a central technology, and smart applications such as harvesting robots can make a significant contribution.
VEXcode EXP is now available in a web-based version for Chrome browsers. The web-based version can be reached by navigating to codeexp.vex.com and contains all of the features and functionality of VEXcode EXP, but without the need to download or install anything! The new web-based version of VEXcode makes it easier for teachers and students to access projects from anywhere, at any time, on any device – including Chromebooks!
In addition to the built-in Help and Tutorials, the STEM Library contains additional resources and support for using web-based VEXcode EXP. Within the STEM Library you can find device-specific articles for connecting to web-based VEXcode EXP, loading and saving projects, updating firmware, and more. View the VEXcode EXP section of the STEM Library to learn more.
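For orientation, a first project in web-based VEXcode EXP might look like the following Python sketch; it assumes an EXP Brain with a motor configured on port 1, and a block-based project would accomplish the same thing:

```python
# Minimal VEXcode EXP Python sketch: print to the Brain screen
# and spin a motor on port 1 for one second.
from vex import *

brain = Brain()
motor_1 = Motor(Ports.PORT1)

brain.screen.print("Hello from web-based VEXcode EXP")
motor_1.spin(FORWARD)
wait(1, SECONDS)
motor_1.stop()
```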
Web-based versions of VEXcode IQ and VEXcode V5 are in the works and will be available soon.
Macroact, a personal robotics development lab operating out of South Korea, has released its first AI-based companion pet. Designed for education and entertainment, Maicat is now live on Kickstarter after years of design and testing.
CAPABLE – Ready to use straight out of the box, Maicat is an autonomous robot pet. Using its sensors, Maicat can detect obstacles and walk around the house on its own. With its laser range finder and gyroscope, it is able to adjust for thick carpets and door frames.
CARING – Maicat has facial, voice pattern and emotional recognition software. When paired with the AI learning algorithm, Maicat is able to identify its owners and react to their moods.
CONNECTED – Integrated IoT connectivity allows you to add Maicat’s sensors and capabilities to your existing home network. The Maicat SDK will allow the creation of apps which will let Maicat talk to most modern IoT devices.
CREATIVE – Maicat is an excellent platform to get students interested in STEM topics. With an app and the Maicat SDK, students can study AI, programming, robotics, facial recognition…the list goes on and on.
CELEBRATED – Maicat was a CES 2022 Innovation Award nominee for its IoT integration and support. That’s more than you can say for most other pets.
CUDDLY – Maicat is small and light enough to pick up and pet. Sensors within its body let Maicat know it’s being petted and Maicat will respond lovingly.
To learn more about the Maicat project, check out the promotional link below.
About Macroact Inc.
Macroact is an AI and robotics startup that develops machine learning solutions for adaptive robots. The company applies artificial intelligence throughout the entire robot development process to reduce development time and costs and to enhance robots’ ability to learn. Its core technology is Maidynamics, an autonomous robot control solution. Maicat is its first adaptive robot.
All-in-one embedded vision platform with new tools and functions
(PresseBox) (Obersulm) At IDS, image processing with artificial intelligence does not just mean that AI runs directly on cameras and that users have enormous design freedom through vision apps. With the IDS NXT ocean embedded vision platform, customers also receive all the necessary, coordinated tools and workflows to realise their own AI vision applications without prior knowledge and to run them directly on IDS NXT industrial cameras. The next free software update for the AI package is now available. Alongside user-friendliness, its focus is on making the artificial intelligence clear and comprehensible for the user.
An all-in-one system such as IDS NXT ocean, which has integrated computing power and artificial intelligence thanks to the "deep ocean core" developed by IDS, is ideally suited for getting started with AI vision. It requires no prior knowledge of deep learning or camera programming. The current software update makes setting up, deploying and controlling the intelligent cameras in the IDS NXT cockpit even easier. Among other things, it integrates an ROI editor with which users can freely draw the image areas to be evaluated and configure, save and reuse them as custom grids with many parameters. In addition, the new Attention Maps and Confusion Matrix tools illustrate how the AI in the cameras works and what decisions it makes. This makes the process more transparent and enables users to evaluate the quality of a trained neural network and improve it through targeted retraining. Data security also plays an important role in the industrial use of artificial intelligence: as of the current update, communication between IDS NXT cameras and system components can be encrypted via HTTPS.
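A confusion matrix simply tabulates predicted against true classes, which is what makes the weak spots of a trained network visible. A small, library-agnostic sketch of the idea (class names and labels are invented for illustration, not taken from IDS NXT):

```python
import numpy as np

classes = ["OK", "scratch", "dent"]          # illustrative class names
y_true = np.array([0, 0, 1, 2, 1, 0, 2, 1])  # ground-truth labels
y_pred = np.array([0, 1, 1, 2, 0, 0, 2, 1])  # network predictions

# rows = true class, columns = predicted class
cm = np.zeros((len(classes), len(classes)), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

print(cm)
# Off-diagonal entries show which classes get confused and are
# therefore candidates for targeted retraining.
```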
Just get started with the IDS NXT ocean Creative Kit
Anyone who wants to test the industrial-grade embedded vision platform IDS NXT ocean and evaluate its potential for their own applications should take a look at the IDS NXT ocean Creative Kit. It provides customers with all the components they need to create, train and run a neural network. In addition to an IDS NXT industrial camera with 1.6 MP Sony sensor, lens, cable and tripod adapter, the package includes six months’ access to the AI training software IDS NXT lighthouse. Currently, IDS is offering the set in a special promotion at particularly favourable conditions. Promotion page: https://en.ids-imaging.com/ids-nxt-ocean-creative-kit.html.
Learn more: www.ids-nxt.com
Guest post by IDS Corporate Communications
Autonomously driving robotic assistance system for the automated placement of coil creels
With Industry 4.0, digitalisation, automation and the networking of systems and facilities are becoming the predominant topics in production and thus also in logistics. Industry 4.0 pursues the ongoing optimisation of processes and workflows in favour of productivity and flexibility, and thus the saving of time and costs. Robotic systems have become the driving force for automating processes. Through the Internet of Things (IoT), robots are becoming increasingly sensitive, autonomous, mobile and easier to operate, and more and more they are becoming everyday helpers in factories and warehouses. Intelligent imaging techniques play an increasingly important role in this.
To meet the growing demands of scaling, changing production environments on the way towards fully automated and intelligently networked production, ONTEC Automation GmbH from Naila in Bavaria has developed an autonomously driving robotic assistance system. The "Smart Robot Assistant" uses the synergies of mobility and automation: it consists of a powerful and efficient intralogistics platform, a flexible robot arm and a robust 3D stereo camera system from the Ensenso N series by IDS Imaging Development Systems GmbH.
The solution is versatile and takes over monotonous, heavy set-up and placement tasks, for example. The autonomous transport system is suitable for floor-level lifting of Euro pallets up to container or industrial format, as well as mesh pallets in various sizes, with a maximum load of up to 1,200 kilograms. For a customer in the textile industry, the AGV (Automated Guided Vehicle) is used for the automated loading of coil creels: it picks up pallets with yarn spools, transports them to the designated creel and loads it for further processing. Using a specially developed gripper system, up to 1,000 yarn packages per 8-hour shift are picked up and pushed onto a mandrel of the creel. The arrangement and position of the coils are captured by an Ensenso 3D camera (N45 series) installed on the gripper arm.
Pallets loaded with industrial yarn spools are picked up from the floor of a predefined storage place and transported to the creel location. There, the gripper positions itself vertically above the pallet and an image trigger is sent to the Ensenso N45 3D camera by ONTEC’s in-house software SPSComm. SPSComm networks with the vehicle’s PLC and can thus read out and pass on data; in the application, it controls the communication between the software components of the vehicle, gripper and camera, so the camera knows when the vehicle and the gripper are in position to take a picture. The camera then captures an image and passes a point cloud to an ONTEC software solution based on the HALCON machine vision software, which reports the coordinates of the coils on the pallet to the robot. The robot can then accurately pick up the coils and process them further. As soon as the gripper has cleared a layer of yarn spools, the Ensenso camera takes a picture of the packaging material lying between the layers and provides point clouds of this as well. These are processed in a similar way to give the robot the information it needs to remove the intermediate layers with a needle gripper. "This approach means that the number of layers and finishing patterns of the pallets do not have to be defined in advance, and even incomplete pallets can be processed without any problems," explains Tim Böckel, software developer at ONTEC. "The gripper does not have to be converted for the use of the needle gripper. For this application, it has a normal gripping component for the coils and a needle gripping component for the intermediate layers."
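ONTEC’s HALCON-based processing is not public; as a rough sketch of the general idea, the point cloud could be projected onto a top-down grid and the topmost coil layer split into blobs whose centres become grip coordinates (grid cell size, layer height band and the coordinate convention are assumptions):

```python
import numpy as np
from scipy import ndimage

def coil_centres(points: np.ndarray, cell: float = 0.005, top_band: float = 0.03):
    """points: Nx3 array (x, y, z) in metres, z pointing up.
    Returns estimated (x, y, z) centres of the topmost coil layer."""
    z_top = points[:, 2].max()
    layer = points[points[:, 2] > z_top - top_band]      # keep the top layer only

    # rasterise the layer onto a 2D occupancy grid
    xy = layer[:, :2]
    mins = xy.min(axis=0)
    idx = np.floor((xy - mins) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True

    labels, n = ndimage.label(grid)                      # one connected blob per coil
    centres = []
    for blob in range(1, n + 1):
        mask = labels[idx[:, 0], idx[:, 1]] == blob
        centres.append(layer[mask].mean(axis=0))
    return np.array(centres)
```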
For this task – mobile use on a robot arm for 3D acquisition of moving and static objects – the Ensenso 3D camera is well suited thanks to its compact design. The Ensenso N45’s 3D stereo electronics are completely decoupled from the housing, allowing the use of a lightweight plastic composite as the housing material; the low weight facilitates use on robot arms such as the Smart Robot Assistant. The camera also copes with demanding environmental conditions. "Challenges with this application lie primarily in the different lighting conditions in different rooms of the hall and at different times of the day," Tim Böckel describes the situation. Even in difficult lighting, the integrated projector casts a high-contrast texture onto the object to be imaged by means of a pattern mask with a random dot pattern, supplementing the structures on featureless, homogeneous surfaces. The integrated camera thus meets the requirements exactly. "By pre-configuring within NxView, the task was solved well." This sample program with source code demonstrates the main functions of the NxLib library, which can be used to open one or more stereo and colour cameras and visualise their image and depth data. Parameters such as exposure time, binning, AOI and depth measuring range can – as in this case – be adjusted live for the matching method used.
The matching process enables the Ensenso 3D camera to identify a very large number of pixels, including their change in position, by means of the auxiliary structures projected onto the surface, and to create complete, homogeneous depth information of the scene from this. This in turn ensures the precision with which the Smart Robot Assistant operates. Other selection criteria for the camera included the standard Gigabit Ethernet vision interface and the global shutter 1.3 MP sensor. "In favour of a faster throughput time, the camera only takes one image pair of the entire pallet, but it has to provide the coordinates from a relatively large distance with millimetre accuracy to enable the robot arm to grip precisely," explains Matthias Hofmann, IT specialist for application development at ONTEC. "We therefore need the high resolution of the camera to be able to reliably capture the edges of the coils with the 3D camera." The localisation of the edges is important in order to pass the position of the centre of each spool to the gripper as accurately as possible.
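The Ensenso SDK performs the matching internally; purely to illustrate the principle, a generic semi-global matching pass in OpenCV turns a rectified image pair into a disparity map, from which depth follows from focal length and baseline (all values below are synthetic placeholders, not the N45’s calibration):

```python
import cv2
import numpy as np

# Synthetic rectified image pair: a random dot texture and a copy
# shifted by a known disparity (stands in for the projected pattern).
rng = np.random.default_rng(0)
left = (rng.random((480, 640)) > 0.5).astype(np.uint8) * 255
true_disp = 32
right = np.roll(left, -true_disp, axis=1)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

# depth = f * B / d (focal length in pixels, baseline in metres; placeholder values)
f_px, baseline_m = 1400.0, 0.1
valid = disparity > 0
depth = np.where(valid, f_px * baseline_m / np.maximum(disparity, 1e-6), 0.0)
print(np.median(disparity[valid]))   # close to the simulated 32 px shift
```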
Furthermore, the camera is specially designed for use in harsh environmental conditions. It has a screwable GPIO connector for trigger and flash and is IP65/67 protected against dirt, dust, splash water or cleaning agents.
The Ensenso SDK enables hand-eye calibration of the camera to the robot arm, allowing coordinates to be translated easily using the robot pose. In addition, using the internal camera settings, a "FileCam" of the current situation is recorded at each pass, i.e. at each image trigger. This makes it possible to reproduce and analyse edge cases later on, in this application for example unexpected lighting conditions, obstacles in the image or an unexpected positioning of the coils. The Ensenso SDK also allows the internal camera LOG files to be stored and archived for possible evaluation.
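Once the hand-eye calibration has provided the fixed transform between camera and robot flange, a coil coordinate from the camera can be chained with the current robot pose to obtain its position in the robot base frame. A generic sketch of that chaining with homogeneous 4x4 matrices (the transforms here are placeholders; this is not the Ensenso SDK API):

```python
import numpy as np

def to_base_frame(p_cam, T_base_flange, T_flange_cam):
    """Transform a 3D point from camera coordinates into robot base coordinates.

    T_base_flange: current robot pose (flange expressed in the base frame).
    T_flange_cam:  hand-eye calibration result (camera expressed in the flange frame).
    Both are 4x4 homogeneous matrices.
    """
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)   # homogeneous point
    return (T_base_flange @ T_flange_cam @ p)[:3]

# Placeholder transforms: identity hand-eye result, flange 0.5 m above the base.
T_flange_cam = np.eye(4)
T_base_flange = np.eye(4)
T_base_flange[2, 3] = 0.5
print(to_base_frame([0.1, 0.0, 0.8], T_base_flange, T_flange_cam))   # [0.1 0.  1.3]
```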
ONTEC also uses these "FileCams" to automatically check test cases and thus ensure the correct functioning of all arrangements when making adjustments to the vision software. In addition, various vehicles can be coordinated and logistical bottlenecks minimised on the basis of the control system specially developed by ONTEC. Different assistants can be navigated and act simultaneously in a very confined space. By using the industrial interface tool ONTEC SPSComm, even standard industrial robots can be safely integrated into the overall application and data can be exchanged between the different systems.
Further development of the system is planned, among other things, in terms of navigation of the autonomous vehicle. "With regard to vehicle navigation for our AGV, the use of IDS cameras is very interesting. We are currently evaluating the use of the new Ensenso S series to enable the vehicle to react even more flexibly to obstacles, for example to classify them and possibly even drive around them," says Tim Böckel, outlining the next development step.
ONTEC’s own interface configuration already enables the system to be integrated into a wide variety of Industry 4.0 applications, while the modular structure of the autonomously moving robot solution leaves room for adaptation to a wide variety of tasks. In this way, it not only serves to increase efficiency and flexibility in production and logistics, but in many places also literally contributes to relieving the workload of employees.
More at: https://en.ids-imaging.com/casestudies-detail/picked-up-and-put-off-ensenso.html
QUBS (www.qubs.toys) is a Swiss company producing traditionally designed wooden toys with hidden high-tech magic: liberating children to explore their imagination, safely learn future skills and engage in educational, screen-free fun.
Inspired by the Montessori method, QUBS STEM toys educate as well as entertain: through play, children develop skills in science, technology, engineering, and mathematics.
Loved by parents, teachers and, most importantly, young users (3 to 12 years), QUBS’ intuitive, gender-neutral toys – made from responsibly sourced, long-lasting beechwood – contain patented technology which brings them to life. Unlike other tech-enabled STEM children’s toys, QUBS’ toys have an eternal shelf life, require no updates or internet access, and are completely screen-less, empowering children to become creators rather than passive users of laptop or smartphone screens.
Each block and toy component contains a QUBS-developed and patented version of RFID (Radio Frequency Identification) technology, the innovation most commonly used in contactless payments and key fobs. RFID technology is 100% safe and secure for children and grown-ups, allowing the individual tiles and blocks to interact within their own secure universe.
QUBS’ first product, Cody Block – to be showcased at the Nuremberg Toy Fair, Spielwarenmesse Digital, where it has been shortlisted for the prestigious annual ‘Toy Award’ – features an independently moving car (Cody), whose journey changes in response to a child’s placement and arrangement of wooden blocks within its environment. Encouraging creativity and teamwork, Cody Block introduces children to computer programming concepts, robotics, and the Internet of Things through fun and accessible play.
Learning computational skills in the early years is essential. Cody the car, and the wooden toy blocks which shape his journey, teach kids to think like a programmer: through physical play they are introduced to the principles of debugging (identifying a problem and correcting it) and sequencing (the specific order in which instructions are performed in an algorithm).
The task is to plan a path that leads Cody through the city and back home, his movements changing in response to the child’s arrangement and rearrangement of the wooden blocks (each containing RFID tech). Each block denotes a different directional command (e.g. ‘turn left’, ‘turn right’, ‘u-turn’ etc.), creating a sequence of instructions. This allows children to improve their motor skills, critical thinking, creativity and spatial awareness.
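The sequencing idea maps directly onto a tiny simulation in which each block is one instruction and the program is the order the child lays them out in. The sketch below is a deliberately simplified model (the grid, start pose and command handling are invented for illustration, not how the toy’s RFID logic works):

```python
# Toy simulation of Cody-style sequencing: each block is a command
# that changes heading, and the car then drives one grid cell.
HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # N, E, S, W as (dx, dy)

def run(blocks, start=(0, 0), heading=0):
    x, y = start
    path = [(x, y)]
    for block in blocks:
        if block == "turn left":
            heading = (heading - 1) % 4
        elif block == "turn right":
            heading = (heading + 1) % 4
        elif block == "u-turn":
            heading = (heading + 2) % 4
        dx, dy = HEADINGS[heading]               # drive forward one cell
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# "Debugging" means rearranging blocks until the path ends back at home.
print(run(["turn right", "turn right", "u-turn", "turn left"]))
```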
Cody Block is designed for kids aged 3-12 and will be available to ship in Q2 2022.
QUBS’ second product, Matty Block, is designed for ages 3-9. It helps children develop self-confidence in mathematics by introducing the concepts of addition, subtraction and multiplication.
Children place Matty the farmer on a board above a sum of their own creation, formed by numbered tiles (representing seeds). With a nod or shake of his head, Matty guides young users to the right answer. Matty Block features voice feedback in six languages (English, German, French, Spanish, Italian and Mandarin), making it the perfect tool for children to play and learn autonomously. Its story setting provides a fun and comprehensive introduction to numbers and equations, while exploring the delicate and ever-changing world of nature.
Matty Block will be available in 2023.
Based in Zurich, Paris and London, QUBS Toys was founded in 2019 by Hayri Bulman, a Swiss entrepreneur with over 30 years of IT expertise working for GE (General Electric) and Xerox. Hayri’s own fatherhood, passion for wooden toys and firm grasp of technology motivated him to create QUBS to better equip future generations for the digital world. Inspired by the toy company TEGU in 2015, Hayri set out to merge classic wooden toys with modern technology and soon started working on concepts that combined RFID technology with wooden blocks. Since then, QUBS has grown into a team of designers, engineers and creatives from across Europe.
In April 2020, at the very beginning of the global pandemic, QUBS raised CHF 88,887 (~£70,000) from 503 backers during a Kickstarter campaign.
QUBS Toys will be available for purchase online from www.qubs.toys, as well as from major stockists.
The true AI vision robotic arm powered by Jetson Nano is affordable and open source, helping you turn your AI ideas into reality.
In recent years, more and more makers, students, enthusiasts, and engineers have been learning artificial intelligence technology, and many interesting AI projects are being developed. Hiwonder brings the power of AI to robots with JetMax, a true AI robotic arm built to enhance the AI and robotics learning experience for everyone.
JetMax features deep learning and computer vision capabilities. It is equipped with a Jetson Nano and an HD wide-angle camera, which enable it to perceive and interact with its environment efficiently and empower you to turn your AI ideas into reality.
As an AI vision robotic arm, JetMax not only features AI vision but has a clever brain as well, supporting you in learning to code, researching AI robotics applications, and bringing your AI ideas to life. It can be your helping hand in a lab, university, or workshop.
The open-source JetMax robot arm is powered by the Jetson Nano, featuring deep learning, computer vision and more. The Jetson Nano has the performance needed to run modern AI workloads, giving the JetMax robot arm advanced AI capabilities.
Supporting multiple types of end-of-arm tooling, such as grippers, suction cups, pen holders and electromagnets, JetMax opens up many possibilities for creative applications.
JetMax is an open-platform hardware product. Hiwonder provides numerous project sources and AI tutorials, and the API is completely open for customization, with support for languages such as Python, C++ and Java.
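Hiwonder’s own JetMax API is not reproduced here; as an illustrative sketch of the kind of vision-to-motion loop such an arm supports, the snippet below grabs a frame with OpenCV, locates a coloured object and hands its pixel position to a placeholder move function that a real project would replace with the corresponding JetMax SDK call:

```python
import cv2
import numpy as np

def locate_red_object(frame):
    """Return the pixel centre of the largest red blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    m = cv2.moments(c)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

def move_arm_towards(cx, cy):
    # Hypothetical placeholder: a real project would call the JetMax SDK here.
    print(f"target pixel: ({cx}, {cy})")

cap = cv2.VideoCapture(0)                 # the arm's HD wide-angle camera
ok, frame = cap.read()
if ok:
    target = locate_red_object(frame)
    if target:
        move_arm_towards(*target)
cap.release()
```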
Canadian company MYNYMAL PC recently announced its newest innovation, a computer designed to be as minimalist as possible. Smaller than a tissue box, it nonetheless houses a fully fledged Windows desktop computer capable of day-to-day work and entertainment. Despite its humble appearance, this mini-computer packs quite a punch: a 4-core, 8-thread CPU capable of photo editing, video editing, 3D modeling, and even light gaming makes it a useful tool and not just a decoration.
As decoration, it excels as well. The simple cube design is available in four different textures: Maple Wood, Concrete, White Marble, and Brushed Gunmetal Gray. This gives the buyer a variety of material choices to make the PC fit in best with the interior design of their home. The computer can be removed from the acrylic enclosure for hardware upgrades, and in the future MYNYMAL plans to sell the enclosures individually so that you can swap them out easily depending on the aesthetic you want. "We believe that technology should have both form and function, not just one or the other," said Gerard Cirera, Founder and CEO of MYNYMAL PC.
In addition to the standard vinyl textures and materials, a limited number of cube computers will come with interior RGB lighting and a custom Ore Block texture. An included remote control lets the user adjust the lighting, choosing from red (redstone), blue (diamond), and many other colours and effects.
The MYNYMAL PC combines modern, minimalist design with the tech in your home, so you can finally get rid of that bulky old tower computer you don’t know where to put.