Festo and MassRobotics are Leading the Global Nurturing of Healthcare Robotics Innovation

The Festo-MassRobotics Healthcare Robotics Startup Catalyst program celebrates the milestones achieved by the program’s four selected global startups at the Healthcare Robotics Engineering Forum. Key life sciences and robotics speakers to lead the event.

The successful Healthcare Robotics Startup Catalyst program came to an end on April 7th, 2022. The concluding ceremony will be held at the Healthcare Robotics Engineering Forum, Boston Convention and Exhibition Center, on May 11, 2022. The event includes an impressive line-up of speakers: Fady Saad, Co-founder & Vice President of Strategic Partnerships at MassRobotics; Alfons Riek, Vice President of Technology and Innovation at Festo; Kendalle Burlin O’Connell, President & Chief Operating Officer at MassBio; Kenn Turner, President and CEO at Mass Life Sciences Center; and Brian Johnson, President at MassMedic. All four selected startups – Kinarm (Canada), Assistive Technology Development Inc. (United States), Eureka Robotics (Singapore), and Bionomous (Switzerland) – will, in turn, present their companies and their product and service offerings. They will also demonstrate their technologies on the event’s expo floor.

In October 2021, MassRobotics, Festo, and other key players in healthcare robotics launched a Startup Catalyst Program to advance healthcare robotics companies around the world by providing the networking opportunities, guidance, and resources they need to grow and succeed. The aim of the program was to connect healthcare robotics startups with customers, investors, suppliers, marketing, and overall support. The program focused on startups in the areas of clinical care, public safety, laboratory, supply chain automation, out-of-hospital care, quality of life, continuity of work and education, and training and support for healthcare professionals.

More than 30 companies applied from all over the world, and the selection committee invited four to join the program. The participating startups completed impressive milestones, as detailed below:

  • Eureka Robotics develops and commercializes cutting-edge robotics and artificial intelligence (AI) technologies to automate high-accuracy, high-agility tasks. Eureka is currently completing fundraising rounds in Japan through connections provided by program mentors. Eureka was introduced to MassRobotics partner Mitsubishi Electric and signed a global partnership with Mitsubishi as a platinum partner. The program also helped the company’s leadership explore attractive applications in surgical lens manufacturing, an extension of its focus on traditional manufacturing.
  • Bionomous provides laboratory equipment to automate the screening, sorting, and pipetting of miniature biological entities for more ethical and faster research in life science. CEO Frank Bonnet reports that with the aid of the Catalyst Program, Bionomous was able to run a pilot program in the US, leading to the company’s first sales outside Europe. This convinced Bionomous to expand into the US market and set up offices in the MassRobotics space in Boston. Bonnet emphasized the importance of the program’s mentors, who connected them to key industry leaders to open possibilities for future partnerships.
  • Assistive Technology Development Inc. is an American startup dedicated to at-home physical therapy solutions that are operable at low cost and always accessible to rural patients and those who need closer monitoring during recovery. The company came into the program with three goals: 1) begin its first pilot study in a clinical setting; 2) downsize the actuation unit to a wearable form; and 3) raise capital. CEO Todd Roberts reports that with help from the program, the company has completed the first two milestones and is making progress on the third. It will begin phase I of a pilot study with UCHealth, a not-for-profit health care system headquartered in Aurora, Colorado, on April 25th, allowing the company to present preliminary results at the keynote event at the Healthcare Conference. The study will assess early clinical efficacy and collect patient and clinician feedback. Assistive’s actuation unit has been downsized by 70%, from a large, wall-powered benchtop system to a wearable, battery-powered system that will enable the company to complete the pilot. Finally, Assistive is in the process of raising capital and has begun due diligence with two firms.
  • Kinarm uses robotic arms to provide an objective assessment method to identify, measure, and track cognitive, motor, or sensory impairments resulting from injury or disease. Kinarm worked with assigned mentors from the robotics ecosystem who provided introductions to industry leaders who responded with jaw-dropping “you can do that?” exclamations, reports Anne Vivian-Scott, CEO. Vivian-Scott was also introduced to experienced healthcare robotics leaders who will collaboratively aid Kinarm as the company scales its solutions. Vivian-Scott adds, “What we gained was not specific knowledge that can be encoded into our product, but direction. Quite frankly, most other programs are not ‘sufficiently vested’ in the participant’s business/opportunity to be able to offer such feedback.”

“I am grateful for Festo’s pioneering work in supporting our efforts to find globally disruptive applications and startups in such a human-centered field as healthcare, including life science, biotech, and medical devices,” said Fady Saad of MassRobotics.

“I am impressed with the quality of applications we received and with the unique structure of the program, which allowed us to select such innovative companies and match them with world-class advisors,” said Festo’s Alfons Riek. “Certainly, we are excited about the networking opportunities opened up for these companies and about presenting them to the world as great examples of the power of utilizing robotics in healthcare.”

MassRobotics, Festo, and additional corporations plan to launch the second edition of the program by July 2022 to build on the program’s momentum and impact.

ABOUT MassRobotics: MassRobotics is the result of the collective work of a group of engineers, rocket scientists, and entrepreneurs with a shared vision to create an innovation hub and startup cluster focused on the needs of the robotics and IoT community. MassRobotics’ mission is to help create and scale the next generation of successful robotics and connected devices companies by providing entrepreneurs and innovative robotics/automation startups with the workspace and resources they need to develop, prototype, test, and commercialize their products and solutions. See www.massrobotics.org for details.

About Festo: Festo is a global player and an independent family-owned company with headquarters in Esslingen am Neckar, Germany. Festo has set standards in industrial automation technology and technical education ever since its establishment, thereby making a contribution to the sustainable development of the environment, the economy, and society. The company supplies pneumatic and electrical automation technology to 300,000 customers of factory and process automation in over 35 industries. The LifeTech sector with medical technology and laboratory automation is becoming increasingly important. The products and services are available in 176 countries. With about 20,000 employees in over 250 branch offices in 61 countries worldwide, Festo achieved a turnover of around €2.84 billion in 2020. Each year around 8 % of this turnover is invested in research and development. In this learning company, 1.5 % of turnover is invested in basic and further training. Festo Didactic SE is a leading provider of technical education and training and offers its customers worldwide comprehensive digital and physical learning solutions in the industrial environment.

Intelligent robotics for laundries closes automation gap

The textile and garment industry is facing major challenges with current supply chain and energy issues. Its recovery is also threatened by factors that hinder production, such as labour and equipment shortages, which put companies under additional pressure. The competitiveness of the industry, especially in a global context, depends on how the affected companies respond to these conditions. One solution is to move the production of clothing back to Europe in an economically viable way. Shorter transport routes and the associated significant savings in transport costs and greenhouse gas emissions speak in favour of this. On the other hand, the higher wage costs and the prevailing shortage of skilled workers in Europe must be compensated for, which calls for further automation of textile processing. The German deep-tech start-up sewts GmbH from Munich has focused on the great potential of this task: it develops solutions that enable robots – much like humans – to anticipate how a textile will behave and to adapt their movements accordingly.

In the first step, sewts has set its sights on an application for large industrial laundries. With a system that uses both 2D and 3D cameras from IDS Imaging Development Systems GmbH, the young company is automating one of the last remaining manual steps in large-scale industrial laundries: the unfolding process. Although 90% of the process steps in industrial washing are already automated, the remaining manual operations account for 30% of labour costs. The potential savings through automation are therefore enormous at this point.

Application

It is true that industrial laundries already operate in a highly automated environment to handle the large volumes of laundry. Among other things, the folding of laundry is done by machines. However, each of these machines usually requires an employee to manually spread out the laundry and feed it in without creases. This monotonous and strenuous loading of the folding machines has a disproportionate effect on personnel costs. In addition, qualified workers are difficult to find, which often affects capacity utilisation and thus the profitability of industrial laundries. The seasonal nature of the business also requires a high degree of flexibility. sewts uses IDS cameras as the image-processing components of a new type of intelligent system that can now be used to automate individual steps, such as sorting dirty textiles or feeding laundry into folding machines.

“The particular challenge here is the malleability of the textiles,” explains Tim Doerks, co-founder and CTO. While automating the processing of solid materials, such as metals, is comparatively unproblematic with the help of robotics and AI, available software solutions and conventional image processing often reach their limits when it comes to easily deformable materials. Accordingly, commercially available robots and gripping systems have so far been able to perform even simple operations, such as gripping a towel or a piece of clothing, only inadequately.

Teledyne FLIR Introduces Hadron 640R Dual Thermal-Visible Camera for Unmanned Systems

GOLETA, Calif. and ORLANDO, Fla. ― Teledyne FLIR, part of Teledyne Technologies Incorporated, today announced the release of its high-performance Hadron 640R combined radiometric thermal and visible dual camera module. The Hadron 640R design is optimized for integration into unmanned aircraft systems (UAS), unmanned ground vehicles (UGV), robotic platforms, and emerging AI-ready applications where battery life and run time are mission critical.

The 640 x 512 resolution Boson longwave infrared (LWIR) thermal camera inside the Hadron 640R can see through total darkness, smoke, most fog, and glare, and provides temperature measurements for every pixel in the scene. The addition of the high-definition 64 MP visible camera enables the Hadron 640R to provide both thermal and visible imagery compatible with today’s on-device processors for AI and machine-learning applications at the edge.

“The Hadron 640R provides integrators the opportunity to deploy a high-performance dual-camera module into a variety of unmanned form factors, from UAS to UGV, thanks to its incredibly small size, weight, and power requirement,” said Michael Walters, vice president of product management, Teledyne FLIR. “It is designed to maximize efficiency, and its IP-54 rating protects the module from the intrusion of dust and water from the outside environment.”

The Hadron 640R reduces development costs and time-to-market for integrators and original equipment manufacturer (OEM) product developers by offering a complete system through a single supplier, Teledyne FLIR. This includes offering drivers for market-leading processors from NVIDIA, Qualcomm, and more, plus industry-leading integration support and service from a support team of experts. It also offers flexible 60 Hz video output via USB or MIPI compatibility. Hadron 640R is a dual use product and is classified under US Department of Commerce jurisdiction.

The Teledyne FLIR Hadron 640R is available for purchase globally from Teledyne FLIR and its authorized dealers. To learn more or to purchase, visit www.flir.com/hadron640r.

For an exclusive in-person first look at the Hadron 640R, please visit booth #2107 at AUVSI Xponential, April 26-28, 2022, in Orlando, Florida.

About Teledyne FLIR
Teledyne FLIR, a Teledyne Technologies company, is a world leader in intelligent sensing solutions for defense and industrial applications with approximately 4,000 employees worldwide. Founded in 1978, the company creates advanced technologies to help professionals make better, faster decisions that save lives and livelihoods. For more information, please visit www.teledyneflir.com or follow @flir.

About Teledyne Technologies
Teledyne Technologies is a leading provider of sophisticated digital imaging products and software, instrumentation, aerospace and defense electronics, and engineered systems. Teledyne’s operations are primarily located in the United States, the United Kingdom, Canada, and Western and Northern Europe. For more information, visit Teledyne’s website at www.teledyne.com.

An innovative tracked mobile robot for research and agricultural applications

Guest Article by Robo-dyne

Thanks to recent innovations in electronics and mechatronics, mobile robots and manipulators are becoming increasingly popular, above all in smart factories, where robotic platforms are used to move heavy loads and to store goods.

One of the questions that arises when developing a new mobile robot is: continuous tracks or wheels? When implementing a new industrial robotics application, it can be very difficult to choose correctly between wheels and tracks, because each locomotion system offers different performance. The most important features to consider when designing a locomotion system are traction, the driving system (for example, Ackermann or skid-steering), the maximum ground pressure, and the suspension system.

If it is necessary to minimize ground pressure, a tracked solution is the right choice, since tracks are well suited to soft surfaces. On the other hand, wheels have a significant steering advantage over tracks, which translates into better manoeuvrability. Robo-dyne is a robotics company that provides tracked and wheeled robots that can be customized to the final requirements; this is particularly useful for companies that need to develop high-level applications on top of a mobile robotic platform and do not want to invest the money and time to build the mobile base themselves, because they aim to focus on the final application.
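To make the ground-pressure argument concrete, here is a minimal back-of-the-envelope sketch in Python. The mass and contact areas are illustrative assumptions, not Robo-dyne figures; the point is simply that the much larger contact patch of tracks yields a far lower pressure on soft ground.

    # Rough comparison of ground pressure for a wheeled vs. a tracked robot.
    # All numbers below are illustrative assumptions, not manufacturer data.

    G = 9.81  # gravitational acceleration, m/s^2

    def ground_pressure(mass_kg, contact_area_m2):
        """Average ground pressure in kilopascals: weight divided by contact area."""
        return mass_kg * G / contact_area_m2 / 1000.0

    mass = 120.0                 # assumed robot mass in kg
    wheel_area = 4 * 0.0030      # four wheels, ~30 cm^2 contact patch each
    track_area = 2 * 0.12        # two tracks, ~0.12 m^2 contact area each

    print(f"wheels: {ground_pressure(mass, wheel_area):.0f} kPa")   # ~98 kPa
    print(f"tracks: {ground_pressure(mass, track_area):.1f} kPa")   # ~4.9 kPa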

Robo-dyne designed MaXXII-S, the first electric mobile robot with skid-steering and rubber tracks powered by AGM batteries, built around an innovative undercarriage configuration that includes a large rear wheel and a pair of smaller front idler wheels pointing forward. This configuration allows the robot to keep its balance when additional tools, e.g. lawn-mower units, weed cutters or rotary hoes, are mounted on its rear side, and it can also overcome obstacles larger than two-thirds of the rear wheel’s diameter, since the upper front wheel leans on the obstacle and provides extra area for grip. MaXXII-S is open source and runs on ROS with C++ libraries, so that researchers all over the world can use it to develop their industrial applications.

The central idler wheels provide a better pressure distribution on the ground. A passive suspension system is used to minimize the vibrations transmitted to the main frame by track-terrain interaction. In particular, MaXXII-S adopts a set of three shock absorbers with preload adjustment, and their elastic coefficient can be changed to better adapt the system configuration to the terrain. This electric skid-steering tracked mobile robot and its undercarriage are already available on the market, finding use in a vast range of agricultural applications, such as plant monitoring or weed cutting, and also in industrial settings to move goods in warehouses with uneven ground, as reported on the manufacturer’s official website robo-dyne.com.
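Because MaXXII-S runs on ROS, developers typically drive the base by publishing velocity commands. The following is a minimal Python sketch using rospy; the /cmd_vel topic name and the velocity values are conventional assumptions rather than details from Robo-dyne’s documentation, so they may need to be adapted to the actual driver.

    #!/usr/bin/env python
    # Minimal sketch: drive a ROS-based skid-steering robot by publishing velocity
    # commands. The topic name /cmd_vel is a common convention and is assumed here.
    import rospy
    from geometry_msgs.msg import Twist

    def main():
        rospy.init_node('maxxii_drive_demo')
        pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
        rate = rospy.Rate(10)  # 10 Hz command stream
        cmd = Twist()
        cmd.linear.x = 0.2     # forward speed in m/s (assumed safe value)
        cmd.angular.z = 0.3    # yaw rate in rad/s; skid-steering turns by differential track speed
        while not rospy.is_shutdown():
            pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        main()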

In Celebration of National Robotics Week, iRobot® Launches the Create® 3 Educational Robot

Robot’s Smartest Developer Platform, Now with ROS 2 and Python Support

BEDFORD, Mass., April 5, 2022 /PRNewswire/ — iRobot Corp. (NASDAQ: IRBT), a leader in consumer robots, today is expanding its educational product lineup with the launch of the Create® 3 educational robot – the company’s most capable developer platform to date. Based on the Roomba® i3 Series robot vacuum platform, Create 3 provides educators and advanced makers with a reliable, out-of-the-box alternative to costly and labor-intensive robotics kits that require assembly and testing. Instead of cleaning people’s homes, the robot is designed to promote higher-level exploration for those seeking to advance their education or career in robotics.

The launch of Create 3 coincides with National Robotics Week, which began April 2 and runs through April 10, 2022. National Robotics Week, founded and organized by iRobot, is a time to inspire students about robotics and STEM-related fields, and to share the excitement of robotics with audiences of all ages through a range of in-person and virtual events.

“iRobot is committed to delivering STEM tools to all levels of the educational community, empowering the next generation of engineers, scientists and enthusiasts to do more,” said Colin Angle, chairman and CEO of iRobot. “The advanced capabilities we’ve made available on Create 3 enable higher-level students, educators and developers to be in the driver’s seat of robotics exploration, allowing them to one day discover new ways for robots to benefit society.”

With ROS 2 support, forget about building the platform, and focus on your application: 
The next generation of iRobot’s affordable and trusted all-in-one mobile robot development platform, Create 3 brings a variety of new functionalities to users, including compatibility with ROS 2, industry-standard software for roboticists worldwide. Robots require many different components, such as actuators, sensors and control systems, to communicate with each other in order to work. ROS 2 enables this communication, allowing students to speed up the development of their projects by focusing more on their core application rather than the platform itself. Learning ROS 2 also gives students valuable experience that many companies are seeking from robotics developers.
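As a small illustration of what this looks like in practice (a sketch, not official iRobot sample code), a ROS 2 node written with rclpy can command a Create 3-style robot by publishing standard velocity messages; the /cmd_vel topic name and the speed values follow common ROS 2 conventions and are assumptions here.

    # Minimal ROS 2 (rclpy) sketch: publish velocity commands to a Create 3-style robot.
    # The /cmd_vel topic and the speeds used are conventional assumptions.
    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist

    class DriveDemo(Node):
        def __init__(self):
            super().__init__('create3_drive_demo')
            self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
            self.timer = self.create_timer(0.1, self.send_command)  # 10 Hz

        def send_command(self):
            msg = Twist()
            msg.linear.x = 0.1   # slow forward motion, m/s
            msg.angular.z = 0.5  # gentle turn, rad/s
            self.pub.publish(msg)

    def main():
        rclpy.init()
        node = DriveDemo()
        try:
            rclpy.spin(node)
        finally:
            node.destroy_node()
            rclpy.shutdown()

    if __name__ == '__main__':
        main()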

Expand your coding skills even further with Python support:
iRobot also released a Python Web Playground for its iRobot Root® and Create 3 educational robots, providing a bridge for beginners to begin learning more advanced programming skills outside of the iRobot Coding App. Python, a commonly used coding language, enables users to broaden the complexity of projects that they work on. The iRobot Education Python Web Playground allows advanced learners and educators to program the iRobot Root and Create 3 educational robots with a common library written in Python. This provides users with a pathway to learn a new coding language, opening the door to further innovation and career development.
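To give a flavour of the kind of program such a playground enables, here is a short, purely hypothetical sketch; the module and method names below (educational_robot, connect, move_forward, turn_left) are invented for illustration and are not the actual iRobot library API.

    # Hypothetical example only: the module and method names below are invented
    # to illustrate the style of program a Python robot playground enables.
    from educational_robot import connect  # hypothetical helper, not a real package

    def draw_square(robot, side_cm=20):
        """Drive the robot along a square path, one side at a time."""
        for _ in range(4):
            robot.move_forward(side_cm)  # hypothetical movement call
            robot.turn_left(90)          # hypothetical turn call, in degrees

    robot = connect("my-robot")          # hypothetical connection by robot name
    draw_square(robot)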

With more smarts, Create 3 lets you do more:
As a connected robot, Create 3 comes equipped with Wi-Fi, Ethernet-over-USB host, and Bluetooth. Create 3 is also equipped with a suite of intelligent technology, including an inertial measurement unit (IMU), optical floor tracking sensor, wheel encoders, and infrared sensors for autonomous localization, navigation, and telepresence applications. Additionally, the robot includes cliff, bump and slip detection, along with LED lights and a speaker.
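Because those sensors are exposed to developers as data streams, reading them is largely a matter of subscribing to the right topics. The short rclpy sketch below listens to an IMU stream; the /imu topic name and message type are assumptions based on common ROS 2 conventions rather than confirmed Create 3 documentation.

    # Sketch: subscribe to an IMU data stream from a ROS 2 robot.
    # The /imu topic name is assumed; check the robot's published topic list.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Imu

    class ImuListener(Node):
        def __init__(self):
            super().__init__('imu_listener')
            self.create_subscription(Imu, '/imu', self.on_imu, 10)

        def on_imu(self, msg):
            # Print the yaw rate reported by the gyroscope (z axis), in rad/s.
            self.get_logger().info(f'angular velocity z: {msg.angular_velocity.z:.3f}')

    def main():
        rclpy.init()
        rclpy.spin(ImuListener())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()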

A 3D simulation of Create 3 is also available using Ignition Gazebo for increased access to robotics education and research.

Create 3 Pricing and Availability
Create 3 is available immediately in the US and Canada for $299 USD and $399 CAD. It will be available in EMEA through authorized distributors in the coming months. Additional details can be found at https://edu.irobot.com/what-we-offer/create3.

iRobot Education Python Web Playground Availability
The iRobot Education Python Web Playground can be accessed in-browser at python.irobot.com.

Robots as helpers in the lettuce harvest

Robot solution for automating the lettuce harvest

Lettuce is a valuable crop in Europe and the USA. But labor shortages make it difficult to harvest this field vegetable, as sourcing sufficient seasonal labor to meet harvesting commitments is one of the sector’s biggest challenges. Moreover, with wage inflation rising faster than producer prices, margins are very tight. In England, agricultural technology and machinery experts are working with IDS Imaging Development Systems GmbH (Obersulm, Germany) to develop a robotic solution to automate lettuce harvesting.

The team is working on a project funded by Innovate UK and includes experts from the Grimme agricultural machinery factory, the Agri-EPI Centre (Edinburgh, UK), Harper Adams University (Newport, UK), the Centre for Machine Vision at the University of the West of England (Bristol), and two of the UK’s largest salad producers, G’s Fresh and PDM Produce.

Within the project, existing leek-harvesting machinery is being adapted to lift the lettuce clear of the ground and grip it between pinch belts. The lettuce’s outer, or ‘wrapper’, leaves will be mechanically removed to expose the stem. Machine vision and artificial intelligence are then used to identify a precise cut point on the stem to neatly separate the head of lettuce.
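The project itself relies on machine learning for this step, but a heavily simplified classical-vision sketch illustrates the basic idea of proposing a cut point once the head has been segmented. Everything below (the colour thresholds, the assumption that the largest green blob is the lettuce head, and that the stem sits directly beneath it) is an illustrative assumption, not the consortium’s algorithm.

    import cv2
    import numpy as np

    def find_cut_point(bgr):
        """Return an (x, y) pixel assumed to lie on the stem just below the head."""
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))      # rough 'green' range
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        head = max(contours, key=cv2.contourArea)                   # assume largest blob = head
        x, y, w, h = cv2.boundingRect(head)
        return (x + w // 2, y + h)                                  # bottom centre of the head

    image = cv2.imread("lettuce.png")                               # placeholder file name
    if image is not None:
        print(find_cut_point(image))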

“The cutting process of an iceberg lettuce is the most technically complicated step in the process to automate, according to colleagues from the G’s Fresh subsidiary Salad Harvesting Services Ltd.,” explains IDS Product Sales Specialist Rob Webb. “The prototype harvesting robot being built incorporates a GigE Vision camera from the uEye FA family. It is considered particularly robust and is therefore ideally suited to demanding environments. As this is an outdoor application, a housing with IP65/67 protection is required here,” Rob Webb points out.

GV-5280FA

The choice fell on the GV-5280FA-C-HQ model with the compact 2/3″ global shutter CMOS sensor IMX264 from Sony. “The sensor was chosen mainly because of its versatility. We don’t need full resolution for AI processing, so sensitivity can be increased by binning. The larger sensor format also means that wide-angle optics are not needed,” Rob Webb summarizes the requirements. In the application, the CMOS sensor impresses with excellent image quality, light sensitivity and an exceptionally high dynamic range, delivering almost noise-free, very high-contrast 5 MP images in 5:4 format at 22 fps – even under fluctuating lighting conditions. The extensive range of accessories, such as lens tubes and trailing cables, is just as rugged as the camera housing and the screwable connectors (an 8-pin M12 connector with X-coding and an 8-pin Binder connector). Another advantage: camera-internal functions such as pixel pre-processing, LUT or gamma correction reduce the required computing power to a minimum.
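As a rough sanity check of those figures (the exact sensor resolution and link overhead below are approximations, not IDS specifications), the arithmetic shows why 22 fps at the full 5 MP resolution already sits near the limit of a Gigabit Ethernet link, and how much pixel count 2×2 binning trades away for its sensitivity gain.

    # Back-of-the-envelope figures; resolution and overhead values are approximations.
    width, height = 2456, 2054          # approximate full resolution of a 5 MP sensor
    fps = 22
    bytes_per_pixel = 1                 # assuming 8-bit output

    full_rate = width * height * fps * bytes_per_pixel / 1e6
    print(f"full resolution: {full_rate:.0f} MB/s")        # ~111 MB/s, close to GigE's usable payload

    binned_pixels = (width // 2) * (height // 2)           # 2x2 binning: a quarter of the pixels,
    print(f"2x2 binned frame: {binned_pixels / 1e6:.2f} MP")  # each combining the light of four photosites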

The prototype of the harvesting robot will be used for field trials in England towards the end of the 2021 season.

“We are delighted to be involved in the project and look forward to seeing the results. We are convinced of its potential to automate and increase the efficiency of the lettuce harvest, not only in terms of compensating for the lack of seasonal workers,” affirms Jan Hartmann, Managing Director of IDS Imaging Development Systems GmbH.

Prototype lettuce harvesting robot of the Agri-EPI Centre (UK)

The challenges facing the agricultural sector are indeed complex. According to a forecast by the United Nations Food and Agriculture Organization (FAO), agricultural productivity will have to increase by almost 50 percent by 2050 compared to 2012 due to the dramatic increase in population. Such a yield expectation poses an enormous challenge for the agricultural industry, which is still in its infancy in terms of digitalization compared to other sectors and is already under high pressure to innovate in view of climatic changes and labor shortages. The agriculture of the future is based on networked devices and automation. Cameras are an important building block, and artificial intelligence is a central technology here. Smart applications such as harvesting robots can make a significant contribution.

iRobot Releases Genius 4.0 Home Intelligence: Doubles the Intelligence for Roomba® i3 and i3+ Robot Vacuums and More

Updates Include Imprint® Smart Mapping for Roomba i3 Series, Siri Commands, Clothing & Towel Detection for Roomba j7 Series

BEDFORD, Mass., March 17, 2022 /PRNewswire/ — iRobot Corp. (NASDAQ: IRBT), a leader in consumer robots, today announced that it has started rolling out its iRobot Genius 4.0 Home Intelligence software update to Wi-Fi connected Roomba® robot vacuum and Braava jet® robot mop customers. The company is on a mission to design superior cleaning experiences that go beyond just smart. iRobot Genius 4.0 is the next step in transforming the smart home into a thoughtful home.

With the iRobot Genius 4.0 update and Imprint Smart Mapping, customers can now create customizable Smart Maps for their Roomba i3 and i3+ robot vacuums, enabling them to send their robot to clean specific rooms via the iRobot Home app or through their preferred voice assistant.

iRobot Genius 4.0 includes updates that provide powerful new functionality, like Imprint® Smart Mapping for Roomba i3 and i3+ customers, as well as customization and convenience features like Room-Specific Cleaning Preferences, Siri Shortcut Integration, Child & Pet Lock, and Do Not Disturb. The update also expands the list of objects the Roomba j Series can recognize and avoid to include clothing and towels.

The company also announced that the Roomba i3 and i3+ will be sold as the Roomba i3 EVO and Roomba i3+ EVO robot vacuums in the Americas moving forward. These Roomba i3 models automatically include the Imprint Smart Mapping update and are being sold at a new, lower retail price, starting at $349 USD for the Roomba i3 EVO and $549 USD for the Roomba i3+ EVO.

“The beauty of iRobot Genius is that our robots get smarter over time and continuously provide customers with new ways to clean where, when and how they want,” said Keith Hartsfield, chief product officer at iRobot. “As iRobot develops new features and experiences, the updates are pushed out to customers’ robots at no cost. From the day a customer welcomes a Roomba robot vacuum or Braava jet robot mop into their home, they know that they’ll always benefit from new features and functionality. They are also getting a robot that works harder for them, so they don’t have to. With more than 60 million personalized recommendations provided to customers to date, our robots are proven to learn, respect and work around individual schedules and needs.”

The iRobot Genius 4.0 update delivers these thoughtful mapping, voice and app features across iRobot’s Wi-Fi connected robot lineup:

  • Tell Roomba i3 and i3+ to Clean the Rooms You Want: With double the intelligence provided by Imprint Smart Mapping, customers can now create customizable Smart Maps for their Roomba i3 and i3+ robot vacuums, enabling them to send their robot to clean specific rooms via the iRobot Home app or through their preferred voice assistant. They can also now receive estimated cleaning times and create cleaning routines based on their preferred schedules, rooms and automations. This update is available to Roomba i3 and i3+ customers in the Americas and APAC now and is expected to be available in EMEA by the end of Q3 2022.
  • Clean Each Room the Way You Want: Everyone’s home is unique, with individual rooms varying in size, flooring, furniture and traffic level. That’s why users who own Imprint Smart Mapping-capable robots will have more control of how their robot cleans with Room-Specific Cleaning Preferences. Need your Roomba to take an extra pass in the entryway where shoes are kept but quickly clean other rooms? No problem. Looking for your Braava jet m6 to dispense more cleaning solution when tackling the kitchen, but not in the hallway? You’ll be able to do that too.
  • Use Siri to Clean – Everywhere: With approximately 600 supported Alexa® and Google Assistant® commands, robots that can be scheduled to clean specific rooms with your voice, and the only robots that can be told to clean specific areas like around the kitchen counter, iRobot is expanding its market-leading voice capabilities to include Siri. After all, you should have choices when it comes to verbally communicating with your robot, and with Siri Shortcut Integration, you’ll get just that. Like existing Alexa® or Google Assistant® shortcuts in the iRobot Home App, owners of Wi-Fi connected Roomba robot vacuums and Braava jet robot mops who use an iOS device will have the option to connect their robot to Siri in the iRobot Home App. Want to vacuum your whole home? Set up your own custom phrase, and just say “Hey Siri, ask Roomba to clean everywhere.”
  • No More Accidental Starts: Has Fido ever accidentally started your robot? Or maybe a curious child unintentionally found the robot’s “Clean” button? Problem solved with Child & Pet Lock via the iRobot Home App, an option that temporarily disables the physical “Clean” button on Wi-Fi connected Roomba and Braava jet robots. Once activated, the robots can only be controlled through the iRobot Home App, eliminating the need to quickly end an accidental start.
  • Never Be Distracted or Woken Up: Introducing iRobot’s NAP Commitment (Never Awake People or Pets). With the Do Not Disturb feature, customers can use the iRobot Home App to define windows of time in which the robot should not run, whether that be when someone is asleep or in a meeting. Do Not Disturb provides peace of mind that your robot will respect life’s quiet times. This feature has been rolled out to existing customers globally.
  • Less Clean-Up Before You Clean Up: Having already avoided 3 million objects in people’s homes since being introduced last year, the Roomba j7 and j7+ will also be able to detect and avoid clothing and towels left on the floor, letting customers feel confident the job will get done without needing to pick them up beforehand. These objects expand the visual vocabulary of the Roomba j7 and j7+, which already recognizes and avoids shoes, socks, cords, headphones, and pet waste. iRobot will continue enabling the Roomba j7 Series to identify and avoid even more objects that might prevent mission completion over time.

Availability:

All iRobot Genius 4.0 software updates will be rolling out globally to Wi-Fi connected Roomba and Braava jet customers through the end of June 2022 with the exception of Do Not Disturb, which is now available globally, and Imprint Smart Mapping for Roomba i3 and i3+, which is now available for customers in the Americas and APAC regions – and expected to be available for customers in EMEA by the end of Q3 2022.

The Roomba i3+ EVO robot vacuum with Clean Base® Automatic Dirt Disposal is available for purchase immediately in the U.S. and Canada starting at $549 USD on www.irobot.com and at select retailers. The Roomba i3 EVO robot vacuum can also be purchased without the Clean Base starting at $349 on www.irobot.com and at select retailers.

For more information or questions on robot software updates, please visit: https://homesupport.irobot.com/s/article/550.

Web-based VEXcode EXP

VEXcode EXP is now available in a web-based version for Chrome browsers. The web-based version can be reached by navigating to codeexp.vex.com and contains all of the features and functionality of VEXcode EXP, but without the need to download or install anything! The new web-based version of VEXcode makes it easier for teachers and students to access projects from anywhere, at any time, on any device – including Chromebooks!

In addition to the built-in Help and Tutorials, the STEM Library contains additional resources and support for using web-based VEXcode EXP. Within the STEM Library you can find device-specific articles for connecting to web-based VEXcode EXP, loading and saving projects, updating firmware, and more. View the VEXcode EXP section of the STEM Library to learn more.

Web-based versions of VEXcode IQ and VEXcode V5 are in the works and will be available soon.

Draper Teaches Robots to Build Trust with Humans – new research

New study shows methods robots can use to self-assess their own performance

CAMBRIDGE, MASS. (PRWEB) MARCH 08, 2022

Establishing human-robot trust isn’t always easy. Beyond the fear of automation going rogue, robots simply don’t communicate how they are doing. When this happens, establishing a basis for humans to trust robots can be difficult.

Now, research is shedding light on how autonomous systems can foster human confidence in robots. Largely, the research suggests that humans have an easier time trusting a robot that offers some kind of self-assessment as it goes about its tasks, according to Aastha Acharya, a Draper Scholar and Ph.D. candidate at the University of Colorado Boulder.

Acharya said we need to start considering what communications are useful, particularly if we want to have humans trust and rely on their automated co-workers. “We can take cues from any effective workplace relationship, where the key to establishing trust is understanding co-workers’ capabilities and limitations,” she said. A gap in understanding can lead to improper tasking of the robot, and subsequent misuse, abuse or disuse of its autonomy.

To understand the problem, Acharya joined researchers from Draper and the University of Colorado Boulder to study how autonomous robots that use learned probabilistic world models can compute and express self-assessed competencies in the form of machine self-confidence. Probabilistic world models account for uncertainty in events and actions when predicting the likelihood of future outcomes.

In the study, the world models were designed to enable the robots to forecast their behavior and report their own perspective about their tasking prior to task execution. With this information, a human can better judge whether a robot is sufficiently capable of completing a task, and adjust expectations to suit the situation.

To demonstrate their method, researchers developed and tested a probabilistic world model on a simulated intelligence, surveillance and reconnaissance mission for an autonomous uncrewed aerial vehicle (UAV). The UAV flew over a field populated by a radio tower, an airstrip and mountains. The mission was designed to collect data from the tower while avoiding detection by an adversary. The UAV was asked to consider factors such as detections, collections, battery life and environmental conditions to understand its task competency.
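The general mechanism can be sketched in a few lines: sample many mission rollouts from a probabilistic world model and report the fraction that meets the operator’s success criterion as a self-assessed competency. The toy model, probabilities and threshold below are invented for illustration and are not the study’s actual UAV model or metric.

    import random

    class ToyWorldModel:
        """Stand-in for a learned probabilistic world model (illustrative only).
        Each step either collects useful data or risks detection by the adversary."""
        def step(self):
            collected = random.random() < 0.6      # assumed chance of a useful collection
            detected = random.random() < 0.05      # assumed chance of being detected
            return (1.0 if collected else 0.0) - (10.0 if detected else 0.0)

    def self_confidence(model, reward_threshold, horizon=20, n_rollouts=2000):
        """Self-assessed competency: estimated probability that the total mission
        reward will meet or exceed the threshold the operator cares about."""
        successes = 0
        for _ in range(n_rollouts):
            total = sum(model.step() for _ in range(horizon))
            successes += total >= reward_threshold
        return successes / n_rollouts

    print(f"self-assessed probability of success: {self_confidence(ToyWorldModel(), 8.0):.2f}")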

Findings were reported in the article “Generalizing Competency Self-Assessment for Autonomous Vehicles Using Deep Reinforcement Learning,” where the team addressed several important questions. How do we encourage appropriate human trust in an autonomous system? How do we know that self-assessed capabilities of the autonomous system are accurate?

Human-machine collaboration lies at the core of a wide spectrum of algorithmic strategies for generating soft assurances, which are collectively aimed at trust management, according to the paper. “Humans must be able to establish a basis for correctly using and relying on robotic autonomy for success,” the authors said. The team behind the paper includes Acharya’s advisors Rebecca Russell, Ph.D., from Draper and Nisar Ahmed, Ph.D., from the University of Colorado Boulder.

The research into autonomous self-assessment is based upon work supported by DARPA’s Competency-Aware Machine Learning (CAML) program.

In addition, funds for this study were provided by the Draper Scholar Program. The program gives graduate students the opportunity to conduct their thesis research under the supervision of both a faculty adviser and a member of Draper’s technical staff, in an area of mutual interest. Draper Scholars’ graduate degree tuition and stipends are funded by Draper.

Since 1973, the Draper Scholar Program, formerly known as the Draper Fellow Program, has supported more than 1,000 graduate students pursuing advanced degrees in engineering and the sciences. Draper Scholars are from both civilian and military backgrounds, and Draper Scholar alumni excel worldwide in the technical, corporate, government, academic, and entrepreneurship sectors.

Draper

At Draper, we believe exciting things happen when new capabilities are imagined and created. Whether formulating a concept and developing each component to achieve a field-ready prototype, or combining existing technologies in new ways, Draper engineers apply multidisciplinary approaches that deliver new capabilities to customers. As a nonprofit engineering innovation company, Draper focuses on the design, development and deployment of advanced technological solutions for the world’s most challenging and important problems. We provide engineering solutions directly to government, industry and academia; work on teams as prime contractor or subcontractor; and participate as a collaborator in consortia. We provide unbiased assessments of technology or systems designed or recommended by other organizations—custom designed, as well as commercial-off-the-shelf. Visit Draper at http://www.draper.com.

Maicat, the Cybernetic Companion Cat

Macroact, the personal robotics development lab operating out of South Korea, has released its first AI-based companion pet. Designed for education and entertainment, Maicat is now live on Kickstarter after years of design and testing.

CAPABLE – Ready to use straight out of the box, Maicat is an autonomous robot pet. Using its sensors, Maicat is capable of detecting obstacles and walking around the house on its own. With its laser range finder and gyroscope, it is able to adjust for thick carpets and door frames.

CARING – Maicat has facial, voice-pattern and emotion recognition software. When paired with the AI learning algorithm, Maicat is able to identify its owners and react to their moods.

CONNECTED – Integrated IoT connectivity allows you to add Maicat’s sensors and capabilities to your existing home network. The Maicat SDK will allow the creation of apps that let Maicat talk to most modern IoT devices.

CREATIVE – Maicat is an excellent platform to get students interested in STEM topics. With an app and the Maicat SDK, students can study AI, programming, robotics, facial recognition…the list goes on and on.

CELEBRATED – Maicat was a CES 2022 Innovation Award nominee for its IoT integration and support. That’s more than you can say for most other pets.

CUDDLY – Maicat is small and light enough to pick up and pet. Sensors within its body let Maicat know it’s being petted, and Maicat will respond lovingly.

To learn more about the Maicat project, check out the promotional links below.

Meet Maicat 

Maicat Kickstarter 

About Macroact Inc. 

Macroact is an AI and robotics startup that develops machine learning solutions for adaptive robots. The company focuses on implementing artificial intelligence throughout the whole robot development process to reduce the time and cost of robot development and to enhance robots’ ability to learn. Its core technology is Maidynamics, an autonomous robot control solution. Maicat is its first adaptive robot.