All-in-One Embedded Vision Platform with New Tools and Functions
(PresseBox) (Obersulm) At IDS, image processing with artificial intelligence means more than just running the AI directly on cameras and giving users enormous design freedom through vision apps. Rather, with the embedded vision platform IDS NXT ocean, customers receive all the necessary, coordinated tools and workflows to realise their own AI vision applications without prior knowledge and run them directly on IDS NXT industrial cameras. Now the next free software update for the AI package follows. In addition to user-friendliness, it focuses on making the artificial intelligence clear and comprehensible for the user.
An all-in-one system such as IDS NXT ocean, which provides integrated computing power and artificial intelligence through the "deep ocean core" developed by IDS, is ideally suited for getting started with AI vision. It requires no prior knowledge of deep learning or camera programming. The current software update makes setting up, commissioning and controlling the intelligent cameras in IDS NXT cockpit even easier. Among other things, it adds an ROI editor with which users can freely draw the image regions to be evaluated and configure, save and reuse them as arbitrary grids with many parameters. In addition, the new Attention Maps and Confusion Matrix tools illustrate how the AI in the cameras works and which decisions it makes. This makes the AI more transparent and helps users assess the quality of a trained neural network and improve it through targeted retraining. Data security also plays an important role in the industrial use of artificial intelligence. As of the current update, communication between IDS NXT cameras and plant components can therefore be encrypted via HTTPS.
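A small, generic sketch illustrates what a confusion matrix like the one reported by the tool described above conveys. The snippet below is not IDS NXT code; it uses scikit-learn purely for illustration, and the class names and labels are invented.

```python
# Generic confusion-matrix illustration (not IDS NXT code).
# Class names and labels below are invented for the example.
from sklearn.metrics import confusion_matrix

# Ground-truth labels vs. predictions from a (hypothetical) trained network
y_true = ["ok", "ok", "scratch", "dent", "scratch", "ok", "dent"]
y_pred = ["ok", "scratch", "scratch", "dent", "ok", "ok", "dent"]

labels = ["ok", "scratch", "dent"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)  # rows = true class, columns = predicted class

# Off-diagonal entries show which classes the network confuses, i.e. where
# targeted retraining with additional images is likely to help most.
```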
Getting started easily with the IDS NXT ocean Creative Kit
Anyone who wants to test the industrial-grade embedded vision platform IDS NXT ocean and evaluate its potential for their own applications should take a look at the IDS NXT ocean Creative Kit. It provides customers with all the components they need to create, train and run a neural network. In addition to an IDS NXT industrial camera with a 1.6 MP Sony sensor, lens, cable and tripod adapter, the package includes six months of access to the AI training software IDS NXT lighthouse. IDS is currently offering the kit at a special promotional price. Promotion page: https://de.ids-imaging.com/ids-nxt-ocean-creative-kit.html.
The true AI vision robotic arm powered by Jetson Nano is affordable and open-source, making your AI creativity into reality.
In recent years, more and more makers, students, enthusiasts, and engineers have been learning artificial intelligence technology, and many interesting AI projects are being developed as well. Hiwonder brings the power of AI to robots with JetMax, a true AI robotic arm built to enhance the AI and robotics learning experience for everyone.
JetMax features deep learning and computer vision abilities. It is equipped with a Jetson Nano and an HD wide-angle camera, which enable it to interact efficiently with the perceived environment and empower you to turn your AI creativity into reality.
Being an AI vision robotic arm, JetMax not only features AI vision but has a clever brain as well. It supports you in learning coding, researching AI robotics applications, and bringing your AI ideas to life, and it can be your helping hand in a lab, university, or workshop.
Powered by NVIDIA Jetson Nano
The open-source JetMax robot arm is powered by the Jetson Nano, featuring deep learning, computer vision and more. The Jetson Nano has the performance needed to run modern AI workloads, giving the JetMax robot arm advanced AI capabilities.
Supports multiple types of EoAT (End-of-Arm Tooling)
Supporting multiple types of end-of-arm tooling such as grippers, suction cups, pen holders and electromagnets, JetMax gives you many options for creative applications.
Open-Source
JetMax is an open-platform hardware product. We contribute numerous project sources and AI tutorials. In addition, the API is fully open for customization and supports languages such as Python, C++ and Java.
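As a rough idea of what programming such an arm through an open Python API can look like, here is a hypothetical pick-and-place sketch. The JetMaxArm class and its move_to/set_gripper methods are invented placeholders, not the real JetMax SDK, whose documentation should be consulted for the actual call names.

```python
# Hypothetical pick-and-place sketch; JetMaxArm and its methods are invented
# placeholders for the real JetMax SDK, which should be consulted directly.
import time


class JetMaxArm:
    """Stand-in for the vendor SDK: drives the arm in Cartesian space."""

    def move_to(self, x: float, y: float, z: float) -> None:
        print(f"moving to ({x}, {y}, {z}) mm")  # real SDK would command servos

    def set_gripper(self, closed: bool) -> None:
        print("gripper", "closed" if closed else "open")


arm = JetMaxArm()
arm.move_to(150, 0, 50)   # hover above the (assumed) object position
arm.move_to(150, 0, 10)   # descend
arm.set_gripper(True)     # grasp
time.sleep(0.5)
arm.move_to(150, 0, 80)   # lift
arm.move_to(0, 150, 80)   # move to drop zone
arm.set_gripper(False)    # release
```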
The Ameca humanoid robot combines AI with AB (Artificial Body) for relatable, natural human gestures and upgradable modular mechanics via a cloud-managed API development tool kit
LAS VEGAS—December 3rd, 2021—CES 2022 — Engineered Arts, a UK company that creates the most memorable interactive character experiences, today announced its latest humanoid robot named Ameca (pron. Am-ek-uh). Through 20 years of increasing robotics innovation, the Ameca series features ground-breaking advancements in movement and natural gestures, intelligent interaction, and a future-proof software system designed to embrace artificial intelligence and computer vision with adaptive learning—giving users an API customization pathway never before available.
Engineered Arts’ Ameca humanoid robot will take center stage at CES 2022 in Las Vegas at the Great Britain and Northern Ireland Pavilion in Tech West Hall G (lower level of the Venetian Expo Center), at Booths 62502 & 62524 from January 5th to 8th.
“A humanoid robot will always instil an image of what the future may hold. Ameca represents a perfect platform to explore how our machines can live with, collaborate, and enrich our lives in tomorrow’s sustainable communities,” said Morgan Roe, Director of Operations at Engineered Arts. “Ameca integrates both AI with AB (artificial body) for advanced, iterative technologies that deliver superior motion and gestures, all housed in a human form and robotic visage for a non-threatening, gender-neutral integration into an inclusive society,” added Roe.
The Engineered Arts team can create any robot figure in as little as four months. All Engineered Arts Ameca, Mesmer series and RoboThespian robot creations are available for ownership or through an integrated end-to-end rental program for special limited engagements and showcases across the world. To learn more about the humanoid robots from Engineered Arts visit: https://www.engineeredarts.co.uk/
About Engineered Arts
Engineered Arts, Ltd. integrates a talented team of engineers and creatives, working together to produce technology that lives and breathes engagement, imagination, and entertainment. At the heart of its robotic humanoids is the Tritium operating system, a cloud-based operating system that drives robot animation, interaction, maintenance links and content distribution. For more information visit: https://www.engineeredarts.co.uk/
Mobile robot experts AgileX just announced the launch of LIMO – a ROS-based multi-modal car with four steering modes and open-source software that is perfect for ROS beginners as well as advanced programmers. This exciting new robotics platform has virtually unlimited applications for education, business and industry and is available now here.
LIMO is an incredibly versatile and multifunctional robotic platform for designing and programming robot AI. It uses the modular ROS 1 or ROS 2 frameworks to achieve many functional purposes including mapping, navigation, obstacle avoidance, path planning, and more for educational, commercial and industrial applications.
“At AgileX Robotics our vision is to enable all industries and individuals with the ability to improve productivity and efficiency through robot technology. Our latest product, LIMO, is a powerful yet easy-to-use mobile robotic platform that is perfect for learning ROS, completing tasks for business and education, and beginning a journey in the exciting world of robotics. We designed LIMO to be easy to use, with an intuitive programming method and open-source capabilities. It’s the best way to get started with robotic AI,” said the CEO of AgileX Robotics.
Four steering modes make LIMO substantially superior to other robots in its class. The available modes are: Omni Wheel Steering, Tracked Steering, Four-Wheel Differential Steering and Ackermann Steering. These advanced steering modes plus built-in 360° scanning LiDAR and RealSense infrared camera make the platform perfect for industrial and commercial tasks.
Equipped with four additional USB ports and powered by the NVIDIA Jetson Nano, LIMO can be fully customized with other hardware according to one’s needs. LIMO works with open-source ROS 1 and ROS 2; programming demos, ROS packages and simulation powered by Gazebo are supported as well. With these incredible features, LIMO can achieve precise self-localization, SLAM and V-SLAM mapping, route planning and autonomous obstacle avoidance, reverse parking, traffic light recognition, and more.
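To illustrate how a ROS 2 based platform like this is typically driven, the short rclpy node below publishes velocity commands. The /cmd_vel topic name and the use of geometry_msgs/Twist are common ROS conventions assumed for this sketch, not details confirmed by the announcement.

```python
# Minimal ROS 2 sketch: drive a mobile base by publishing Twist messages.
# Assumes the robot subscribes to /cmd_vel (a common but not guaranteed default).
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class SimpleDriver(Node):
    def __init__(self):
        super().__init__("simple_driver")
        self.pub = self.create_publisher(Twist, "/cmd_vel", 10)
        self.timer = self.create_timer(0.1, self.tick)  # 10 Hz command loop

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.2   # move forward at 0.2 m/s
        msg.angular.z = 0.1  # gentle left turn
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(SimpleDriver())


if __name__ == "__main__":
    main()
```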
LIMO, the world’s first multi-modal mobile robot with AI modules, is currently being launched via a Kickstarter campaign to reward early adopters with special deals and pricing. Learn more here: [LINK]
Researchers from Germany and Canada work on new AI methods for picking robots.
ISLANDIA, NY, July 7, 2021 — Production, warehouse, shipping – where goods are produced, stored, sorted or packed, picking also takes place. This means that several individual goods are removed from storage units such as boxes or cartons and reassembled. With the FLAIROP (Federated Learning for Robot Picking) project Festo and researchers from the Karlsruhe Institute of Technology (KIT), together with partners from Canada, want to make picking robots smarter using distributed AI methods. To do this, they are investigating how to use training data from multiple stations, from multiple plants, or even companies without requiring participants to hand over sensitive company data.
“We are investigating how the most versatile training data possible from multiple locations can be used to develop more robust and efficient solutions using artificial intelligence algorithms than with data from just one robot,” says Jonathan Auberle from the Institute of Material Handling and Logistics (IFL) at KIT. In the process, items are further processed by autonomous robots at several picking stations by means of gripping and transferring. At the various stations, the robots are trained with very different articles. At the end, they should be able to grasp articles from other stations that they have not yet learned about. “Through the approach of federated learning, we balance data diversity and data security in an industrial environment,” says the expert.
Powerful algorithms for industry and logistics 4.0
Until now, federated learning has been used predominantly in the medical sector for image analysis, where the protection of patient data is a particularly high priority. Consequently, there is no exchange of training data such as images or grasp points for training the artificial neural network. Only pieces of stored knowledge – the local weights of the neural network that tell how strongly one neuron is connected to another – are transferred to a central server. There, the weights from all stations are collected and optimized using various criteria. Then the improved version is sent back to the local stations and the process repeats. The goal is to develop new, more powerful algorithms for the robust use of artificial intelligence for industry and Logistics 4.0 while complying with data protection guidelines.
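The weight-aggregation step described above can be sketched in a few lines of NumPy. This is a generic federated-averaging illustration, not the actual FLAIROP code; the number of stations, the layer shapes and the dataset-size weighting are assumptions.

```python
# Generic federated-averaging sketch: the server never sees training images,
# only locally trained weight arrays from each picking station.
import numpy as np


def aggregate(station_weights, station_sizes):
    """Average each layer's weights, weighted by local dataset size."""
    total = sum(station_sizes)
    n_layers = len(station_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_sum = sum(w[layer] * (n / total)
                        for w, n in zip(station_weights, station_sizes))
        averaged.append(layer_sum)
    return averaged


# Toy example: two stations, one weight layer each (shapes must match).
station_a = [np.array([0.2, 0.4])]
station_b = [np.array([0.6, 0.8])]
global_weights = aggregate([station_a, station_b], station_sizes=[100, 300])
print(global_weights)  # sent back to all stations for the next round
```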
“In the FLAIROP research project, we are developing new ways for robots to learn from each other without sharing sensitive data and company secrets. This brings two major benefits: we protect our customers’ data, and we gain speed because the robots can take over many tasks more quickly. In this way, the collaborative robots can, for example, support production workers with repetitive, heavy, and tiring tasks,” explains Jan Seyler, Head of Advanced Develop. Analytics and Control at Festo SE & Co. KG. During the project, a total of four autonomous picking stations will be set up for training the robots: two at the KIT Institute for Material Handling and Logistics (IFL) and two at Festo SE, based in Esslingen am Neckar.
Canadian start-up DarwinAI and the University of Waterloo are further partners
“DarwinAI is thrilled to provide our Explainable AI (XAI) platform to the FLAIROP project and pleased to work with such esteemed Canadian and German academic organizations and our industry partner, Festo. We hope that our XAI technology will enable high-value human-in-the-loop processes for this exciting project, which represents an important facet of our offering alongside our novel approach to Federated Learning. Having our roots in academic research, we are enthusiastic about this collaboration and the industrial benefits of our new approach for a range of manufacturing customers,” says Sheldon Fernandez, CEO, DarwinAI.
“The University of Waterloo is ecstatic to be working with Karlsruhe Institute of Technology and a global industrial automation leader like Festo to bring the next generation of trustworthy artificial intelligence to manufacturing. By harnessing DarwinAI’s Explainable AI (XAI) and Federated Learning, we can enable AI solutions to help support factory workers in their daily production tasks to maximize efficiency, productivity, and safety”, says Dr. Alexander Wong, Co-director of the Vision and Image Processing Research Group, University of Waterloo, and Chief Scientist at DarwinAI.
About FLAIROP
The FLAIROP (Federated Learning for Robot Picking) project is a partnership between Canadian and German organizations. The Canadian project partners focus on object recognition through Deep Learning, Explainable AI, and optimization, while the German partners contribute their expertise in robotics, autonomous grasping through Deep Learning, and data security.
KIT-IFL: consortium leadership, development of grasp determination, development of automatic learning data generation
KIT-AIFB: development of the federated learning framework
Festo SE & Co. KG: development of picking stations, piloting in real warehouse logistics
University of Waterloo (Canada): development of object recognition
DarwinAI (Canada): local and global network optimization, automated generation of network structures
Visit www.festo.com/us for more information on Festo products and services.
About Festo
Festo is a leading manufacturer of pneumatic and electromechanical systems, components, and controls for process and industrial automation. For more than 40 years, Festo Corporation has continuously elevated the state of manufacturing with innovations and optimized motion control solutions that deliver higher performing, more profitable automated manufacturing and processing equipment.
The newly founded company QUADRUPED Robotics is the first and currently the only German company to bring fully modifiable multi-legged robots to the European market. This form of robot represents a novelty: the four-legged robots combine artificial intelligence with new motion sequences and individually customizable equipment. The A1 robot in the QUADRUPED line is based on the Robot Operating System (ROS.org) and can thus be adapted to its environment and requirements. Even the basic configuration, however, enables a wide range of applications.
By means of an AI-controlled, depth-sensing smart camera, HD recordings can be transmitted in real time to a terminal device. At the same time, the integrated multi-eye camera offers real-time tracking of objects in sight, gesture recognition and target tracking based on specific movement patterns.
Visual SLAM forms the basis for building an environment map. QUADRUPED A1 calculates paths, obstacles, routes and navigation points, enabling vision-based autonomous obstacle avoidance. In addition, QUADRUPED A1 recognizes obstacle shapes and adjusts its body position accordingly. If an impact or fall does occur, the advanced dynamic balancing algorithm allows balance to be quickly restored. Further measurement data as well as more dynamic behavior can be achieved by integrating additional sensor technology, such as a 3D LiDAR or further camera modules.
The QUADRUPED A1 incorporates a unique, patented sensitive foot contact. Each of the four feet can be controlled individually, and the smart actuators provide precise footing as well as different gaits. The system is based on a low-level controller developed by QUADRUPED Robotics, which can read out position, torque and current consumption at any time. The foot end is waterproof and dustproof and can easily be replaced after wear. The A1 impresses with its most recently measured top speed of 11.8 km/h (3.3 m/s), which is unique for a robot of this type. It can also carry loads of up to 5 kg.
For simplified maintenance, the robot was designed with a stable and lightweight body structure. The A1 has an external 24 V power input and 5 V / 12 V / 19 V power supplies, which enable the use of additional external devices. Other external interfaces include four USB, two HDMI and two Ethernet ports.
It is equipped with a powerful redundant control system: a low-level controller for CAN communication with the smart actuators and an NVIDIA Xavier for computation and evaluation of measurement data. The current runtime of approx. 1.5 hours varies depending on the application.
Additional equipment is available from QUADRUPED Robotics and can be delivered with implemented software packages on request. Due to in-house research and development, the end customer can order a finished and tested product. Another service is the provision of complete documentation on the website www.docs.quadruped.de. In addition, complete simulation environments based on Webots & Gazebo are also made available for download there, which can be used for application testing.
QUADRUPED Robotics is a spin-off of MYBOTSHOP uG, an established sales and development partner in the fields of robotics, sensor technology and automation technology. Company founder Daniel Kottlarz sees in four-legged autonomous robots the opportunity to relieve humans in particularly dangerous areas of operation and to defuse hazardous situations.
Ludo AI, available now in open beta, gives developers access to the world’s first AI platform for games concept creation – accelerating and democratizing games creation
Seattle, USA. AI (Artificial Intelligence) games creativity platform Ludo has announced its open beta, following a highly successful closed beta that attracted participation from independent studios across the globe. Games creators tasked with delivering the next hit game to emulate the success of the likes of Call of Duty, Among Us, Fortnite and Fall Guys now have the answer in Ludo – the world’s first AI games ideation tool.
Ludo, Latin for ‘I Play’, uses machine learning and natural language processing to develop game concepts 24 hours a day. The platform is constantly learning and evolving. Built on a database of close to a million games, Ludo is agile and supremely intelligent. When asked to find a new game idea, based on intuitive keyword searches, Ludo returns almost immediately with multiple written game concepts, artwork and images that developers can rapidly work on to take to the next stage (concept presentation, MVP or accelerated soft launch).
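As a loose illustration of how keyword-driven search over a large catalogue of game descriptions can work in principle, the sketch below ranks invented descriptions against a query using TF-IDF similarity. This is not Ludo’s technology and says nothing about how the actual platform generates concepts.

```python
# Generic sketch of keyword-driven concept retrieval with TF-IDF similarity.
# This is NOT Ludo's technology; titles and descriptions below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

game_descriptions = [
    "battle royale shooter with shrinking map and crafting",
    "social deduction party game set on a spaceship",
    "cozy farming sim with seasonal festivals",
]
query = ["multiplayer party deduction game"]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(game_descriptions)
query_vector = vectorizer.transform(query)

scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print("closest existing concept:", game_descriptions[best])
```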
AI has never before been used at the start of the games creation process: In a 159.3 billion* dollar industry, the pressure to release new hit games is relentless, and coming up with exciting, sticky new games is the Holy Grail. Ludo is set to revolutionize game creation by arming developers with unique game concepts within minutes of their request being processed. Furthermore, as Ludo’s powerful capabilities are within the reach of any size of studio, the creation process has been democratized.
Games publishers and developers must deliver hit new games at pace: The industry landscape is changing as it grows in value: Large, acquisitive publishers are constantly on the lookout for growing independents, with great new games and creative ideas, to absorb as they, in turn, need to deliver value to their stakeholders.
“Creativity is the new currency in the games industry,” said Tom Pigott, CEO of JetPlay, Ludo’s creator. “The next hit game could be worth millions and you never know where it will spring up from. With Ludo anyone can come up with a great new game idea without having to waste hours on the process and then invest even more time in researching what is already out there and how successful any similar games have been. Ludo does it all for you: Ludo brings the playfulness back into the game creation process, increases the probability of coming up with a great new game, and saves time and money.”
Since the global pandemic the games industry has seen exponential growth and is estimated to be worth $200 billion by 2023. Every developer is under pressure to create a viable pipeline, and now, with so many ways of testing games quickly (a large percentage being rejected before they get through the gates), the appetite for new game ideas and concepts is at an all-time high.
Ludo has been created by a small but outstanding global team of AI Ph.D.s and is the brainchild of seasoned entrepreneur Tom Pigott, CEO of JetPlay, the developer of Ludo. The new open beta follows a highly successful closed program that saw a select group of studios harness the creative power of AI. Now, with an open beta, games developers can try the platform free of charge for a trial period.
“We’ve been extremely pleased by the feedback and the usage of our platform by the game makers that were part of the closed beta,” said Pigott. “AI, when used as part of the creative process, delivers great results. It is easy to use, working intuitively with keyword searches, and those involved in our closed beta have already proved that amazing things can be done, and all without detracting from their development or marketing time. Very soon Ludo will become an integral part of every studio’s games ideation process.”
The Ludo open beta program offers an opportunity to enjoy all the benefits of early adoption, giving a head start on a mobile game creation approach that works. Due to the tremendous interest there is a waitlist: those interested in joining the Ludo open beta can apply or find out more here.
EL DORADO HILLS, CA — December 2020 — Blaize today fully unveiled the Blaize AI Studio offering, the industry’s first open and code-free software platform to span the complete edge AI operational workflow from idea to development, deployment and management. AI Studio dramatically reduces edge AI application deployment complexity, time, and cost by breaking the barriers within existing application development and machine learning operations (MLOps) infrastructure that hinder edge AI deployments. By eliminating the complexities of integrating disparate tools and workflows and introducing multiple ease-of-use and intelligence features, AI Studio reduces the time required to go from models to deployed production applications from months to days.
“While AI applications are migrating to the edge, with growth projected to outpace that of the data center, edge AI deployments today are complicated by a lack of tools for application development and MLOps,” says Dinakar Munagala, Co-founder and CEO, Blaize. “AI Studio was born of the insights into this problem gained in our earliest POC edge AI hardware customer engagements, as we recognized the need and opportunity for a new class of AI software platform to address the complete end-to-end edge AI operational workflow.”
“AI Studio is open and highly optimized for the AI development landscape that exists across heterogeneous ecosystems at the edge,” says Dmitry Zakharchenko, VP Research & Development, Blaize. “With the AI automation benefits of a truly modern user experience interface, AI Studio serves the unique needs in customers’ edge use cases for ease of application development, deployment, and management, as well as broad usability by both developers and domain expert non-developers.”
The combination of AI Studio innovations in user interface, collaborative Marketplaces, end-to-end application development, and operational management collectively bridges the operational chasm hindering edge AI ROI. Deployed with the Blaize AI edge computing hardware offerings that address unserved edge hardware needs, AI Studio makes AI more practical and economical for edge use cases where unmet application development and MLOps needs delay the pace of production deployment.
“In our work for clients, which may include developing models for quality inspection within manufacturing, identifying stress markers to improve drug trials or even predicting high resolution depth for autonomous vehicles, it is vital that businesses can build unique AI applications that prove their ideas quickly,” says Tim Ensor, Director of AI, Cambridge Consultants. “AI Studio offers innovators the means to achieve this confidence in rapid timeframes, which is a really exciting prospect.” Cambridge Consultants, part of Capgemini Group, helps the world’s biggest brands and most ambitious businesses innovate in AI, including those within the Blaize ecosystem.
Code-free assistive UI for more users, more productivity
The AI Studio code-free visual interface is intuitive for a broad range of skill levels beyond just AI data scientists, who are a scarce and costly resource for many organizations. “Hey Blaize” summons a contextually intelligent assistant with an expert knowledge-driven recommendation system to guide users through the workflow. This ease of use enables AI edge app development for wider teams, from AI developers to system builders to business domain subject matter experts.
Open standards for user flexibility, broader adoption
With AI Studio, users can deploy models with one click to plug into any workflow across multiple open standards including ONNX, OpenVX, containers, Python, or GStreamer. No other solution offers this degree of open standard deployment support, as most are proprietary solutions that lock in users with limited options. Support for these open standards allows AI Studio to deploy to any hardware that fully supports the standards.
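To make the ONNX deployment path above concrete, here is a small, generic sketch using the open-source onnxruntime package. It is not Blaize-specific tooling; the model file name and input shape are placeholder assumptions.

```python
# Generic ONNX deployment sketch using onnxruntime (not Blaize-specific code).
# The model path and input shape are placeholder assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("classifier.onnx")   # hypothetical exported model
input_name = session.get_inputs()[0].name
dummy_frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

outputs = session.run(None, {input_name: dummy_frame})
print("predicted class:", int(np.argmax(outputs[0])))
```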
Marketplaces collaboration
Marketplace support allows users to discover models, data and complete applications from anywhere – public or private – and collaborate continuously to build and deploy high-quality AI applications.
AI Studio supports open public models, data marketplaces and repositories, and provides connectivity and infrastructure to host private marketplaces. Users can continually scale proven AI edge models and vertical AI solutions to effectively reuse across enterprises, choosing from hundreds of models with drag-and-drop ease to speed application development.
Easy-to-use application development workflow
The AI Studio model development workflow allows users to easily train and optimize models for specific datasets and use cases, and deploy quickly into multiple formats and packages. With the click of a button, AI Studio’s unique Transfer Learning feature quickly retrains imported models for the user’s data and use case. Blaize’s edge-aware optimization tool, NetDeploy, automatically optimizes the models to the user’s specific accuracy and performance needs. With AI Studio, users can easily build and customize complete application flows that include functions beyond neural networks, such as image signal processing, tracking or sensor fusion.
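Transfer learning of the kind described above can be illustrated with a generic Keras sketch: freeze a pretrained backbone and retrain only a small classification head on the user’s own data. This is a conceptual stand-in, not AI Studio’s Transfer Learning or NetDeploy implementation, and the dataset variables and class count are placeholders.

```python
# Generic transfer-learning sketch with Keras (not AI Studio's implementation).
# `train_images` and `train_labels` are placeholders for the user's dataset.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pretrained backbone, retrain only the head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # assume 4 target classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(train_images, train_labels, epochs=5)  # retrain head on user data
```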
Ground-breaking edge MLOps/DevOps features
As a complete end-to-end platform, AI Studio helps users deploy, manage, monitor and continuously improve their edge AI applications. Built on a cloud-native infrastructure based on microservices, containers and Kubernetes, AI Studio is highly scalable and reliable in production.
Blaize AI Studio early adopter customer results
In smart retail, smart city and Industry 4.0 markets, Blaize customers are realizing new levels of efficiency in AI application development and deployment using AI Studio. Examples include:
– Complete end-to-end AI development cycle reduction from months to days
– Reduction in training compute by as much as 90%
– Edge-aware efficient optimizations and compression of models with a < 3% accuracy drop
– New revolutionary contextual conversational interfaces that eclipse visual UI
Availability
AI Studio is available now to qualified early adopter customers, with general availability in Q1 2021. The AI Studio product offering includes licenses for individual seats, enterprise, and on-premise subscriptions, with product features and services suited to the needs of each license type.
About Blaize
Blaize leads new-generation computing unleashing the potential of AI to enable leaps in the value technology delivers to improve the way we all work and live. Blaize offers transformative computing solutions for AI data collection and processing at the edge of network, with focus on smart vision applications including automobility, retail, security, industrial and metro. Blaize has secured US$87M in equity funding to date from strategic and venture investors DENSO, Daimler, SPARX Group, Magna, Samsung Catalyst Fund, Temasek, GGV Capital, Wavemaker and SGInnovate. With headquarters in El Dorado Hills (CA), Blaize has teams in Campbell (CA), Cary (NC), and subsidiaries in Hyderabad (India), Manila (Philippines), and Leeds and Kings Langley (UK), with 300+ employees worldwide.
9th November 2020, Dubai, United Arab Emirates: GrubTech, the UAE-based tech start-up that is taking the foodservice industry by storm with the introduction of the world’s most technologically advanced digital commerce tool for restaurant and cloud kitchen owners, and India-based AI-powered video analytics powerhouse Wobot.ai are proud to announce a global strategic collaboration. The partnership brings together GrubTech’s native technology expertise in the foodservice landscape and Wobot’s state-of-the-art platform to curate an optimum experience for restaurateurs and cloud kitchen operators globally.
In this age of the technological revolution, rapidly evolving technology is expected to provide much-needed tailwinds to the foodservice business, as tech enablement is no longer a luxury, but a necessity to survive and succeed. The alliance between GrubTech and Wobot.ai establishes a comprehensive solution for restaurants and cloud kitchens, encompassing the digitization of everything from order capture and operations to compliance management and marketing.
GrubTech’s integration with food aggregators, points of sale and third-party logistics providers eliminates the need for the manual, error-prone and often cumbersome entry of orders into siloed solutions. Rather, it digitizes the order lifecycle, providing comprehensive visibility over sales and operations and resulting in reduced costs, increased efficiencies and shortened food preparation and delivery times, i.e. from click to doorbell in far less time.
“GrubTech provides restaurants, cloud kitchens and virtual brands with the first end-to-end management system, automating manual processes in order to drive operational efficiencies and improve the customer experience. Wobot’s AI-powered insights and business intelligence tools create a perfect synergy with our platform, setting us on course to completely revolutionise the global foodservice industry. We look forward to helping drive future-fit and profitable operations for our customers as they strive to win in this ever-changing landscape,” said Mohamed Al Fayed, Co-Founder and CEO of GrubTech.
Wobot.ai today powers 10,000+ units globally, helping them reduce their risk of non-compliance and cost of monitoring, and increasing their customer NPS with its computer vision technology.
Mr Adit Chhabra, CEO of Wobot, added: “Our vision with the Wobot-GrubTech alliance is to create a seamless workplace optimized to deliver operational excellence in the hospitality industry with our combined technology platforms. Our service offerings, tailored specifically for restaurants and cloud kitchens, offer the unmatched capability to deliver massive value for these businesses. Wobot’s platform monitors health, safety and operational checklists and helps you ascertain whether you meet global foodservice industry standards.”
GrubTech’s solution is highly scalable and easily deployable remotely, and the company is in advanced discussions to deploy across a number of large enterprises and SMEs across the MENA region and beyond into Southeast Asia and Europe. The agreement with Wobot.ai will significantly enhance the offering: with heightened food safety requirements and increased restrictions resulting from the COVID-19 pandemic, countries are frequently updating their compliance legislation, creating an urgent need for an effective and multi-purpose operations platform.