A true AI vision robotic arm powered by Jetson Nano: affordable, open-source, and built to turn your AI creativity into reality.
In recent years, more makers, students, enthusiasts, and engineers have been learning artificial intelligence technology, and many interesting AI projects are being developed. Hiwonder brings the power of AI to robots with a true AI robotic arm, JetMax, designed to enhance the AI and robotics learning experience for everyone.
JetMax features deep learning and computer vision capabilities. It is equipped with a Jetson Nano and an HD wide-angle camera, which enable it to interact efficiently with its perceived environment and empower you to turn your AI creativity into reality.
As an AI vision robotic arm, JetMax not only features AI vision but also has a clever brain, supporting you in learning to code, researching AI robotics applications, and bringing your AI ideas to life. It can be your helping hand in a lab, university, or workshop.
Powered by NVIDIA Jetson Nano
The open-source JetMax robot arm is powered by Jetson Nano, which delivers the performance needed for modern AI workloads and gives the arm advanced deep learning and computer vision capabilities.
Supports multiple types of EoAT (End-of-Arm Tooling)
Supporting multiple types of end-of-arm tooling, such as grippers, suction cups, pen holders, and electromagnets, JetMax gives you many ways to build creative applications.
Open-Source
JetMax is an open hardware platform. We contribute numerous project sources and AI tutorials, and the API is fully open for customization, supporting languages such as Python, C++, and Java.
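To give a flavor of what scripting the arm could look like, here is a minimal Python sketch of a pick-and-place routine. The JetMaxArm class and its method names are hypothetical placeholders rather than the actual Hiwonder SDK, which should be consulted for the real API.

```python
# Minimal sketch of a pick-and-place routine for an AI vision robotic arm.
# NOTE: JetMaxArm and its methods are hypothetical placeholders, not the real
# Hiwonder API; consult the official JetMax SDK for the actual class and method names.

import time

class JetMaxArm:
    """Hypothetical wrapper around a serial/ROS connection to the arm."""

    def move_to(self, x, y, z):
        print(f"moving to ({x}, {y}, {z}) mm")     # a real SDK would command the servos

    def set_suction(self, on):
        print(f"suction {'on' if on else 'off'}")  # a real SDK would toggle the pump

def pick_and_place(arm, pick_xyz, place_xyz, hover=40):
    """Lift an object at pick_xyz and drop it at place_xyz."""
    px, py, pz = pick_xyz
    qx, qy, qz = place_xyz
    arm.move_to(px, py, pz + hover)   # approach from above
    arm.move_to(px, py, pz)
    arm.set_suction(True)             # grab with the suction-cup EoAT
    arm.move_to(px, py, pz + hover)
    arm.move_to(qx, qy, qz + hover)
    arm.move_to(qx, qy, qz)
    arm.set_suction(False)            # release
    time.sleep(0.2)
    arm.move_to(qx, qy, qz + hover)

if __name__ == "__main__":
    pick_and_place(JetMaxArm(), (150, 0, 20), (100, 120, 20))
```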
Canadian company MYNYMAL PC recently announced their newest innovation, a computer designed to be as minimalist as possible. The computer is smaller than a tissue box, yet it houses a fully-fledged Windows desktop capable of day-to-day work and entertainment. Despite its humble appearance, this mini-computer packs quite a punch: a 4-core 8-thread CPU capable of photo editing, video editing, 3D modeling, and even light gaming makes this computer a useful tool and not just a decoration.
As decoration, it excels as well. The simple cube design is available in four different textures: Maple Wood, Concrete, White Marble, and Brushed Gunmetal Gray. This gives the buyer a variety of material choices to make the PC fit in best with the interior design of their home. The computer can be removed from the acrylic enclosure for hardware upgrades, and in the future MYNYMAL plans to sell the enclosures individually so that you can swap them out easily depending on the aesthetic you want. “We believe that technology should have both form and function, not just one or the other,” said Gerard Cirera, Founder and CEO of MYNYMAL PC.
In addition to standard vinyl textures and materials, a limited number of cube computers will come with interior RGB lighting and a custom Ore Block texture. An included remote control lets the user adjust the lighting, choosing from red (redstone), blue (diamond), and many other colours and effects.
The MYNYMAL PC combines modern, minimalist design with the tech in your home, so you can finally get rid of that bulky old tower computer you don’t know where to put.
The Ameca humanoid robot combines AI with AB (Artificial Body) for relatable natural human gestures and upgradable modular mechanics via cloud-managed API development tool kit
LAS VEGAS—December 3rd, 2021—CES 2022 — Engineered Arts, a UK company that creates the most memorable interactive character experiences, today announced its latest humanoid robot named Ameca (pron. Am-ek-uh). Through 20 years of increasing robotics innovation, the Ameca series features ground-breaking advancements in movement and natural gestures, intelligent interaction, and a future-proof software system designed to embrace artificial intelligence and computer vision with adaptive learning—giving users an API customization pathway never before available.
Engineered Arts’ Ameca humanoid robot will take center stage at CES 2022 in Las Vegas at the Great Britain and Northern Ireland Pavilion in Tech West Hall G—lower level of the Venetian Expo Center, at Booths 62502 & 62524 from January 5th-8th.
“A humanoid robot will always instil an image of what the future may hold. Ameca represents a perfect platform to explore how our machines can live with, collaborate, and enrich our lives in tomorrow’s sustainable communities,” said Morgan Roe, Director of Operations at Engineered Arts. “Ameca integrates both AI with AB (artificial body) for advanced, iterative technologies that deliver superior motion and gestures, all housed in a human form and robotic visage for a non-threatening, gender-neutral integration into an inclusive society,” added Roe.
The Engineered Arts team can create any robot figure in as little as four months. All Engineered Arts Ameca, Mesmer series and RoboThespian robot creations are available for ownership or through an integrated end-to-end rental program for special limited engagements and showcases across the world. To learn more about the humanoid robots from Engineered Arts visit: https://www.engineeredarts.co.uk/
About Engineered Arts
Engineered Arts, Ltd. integrates a talented team of engineers and creatives, working together to produce technology that lives and breathes engagement, imagination, and entertainment. At the heart of its robotic humanoids is the Tritium operating system, a cloud-based operating system that drives robot animation, interaction, maintenance links and content distribution. For more information visit: https://www.engineeredarts.co.uk/
Beyond Imagination, in conjunction with Zero Gravity Corporation (ZERO-G), will be showcasing its cutting-edge “Beomni” humanoid robot at the January 2022 Consumer Electronics Show. This will be the first time the robot has been shown to the general public.
“I’ve been involved and taken ZERO-G flights with Dr. Peter Diamandis since the inception of ZERO-G. They are an ideal partner for BEYOND IMAGINATION,” says Dr. Harry Kloor, Founder and CEO of BEYOND IMAGINATION, “because our remote-piloted robot has the dexterity to perform any task and even learn to execute complex sequences autonomously, saving time and reducing risk.”
Beomni is one of the world’s most advanced general-purpose humanoid robots, with an evolving “AI Brain” that enables it to assist its human pilot in performing a limitless number of tasks.
For example, Beomni can help perform scientific experiments in challenging environments such as microgravity. Working together with ZERO-G, BEYOND IMAGINATION plans to deploy custom versions of its robot that can perform experiments in the ZERO-G aircraft while experimenters remain safely on the ground. Leveraging Beomni’s low-latency remote video feed and natural controls, experimenters will be able to manipulate their equipment from afar as if they are actually on the aircraft. Adds Dr. Kloor, “as a scientist experienced with microgravity experiments, I know that the ability to manipulate a microgravity experiment from the ground is a game-changer. Our partnership will enable more complex actions to be performed and multiple experimenters to participate.”
Ease of use is one of Beomni’s claims to fame. Past experience shows that new users acclimate to piloting the robot in just a few minutes. At the show, invited guests will be able to pilot the robot using nothing more than a virtual reality headset and a pair of special gloves.
BEYOND IMAGINATION’s robots are operated via a human partnership with an AI Brain that augments and enhances human capabilities. This AI includes multiple “lobes”, each related to a specific skill or set of skills that the robot can perform. Over time, on a task-by-task basis, Beomni’s AI Brain learns, evolving from assisting with tasks to semi-autonomous and then to fully autonomous operation.
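As a purely conceptual sketch of that idea, and not Beyond Imagination's actual software, one could imagine the AI Brain as a registry of skill "lobes" that are promoted through autonomy levels as they are learned; all names and levels below are invented for illustration.

```python
# Conceptual sketch of the "lobes" idea: a registry of skill modules, each graded
# by autonomy level. Names and levels are invented; this is not Beomni's real code.

AUTONOMY_LEVELS = ["assistive", "semi-autonomous", "fully autonomous"]

class Lobe:
    def __init__(self, skill, autonomy="assistive"):
        self.skill = skill
        self.autonomy = autonomy

    def promote(self):
        """Advance one autonomy level as the skill is learned task by task."""
        idx = AUTONOMY_LEVELS.index(self.autonomy)
        if idx < len(AUTONOMY_LEVELS) - 1:
            self.autonomy = AUTONOMY_LEVELS[idx + 1]

class AIBrain:
    def __init__(self):
        self.lobes = {}

    def add_lobe(self, skill):
        self.lobes[skill] = Lobe(skill)

    def report(self):
        for lobe in self.lobes.values():
            print(f"{lobe.skill}: {lobe.autonomy}")

brain = AIBrain()
brain.add_lobe("grasping")
brain.add_lobe("navigation")
brain.lobes["grasping"].promote()   # this skill has been learned well enough to run semi-autonomously
brain.report()
```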
This collaboration is a first step toward broadening access to microgravity opportunities for consumers, corporate customers, entertainment companies, and scientific research teams at NASA and beyond.
“We are delighted to be able to partner with a visionary company like BEYOND IMAGINATION,” said ZERO-G’s CEO Matt Gohd. “Their state-of-the-art robot and AI will enable us to develop unique and unmatched new services. Working together, we see a wide range of applications for private and public space agencies around the globe.”
Since operating its first commercial flight in 2004, ZERO-G has given more than 17,000 flyers the opportunity to feel true weightlessness. As the interest in commercial space travel increases, the flights offered by ZERO-G are the only FAA-approved opportunities in the U.S. for civilians to experience weightlessness. At a fraction of the cost of consumer space flights in development, ZERO-G is paving the way for the general public to enjoy the wonders of interstellar travel. Each ZERO-G mission is designed for maximum fun. The aircraft’s interior is a zero-gravity playroom, complete with padded floors and walls and video cameras to record the unforgettable moments.
Beomni has applications in space-based and lunar operations and construction as well as terrestrial activities ranging from bio-manufacturing & logistics to aircraft & energy sector inspections to health care & senior care services. BEYOND IMAGINATION recently completed an on-site pilot study at TRU PACE (Program of All-Inclusive Care for the Elderly) Center with Beomni and has been busy scheduling more pilot studies across a range of industries for early 2022.
About BEYOND IMAGINATION
Beyond Imagination, Inc. is a robotics and AI platform company that is focused on bringing humanoid robots to market rapidly. By partnering a human pilot with an evolving AI Brain, we will soon be able to deploy our Beomni Robotics platform across a wide range of commercial applications. Our practical, real-world approach is closer to that of Tesla, which released its vehicles and then built AI from the data that they collected, rather than that of other companies that are focused on narrow R&D domains.
Founded by leading innovators in AI and robotics, and strengthened by a solid patent portfolio, Beyond Imagination, Inc. is poised to revolutionize life and fundamentally change the way we work, travel and engage with others around the world.
Beyond Imagination is taking advance orders and is always open to strategic partnerships and investments from qualified investors. In-person demos are available by appointment for media and investors. Additionally, Beomni is available for television appearances in the greater Denver/Boulder area. Potential partners for pilot studies in medicine and beyond are encouraged to discuss their specific use cases with the company.
About ZERO-G
Zero Gravity Corporation is a privately held space entertainment and tourism company whose mission is to make the excitement and adventure of space accessible to the public. ZERO-G is the first and only FAA-approved provider of weightless flight in the U.S. for the general public; entertainment and film industries; corporate and incentive markets; non-profit research and education sectors; and the government. ZERO-G’s attention to detail, excellent service and quality of experience combined with its exciting history has set the foundation for the most exhilarating adventure-based tourism.
The LattePanda team launched its new-generation LattePanda 3 Delta on Kickstarter on November 2nd, 2021. The crowdfunding campaign has already blasted past its pledge goal, raising over $200,989 from over 731 backers with three days still remaining. Building on the company’s previous mini PC systems, the new LattePanda 3 Delta is powered by Intel’s latest 11th-generation N5105 mobile quad-core processor, offering up to 2.9GHz burst frequency. The ultra-thin design measures just 16 mm in thickness, enabling you to use the small-form-factor mini PC for a wealth of different projects.
Hardware and Spec
The LattePanda 3 Delta is a 125 mm x 78 mm x 16 mm single-board computer with the CPU, memory, storage and Arduino components on one side, while up to 42 expandable interfaces for peripherals and GPIOs sit on the other. An onboard Gigabit Ethernet port is an added bonus, letting the device connect to the Internet at very high speed.
2x 50pin GPIOs: including Audio Output, USB2.0, RS232, RTC, Power Management, Status Indication, Arduino Pinout, 5V&3.3V Power Output, etc.
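As a hedged example of how the Arduino pinout might be driven from the x86 side, the short Python sketch below uses the pyFirmata library. It assumes the onboard Arduino co-processor is running a standard Firmata sketch and enumerates as a serial port; the port name is only a guess for a Linux setup.

```python
# Sketch: blinking an LED on the LattePanda's Arduino pinout from Python.
# Assumptions: the co-processor runs StandardFirmata and enumerates as
# /dev/ttyACM0 (Linux); adjust the port name for your setup.

import time
from pyfirmata import Arduino

board = Arduino("/dev/ttyACM0")   # serial link to the onboard Arduino
led = board.get_pin("d:13:o")     # digital pin 13, output mode

for _ in range(10):
    led.write(1)                  # LED on
    time.sleep(0.5)
    led.write(0)                  # LED off
    time.sleep(0.5)

board.exit()                      # release the serial connection
```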
In Use
LattePanda 3 Delta uses an Intel Celeron N5105, the processor that replaces the Celeron N4100 found in the previous generation. The Celeron N5105 features improved UHD Graphics and higher performance than the Celeron N4100, giving the LattePanda 3 Delta 2x the processing performance and 3x the graphics performance of its predecessor. With this level of performance, you can watch 4K HDR video smoothly and even play some demanding games.
LattePanda 3 Delta contains up to 8 GB of LPDDR4 RAM and 64 GB of eMMC 5.1 flash storage, so you can load a large number of web pages in Chrome or run multiple virtual machines quickly and smoothly.
LattePanda 3 Delta supports Wi-Fi 6, whose transfer speed is up to 2.7 times faster than Wi-Fi 5. It also has a Gigabit Ethernet port onboard, so you can connect to the Internet at extremely high speed.
LattePanda 3 Delta is compatible with both Windows 10 and Linux, and Windows 11 can also be run on it, so you can freely choose an OS based on your project. No matter which operating system you use, the occasional crash or blue screen is unavoidable, so LattePanda 3 Delta includes a hardware watchdog timer: when a problem occurs, the board restarts automatically and returns to normal operation.
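Assuming the watchdog is exposed through the standard Linux watchdog device (an assumption; check the official LattePanda documentation for the actual interface), a supervising service would simply keep feeding the timer while the application is healthy, along the lines of this minimal Python sketch:

```python
# Minimal sketch of feeding a hardware watchdog from Linux.
# Assumption: the LattePanda 3 Delta watchdog appears as /dev/watchdog;
# verify the actual device/driver in the official documentation.

import time

WATCHDOG_DEVICE = "/dev/watchdog"  # standard Linux watchdog character device

def application_is_healthy():
    # Placeholder health check; replace with real checks for your project.
    return True

def run():
    with open(WATCHDOG_DEVICE, "w") as wdt:
        try:
            while application_is_healthy():
                wdt.write("\n")   # "kick" the watchdog so it does not reset the board
                wdt.flush()
                time.sleep(5)     # must be shorter than the configured timeout
        finally:
            wdt.write("V")        # magic close: tell the driver to disarm cleanly
            wdt.flush()

if __name__ == "__main__":
    run()
```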
For battery-powered applications, independent control over the power of the USB ports and other power connectors is essential. LattePanda 3 Delta makes this possible: you can switch the USB power and 5V power on or off based on your project.
Being a hackable computer, the LattePanda 3 Delta is a perfect solution for home automation projects, robotics, in-car entertainment systems, education, and smart industrial systems. Below is a quick overview of all the available Kickstarter packages, components, operating systems, and pricing.
Pledges are still available from £169 (roughly $229 or CAD $284), and shipping is expected to take place during March 2022. Options are available to receive the computer with a Windows 10 Pro operating system license for a little extra, as well as with a UPS HAT for £206 ($279 or CAD $346), with shipping expected a month later, during April 2022. For more information, full specifications and purchasing options, jump over to the official Kickstarter crowdfunding campaign page by following the link below.
A wall-climbing robot made by HausBots can reduce workplace accidents, as it can be used for inspection and maintenance tasks such as building and infrastructure inspection, surveying, and even painting.
However, to make sure the robot works and is safe to use, researchers from the WMG SME group helped the local business design and test it. The robot is now on the market, after a four-year journey from a garage in Bournville to Singapore.
A novel wall-climbing robot, designed and built by Birmingham-based HausBots with the help of WMG at the University of Warwick, is now on the market and could reduce the number of workplace accidents.
HausBots is a Birmingham-based company on a mission to use technology to protect and maintain the built environment. It has designed and built an innovative wall-climbing robot that can climb vertical surfaces and be used for inspection and maintenance tasks such as building and infrastructure inspection, surveying, and even painting.
The idea for HausBots started in the co-founder’s garage, and with the help of the WMG SME team the robot was brought to life, as the team helped build the prototype and test the technology.
Four years ago, when the first prototype was developed, researchers at WMG, University of Warwick worked with HausBots on the motor control circuitry and system design to help them get production-ready, thanks to the Product Innovation Accelerator scheme with CWLEP.
One of the key uses of the HausBots robot is to help reduce the number of workplace accidents: in the US, 85,000 workers fall from height every year, and around 700 of those falls are fatal. These accidents also cost insurance companies over $1bn in claims every year, so reducing the number of accidents not only means fewer injuries and less trauma, but also a huge economic saving.
However, to ensure the robot itself doesn’t fall, it had to undergo extensive electromagnetic compatibility (EMC) testing to make sure the fans, which essentially attach it to the surface, are functioning correctly.
The WMG SME team tested the robot by placing it in the EMC chamber, assessing how it responds to electromagnetic noise and making sure it does not emit unwanted noise itself. Using amplifiers to simulate noise, together with analysers, the researchers were able to detect any unwanted interference and emissions from the robot and record the results.
Dr David Norman, from the WMG SME group at the University of Warwick, comments: “It has been a pleasure to work with HausBots and help them develop their product; the concept of the robot is incredible, and could save lives and reduce the number of workplace accidents.
“Our facilities and expertise have helped HausBots develop a market-ready product, which is now on the market and has carried out many jobs, from painting to cleaning graffiti off Spaghetti Junction in Birmingham. We hope to continue working with them in the future and can’t wait to see where they are this time next year.”
Jack Crone, CEO and Co-Founder of HausBots comments:
“The WMG SME group have helped us from day one, by helping us build the prototype all the way to making sure the robot safely sticks to the wall and carries out its job efficiently.
“We have worked tirelessly over the last 3 years to make HausBot, and we are incredibly excited to have sold our first one to a company in Singapore. We hope this is the first of many, and that it will also help reduce the number of workplace accidents.
“Going forward we hope to continue our work with WMG at the University of Warwick to make more robots for other uses that can reduce harm to humans.”
Supporters will be able to help donate robot kits to schools to support STEM education
Toronto, Ontario, Canada – Nov 2021 – Quantum Robotic Systems Inc. (QRS) announced the launch of a Kickstarter campaign for QBii, an affordable, multi-functional and expandable service robot.
QBii (pronounced “cue + bee”) is about the size of a shoebox and weighs only 9 lbs. While most other service robots are limited to only one function, QBii performs a host of useful tasks in the home and in the workplace, including:
Carrying heavy items like grocery bins or boxes (up to 50 lbs)
Sweeping, mopping and vacuuming floors
Towing carts with payloads (up to 200 lbs)
QBii is also programmable and customizable. “People have the option of purchasing QBii as a kit, which makes it a powerful resource for STEM educators,” says QRS president, Dr. Frank Naccarato. “In fact, supporters of our Kickstarter can contribute towards the donation of QBii Kits to schools.”
Founded in 2016 by Dr. Frank Naccarato, Quantum Robotic Systems Inc. (QRS) is a Toronto-based company that makes unique mobile autonomous robots. QRS has developed and patented a novel stairclimbing technology that allows users to carry heavy, bulky loads up and down stairs in an easier, faster and safer way. The company has incorporated this technology into its Robotic Stairclimbing Assistant (ROSA), a service robot that can carry things while climbing up and down stairs, and Doll-E, a stairclimbing moving cart capable of lifting 500 lbs.
The upgraded and even more feature-rich version of Kickstarter’s Most Funded 3D Scanner Ever was launched today
Shenzhen—Following its successful Kickstarter story with the platform’s most funded 3D scanner ever, Revopoint is launching today the second generation of the device, offering a new version that supports more functions and an enhanced 3D scanning performance. Catering to the requirements of 3D printing creators, VR/AR model makers, reverse design creators, and high-tech enthusiasts in general, Revopoint POP 2 3D Scanner has been launched at <https://bit.ly/30tcYiJ>.
POP 2 adopts a binocular, micro-structured-light design for exceptionally high-precision and texture scanning performance. The device uses a proprietary micro-projection chip to capture 3D point cloud data quickly, at 10 frames per second, while achieving 0.1 mm single-frame accuracy.
POP 2 has a built-in high-performance 3D calculation chip that supports fast 3D scanning. Its embedded 6DoF gyroscope also enables smooth point cloud stitching based on shape, marker points, and color features. “And the user can enjoy these amazing features with any ordinary smartphone, tablet, or laptop thanks to POP 2’s intelligent algorithms, which ensure speed and accuracy for the scanner regardless of the computer you use it with,” added Vivian, the co-founder of Revopoint.
The 3D scanner launched today lets users explore expanded scanning possibilities, including using it as a handheld scanner for large statues and other large figures outdoors, or marking points to scan large or featureless objects. POP 2 also innovates by using an invisible, eye-friendly infrared light source to project and scan, which makes it possible to scan human and animal faces and body parts without causing any discomfort to the scanned subject.
“This is a professional-grade 3D scanner with a wealth of high-end features offered at a consumer-grade price. Everyone can buy it and use it,” said Vivian. The versatile device supports high-precision handheld and turntable scanning, featuring impressive accuracy combined with a simplified one-button operation.
Designed for professionals and demanding hobbyists, POP 2 is a compact 3D scanner that can be carried and used anywhere. Its single cable can be used to charge and connect to the user’s smartphone, tablet, or laptop.
The scanner’s software is also simplified and user-friendly, with the scanning process displayed directly on its interface. “If there’s an error, you can simply roll back, correct it, and keep moving forward with no worries,” Vivian assured. The software works with Windows, Mac, Android and iOS, unlike conventional 3D scanner software, which typically supports only Windows.
The company representative further clarified that the device is especially designed for 3D printing, human body scanning, large-scale sculpture scanning, a plethora of cultural and creative design applications, reverse modeling, different medical applications, and advanced VR/AR and 3D modeling applications.
The Revopoint POP 2 3D Scanner campaign on Kickstarter at <https://bit.ly/30tcYiJ> is seeking to raise $9,975 to fund the large-scale production of the scanner. Backers who support the campaign gain early and discounted access to the device.
About Our Company
Revopoint focuses on the research and development of cutting-edge structured light and 3D imaging core hardware technology. The company’s core technical team leverages many years of experience in 3D imaging and artificial intelligence technology research and development, having developed different devices in the field, from chips to complete machines, focusing on 3D cameras and 3D scanner products.
Engineers are increasingly eager to integrate AI successfully into projects and applications while also mastering their own AI learning curve. However, many AI projects are abandoned after unpromising results. Why is that? Johanna Pingel, Product Marketing Manager at MathWorks, explains why it is important for engineers to focus on the entire AI workflow and not just on model development:
Engineers who use machine learning and deep learning often expect to spend a large share of their time developing and fine-tuning AI models. Modeling is an important step in the workflow, but the model is not the sole goal. The key element for success in practical AI implementation is uncovering problems early. It is also important to know which aspects of the workflow deserve the most time and resources in order to achieve the best results, and these are not always the most obvious steps.
The AI-driven workflow
An AI-driven workflow can be divided into four steps, each of which plays its own role in successfully implementing AI in a project.
Step 1: Data preparation
Data preparation is arguably the most important step in the AI workflow: without robust and accurate data to train a model, projects are quickly doomed to fail. If engineers feed the model "bad" data, they will not get insightful results, and they will likely spend many hours trying to figure out why the model does not work.
To train a model, engineers should start with clean, labeled data, and as much of it as possible. This can be one of the most time-consuming steps of the workflow. When deep learning models do not perform as expected, many focus on how to improve the model by tuning parameters, fine-tuning the model, and running multiple training iterations. Yet preparing and correctly labeling the input data is even more important, and it must not be neglected if the model is to interpret the data correctly.
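As a tool-agnostic illustration of the point (the article's own examples are MATLAB-based), a minimal Python sketch of this kind of cleaning and label checking might look as follows; the records and label set are invented purely for the example.

```python
# Minimal sketch of basic data preparation before training:
# drop incomplete records, reject unknown labels, and split into train/validation.
# The records and label set below are invented purely for illustration.

import random

VALID_LABELS = {"ok", "defect"}

raw_records = [
    {"signal": [0.1, 0.2, 0.3], "label": "ok"},
    {"signal": [0.9, None, 0.7], "label": "defect"},   # incomplete -> dropped
    {"signal": [0.4, 0.5, 0.6], "label": "Defect"},    # spelling normalized
    {"signal": [0.2, 0.1, 0.0], "label": "unknown"},   # bad label -> dropped
]

def clean(records):
    cleaned = []
    for rec in records:
        if any(v is None for v in rec["signal"]):
            continue                          # discard incomplete samples
        label = rec["label"].strip().lower()  # normalize label spelling
        if label not in VALID_LABELS:
            continue                          # discard samples with unknown labels
        cleaned.append({"signal": rec["signal"], "label": label})
    return cleaned

def split(records, train_fraction=0.8, seed=0):
    random.Random(seed).shuffle(records)
    cut = int(len(records) * train_fraction)
    return records[:cut], records[cut:]

train_set, val_set = split(clean(raw_records))
print(len(train_set), "training samples,", len(val_set), "validation samples")
```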
Step 2: AI modeling
Once the data is clean and correctly labeled, work can move on to the modeling phase of the workflow, where the data is used as input and the model learns from it. The goal of a successful modeling phase is to create a robust, accurate model that can make intelligent decisions based on the data. This is also the point at which deep learning, machine learning, or a combination of the two enters the workflow, and where engineers decide which methods will produce the most precise and robust result.
AI modeling is an iterative step within the overall workflow, and engineers must be able to track the changes they make to the model along the way. Tracking changes and recording training iterations, with tools such as the Experiment Manager from MathWorks, is crucial, because it helps explain the parameters that lead to the most accurate model and delivers reproducible results.
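The article points to the Experiment Manager for this; as a generic, tool-agnostic Python sketch of the same idea (the hyperparameter grid and the placeholder training function below are invented for illustration), every run is simply logged together with the settings that produced it:

```python
# Minimal sketch of recording training iterations so results stay reproducible.
# train_model() is a stand-in for a real training routine; the hyperparameter
# grid and the fake accuracy formula are invented for illustration only.

import json, itertools, random

def train_model(learning_rate, batch_size, seed=0):
    # Placeholder "training": returns a fake validation accuracy.
    rng = random.Random(seed)
    return 0.7 + 0.2 * rng.random() - abs(learning_rate - 0.01)

experiments = []
for lr, bs in itertools.product([0.001, 0.01, 0.1], [16, 32]):
    accuracy = train_model(learning_rate=lr, batch_size=bs)
    experiments.append({"learning_rate": lr, "batch_size": bs,
                        "val_accuracy": round(accuracy, 4)})

# Persist every run, then report the configuration that worked best.
with open("experiments.json", "w") as f:
    json.dump(experiments, f, indent=2)

best = max(experiments, key=lambda e: e["val_accuracy"])
print("best configuration:", best)
```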
Step 3: Simulation and testing
Engineers need to keep in mind that AI elements are usually only a small part of a larger system. They must work correctly in all scenarios, together with the other parts of the final product, including other sensors and algorithms such as control, signal processing, and sensor fusion. Consider an automated driving scenario: it is not just a system for detecting objects (pedestrians, cars, stop signs); that system must be integrated with other systems for localization, path planning, control, and more. Simulation and accuracy testing are key to verifying that the AI model works correctly and interacts well with the other systems before it is deployed in the real world.
To reach this level of accuracy and robustness before deployment, engineers must validate that the model responds as intended in every situation. They should also ask how accurate the model is overall and whether all edge cases are covered. By using tools such as Simulink, engineers can check that the model works as desired for all expected use cases and avoid costly, time-consuming rework.
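Expressed as a generic Python sketch rather than the Simulink-based workflow the article refers to, this kind of pre-deployment check amounts to running the model against a fixed set of edge cases and requiring that every one of them passes; the toy model and cases below are invented.

```python
# Minimal sketch of validating a model against explicit edge cases before deployment.
# classify() stands in for the trained model; the cases are invented examples.

def classify(temperature_c):
    """Toy 'model': flag a reading as an alarm above 80 degrees C."""
    return "alarm" if temperature_c > 80.0 else "normal"

EDGE_CASES = [
    (80.0, "normal"),    # exactly on the threshold
    (80.1, "alarm"),     # just above the threshold
    (-40.0, "normal"),   # lowest expected sensor reading
    (125.0, "alarm"),    # highest expected sensor reading
]

def validate(model, cases):
    failures = [(x, want, model(x)) for x, want in cases if model(x) != want]
    for x, want, got in failures:
        print(f"FAIL: input={x} expected={want} got={got}")
    return not failures

if __name__ == "__main__":
    assert validate(classify, EDGE_CASES), "edge-case validation failed"
    print("all edge cases passed; model ready for the next step")
```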
Step 4: Deployment
Once the model is ready, the next step is deployment to the target hardware, in other words, delivering the model in the final language in which it will be implemented. This usually requires design engineers to take a deployment-ready model and fit it into the designated hardware environment.
The designated hardware environment can range from the desktop to the cloud to FPGAs. With flexible tools such as MATLAB, the final code can be generated for all of these scenarios, giving engineers the latitude to deploy their model in a wide variety of environments without having to rewrite the original code. Deploying a model directly to a GPU is a good example: automatic code generation eliminates the coding errors that manual translation could introduce and produces highly optimized CUDA code that runs efficiently on the GPU.
Stronger together
Engineers do not need to become data scientists, or even AI experts, to succeed with AI. With tools for data preparation, apps for integrating AI into their workflows, and experts available to answer questions about AI integration, they can set their AI models up for success. At every step of the workflow, engineers can flexibly contribute their own domain knowledge. This is an important foundation that they can build on with the right resources and complement with AI.
About MathWorks
MathWorks is the leading developer of mathematical computing software. MATLAB, the programming language for engineering and science, is a programming environment for algorithm development, data analysis and visualization, and numeric computation. Simulink is a block-diagram environment for the simulation and Model-Based Design of multidomain engineering systems and embedded systems. Engineers and scientists worldwide use these product families to accelerate research, innovation, and development in automotive, aerospace, electronics, finance, biotech, and other industries. MATLAB and Simulink are also essential teaching and research tools at universities and research institutions around the world. MathWorks was founded in 1984 and employs more than 5,000 people in 16 countries, with headquarters in Natick, Massachusetts, USA. Local offices in the D-A-CH region are located in Aachen, Munich, Paderborn, Stuttgart, and Bern. For more information, visit mathworks.com.