Story
At the pinnacle of industrial artificial intelligence and machine learning applications, a digital twin is a virtual representation of a real-world physical product, mechanism, or process. Because simulating a real-world product or industrial technique allows countless test scenarios to be run, even cycling through arduous or dangerous tasks, without any tangible consequences or safety risks, digital twins are invaluable for developing adaptive product manufacturing procedures and building safe, cost-effective, and efficient industrial facilities.
After inspecting recent research papers on digital twin applications in industrial operations, I noticed that the focal point of employing a virtual representation is usually to improve the safety and efficiency of an already existing industrial facility or mechanical process. Although preventing acute physical hazards from work safety risks and advancing the precision of ongoing industrial operations are the prominent digital twin use cases, I wanted to explore the innovative opportunities of reversing the digital twin implementation: starting with a virtual industrial construction, consisting of individual machinery and sample product components, to develop a safe, practical, cost-effective, and efficient real-world mechanism from scratch.
By reversing the digital twin application process, I wanted to investigate whether having a virtual construction before building the real-world counterpart could help to avert the risks inherent in assembling an industrial manufacturing system, reduce exorbitant overhaul costs caused by the lack of internal design blueprints, and test device components to obtain optimum performance for multifaceted operations.
As I was conceptualizing this proof-of-concept project, I evaluated various industrial settings in which I could demonstrate the benefits of reversing the digital twin implementation. Since product transportation and shipping operations require complex industrial mechanisms to achieve accuracy and reliability while maintaining a time-sensitive workflow, I decided to apply my reverse digital twin approach to designing a virtual shipping workstation, constructing a synthetic data set of customized sample products, and training a precise object detection model, culminating in a production-ready product transportation mechanism. In accordance with this approach, I designed all sample products from scratch to emphasize a key strength of a full-fledged digital twin: the opportunity to train an object detection model on products that are still waiting to be manufactured.
Since I needed to know the exact electronic components employed by the shipping workstation to create compatible 3D parts, I decided to prototype the mechanical device structure and design a unique PCB (inspired by Wall-E) based on Arduino Nano Matter as the workstation control panel. I designed the Wall-E PCB outline and encasement in Autodesk Fusion 360 so that I could easily place the electronic components relative to the PCB while designing the virtual shipping workstation.
After testing electronic components and completing the PCB layout, I designed a plethora of 3D parts on Autodesk Fusion 360, including but not limited to custom bearings optimized for 5 mm steel balls, planetary gear mechanisms, and separated rotating platforms. After finalizing the required mechanical 3D parts, I exported the virtual shipping workstation as a single file in the OBJ format to produce an accurate virtual representation of the shipping workstation.
Then, I imported the virtual shipping workstation into the NVIDIA Omniverse USD Composer, which allows users to assemble, light, simulate, and render large-scale scenes for world-building. To generate a realistic scenery for shipping operations, I utilized some free 3D models provided by Omniverse and designed some additional assets. After completing my shipping warehouse scenery, I experimented with camera, material, and lighting configurations to render the virtual shipping workstation with exceptional quality and produce a precise digital twin.
After employing built-in NVIDIA Omniverse features to construct my synthetic data set of the customized sample products instantiated by the shipping workstation digital twin, I uploaded the collected samples to Edge Impulse to train an advanced object detection model (FOMO) on the synthetic product images. After validating and testing my FOMO model, I deployed it as a Linux (AARCH64) application (.eim) compatible with Raspberry Pi 5.
After building my object detection model successfully and completing my assignments with the shipping workstation digital twin on NVIDIA Omniverse, I started to print all workstation 3D parts to assemble the real-world counterpart.
To create a fully functioning smart shipping workstation with state-of-the-art features, I developed a web application from scratch to manage the MariaDB database server hosted on Raspberry Pi 5, run the Edge Impulse FOMO object detection model, and transfer the detection results along with the modified resulting images. I also developed an Android mobile application operating as the workstation interface and as the proxy between the workstation control panel (based on Arduino Nano Matter) and the web application.
So, this is my project in a nutshell 😃
Please refer to the following tutorial for in-depth explanations of the features, design, and code.






































































Design process, available features, and final results
As my projects have become more intricate due to complex part designs, multiple development board integrations, and various features requiring interconnected networking, I decided to prepare more concise written tutorials and produce more comprehensive demonstration videos showcasing my entire design process, results, and device features from start to finish.
Thus, I highly recommend watching the project demonstration videos below to inspect my design process, the construction of the synthetic data set, and all of the shipping workstation features.
Step 0: A simplified illustration of interconnected networking
As a part of preparing a more visually inclined tutorial, I decided to create a concise illustration of the interconnected networking infrastructure to delineate the complicated data transfer procedures between the different development boards and the complementary web and mobile applications.

Step 1: Testing electronic components and prototyping the device structure
Before proceeding with designing 3D parts, I needed to determine all electrical components required to operate the real-world shipping workstation. Thus, I started to test and prepare electronic components for prototyping the device structure.
#️⃣ Since Arduino Nano Matter is a versatile IoT development board providing state-of-the-art Matter® and Bluetooth® Low Energy (BLE) connectivity thanks to the MGM240SD22VNA wireless module from Silicon Labs, I decided to base the shipping workstation control panel on Nano Matter.
#️⃣ Since I envisioned a fully automated homing sequence for the moving workstation parts, I decided to utilize an IR break-beam sensor (300 mm) and two micro switches (KW10-Z5P).
#️⃣ By utilizing a soldering station for tricky wire connections, I prepared the mentioned components for prototyping.






#️⃣ Since I needed to supply numerous current-demanding electronic components with different operating voltages, I decided to convert my old ATX power supply unit (PSU) into a simple bench power supply by utilizing an ATX adapter board (XH-M229) providing stable 3.3V, 5V, and 12V outputs. For each power output of the adapter board, I soldered wires via the soldering station to attach a DC-barrel-to-wire jack (male) in order to create a production-ready bench power supply.


#️⃣ Since Nano Matter operates at 3.3V while the IR break-beam sensor requires a 5V logic level to generate strong enough signals for motion detection, the sensor cannot be connected directly to Nano Matter. Therefore, I utilized a bi-directional logic level converter to shift the voltage for the connections between the IR sensor and Nano Matter.
#️⃣ Since I planned to design intricate gear mechanisms to control the moving parts of the real-world shipping workstation, I decided to utilize four efficient and powerful Nema 17 (17HS3401) stepper motors, similar to most FDM 3D printers. To connect the Nema 17 stepper motors to Nano Matter securely, I employed four A4988 driver modules.
#️⃣ As a practical shipping workstation feature, I decided to connect a tiny (embedded) thermal printer to Nano Matter to print a shipping receipt for each completed order. I utilized a sticker paper roll to make receipts fastenable to cardboard boxes.
#️⃣ To build a feature-packed and interactive workstation control panel, I also connected an SSD1306 OLED display, three control buttons, and an RGB LED to Nano Matter.
#️⃣ As depicted below, I made all component connections according to available and compatible Arduino Nano Matter pins.
// Connections
// Arduino Nano Matter :
// Nema 17 (17HS3401) Stepper Motor w/ A4988 Driver Module [Motor 1]
// 3.3V ------------------------ VDD
// GND ------------------------ GND
// D2 ------------------------ DIR
// D3 ------------------------ STEP
// Nema 17 (17HS3401) Stepper Motor w/ A4988 Driver Module [Motor 2]
// 3.3V ------------------------ VDD
// GND ------------------------ GND
// D4 ------------------------ DIR
// D5 ------------------------ STEP
// Nema 17 (17HS3401) Stepper Motor w/ A4988 Driver Module [Motor 3]
// 3.3V ------------------------ VDD
// GND ------------------------ GND
// D6 ------------------------ DIR
// D7 ------------------------ STEP
// Nema 17 (17HS3401) Stepper Motor w/ A4988 Driver Module [Motor 4]
// 3.3V ------------------------ VDD
// GND ------------------------ GND
// D8 ------------------------ DIR
// D9 ------------------------ STEP
// Tiny (Embedded) Thermal Printer
// D0/TX1 ------------------------ RX
// D1/RX1 ------------------------ TX
// GND ------------------------ GND
// SSD1306 OLED Display (128x64)
// A4/SDA ------------------------ SDA
// A5/SCL ------------------------ SCL
// Infrared (IR) Break-beam Sensor [Receiver]
// A6 ------------------------ Signal
// Control Button (A)
// A0 ------------------------ +
// Control Button (B)
// A1 ------------------------ +
// Control Button (C)
// A2 ------------------------ +
// Micro Switch with Pulley [First]
// A3 ------------------------ +
// Micro Switch with Pulley [Second]
// A7 ------------------------ +
// 5 mm Common Anode RGB LED
// D10 ------------------------ R
// D11 ------------------------ G
// D12 ------------------------ B






#️⃣ Furthermore, I put Raspberry Pi 5 into its aluminum case, which provides a cooling fan, to secure all cable connections.

Step 1.1: Designing the Wall-E-inspired PCB layout and silkscreen graphics
As I was prototyping the device structure and conceptualizing the workstation features, I pondered how I should design a unique PCB for a smart shipping workstation. Then, I remembered Wall-E's perennial efforts to move and arrange garbage into small packages. Thus, I drew my inspiration from Wall-E while designing this PCB, which runs an automated package-moving operation :)
To simplify the PCB integration and place electronic components precisely while designing complementary 3D parts, I created the Wall-E PCB outline and a snug-fit PCB encasement on Autodesk Fusion 360.





Then, I imported my outline graphic into KiCad 8.0 in the DXF format and designed the Wall-E PCB layout and silkscreen graphics according to the prototype's electronic component connections.






Step 1.2: Soldering and assembling the Wall-E PCB
After completing the Wall-E PCB design, I utilized ELECROW's high-quality PCB manufacturing services. For further inspection, the fabrication files of this PCB are provided below; you can also order it directly from my ELECROW community page.
#️⃣ After receiving my PCBs, I attached all electronic components by utilizing a TS100 soldering iron and the soldering station.
📌 Component assignments on the Wall-E PCB:
A1 (Headers for Arduino Nano Matter)
DR1, DR2, DR3, DR4 (Headers for A4988 Stepper Motor Driver)
Motor1, Motor2, Motor3, Motor4 (Headers for Nema 17 [17HS3401] Stepper Motor)
SSD1306 (Headers for SSD1306 OLED Display)
Thermal1 (Headers for Embedded Thermal Printer)
L1 (Headers for Bi-Directional Logic Level Converter)
IR1 (IR Break-beam Sensor [Receiver])
IR2 (IR Break-beam Sensor [Transmitter])
SW1, SW2 (Micro Switch [KW10-Z5P])
C1, C2, C3 (6x6 Pushbutton)
D1 (5 mm Common Anode RGB LED)
J_5V_1, J_12V_1 (DC Barrel Female Power Jack)
J_5V_2, J_12V_2 (Headers for Power Supply)









After soldering all components to the Wall-E PCB, I tested whether the PCB worked as expected or was susceptible to electrical issues; I did not encounter any problems.


Step 2: Creating a fully functional virtual shipping workstation
Even though I decided to build a virtual shipping workstation to present my reverse digital twin approach as a proof-of-concept, I focused on designing an intricate mechanism manifesting an industrial-level shipping operation, including a professional product transportation system and a custom warehouse management system.
I designed all shipping workstation 3D parts on Autodesk Fusion 360, including custom mechanical parts for moving components.
While designing 3D parts, I utilized some third-party CAD files to obtain accurate measurements and create a precise virtual construction.
Nema 17 (17HS3401) Stepper Motor | Inspect
Raspberry Pi 5 | Inspect
In the following steps, I will explain my design process for each 3D part categorically.
After finalizing all 3D parts, I exported the virtual shipping workstation as a single file in the OBJ format to produce an accurate virtual representation of the shipping workstation.






In accordance with my reverse digital twin approach, I had to make each virtual shipping workstation 3D part's appearance as close as possible to its real-world counterpart, since I needed to construct a synthetic data set and train an object detection model even before printing these 3D parts. Therefore, while designing, I decided on the filament I would utilize for each 3D part. Then, I looked up the material type and color code of each filament so as to assign them to the corresponding 3D parts in Fusion 360.
I selected these PLA filaments for different 3D parts.
🎨 For shipping workstation 3D parts:
- ePLA-HS Grey (#B5B8BE)
- ePLA-Matte Mint Green (#6DA582)
- ePLA-Matte Peach Pink (#F9C0CF)
- ePLA-Matte Deep Black (#2F3231)
- ePLA-Matte Milky White (#F7F5F4)
- ePLA-Matte Light Khaki (#AD9E8D)
- ePLA-Matte Tangerine (#E24C13)
- ePLA-Matte Almond Yellow (#C7D58C)
🎨 For sample product 3D parts:
- eSilk-PLA Lime (#9CE40C)
- eSilk-PLA Silver (#AAA3B5)
- eSilk-PLA Jacinth (#FF8472)
- ePLA-Metal Antique Brass (#CB9E70)
- PLA+ Light Blue (#37B8F5)
- PLA+ Fire Engine Red (#A62E34)


Step 2.1: Designing mechanical 3D components
Since I wanted to design the shipping workstation from the ground up, I decided to create custom mechanical components for the moving workstation parts.
#️⃣ First, I started to work on designing a template for bearings optimized for 5 mm steel balls. In this regard, I was able to create ball bearings in different sizes to swivel mechanical components.
#️⃣ For a simple assembly process, I designed the bearing template in three parts: inner ring, top outer ring, and bottom outer ring. The outer ring (top and bottom) includes M3 screw holes to adjust the bearing tightness easily.
#️⃣ Since I used parameters to define the dimensions and clearances of the bearing template, I was able to create custom bearings in different sizes effortlessly.
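As a minimal sketch of how such parameter-driven sizing can work (the ball count and clearance below are illustrative values, not the template's actual Fusion 360 parameters), the ring diameters follow directly from the ball size and count:

```python
import math

def bearing_dimensions(ball_d=5.0, n_balls=12, clearance=0.25):
    """Derive ring diameters from ball size and count (illustrative parameters).

    The pitch circle circumference must fit every ball plus a small clearance,
    which fixes the pitch diameter; the races then sit one ball diameter apart.
    """
    pitch_c = n_balls * (ball_d + clearance)   # circumference taken up by balls
    pitch_d = pitch_c / math.pi                # diameter of the ball-center circle
    inner_race_d = pitch_d - ball_d            # inner ring contact diameter
    outer_race_d = pitch_d + ball_d            # outer ring contact diameter
    return pitch_d, inner_race_d, outer_race_d

pitch_d, inner_d, outer_d = bearing_dimensions()
print(f"pitch Ø {pitch_d:.2f} mm, races Ø {inner_d:.2f}/{outer_d:.2f} mm")
```

Changing a single parameter (ball count, ball diameter, or clearance) regenerates a consistent bearing, which mirrors how a parametric template scales to different sizes.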








After completing the ball bearing template, I started working on the two rotating platforms for storing and presenting sample products, respectively.
Since I wanted to create an industrial-level shipping workstation and showcase the digital twin capabilities for intricate mechanical components, I decided to design planetary gear mechanisms to rotate the platforms.
To generate gears in different sizes, I utilized the SpurGear add-in script.
After experimenting with virtual planetary gear configurations, I decided to fix the ring gear and employ the Y-shaped planet carrier to attach and rotate the platform face. In this configuration, the sun gear acts as the driver gear, and the carrier output turns with higher torque at a lower speed.
To determine gear ratios and teeth numbers, I applied these equations:
🔢 Variables
- R ➡ Ring gear teeth number
- S ➡ Sun gear teeth number
- P ➡ Planet gear teeth number
- Tr ➡ Ring gear rotation
- Ts ➡ Sun gear rotation
- Ty ➡ Planet carrier (Y-shaped) rotation
🔢 Equations
- R = (2 × P) + S
- (R + S) × Ty = (R × Tr) + (Ts × S)
🔢 Since the ring gear is fixed and I wanted to have a 1/3 gear ratio for Ty/Ts:
- (R + S) × Ty = Ts × S
- Ty = Ts × (S / (R + S))
- R = 96
- S = 48
- P = 24
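These teeth counts can be checked against the equations above with a few lines of Python:

```python
# Sanity check of the planetary gear numbers chosen above
# (fixed ring gear, so Tr = 0 in the rotation equation).
R, S, P = 96, 48, 24

# Geometric constraint from the equations above: R = (2 * P) + S
assert R == (2 * P) + S

# With the ring fixed, (R + S) * Ty = Ts * S, so the carrier-to-sun ratio is:
ratio = S / (R + S)
print(ratio)  # 0.333... -> the intended 1/3 reduction (Ty / Ts)
```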

#️⃣ According to the fixed ring planetary gear configuration, I designed the platforms with the embedded ring gear.
⚙️ First platform:




⚙️ Second platform:




#️⃣ Then, I designed the planet gears, the sun gear, and the secondary stepper motor gear. In this regard, the Nema 17 stepper motor attached to the platform drives the secondary gear to rotate the sun gear, which in turn drives the planet gears.
⚙️ First platform:





⚙️ Second platform:





#️⃣ After completing the planetary gear mechanisms, I designed the Y-shaped planet carrier connected to the planet gears via custom bearings.
#️⃣ To stabilize torque distribution, the sun gear and the planet carrier are connected to the central shaft of the platform via custom bearings.
#️⃣ Then, I designed platform faces attached to the Y-shaped planet carriers via snap-fit joints.
#️⃣ Since the first rotating platform stores sample products and the second rotating platform presents the transported product, I designed face separators and rotation pins accordingly.
#️⃣ The rotation pins are tailored for the selected platform homing methods — IR break-beam sensor and micro switch.
⚙️ First platform:











⚙️ Second platform:











Step 2.2: Designing product transportation mechanism
After completing both rotating platform systems, I started to work on the industrial-level transportation mechanism to move the selected sample product from the first platform to the second platform.
In keeping with my mechanical part design principle for the moving parts of the shipping workstation, I utilized gears to move the carrier on the transportation road. However, since the product transportation mechanism requires linear motion, I designed a rack-and-pinion system converting rotational motion into linear motion.
#️⃣ First, I designed the transportation road, bridging the first platform with the second platform. I integrated two linear gears (racks) at the bottom of the transportation road.


#️⃣ Then, I designed pinions, the pinion connection pin, and the stepper motor direction gear.








#️⃣ After completing the rack and pinion system, I designed the transportation carrier. I employed custom bearings to connect the carrier, pinions, and the pinion connection pin to enable linear motion while maintaining a stable torque distribution.
#️⃣ Then, I designed a basic carrier arm to hold the sample product still while pulling and pushing it on the transportation road.












#️⃣ In this arrangement, the first Nema 17 stepper motor attached to the carrier drives the pinions, and the second one drives the carrier arm.
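For a rack-and-pinion drive like this, the linear carrier travel per motor step follows directly from the pinion circumference. The pinion diameter and microstepping factor below are assumptions for illustration, not the workstation's measured values:

```python
import math

def travel_per_step(pinion_d_mm=20.0, steps_per_rev=200, microstep=1):
    """Linear travel per motor step for a rack-and-pinion drive.

    The pinion diameter and microstepping factor are placeholder values.
    """
    circumference = math.pi * pinion_d_mm            # mm advanced per revolution
    return circumference / (steps_per_rev * microstep)

def steps_for_distance(distance_mm, **kwargs):
    """Motor steps needed to move the carrier a given distance along the rack."""
    return round(distance_mm / travel_per_step(**kwargs))

print(steps_for_distance(100))  # 318 steps for a 100 mm move with these assumptions
```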







Step 2.3: Designing complementary accessories
After completing the mechanical 3D parts for the moving workstation components, I started to design complementary accessories, including the platform roofs for the first and second platforms.
#️⃣ Since the first platform utilizes the IR break-beam sensor as its homing method to keep the 200-step-per-rotation stepper pattern synchronized for precise 60° turns, I designed the first platform roof to be compatible with the IR sensor receiver and transmitter.
#️⃣ Then, I designed add-ons for Raspberry Pi 5 and a USB webcam since the first platform stores the sample products for automated selection and transportation process.
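The 60° indexing mentioned above can be sketched numerically. The drive_ratio below is a placeholder for the overall motor-to-platform reduction (it matches the 1/3 planetary ratio while ignoring the secondary motor gear), not the workstation's exact gearing:

```python
def steps_per_index(angle_deg=60.0, steps_per_rev=200, drive_ratio=3.0):
    """Motor steps for one platform index turn.

    drive_ratio is the assumed motor-to-platform reduction; treat it as a
    placeholder rather than the workstation's measured value.
    """
    return round(steps_per_rev * drive_ratio * angle_deg / 360.0)

print(steps_per_index())  # 100 motor steps per 60-degree index with these assumptions
```

The IR break-beam homing then only needs to re-zero the step count once per revolution to keep the indexing from drifting.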










#️⃣ Since the second platform employs the micro switch as the homing method to align the face separator toward the transportation road, I designed the second platform roof compatible with the micro switch.
#️⃣ Then, I designed the add-on for the thermal printer and the mount for the PCB encasement since the second platform exhibits the selected and transported product.











Step 2.4: Designing customized sample products
After completing the shipping workstation 3D parts, I focused on designing unique sample products since I wanted to examine the precision and efficiency of an object detection model trained on synthetic images of products that do not exist in the market.
In this regard, I was able to investigate whether it is feasible and cost-effective to start developing an AI-based solution for industrial operations with a synthetic data set generated from virtual product representations, even before manufacturing or mass-producing them.
#️⃣ Compatible with the platform face separators, I designed multipart enamel pin-inspired 3D models as virtual sample products representing these objects:
- Wrench
- Mouse
- Basketball
- Teacup
- Hammer
- Screwdriver





Step 3.0: Setting up the NVIDIA Omniverse Launcher
NVIDIA Omniverse™ is a versatile and developer-friendly platform integrating OpenUSD (Universal Scene Description) and NVIDIA RTX™ rendering technologies into existing software tools and simulation workflows with officially supported APIs, SDKs, and services. In this regard, NVIDIA Omniverse provides all the necessary building tools to envision and realize large-scale and AI-enabled virtual worlds.
Since NVIDIA Omniverse is a platform optimized for industrial digitalization and physical AI simulation and provides lots of easy-to-use tools for 3D world (environment) modeling, I decided to capitalize on its enhanced simulation and rendering features while building my shipping workstation digital twin. As NVIDIA states, various enterprises employ Omniverse's state-of-the-art services to develop digital twins as testing grounds to design, simulate, operate, and optimize their products and production facilities.
Even though NVIDIA Omniverse provides developers with the NVIDIA Omniverse Kit SDK to build OpenUSD-native applications and extensions for specific tasks, I decided to utilize the Omniverse Launcher as a single-user workstation, which gives access to all Omniverse services required to build my physically accurate shipping workstation digital twin.
#️⃣ First, install the Omniverse Launcher here.
#️⃣ Then, create an NVIDIA account and confirm the license agreement to initiate the single-user workstation.



#️⃣ Assign paths to store the necessary Omniverse Launcher information locally.

#️⃣ Since the Omniverse Launcher requires a Nucleus Collaboration Server to access all available apps, services, and assets, create a local Nucleus server and its administration account.





#️⃣ After establishing the local Nucleus server (service), the Launcher shows all available applications, services, connectors, and content on the Exchange tab.

Step 3: Forming the shipping workstation digital twin on NVIDIA Omniverse USD Composer
The Omniverse USD Composer is an application built on the Omniverse Kit, providing advanced layout tools and simulation capabilities, including but not limited to the NVIDIA RTX™ Renderer and physics extensions, for generating visually compelling and physically accurate worlds.
Since the USD Composer allows developers to import existing assets (designs) and render large-scale scenes with user-friendly simulation tools, I decided to set up the USD Composer on the Omniverse Launcher to build my shipping workstation digital twin.


After installing the USD Composer, I started to work on producing a realistic scenery for industrial-level shipping operations.
Conveniently, NVIDIA Omniverse provides built-in asset (3D model) and material libraries for various use cases. The USD Composer also includes the Asset Store, displaying all available high-quality 3D models from diverse third-party content libraries.
#️⃣ First, I scrutinized all available assets provided by Omniverse (default) and Sketchfab (free Creative Commons-licensed) to produce a suitable scenery, including a close replica of my standing desk.














#️⃣ Then, I designed some custom assets with the integrated Omniverse tools to finalize my shipping warehouse scenery.




#️⃣ After completing my shipping warehouse scenery, I imported the virtual shipping workstation in the OBJ format.
#️⃣ Since the USD Composer can automatically detect and assign Fusion 360 material, color, and texture configurations during import, it rendered the virtual shipping workstation as a flawless digital twin.







#️⃣ To move the first rotating platform and sample products as a single object via the physics extension, I tried to group all associated models under a new Xform. However, this was not possible since these models were references to the original OBJ file.
#️⃣ To solve this issue, I saved the Omniverse stage again by utilizing the Save Flattened As option to merge all 3D models. Then, I was able to modify and group the associated models easily.







#️⃣ After producing the shipping workstation digital twin, I created a few cameras to survey the virtual workstation and capture synthetic sample product images effortlessly.












Step 4: Constructing a synthetic data set of customized sample products via NVIDIA Omniverse
#️⃣ After preparing the shipping workstation digital twin for synthetic data collection, I experimented with camera, lighting, and rendering configurations to create optimal conditions.
#️⃣ Then, I used the built-in Capture Screenshot (F10) feature with the Capture only the 3D viewport option activated to construct my synthetic data set of unique sample products in various poses.























🖼️ Synthetic data samples:













Step 5: Setting up LAMP web server, Edge Impulse CLI, and Linux Python SDK on Raspberry Pi 5
After constructing my synthetic data set, I planned to build my object detection model before proceeding with the real-world shipping workstation preparations. However, while trying to upload my synthetic samples generated by the NVIDIA Omniverse USD Composer, I noticed that the Edge Impulse data uploader refused most of them, tagging them as duplicates. I even attempted to upload six individual samples for each product; nonetheless, the issue persisted. Thus, I decided to set up Raspberry Pi 5 earlier than planned to perform the tasks required by the real-world shipping workstation and upload the samples directly.
#️⃣ First, I installed the Raspberry Pi 5-compatible operating system image on a microSD card and initiated Raspberry Pi 5.
❗⚡ Note: While testing peripherals, I encountered under-voltage issues and purchased the official Raspberry Pi 5 27W USB-C power supply.



#️⃣ After initiating Raspberry Pi 5 successfully, I set up an Apache web server with a MariaDB database. I also installed the PHP MySQL and cURL packages to host the workstation web application and enable its features.
sudo apt-get install apache2 php mariadb-server php-mysql php-curl -y


#️⃣ To utilize the MariaDB database, I created a new user and followed the secure installation prompt.
sudo mysql_secure_installation
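For illustration only, a table like the one below could back the order records managed by the web application; the table and column names here are hypothetical and do not reflect the actual schema covered in the full tutorial:

```python
# Hypothetical schema sketch: the table and column names are illustrative only,
# not the workstation web application's actual database structure.
TABLE_SQL = """
CREATE TABLE IF NOT EXISTS sample_orders (
  id INT AUTO_INCREMENT PRIMARY KEY,
  product_label VARCHAR(32) NOT NULL,       -- detected class, e.g. 'wrench'
  detection_score FLOAT NOT NULL,           -- FOMO confidence value
  order_date DATETIME DEFAULT CURRENT_TIMESTAMP
)
""".strip()

print(TABLE_SQL)
```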

#️⃣ After setting up the LAMP web server, I installed the Edge Impulse CLI by following the official instructions for Raspbian OS.
#️⃣ First, I downloaded the latest Node.js version since versions older than 20.x may lead to installation issues or runtime errors.
curl -sL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
node -v
#️⃣ Then, I installed the available CLI tools.
npm install -g edge-impulse-cli

#️⃣ After setting up the Edge Impulse CLI, I installed the Edge Impulse Linux Python SDK to run Edge Impulse machine learning models via Python.
❗ If you are not running a virtual environment on Pi 5, the system may throw an error while trying to install packages via pip. To work around this, you can add the --break-system-packages flag.
sudo apt-get install libatlas-base-dev libportaudio2 libportaudiocpp0 portaudio19-dev python3-pip
sudo pip3 install pyaudio edge_impulse_linux --break-system-packages
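Once the SDK is installed, an exported .eim model can be loaded with the SDK's ImageImpulseRunner and queried via classify(). The helper below only post-processes a classify() result dictionary (shaped like the Linux SDK's bounding-box output); the 0.60 threshold and the mocked sample result are illustrative choices:

```python
# Post-processing sketch for Edge Impulse Linux SDK object detection results.
# classify() on a FOMO .eim model returns a dict shaped like `sample` below.

def extract_detections(result, threshold=0.60):
    """Return (label, confidence) pairs above the confidence threshold."""
    boxes = result.get("result", {}).get("bounding_boxes", [])
    return [(b["label"], b["value"]) for b in boxes if b["value"] >= threshold]

# Minimal mocked classify() output for demonstration:
sample = {"result": {"bounding_boxes": [
    {"label": "wrench", "value": 0.91, "x": 96, "y": 64, "width": 16, "height": 16},
    {"label": "hammer", "value": 0.42, "x": 10, "y": 20, "width": 16, "height": 16},
]}}

print(extract_detections(sample))  # [('wrench', 0.91)]
```

On the workstation, a helper like this would sit between the model runner and the web application, forwarding only confident detections.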


Step 6: Building an object detection model (FOMO) w/ Edge Impulse Enterprise
Since Edge Impulse provides developer-friendly tools for advanced AI applications and supports almost every development board due to its model deployment options, I decided to utilize Edge Impulse Enterprise to build my object detection model. Also, Edge Impulse Enterprise incorporates elaborate model architectures for advanced computer vision applications and optimizes the state-of-the-art vision models for edge devices and single-board computers such as Raspberry Pi 5.
Among the diverse machine learning algorithms provided by Edge Impulse, I decided to employ FOMO (Faster Objects, More Objects) since it is a novel algorithm optimized for highly constrained devices, using a clever heat-map-to-bounding-box technique.
While labeling my synthetic image samples, I simply applied the names of the represented real-world objects:
- wrench
- mouse
- basketball
- tea_cup
- hammer
- screwdriver
Conveniently, Edge Impulse Enterprise provides developers with advanced tools to build, optimize, and deploy each available machine learning algorithm as supported firmware for nearly any device imaginable. Therefore, after training and validation, I was able to deploy my FOMO model as a Linux (AARCH64) application (.eim) compatible with Raspberry Pi 5.
You can inspect my object detection model (FOMO) on Edge Impulse as a public project.
Step 6.1: Uploading and labeling training and testing images (samples)
#️⃣ To utilize the advanced AI tools provided by Edge Impulse, register here and create a new project.

As mentioned earlier, the Edge Impulse data uploader refused most of the synthetic image samples generated by the Omniverse USD Composer. Thus, I set up the Edge Impulse CLI to upload my synthetic data set directly from Raspberry Pi 5 to my Edge Impulse project.
Since the Edge Impulse CLI allows developers to override duplicate sample detection, I was able to upload my entire synthetic data set as training and testing samples without any problems.
❗ Use --category to choose the data category (training or testing) and add --allow-duplicates to override duplicate detection.
cd Projects/project_omniverse/omniverse_data_set
edge-impulse-uploader *.png --allow-duplicates
edge-impulse-uploader --category testing *.png --allow-duplicates




#️⃣ To employ the bounding box labeling tool for object detection models, go to Dashboard ➡ Project info ➡ Labeling method and select Bounding boxes (object detection).

After uploading my synthetic data set of unique sample products and activating the bounding box labeling tool, I started to draw bounding boxes around the target objects for each image sample.
#️⃣ Go to Data acquisition ➡ Labeling queue to access all unlabeled items (training and testing) remaining in the given data set.
#️⃣ After drawing bounding boxes around target objects, click the Save labels button to label an image sample. Then, repeat this process until all samples have at least one labeled target object.






























Step 6.2: Training the FOMO model on synthetic sample product images
An impulse is a custom machine learning pipeline processed and optimized by Edge Impulse. I created my impulse with the Image processing block and the Object Detection (Images) learning block.
The Image processing block optionally converts the input image to grayscale or RGB and generates a feature array from the raw image.
The Object Detection (Images) learning block provides the available machine learning algorithms for performing object detection.
#️⃣ Go to the Create impulse page, set the image dimensions to 320, select the Fit shortest axis resize mode to scale (resize) the given image samples precisely, and click Save Impulse.
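The Fit shortest axis mode scales each image so its shorter side matches the target dimension, then center-crops the longer side to a square. A quick sketch of the resulting geometry (the helper name and return shape are mine, not part of the Edge Impulse API):

```python
def fit_shortest_axis(width, height, target=320):
    """Scale so the shortest side equals `target`, then compute the
    center-crop offsets that trim the longer side to target x target."""
    scale = target / min(width, height)
    new_w, new_h = round(width * scale), round(height * scale)
    # offsets of the crop window inside the scaled image
    off_x = (new_w - target) // 2
    off_y = (new_h - target) // 2
    return (new_w, new_h), (off_x, off_y)
```

For example, a 1920x1080 render scales to 569x320 and loses 124 px from each horizontal edge.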

#️⃣ To convert the raw features into the applicable format, go to the Image page, set the Color depth parameter to RGB, and click Save parameters.

#️⃣ Then, click Generate features to apply the Image processing block to training image samples.
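With RGB color depth, the generated raw features pack each pixel into a single 0xRRGGBB value, as displayed in the studio's raw features view. A minimal sketch of that packing, assuming 8-bit channels (the helper name is mine):

```python
def pixels_to_features(pixels):
    """Pack (R, G, B) tuples into Edge Impulse's raw image feature
    format: one 0xRRGGBB value per pixel."""
    return [(r << 16) + (g << 8) + b for (r, g, b) in pixels]
```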



#️⃣ After generating features successfully, navigate to the Object detection page and click Start training.
After prolonged experimentation, I modified the neural network settings and architecture to achieve reliable accuracy and validity:
📌 Neural network settings:
- Number of training cycles ➡ 75
- Learning rate ➡ 0.010
- Validation set size ➡ 3%
📌 Neural network architecture:
- FOMO (Faster Objects, More Objects) MobileNetV2 0.35
After training with the given configurations, Edge Impulse reported an F1 score of 73.7%, partly due to the modest size of the 3% validation set.
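As a reminder, the F1 score is the harmonic mean of precision and recall, which is why a tiny validation set can swing it noticeably. A one-liner for the formula:

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```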



#️⃣ Since I decided to experiment with different model and simulation (render) configurations consecutively, I maintained two versions of the same model to reach the results I wanted faster.

Step 6.3: Evaluating the model accuracy and deploying the validated model
By running the given testing samples, Edge Impulse evaluated the model accuracy (precision) as 93.10%.
#️⃣ To validate the trained model, go to the Model testing page and click Classify all.



Then, I deployed the validated model as a fully optimized and customizable Linux (AARCH64) application (.eim).
#️⃣ Navigate to the Deployment page and search for Linux (AARCH64).
#️⃣ Choose the Quantized (int8) optimization option to get the optimum performance while running the deployed model.
#️⃣ Finally, click Build to download the model as a Linux (AARCH64) application (.eim) compatible with Raspberry Pi 5.
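Once deployed, the .eim application is typically executed via the Edge Impulse Linux Python SDK, whose runner returns detections under `result["result"]["bounding_boxes"]`; since FOMO reports small constant-size boxes centered on each object, those boxes are usually reduced to centroids. A hedged helper for that step, assuming the SDK's result shape (the threshold and the sample labels below are placeholders, not my project's class names):

```python
def detections_to_centroids(result, threshold=0.6):
    """Extract (label, confidence, center_x, center_y) tuples from an
    Edge Impulse runner result dict; FOMO boxes mark object centroids."""
    centroids = []
    for box in result.get("result", {}).get("bounding_boxes", []):
        if box["value"] < threshold:
            continue  # discard low-confidence detections
        cx = box["x"] + box["width"] // 2
        cy = box["y"] + box["height"] // 2
        centroids.append((box["label"], box["value"], cx, cy))
    return centroids
```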



Step 7: Printing and assembling 3D parts of the virtual shipping workstation to build its real-world counterpart
After concluding my assignments with the shipping workstation digital twin on NVIDIA Omniverse USD Composer, I started to work on building its real-world counterpart.
#️⃣ First, on Autodesk Fusion 360, I exported all virtual shipping workstation 3D parts in the STL format individually.
#️⃣ Then, I sliced the exported parts in PrusaSlicer, which provides lots of groundbreaking features such as paint-on supports and height range modifiers.
#️⃣ Due to the varying part dimensions, I needed to utilize my Anycubic Kobra 2 and Kobra 2 Max 3D printers simultaneously while printing parts. Thus, I applied the appropriate slicer settings for each printer.
⚙️ Platforms:










⚙️ Gears:




⚙️ Bearings:




⚙️ Transportation mechanism:






⚙️ Accessories:




⚙️ Sample products:














As mentioned earlier, I assigned PLA filament attributes to each virtual 3D part. I utilized the same PLA filaments to print their real-world counterparts.






After printing all 3D parts successfully, I started to work on assembling the real-world shipping workstation.



#️⃣ First, I assembled all custom ball bearings.
#️⃣ To assemble one of my custom bearings, place the required number of 5 mm steel balls between the inner ring and the bottom outer ring.
#️⃣ Then, cap the placed steel balls with the top outer ring and utilize M3 screws to adjust the bearing tightness.












❗ Although all related 3D parts can be affixed via M3 screws after printing, plastic threads tend to loosen or break over time due to friction and abrasion. Thus, I employed a technique well known from injection molding to make some connections sturdier — M3 brass threaded inserts, heat-set into the printed parts.



#️⃣ For each rotating platform, I fastened the required Nema 17 stepper motor and assembled the planetary gear mechanism consisting of a sun gear, three planet gears, a secondary stepper motor gear, and a Y-shaped planet carrier.
#️⃣ As explained earlier, I employed custom bearings to connect the swiveling components and maintain stable torque distribution.
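When driving such a geared platform with a stepper motor, the motor's step angle and the overall gear reduction determine how many steps one platform rotation takes. A generic sketch, assuming a standard 1.8° (200 full steps per revolution) NEMA 17; the microstepping and gear-ratio defaults are illustrative placeholders, not the workstation's actual drivetrain specs:

```python
def degrees_to_steps(angle_deg, steps_per_rev=200, microsteps=16, gear_ratio=1.0):
    """Convert a platform rotation angle to a stepper step count.
    steps_per_rev=200 matches a typical 1.8-degree NEMA 17 motor;
    microsteps and gear_ratio depend on the driver and gear train."""
    return round(angle_deg / 360 * steps_per_rev * microsteps * gear_ratio)
```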




