Unity is hosting a workshop at ROSCon 2021 to showcase robotics simulation using Unity and ROS2
Many industrial scenarios, such as order fulfillment in warehouses and distribution centers, require multiple robots to operate in a coordinated fashion. Instead of relying on one complex humanoid robot to perform every task, teams of specialized robots are used. Advances in ROS2 have made it easier to coordinate complex multi-robot systems, but testing those systems in simulation has remained challenging.
In this workshop, you’ll learn how to use the Unity engine and ROS2 to simulate multiple mobile robots that must perform object pose estimation in order to complete a “find-and-ferry” task in a dynamic warehouse environment.
This will be a full-day workshop open to ROSCon attendees on October 20th, 2021 in New Orleans, LA.
The workshop will be broken into four phases.
The workshop will begin with an introduction to the Unity Editor. Attendees will then download all required packages, set up the initial Unity scene, and configure the ROS-TCP-Connector and ROS-TCP-Endpoint to establish communication between ROS2 and Unity. Once communication is verified, attendees will bring a robot into the scene with the URDF Importer.
- How to create a Unity Scene
- How to use the Package Manager to download and install Unity packages
- How to manipulate objects in the scene
- How to set up ROS-TCP communication
- How to use the URDF Importer to import a robot
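The ROS-TCP-Endpoint listens on a plain TCP socket (port 10000 by default), so one quick sanity check before wiring up Unity is a simple connectivity probe from the machine running the editor. This is only a sketch; the `endpoint_reachable` helper and its defaults are our own, not part of either package.

```python
import socket

def endpoint_reachable(host: str = "127.0.0.1", port: int = 10000,
                       timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the ROS-TCP-Endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("endpoint reachable:", endpoint_reachable())
```

If this returns False, the usual suspects are the endpoint not running yet, a firewall, or a mismatched IP/port between the endpoint launch configuration and the ROS Settings in the Unity Editor.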
With the scene and robot loaded, the next phase of the workshop focuses on autonomous navigation. Attendees will learn how to attach various sensors to the robot. A Docker container running ROS2 will run the navigation applications, consuming the sensor data generated from the Unity scene. By the end of this phase, the robot will be able to drive autonomously from point A to point B.
- How to add sensors to a robot
- How to set up Nav2 publishers and subscribers
- How to set up message visualizers
- How to run the Nav2 simulation in Unity
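A Nav2 navigation goal is a `geometry_msgs/PoseStamped`, whose orientation field is a quaternion rather than a plain heading angle, so sending a "go to point B facing this way" goal involves a small conversion. A minimal sketch (the function name is ours; for a planar robot only the z and w components are non-zero):

```python
import math

def yaw_to_quaternion(yaw: float) -> tuple:
    """Convert a planar heading (radians) to an (x, y, z, w) quaternion."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

# Example: a goal oriented 90 degrees to the left (facing +y).
qx, qy, qz, qw = yaw_to_quaternion(math.pi / 2)
```

In an rclpy node, these four values would populate `goal.pose.orientation` before the goal is sent to Nav2's `NavigateToPose` action server.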
Now that a single robot (the “FindBot”) can move around the Unity scene autonomously, you will bring in a second robot (the “FerryBot”), which is equipped with a manipulator. This robot will be responsible for executing the pick-and-place task.
- How to add a manipulator to a mobile robot
- How to set up and run the Nav2 simulation with two robots simultaneously
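Running two robots in a single ROS2 graph typically means giving each one its own namespace so that their topics and Nav2 action servers do not collide. The sketch below assumes the namespaces `findbot` and `ferrybot` (our own naming, not from the workshop materials) and shows how namespaced names are derived:

```python
def namespaced(ns: str, name: str) -> str:
    """Prefix a topic or action name with a robot namespace."""
    return f"/{ns.strip('/')}/{name.strip('/')}"

# Each robot gets its own velocity topic and navigation action.
robots = ["findbot", "ferrybot"]
endpoints = {ns: (namespaced(ns, "cmd_vel"),
                  namespaced(ns, "navigate_to_pose"))
             for ns in robots}
```

In practice, with rclpy the same effect is usually achieved by passing `namespace=` when creating or launching each node, rather than hand-building topic strings.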
Up to this point, the FindBot has identified the pose of the target object by magic. How would a real robot figure out the pose? In the final phase of this workshop, attendees will go through the process of using the Unity Perception Package to collect synthetic data for training a computer vision model that performs object pose estimation. We will cover the steps and setup for training a machine learning model on this data, but for the sake of time, we will also provide all attendees with a pre-trained model. Finally, attendees will learn how to put all the pieces together: deploy the trained pose estimation model in a ROS2 node and simulate a fully autonomous find-and-ferry task, in which the robotic arm picks up the object and places it on the FerryBot for ferrying to the desired location.
- How to use the Unity Perception Package for data collection
- How to train a computer vision model
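When evaluating a trained pose estimation model, predictions are typically scored with two numbers: a translation error (Euclidean distance between predicted and ground-truth positions) and a rotation error (the angle between the predicted and ground-truth orientation quaternions). A minimal sketch with our own function names, not tied to any particular framework:

```python
import math

def translation_error(t_pred, t_true):
    """Euclidean distance between predicted and ground-truth positions."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(t_pred, t_true)))

def rotation_error(q_pred, q_true):
    """Angle in radians between two unit quaternions, given as (x, y, z, w).

    The absolute value of the dot product handles the double cover:
    q and -q represent the same rotation.
    """
    dot = abs(sum(p * t for p, t in zip(q_pred, q_true)))
    return 2.0 * math.acos(min(1.0, dot))

# Example: identical poses give zero error on both metrics.
err_t = translation_error((0.1, 0.2, 0.3), (0.1, 0.2, 0.3))
err_r = rotation_error((0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0, 1.0))
```

Tracking these two metrics on a held-out slice of the synthetic dataset gives a quick read on whether the model is accurate enough for the pick-and-place step before deploying it to the ROS2 node.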