Tinker and transform gobot robot file
5/21/2023

Many ROS packages require the transform tree of a robot to be published using the tf software library. At an abstract level, a transform tree defines offsets in terms of both translation and rotation between different coordinate frames. To make this more concrete, consider the example of a simple robot that has a mobile base with a single laser mounted on top of it.

In referring to the robot, let's define two coordinate frames: one corresponding to the center point of the base of the robot and one for the center point of the laser that is mounted on top of the base. Let's also give them names for easy reference. We'll call the coordinate frame attached to the mobile base "base_link" (for navigation, it's important that this be placed at the rotational center of the robot) and we'll call the coordinate frame attached to the laser "base_laser." For frame naming conventions, see REP 105.

At this point, let's assume that we have some data from the laser in the form of distances from the laser's center point. In other words, we have some data in the "base_laser" coordinate frame. Now suppose we want to take this data and use it to help the mobile base avoid obstacles in the world. To do this successfully, we need a way of transforming the laser scan we've received from the "base_laser" frame to the "base_link" frame. In essence, we need to define a relationship between the "base_laser" and "base_link" coordinate frames.

In defining this relationship, assume we know that the laser is mounted 10cm forward and 20cm above the center point of the mobile base. This gives us a translational offset that relates the "base_link" frame to the "base_laser" frame. Specifically, we know that to get data from the "base_link" frame to the "base_laser" frame we must apply a translation of (x: 0.1m, y: 0.0m, z: 0.2m), and to get data from the "base_laser" frame to the "base_link" frame we must apply the opposite translation (x: -0.1m, y: 0.0m, z: -0.2m).

We could choose to manage this relationship ourselves, meaning storing and applying the appropriate translations between the frames when necessary, but this becomes a real pain as the number of coordinate frames increases. Luckily, however, we don't have to do this work ourselves. Instead, we'll define the relationship between "base_link" and "base_laser" once using tf and let it manage the transformation between the two coordinate frames for us.

To define and store the relationship between the "base_link" and "base_laser" frames using tf, we need to add them to a transform tree. Conceptually, each node in the transform tree corresponds to a coordinate frame and each edge corresponds to the transform that needs to be applied to move from the current node to its child. Tf uses a tree structure to guarantee that there is only a single traversal that links any two coordinate frames together, and assumes that all edges in the tree are directed from parent to child nodes.

To create a transform tree for our simple example, we'll create two nodes, one for the "base_link" coordinate frame and one for the "base_laser" coordinate frame. To create the edge between them, we first need to decide which node will be the parent and which will be the child. Remember, this distinction is important because tf assumes that all transforms move from parent to child. Let's choose the "base_link" coordinate frame as the parent because, as other pieces/sensors are added to the robot, it will make the most sense for them to relate to the "base_laser" frame by traversing through the "base_link" frame. This means the transform associated with the edge connecting "base_link" and "base_laser" should be (x: 0.1m, y: 0.0m, z: 0.2m). With this transform tree set up, converting the laser scan received in the "base_laser" frame to the "base_link" frame is as simple as making a call to the tf library. Our robot can use this information to reason about laser scans in the "base_link" frame and safely plan around obstacles in its environment. Now we've got to take the transform tree and create it with code.
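To see why managing these offsets by hand becomes a pain, here is a minimal sketch in plain Python (no ROS required) of the bookkeeping tf spares us: storing the single translational offset between the two frames and applying it, or its opposite, to move data between them. The function and constant names here are illustrative, not part of the tf API, and this toy version ignores rotation and time, which real tf handles as well.

```python
# Illustrative sketch only -- not the tf API. Stores the one translational
# offset from the example and applies it (or its opposite) to 3D points.

def apply_translation(point, offset):
    """Apply an (x, y, z) translation to an (x, y, z) point, in meters."""
    return tuple(p + o for p, o in zip(point, offset))

# The laser is mounted 10cm forward and 20cm above the base's center point.
BASE_LINK_TO_BASE_LASER = (0.1, 0.0, 0.2)
# The opposite translation, as described in the text: (-0.1, 0.0, -0.2).
BASE_LASER_TO_BASE_LINK = tuple(-o for o in BASE_LINK_TO_BASE_LASER)

point = (0.3, 0.0, 0.0)  # some point, expressed as (x, y, z) in meters

# Applying one translation and then its opposite round-trips the data,
# which is why the relationship only needs to be defined once per edge.
there = apply_translation(point, BASE_LINK_TO_BASE_LASER)   # -> (0.4, 0.0, 0.2)
back = apply_translation(there, BASE_LASER_TO_BASE_LINK)
assert all(abs(a - b) < 1e-9 for a, b in zip(back, point))
```

With only two frames this is manageable, but every new sensor would add another pair of offsets to store, invert, and chain correctly, which is exactly the bookkeeping tf's transform tree takes over.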