The WorldSkills Kazan 2019 test project for Skill 63 - Robot System Integration was about the automated assembly of Russian Matryoshka dolls.
The Basic task consisted of a simple Pick & Place with deburring of the top parts. The main difficulty was layout planning, because of the large number of work stations: I had to plan it carefully to minimize wasted movements.
For the deburring, I suppose the intended movement was a circular motion. Instead, I decided to do a one-point-rotation: this way I don't have to calculate the coordinates of the circle, and the deburring result is homogeneous.
The Extended tasks, all three of them, were about computer vision: position acquisition, assembly inspection, and part alignment.
All of these already act as a kind of error handling, so the HMI error menus were not that important; a user-friendly HMI itself, however, was still required.
These tasks are quite complex to do in "one shot", but even this was not the end of the project. There was no calibration grid for the camera. Instead, you had a "Target" that had to be mounted directly on the robot gripper to create the grid. The iRVision package includes utilities for grid generation with the target. During this process the robot takes many snaps of the target, on two planes, within an area you define. This method has the advantage of high precision (if appropriate lighting is provided). On the other hand, you have to be careful with the movements, because you cannot predict the robot's next position, and a wrong move could break part of the system.
Documentation is also one of the main goals: it had to be as comprehensible and as graphical as possible at the same time. The operator should be able to find what they need immediately and fix everything in case of system or user errors.
Lastly, the Digital Twin is the first and the last thing the integrator MUST do; it is essential for the integrator's performance. The simulation should match reality as closely as possible: it shows whether the layout is appropriate and the positions are reachable, lets you test the cycle time, and allows the integrator to make any change to the robot far faster than in reality.
Grid creation by a robot with a target mounted on the tool offers several advantages for camera calibration in robotics and computer vision applications, the main one being the high precision already mentioned above.
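Just to give a feel for what the target-based grid generation involves, here is a minimal Python sketch that lists the snap positions a robot would visit over two parallel planes. The area size, grid pitch and plane spacing are assumptions for illustration only, not the actual iRVision routine.

```python
# Illustrative sketch only: generate snap positions on two parallel planes,
# similar in spirit to what the iRVision target-based grid generation does
# automatically. All dimensions are assumed values, not competition data.
from itertools import product

def grid_snap_positions(x_range, y_range, step, plane_z_levels):
    """Return (x, y, z) points covering the calibration area on each plane."""
    positions = []
    for z in plane_z_levels:
        xs = [x_range[0] + i * step for i in range(int((x_range[1] - x_range[0]) / step) + 1)]
        ys = [y_range[0] + j * step for j in range(int((y_range[1] - y_range[0]) / step) + 1)]
        positions.extend((x, y, z) for x, y in product(xs, ys))
    return positions

if __name__ == "__main__":
    # Assumed 200 x 200 mm area, 50 mm pitch, two planes 100 mm apart.
    pts = grid_snap_positions((0, 200), (0, 200), 50, plane_z_levels=(0, 100))
    print(f"{len(pts)} snap positions")  # 2 planes x 5 x 5 = 50 points
```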
Position acquisition is a common task in line tracking and bin picking. It allows the upstream machines to deliver parts with less precision, leaving the "cleaning" work to the robot.
For this application I decided to snap the tray on every iteration and search for one part at a time. This solution slightly reduces the Time-To-Find and eliminates queue scheduling.
From one application to another, the teaching method may vary between searching for the perimeter or the area of the part, depending on lighting conditions. In the simulation, the "perfect world", the perimeter was enough. But there is a high probability that parts lie too close to each other, so the sides of the part should be checked for free space (shown in the image). This is another reason why I search for one part at a time: during the pick operation the gripper fingers might move another doll top and, as a consequence, change its actual position.
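Below is a rough Python sketch of that clearance check. The part radius, the finger clearance and the detected coordinates are made-up numbers; in the real cell the detections come from iRVision.

```python
import math

# Hypothetical values: part footprint and space needed for the gripper fingers.
PART_RADIUS_MM = 25.0
FINGER_CLEARANCE_MM = 15.0

def has_free_space(candidate, detections):
    """True if no other detected part sits inside the gripper clearance zone."""
    for other in detections:
        if other is candidate:
            continue
        dist = math.hypot(other[0] - candidate[0], other[1] - candidate[1])
        if dist < 2 * PART_RADIUS_MM + FINGER_CLEARANCE_MM:
            return False
    return True

def pick_one(detections):
    """Snap -> pick exactly one part whose sides are free, then re-snap next cycle."""
    for part in detections:
        if has_free_space(part, detections):
            return part          # send this position to the robot
    return None                  # nothing pickable: raise an HMI message instead

if __name__ == "__main__":
    found = [(100.0, 120.0), (140.0, 125.0), (320.0, 200.0)]  # (x, y) in mm
    print(pick_one(found))       # -> (320.0, 200.0), the only isolated part
```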
In the competition, every cell was provided with a camera with an 8 mm focal length, which means the distortion at the borders of the camera view is extremely high. So the tray of doll tops should be centered as much as possible to reduce the camera error, even if the calibration is excellent.
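To illustrate why parts near the border are measured less accurately, here is a tiny sketch using a one-coefficient radial distortion model; the coefficient is an arbitrary example, not data from the competition lens.

```python
# Simple one-coefficient radial distortion model (Brown-Conrady, k1 only).
# K1 is an arbitrary illustrative value, not data from the competition camera.
K1 = 1e-3             # per mm^2 in the image plane (assumed)

def radial_error_mm(r_mm):
    """Displacement of an image point located r_mm from the optical centre."""
    return K1 * r_mm ** 3    # delta_r = k1 * r^3 for the k1-only model

if __name__ == "__main__":
    for r in (1.0, 2.0, 3.0, 4.0):   # distance from the image centre, in mm
        print(f"r = {r:.1f} mm -> error ~ {radial_error_mm(r) * 1000:.0f} um")
```

The error grows with the cube of the distance from the optical centre, so in this toy example a point at the border is off by 64 times more than one near the centre, which is exactly why the tray should stay centered.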
In order to decide which tray the part should be placed on, you had to check the correctness of the assembly. The basic path was to simply touch a microswitch, while the extended path was to use vision.
As in Position Acquisition, it is enough to search for the outline of the doll using the inspection tools provided in the iRVision package.
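iRVision's inspection tools handle this internally, so purely as an illustration of the idea, here is an OpenCV-based sketch that compares the outline found in a snap against a reference outline; the file names and the acceptance threshold are assumptions.

```python
import cv2

# Illustration only: compare the doll outline in the current snap against a
# reference outline using Hu-moment shape matching. File names and the
# acceptance threshold are assumed values.
MATCH_THRESHOLD = 0.15

def largest_contour(gray):
    """Binarise the image and return its largest external contour."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def assembly_ok(reference_path, snap_path):
    """True if the doll outline in the snap is close enough to the reference."""
    ref = largest_contour(cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE))
    cur = largest_contour(cv2.imread(snap_path, cv2.IMREAD_GRAYSCALE))
    score = cv2.matchShapes(ref, cur, cv2.CONTOURS_MATCH_I1, 0.0)
    return score < MATCH_THRESHOLD

if __name__ == "__main__":
    # Hypothetical image files: a correctly assembled doll and the current snap.
    print("OK" if assembly_ok("reference_doll.png", "current_snap.png") else "REJECT")
```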
This task is probably the trickiest one. The teams who had placed the chuck on the base had to rethink their layout. In this task you had to make the most of the magnets incorporated in the dolls.
In order to align both parts accurately I used a parallel method with interruptions: while the robot was rotating the doll, the camera was continuously "spamming" snaps in parallel until it detected the whole word.
In my video of this project you can see a medium-to-worst-case scenario, because the limit for this competition was that the doll may be misaligned by +/-22°.
As explained before, since we have the magnets, the alignment of the doll bottom could be handled by the gripper alone, while the top part could be aligned only once the camera view was perpendicular to the letters on the bottom.
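The Python threading sketch below only mirrors the idea of the parallel method: one thread stands in for the incremental rotation, the other keeps snapping until the whole word is readable and then stops the rotation. The step size, timings and the detection stub are invented; in the real cell the motion runs on the robot and the snaps are handled by iRVision.

```python
import random
import threading
import time

# Conceptual sketch of "rotate while the camera keeps snapping".
word_found = threading.Event()

def rotate_doll(step_deg=2.0, limit_deg=45.0):
    """Rotate in small increments until the camera thread signals success."""
    angle = 0.0
    while not word_found.is_set() and angle < limit_deg:
        angle += step_deg          # one small incremental rotation
        time.sleep(0.05)           # stand-in for the robot move
    print(f"rotation stopped at {angle:.1f} deg")

def snap_until_word_visible():
    """Keep snapping; set the event as soon as the whole word is detected."""
    while not word_found.is_set():
        time.sleep(0.03)           # stand-in for a camera snap
        if random.random() < 0.1:  # stub for "whole word detected"
            word_found.set()

if __name__ == "__main__":
    mover = threading.Thread(target=rotate_doll)
    camera = threading.Thread(target=snap_until_word_visible)
    mover.start()
    camera.start()
    mover.join()
    word_found.set()               # release the camera if the limit was hit first
    camera.join()
```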
For this type of application I'm not sure that a vision system is the best choice. I'm afraid that if high accuracy is required, in the worst case you have to perform the rotation really slowly, at the cost of cycle time. Instead, I would think about a more "mechanical" solution.
Deburring in robotics refers to the automated removal of sharp edges, burrs, and excess material from manufactured parts. This process is crucial in industries like automotive, aerospace, and metalworking to ensure high-quality, smooth finishes.
In this project, I suppose the judges implicitly wanted to see a circular movement for the deburring simulation, since there was a speed limit of 100 mm/s. Instead, I did the one-point-rotation: this way you don't need to calculate the coordinates of the circle, and your deburring result will be homogeneous.
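As a rough illustration of the difference, the sketch below contrasts the waypoints you would have to compute for an explicit circular path with a single rotation about one point; the rim radius is an assumed value, and only the 100 mm/s figure comes from the task.

```python
import math

# Comparison sketch: circular path waypoints vs. a single one-point rotation.
RIM_RADIUS_MM = 30.0        # assumed radius of the doll-top edge
SPEED_LIMIT_MM_S = 100.0    # speed limit stated in the task

def circular_waypoints(n_points=36):
    """Waypoints the TCP would have to visit to trace the rim explicitly."""
    return [(RIM_RADIUS_MM * math.cos(2 * math.pi * i / n_points),
             RIM_RADIUS_MM * math.sin(2 * math.pi * i / n_points))
            for i in range(n_points)]

def one_point_rotation_time(turn_deg=360.0):
    """With a single rotation about one point, no waypoints are needed; the
    rim still sweeps past the deburring tool, covering 2*pi*r per full turn."""
    path_length = 2 * math.pi * RIM_RADIUS_MM * (turn_deg / 360.0)
    return path_length / SPEED_LIMIT_MM_S

if __name__ == "__main__":
    print(f"{len(circular_waypoints())} waypoints to teach or calculate")
    print(f"one-point rotation: 0 waypoints, ~{one_point_rotation_time():.1f} s per turn")
```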
However, it was a competition aimed at challenging integrators more on vision knowledge than on deburring accuracy.