WorldSkills Kazan 2019

Robot System Integration
Robotized system setup for Russian Matryoshka assembly with deburring, vision inspection and vision position calculation. A world-level competition between robot integrators on FANUC robots.
Time to complete
22.5 hours
Project type
Robot System Integration
Specific skills
Motion Optimization, Advanced Use of PRs, Parametrized Programming, HMI, Safety, Collision Guard Adjustment, Camera Calibration without Grid (with a target on the tool), Vision Position Calculation, Vision Inspection, Maintenance Documentation Writing, Payload Checker, Menu Utility, Local Registers, BG Logic, Simulation, Digital Twin, CAD

General Description

The WorldSkills Kazan 2019 test project for Skill 63 - Robot System Integration was about the automated assembly of Russian matryoshka dolls.

The basic task consisted of a simple pick & place with deburring of the top parts. The main difficulty was layout planning because of the large number of work stations: I had to plan it carefully in order to minimize useless movements.

For the deburring, I suppose the intended movement was a circular motion. Instead, I decided to do a one-point rotation: this way I don't have to calculate the coordinates of the circle, and the deburring result is homogeneous.

The extended tasks, all three of them, were about computer vision:

  1. Doll Top Position Calculation
  2. Doll Assembly Check
  3. Doll Lower and Higher Part Alignment

All of these are already a kind of error handling, so the HMI error menus were not that important, but a user-friendly HMI itself was still required.

These tasks are quite complex to do in "one shot", but even that was not the end of the project. You had no calibration grid for the camera. Instead, you had a "Target" that had to be mounted directly on the robot gripper to create the grid. The iRVision package includes utilities for grid generation with the target: during this process the robot takes many snaps of the target inside an area you define, on two planes. This method has the advantage of high precision (if appropriate lighting is provided). On the other hand, you have to be careful with the movements, because you cannot predict the robot's next position, with the risk of breaking a part of the system.
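As a rough illustration of what happens behind the scenes, here is a minimal Python sketch of the snap positions the robot might visit on two parallel planes. The dimensions and the helper itself are assumptions for illustration only, not the actual iRVision grid-generation utility.

```python
import numpy as np

def target_grid_poses(x_range, y_range, z_planes, points_per_axis=5):
    """Generate XYZ snap positions for a tool-mounted calibration target.

    The real grid-generation utility plans these moves itself; this sketch
    only illustrates the idea of snapping the target on a regular grid over
    two parallel planes. All values are in mm in a hypothetical user frame.
    """
    xs = np.linspace(*x_range, points_per_axis)
    ys = np.linspace(*y_range, points_per_axis)
    return [(x, y, z) for z in z_planes for y in ys for x in xs]

# Example: a 5x5 grid on two planes 100 mm apart (assumed numbers).
poses = target_grid_poses((-100, 100), (-100, 100), z_planes=(0, 100))
print(len(poses), "snap positions")  # 50 positions for the camera to capture
```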

Documentation was also one of the main targets: it had to be as comprehensible and as graphical as possible at the same time. The operator should be able to find what they need immediately and fix everything in case of system or user errors.

Lastly, the digital twin is the first and the last thing the integrator MUST do. Its role is essential to the integrator's performance. The simulation should match reality as closely as possible: it shows whether the layout is appropriate and the positions are reachable, it lets you test the cycle time, and it allows the integrator to make any change to the robot far faster than in reality.

Key Features

Robot Generated Grid

Grid creation by a robot with a target mounted on the tool offers several advantages for camera calibration in robotics and computer vision applications. Here’s why this approach is beneficial:

Improved Calibration Accuracy

  1. Precise Control of Grid Positions: The robot can move the target to predefined positions with high repeatability, ensuring a well-defined calibration grid.
  2. Minimized Human Error: Manual grid positioning can introduce errors due to misalignment or movement, whereas a robot follows programmed paths precisely.

Automation and Efficiency

  1. Fully Automated Process: The robot can autonomously move the target to predefined positions, reducing manual effort.
  2. Faster Point Acquisition: Grid points are captured efficiently, leading to quicker calibration and reduced downtime thanks to the auto-generated calibration program.

Custom Grid Configurations

  1. Dynamic Grid Adjustments: The robot can create a grid with varying density or patterns to optimize calibration results.
  2. Calibration for Different Camera Angles: The target can be positioned at different orientations to improve robustness against perspective distortions.

Position Acquisition

Position acquisition is a common task in line tracking and bin picking. It tolerates less sophisticated machine operations upstream, leaving the "cleaning" work to the robot.

Development

For this application I decided to snap the tray on every iteration and search for one part at a time. This solution slightly reduces the time-to-find and eliminates queue scheduling.

From one application to another, the teaching method may vary between searching for the perimeter or the area of the part, depending on light conditions. In the simulation, as a "perfect world", the perimeter was enough. But there is a high probability that parts lie too close to each other, so the sides of the part should be checked for free space (shown in the image). This is exactly another reason why I search for one part at a time: during the pick operation, the gripper fingers might move another doll top and, as a consequence, change its actual position.
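Below is a minimal Python sketch of this one-pick-per-snap loop. The robot and camera helpers are hypothetical stand-ins for the TP program and the iRVision calls, not the actual competition code.

```python
def pick_doll_tops(robot, camera):
    """One pick per snap (sketch; the robot/camera helpers are hypothetical).

    Re-snapping the tray after every pick means that a doll top nudged by
    the gripper fingers is simply re-located on the next iteration.
    """
    while True:
        image = camera.snap()                       # snap the whole tray
        candidates = camera.find_doll_tops(image)   # vision search on the tray
        if not candidates:
            break                                   # tray is empty
        # Pick only a part with free space on its sides, so the gripper
        # fingers cannot collide with (or displace) a neighbouring part.
        clear = [p for p in candidates if camera.sides_clear(image, p)]
        if not clear:
            raise RuntimeError("no part has enough clearance for the gripper")
        robot.pick(clear[0].position)
        robot.place_on_next_station()
```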

Accuracy

In the competition, every cell was provided with a camera with an 8 mm focal length, which means the distortion at the borders of the camera view is extremely high.

This means that the tray of doll tops should be kept as centered as possible to reduce the camera error, even if you have done an excellent calibration.
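To give a rough idea of why centering matters, here is a small sketch of the Brown radial-distortion model. The coefficients are illustrative values for a short 8 mm lens, not measured ones.

```python
def radial_displacement(r_norm, k1=-0.30, k2=0.12):
    """Displacement caused by radial lens distortion (Brown model).

    r_norm is the normalized distance from the image centre; k1 and k2 are
    assumed coefficients chosen only to show the trend, not measured values.
    """
    return r_norm * (k1 * r_norm**2 + k2 * r_norm**4)

for r in (0.1, 0.5, 1.0):   # near the centre, mid-field, image corner
    print(f"r = {r:.1f} -> offset = {abs(radial_displacement(r)):.4f}")
# The offset grows roughly with r^3, so features near the border of the view
# are located far less accurately than features near the optical axis.
```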


Assembly Check

In order to decide which tray the part should be placed on, you had to check the correctness of the assembly.

The basic path was simply touching a microswitch, whilst the extended path was using vision.

As in position acquisition, it is enough to search for the outline of the doll using the inspection tools provided in the iRVision package.
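A tiny sketch of the resulting routing logic, with the iRVision outline inspection abstracted behind a hypothetical call:

```python
def route_after_assembly_check(robot, camera, doll):
    """Choose the destination tray from a vision inspection result (sketch).

    The outline inspection itself is done by the vision tools; here it is
    abstracted as a hypothetical inspect_outline() call returning pass/fail.
    """
    assembled_ok = camera.inspect_outline(doll)
    tray = "GOOD_TRAY" if assembled_ok else "REJECT_TRAY"
    robot.place(doll, tray)
```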


Letter Alignment

This task is probably the trickiest one. The teams who placed the chuck on the base had to rethink the layout. In this task you had to make the most of the magnets incorporated in the doll.

In order to align both parts accurately, I used a parallel method with interruptions: while the robot was rotating the doll, the camera was continuously "spamming" snaps in parallel until it detected the whole word.
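The sketch below expresses the same idea in Python, with a thread standing in for the vision task that interrupts the rotation. The robot and camera calls are hypothetical, not the actual TP/KAREL implementation.

```python
import threading

def align_by_word(robot, camera, max_angle=45.0, step_deg=2.0):
    """Rotate the doll while the camera keeps snapping in parallel (sketch).

    max_angle is an assumed search range that covers the +/-22 degree
    misalignment limit; step_deg is an assumed rotation increment.
    """
    word_found = threading.Event()
    stop_watching = threading.Event()

    def watch():
        # "Spam" snaps until the whole word is visible or rotation ends.
        while not stop_watching.is_set():
            if camera.snap_and_find_word():
                word_found.set()
                return

    watcher = threading.Thread(target=watch)
    watcher.start()

    angle = 0.0
    while not word_found.is_set() and angle < max_angle:
        robot.rotate_doll(step_deg)      # small incremental rotation
        angle += step_deg

    stop_watching.set()
    watcher.join()
    return word_found.is_set()           # True if the whole word was detected
```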

In my video of this project you can see a medium-to-worst-case scenario, because the limit for this competition was that the doll may be misaligned by +/-22°.

Specifics

As explained before, since we have the magnets, the alignment of the doll can be handled by the gripper alone, whilst the top part can be aligned only once the camera view is perpendicular to the letters on the bottom part.

Opinion

For this type of application I'm not sure that a vision system is the best choice. I'm afraid that if high accuracy is required, in the worst-case scenario you have to perform the rotation really slowly, at the cost of cycle time. Instead, I would think about a more "mechanical" solution.


Deburring

Deburring in robotics refers to the automated removal of sharp edges, burrs, and excess material from manufactured parts. This process is crucial in industries like automotive, aerospace, and metalworking to ensure high-quality, smooth finishes.

In this project, I suppose the judges implicitly wanted to see a circular movement for the deburring simulation, because there was a speed limit of 100 mm/s. Instead, I did a one-point rotation: this way you don't need to calculate the coordinates of the circle, and the deburring result is homogeneous.
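To make the difference concrete, here is a short sketch comparing the explicit coordinates a circular path would need with the single angle increment of a one-point rotation. Centre, radius and step values are assumed for illustration.

```python
import numpy as np

def circular_path(center, radius, n=12):
    """XY waypoints the 'expected' circular deburring move would need
    (centre and radius are assumed values in mm)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return [(center[0] + radius * np.cos(a),
             center[1] + radius * np.sin(a)) for a in angles]

def one_point_rotation(start_rz, n=12):
    """Same coverage with a one-point rotation: the TCP position stays fixed
    and only the rotation about the tool Z axis changes, so no circle
    geometry has to be calculated at all."""
    return [start_rz + k * (360.0 / n) for k in range(n)]

print(circular_path((500.0, 0.0), 25.0)[:2])   # needs explicit coordinates
print(one_point_rotation(0.0)[:4])             # needs only an angle increment
```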

However, the competition was aimed at challenging integrators more on vision knowledge than on deburring accuracy.

