Clumped Box Vision System

A project for the mechatronics systems integration course was to implement an improvement to the conveyor system used during our labs. This system consisted of several conveyor belts arranged in a circle, each with a diverter arm. A camera at the start would detect the shape on each box, and the box would then be sorted at the corresponding diverter arm. A problem we noticed was that if boxes were too close together on the conveyor belt, the diverter system would divert the whole group of boxes (see the video below for an idea of what happens). This would cause the sorting of all following boxes to be incorrect. Our goal was to implement an addition to the current system to fix this problem.


Design Parameters

There were a few design parameters that were set for this project. These are as follows:
  1. If a clump (2 or more boxes in close proximity) is detected, the system will divert the boxes and let them go to an assumed accumulator, where an operator can deal with the issue in a timely manner.
  2. System will be able to detect two boxes that are at most one box length apart.
  3. System will be able to detect groups of two objects, regardless of orientation (limited to a range of box sizes).
  4. System should be able to flag and relay clumped box information to the operator.
  5. System should be able to retain all functionality from the reference case. In other words, the system should be an addition to the pre-existing conveyor system.

Image Detection

To determine whether a clumped case was present, image detection was implemented. The existing conveyor architecture already included a camera and vision system to allow for the sorting of the boxes, so an additional script was written to run alongside it. By taking advantage of simple image detection techniques, clumped cases could be identified. First, the captured video was cropped so that all unnecessary visual noise could be removed, which was possible due to the controlled environment. An HSV (Hue, Saturation, and Value) mask was then applied to the cropped frame, filtering out any remaining noise. Then, using the built-in contour detection in the OpenCV library, the boxes could be detected. Further filtering used the area of the detected contours to remove false detections. The image below showcases a sample processed image (left) after running the contour algorithm on the masked image (right).
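
Below is a minimal sketch of this detection step in Python with OpenCV. The crop region, HSV range, and area threshold shown are illustrative placeholders rather than the values tuned for our setup, and treating two or more surviving contours in the region of interest as a clump is an assumption made for the example.

```python
import cv2
import numpy as np

# Illustrative tuning values, not the ones used on the actual rig
CROP = (slice(100, 400), slice(200, 600))   # rows and columns of the region of interest
HSV_LOW = np.array([20, 80, 80])            # lower HSV bound for the box colour
HSV_HIGH = np.array([35, 255, 255])         # upper HSV bound for the box colour
MIN_BOX_AREA = 1500                         # contours smaller than this are treated as false detections

def count_boxes(frame):
    """Count box-sized contours in the cropped, masked frame."""
    roi = frame[CROP]                                   # crop away unnecessary visual noise
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)          # convert to HSV for masking
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)          # keep only box-coloured pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [c for c in contours if cv2.contourArea(c) >= MIN_BOX_AREA]
    return len(boxes)

def is_clumped(frame):
    """Treat two or more boxes appearing in the region of interest as a clump."""
    return count_boxes(frame) >= 2
```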


Human Machine Interface (HMI)

One of the required design parameters for this project was to be able to flag and relay this information to an operator. This is where the HMI comes in. A simple graphical user interface (GUI) was created using Ignition 8 Designer. MQTT, an Internet of Things (IoT) messaging protocol, was used to retrieve the data from the vision system, allowing the relevant information to be flagged and relayed. The GUI can be seen below.
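
As a rough illustration, the clump flag could be published from the vision script with the paho-mqtt client as sketched below; the broker address and topic name are assumptions for the example rather than the values used on the actual system, and the HMI side would subscribe to the same topic to display the flag.

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "192.168.0.10"            # placeholder broker address
CLUMP_TOPIC = "conveyor/vision/clump"   # placeholder topic name

client = mqtt.Client()  # with paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION1 here
client.connect(BROKER_HOST)
client.loop_start()

def publish_clump_flag(clumped, box_count):
    """Publish the current clump status so the HMI can flag it to the operator."""
    payload = json.dumps({"clumped": clumped, "box_count": box_count})
    client.publish(CLUMP_TOPIC, payload, qos=1, retain=True)
```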


Result

After completing the clumped box detection algorithm, it was integrated into the already existing shape detection algorithm to ensure all features were retained. Then, with the use of Node-RED, the diversion of the clumped boxes was implemented. Putting these three components (algorithm, HMI, and diverter control) together, we were able to successfully detect and divert all cases where the boxes were too close together during tests. The HMI was also able to consistently flag and relay the correct information. A screen capture of the algorithm detecting clumped cases (note the red border on the window) is shown below. The video at the top of this page also showcases the system successfully diverting a clumped case and flagging it on the HMI.
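
For context, a rough sketch of how these pieces could be tied together in the vision script is shown below. The shape detection step is a placeholder for the pre-existing algorithm, the camera index is an assumption, and the clump check and MQTT flag reuse the sketches above, with the Node-RED flow acting on the published flag to drive the diverter.

```python
import cv2

def detect_shape(frame):
    """Placeholder for the pre-existing shape detection and sorting step."""
    pass

cap = cv2.VideoCapture(0)   # assumed camera index
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        detect_shape(frame)                  # retain the original sorting functionality
        clumped = is_clumped(frame)          # clump check from the vision sketch above
        publish_clump_flag(clumped, count_boxes(frame))  # flag the HMI / Node-RED flow
finally:
    cap.release()
```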