PlantEye Processing Workflow

Several processing steps are involved in calculating the parameters described below from the RAW point data. The complete processing workflow is done on the machine and is illustrated below:

  1. Scanning

RAW data received from the PlantEye system contains a 3D point cloud of every plant in PLY format. Each point contains 3 spatial coordinates (X, Y, Z). In addition to the spatial coordinates, the PlantEye F500 records an NIR, red, green and blue value for every individual 3D point. Initially, after scanning, the point cloud is located within the PlantEye coordinate system. The axes of the PlantEye coordinate system are defined as follows:

  • X-axis: parallel to the laser slit in the front part of PlantEye. Axis origin (X = 0) is in the middle of the slit.
  • Y-axis: parallel to the scanning direction. Axis origin is at the position where the scan starts.
  • Z-axis: perpendicular to the PlantEye laser slit (distance to PlantEye). Axis origin is on the bottom of PlantEye.
When visualizing a raw 3D point cloud, the image appears upside-down. This is because along the Z-axis the PlantEye measures distances to objects: objects that are further away have a higher Z-value, so visualization software shows them at the top, above closer objects.

Note: depending on the scanning direction, the PlantEye coordinate system can be right- or left-handed.
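Conceptually, turning the raw distance values into an upright view amounts to subtracting each Z-value from the sensor-to-ground distance. A minimal sketch (the function name and the `sensor_height` value are illustrative assumptions, not part of the PlantEye software):

```python
# Sketch: convert PlantEye Z-values (distance below the sensor) into
# heights above the ground, so the visualized cloud is no longer upside-down.
# `sensor_height` (sensor-to-ground distance) is an assumed example value.

def distances_to_heights(points, sensor_height):
    """points: list of (x, y, z) tuples with z = distance from the PlantEye."""
    return [(x, y, sensor_height - z) for (x, y, z) in points]

cloud = [(0.0, 0.0, 0.9), (0.1, 0.2, 0.3)]   # z = distance to the sensor
upright = distances_to_heights(cloud, sensor_height=1.0)
# the far point (z = 0.9) now sits near the ground (height 0.1)
```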

  1. Transformation into Plant Coordinates

In this step the RAW data is transformed from the PlantEye coordinate system to the plant coordinate system. This coordinate system has the ground as the X-Y plane with height 0. To complete the transformation, the orientation of PlantEye in the system is used for rotating, shifting and stretching the RAW point cloud. In particular, the following operations are supported:

  • Axes inversion (needed for example to transform the left-handed coordinate system to the right-handed one).
  • Shift of the coordinate system by a constant vector.
  • Compensation and transformation of the coordinate system in regards to the actuator/gantry and PlantEye orientation.  The yaw (rotation around the Z-axis), pitch (rotation around the X-axis) and roll (rotation around the Y-axis) of the actuator and the PlantEye are accounted for.
  • Reference point(s) correction using barcodes. Up to two reference barcodes can be used per block: one at the beginning of the block and one at the end. With one barcode it is possible to account for the tilt (roll) of the platform or table the plants are situated on. With two barcodes it is possible to account for both the slope and the yaw of the platform where the plants are located. Two-point reference correction is mostly used in large systems with uneven ground levels, such as field systems.
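The operations above can be sketched as plain matrix arithmetic: an axis inversion, a rotation built from yaw (Z), pitch (X) and roll (Y), and a constant shift. This is an illustrative sketch, not the PlantEye implementation; the function names, argument order and rotation-composition order are assumptions:

```python
import math

def rotation_zxy(yaw, pitch, roll):
    """3x3 rotation combining yaw (Z-axis), pitch (X-axis) and roll (Y-axis).

    Angles are in radians; the composition order (Z, then X, then Y) is an
    assumption for illustration.
    """
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    rz = [[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]]    # yaw
    rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]    # pitch
    ry = [[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]]    # roll
    def mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3))
                 for j in range(3)] for i in range(3)]
    return mul(rz, mul(rx, ry))

def transform(point, invert=(1, 1, 1), shift=(0.0, 0.0, 0.0), rot=None):
    """Apply axis inversion, then rotation, then a constant shift."""
    p = [c * s for c, s in zip(point, invert)]     # axis inversion
    if rot is not None:                            # orientation compensation
        p = [sum(rot[i][j] * p[j] for j in range(3)) for i in range(3)]
    return tuple(c + d for c, d in zip(p, shift))  # constant shift

# A 90-degree yaw turns the X-axis into the Y-axis:
p = transform((1.0, 0.0, 0.0), rot=rotation_zxy(math.pi / 2, 0.0, 0.0))
```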
  1. Segmentation

In the segmentation step, groups of 3D points are identified and each individual 3D point is given an ID (group number). The segmentation algorithm is based on region-growing techniques. Many parameters are configurable, but the most important are the proximity of a 3D point to other 3D points and the density of a cluster of 3D points. Density is assessed by first removing the 3D points that lie on an outer border of a group, such as the sides of a leaf. If 3D points remain within the group after this shrinking, the group can be grown again from them; if no 3D points remain, the group cannot be grown any further. In this way the algorithm accounts for the density of the 3D points in a group. The algorithm furthermore supports filtering 3D points by intensity, which can be used to discern the plant, whose laser reflection intensity is generally medium, from a black, matte pot, which generally has a low reflection intensity.
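The proximity part of region growing can be illustrated with a minimal sketch: start from an unlabelled point, pull in every neighbour within a distance threshold, and repeat until the region stops growing. The real PlantEye algorithm additionally scores cluster density and filters by intensity; this brute-force example, with illustrative names, only shows the grouping-by-distance idea:

```python
from collections import deque

def segment(points, max_dist):
    """Assign each 3D point a group ID; points within max_dist merge."""
    ids = [None] * len(points)
    next_id = 0
    for seed in range(len(points)):
        if ids[seed] is not None:
            continue
        ids[seed] = next_id
        queue = deque([seed])
        while queue:                      # grow the region outward
            i = queue.popleft()
            for j in range(len(points)):
                if ids[j] is None and dist(points[i], points[j]) <= max_dist:
                    ids[j] = next_id
                    queue.append(j)
        next_id += 1
    return ids

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

cloud = [(0, 0, 0), (0.1, 0, 0), (5, 5, 5)]
print(segment(cloud, max_dist=0.5))   # → [0, 0, 1]
```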

  1. Triangulation

In the triangulation stage, the individual 3D points are connected via lines. If three points can be connected via lines, the result is a triangle. Unlike a 3D point, a triangle has an area and an angle that can be calculated; a 3D point occupies no area or space, it is just a coordinate, and for the purposes of the PlantEye it has no orientation. Parameters that feed into the triangulation algorithm are distance (the algorithm looks for the closest points up to a maximum distance) and aspect ratio (how closely the resulting triangle should approximate an ideal triangle with three 60-degree corners).

  1. Merge Point Clouds (only for DualScan)

When using the DualScan concept, the 3D points of the master and slave scanner are compared. The merging algorithm searches for the 3D points that are unique to the slave scanner as opposed to the master scanner. The duplicate points of the slave scan are then removed, resulting in only the unique 3D points, called the complementary scan. The following processing steps take both the master scan and the complementary scan into account.
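The duplicate removal can be sketched as follows: keep only the slave points that have no master point within some tolerance. A real implementation would use a spatial index rather than this brute-force comparison, and the function name and `tol` parameter are illustrative assumptions:

```python
def complementary_scan(master, slave, tol):
    """Return the slave points with no master point within distance tol."""
    def near(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= tol * tol
    return [p for p in slave if not any(near(p, q) for q in master)]

master = [(0, 0, 0), (1, 0, 0)]
slave = [(0.001, 0, 0), (2, 0, 0)]    # first slave point duplicates a master point
print(complementary_scan(master, slave, tol=0.01))   # → [(2, 0, 0)]
```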

  1. Splitting into Sectors

The 3D point cloud can be split into three-dimensional areas called units. Along the Y-axis, the unit length parameter is used together with the number of units along the length (columns). Along the X-axis, the unit width parameter is used together with the number of units along the width (rows). For the Z-axis, the pot height is used to remove all 3D points that are below pot height. The 3D points above pot height are then kept, for each unit individually, for the next stage.
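The splitting step amounts to binning points into a rows-by-columns grid (unit width along X, unit length along Y) and dropping everything at or below pot height. A minimal sketch with illustrative names, assuming the grid origin coincides with the coordinate origin:

```python
def split_into_units(points, unit_width, unit_length, rows, cols, pot_height):
    """Bin points into (row, col) units; discard points at or below pot height."""
    units = {(r, c): [] for r in range(rows) for c in range(cols)}
    for x, y, z in points:
        if z <= pot_height:                        # below pot height: discard
            continue
        r, c = int(x // unit_width), int(y // unit_length)
        if 0 <= r < rows and 0 <= c < cols:
            units[(r, c)].append((x, y, z))
    return units

cloud = [(0.1, 0.1, 0.5), (0.1, 0.1, 0.05), (1.2, 0.3, 0.4)]
units = split_into_units(cloud, 1.0, 1.0, rows=2, cols=2, pot_height=0.1)
# unit (0, 0) keeps one point; the point at z = 0.05 is below pot height
```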

  1. Calculate Parameters

In this step the parameters are calculated from the triangulated dataset for each unit. In a DualScan setup, the master scan and the complementary scan are combined where needed to calculate the parameters. How these parameters are calculated is described in detail below.
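As an illustration of how a triangulated unit feeds a parameter, a 3D leaf-area-style value can be computed as the sum of triangle areas, and a height as the highest Z-value above pot height. These are sketches of the general idea under assumed names; the actual PlantEye parameters and their formulas are the ones described below:

```python
import math

def unit_parameters(triangles, pot_height):
    """Return (summed 3D triangle area, highest point above pot height)."""
    area = 0.0
    top = pot_height
    for a, b, c in triangles:
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        cross = [u[1]*v[2] - u[2]*v[1],
                 u[2]*v[0] - u[0]*v[2],
                 u[0]*v[1] - u[1]*v[0]]
        area += 0.5 * math.sqrt(sum(x * x for x in cross))
        top = max([top] + [p[2] for p in (a, b, c)])
    return area, top - pot_height

mesh = [((0, 0, 0.2), (1, 0, 0.2), (0, 1, 0.7))]   # one triangle of a unit
area, height = unit_parameters(mesh, pot_height=0.1)
```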

  1. Storing Parameters and RAW Data

After the parameters are calculated, the RAW data and all parameters are stored. The parameters are stored as .csv files and imported into the HortControl database. The intermediate segmented, triangulated, split and, if applicable, complementary 3D files are discarded; only the raw file is stored. Should the parameters need to be calculated again in the future, the settings necessary to do so are stored in the 3D file. The reason for this is space constraints: 3D point clouds with added segmentation and triangulation information take up a lot of space on a storage medium.