The merging phase is almost identical to the method previously described for binary images. The only difference occurs when, during the merge of two subimages, the group manager finds the connected components of the graph representing the border pixels.
We use an adjacency list representation for the graph, adding a vertex for each nonzero pixel. As before, two types of edges are added to the graph. First, the pixels on the left (or upper) border are scanned downward, and edges are strung linearly down the list between pixels carrying the same connected component label; the same is done for pixels on the right (or lower) border. The second step adds edges between pixels of the left (upper) and right (lower) borders that have the same nonzero color and are adjacent to each other.
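As a concrete illustration, the following C sketch builds such a border graph. The names (BorderPixel, Graph, add_edge, build_border_graph) are invented for this sketch and are not the data structures of the actual implementation; the two borders are modeled as equal-length arrays, and 8-connectivity across the border is assumed.

\begin{verbatim}
#include <stdlib.h>

typedef struct {
    int label;   /* current connected-component label         */
    int color;   /* grey level; 0 denotes a background pixel  */
} BorderPixel;

typedef struct Edge { int to; struct Edge *next; } Edge;
typedef struct { Edge **adj; int nvert; } Graph;

static void add_edge(Graph *g, int u, int v)
{
    Edge *e = malloc(sizeof *e);
    e->to = v; e->next = g->adj[u]; g->adj[u] = e;
    Edge *f = malloc(sizeof *f);
    f->to = u; f->next = g->adj[v]; g->adj[v] = f;
}

/* String edges down one border: connect each nonzero pixel to the most
 * recent earlier pixel in the same column that carries the same label.
 * `base` offsets vertex numbers for this column.                        */
static void chain_same_labels(Graph *g, const BorderPixel *col,
                              int n, int base)
{
    for (int i = 1; i < n; i++) {
        if (col[i].color == 0) continue;
        for (int j = i - 1; j >= 0; j--)
            if (col[j].color != 0 && col[j].label == col[i].label) {
                add_edge(g, base + j, base + i);
                break;
            }
    }
}

/* Build the merge graph for two adjacent borders of length n.  Vertices
 * 0..n-1 stand for `left`, n..2n-1 for `right`; the caller must have
 * allocated g->adj as a zeroed array of 2n entries.                     */
void build_border_graph(Graph *g, const BorderPixel *left,
                        const BorderPixel *right, int n)
{
    g->nvert = 2 * n;
    chain_same_labels(g, left,  n, 0);   /* step 1: left (upper) border  */
    chain_same_labels(g, right, n, n);   /* step 1: right (lower) border */

    /* Step 2: cross edges between adjacent pixels of the two borders
     * that share the same nonzero grey level (8-connectivity assumed).  */
    for (int i = 0; i < n; i++) {
        if (left[i].color == 0) continue;
        for (int d = -1; d <= 1; d++) {
            int j = i + d;
            if (j >= 0 && j < n && right[j].color == left[i].color)
                add_edge(g, i, n + j);
        }
    }
}
\end{verbatim}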
A sequential breadth-first search based connected components algorithm computes the connected components of this graph. The pixels in this graph are then scanned again, any changes in the labeling are stored in a sorted array of all unique changes, and these changes are propagated back to the client processors.
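A minimal sketch of this step is shown below; it reuses the Graph and BorderPixel types from the previous sketch, treats the pixel array px as the concatenation of the two borders, and assumes (as an illustration, not a statement of the implementation) that each component keeps the smallest original label it contains.

\begin{verbatim}
typedef struct { int old_label, new_label; } LabelChange;

/* Compute the connected components of the border graph by BFS and record
 * every (old, new) label pair.  `changes` must have room for g->nvert
 * entries; the return value is the number of recorded changes.          */
int bfs_relabel(const Graph *g, BorderPixel *px, LabelChange *changes)
{
    int nv = g->nvert, nchanges = 0;
    int *comp  = malloc(nv * sizeof *comp);   /* component id per vertex */
    int *queue = malloc(nv * sizeof *queue);  /* BFS queue               */
    for (int v = 0; v < nv; v++) comp[v] = -1;

    for (int s = 0; s < nv; s++) {
        if (comp[s] != -1 || px[s].color == 0) continue;
        int head = 0, tail = 0, minlab = px[s].label;
        comp[s] = s; queue[tail++] = s;
        while (head < tail) {                  /* standard BFS traversal */
            int u = queue[head++];
            if (px[u].label < minlab) minlab = px[u].label;
            for (Edge *e = g->adj[u]; e; e = e->next)
                if (comp[e->to] == -1) { comp[e->to] = s; queue[tail++] = e->to; }
        }
        for (int i = 0; i < tail; i++) {       /* record label changes   */
            int u = queue[i];
            if (px[u].label != minlab) {
                changes[nchanges].old_label = px[u].label;
                changes[nchanges].new_label = minlab;
                nchanges++;
                px[u].label = minlab;
            }
        }
    }
    free(comp); free(queue);
    return nchanges;
}
\end{verbatim}

The returned pairs can then be sorted and deduplicated (for instance with qsort) to form the sorted array of unique changes that is broadcast to the client processors.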
Using the ``hook'' data structures, each processor then updates its border pixels from the current list of label changes. After the last merge step, each processor relabels its interior pixels.
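The sketch below illustrates one plausible way a client processor could apply the broadcast change list. The ``hook'' structures themselves are not reproduced here, and the binary-search lookup over the sorted change array is an assumption of this sketch, not necessarily the mechanism used in the implementation.

\begin{verbatim}
#include <stdlib.h>

typedef struct { int old_label, new_label; } LabelChange;

static int cmp_change(const void *a, const void *b)
{
    const LabelChange *x = a, *y = b;
    return (x->old_label > y->old_label) - (x->old_label < y->old_label);
}

/* Binary-search the sorted, duplicate-free change list; return the new
 * label if this label changed in the current merge step, else keep it.  */
static int lookup(const LabelChange *chg, int nchg, int label)
{
    LabelChange key = { label, 0 };
    const LabelChange *hit = bsearch(&key, chg, nchg, sizeof *chg, cmp_change);
    return hit ? hit->new_label : label;
}

/* Relabel an array of pixel labels in place against a sorted change
 * list.  In this sketch the same routine serves both the per-merge
 * border updates and the final relabeling of the interior pixels.       */
void apply_changes(int *labels, int npixels,
                   const LabelChange *chg, int nchg)
{
    for (int i = 0; i < npixels; i++)
        labels[i] = lookup(chg, nchg, labels[i]);
}
\end{verbatim}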
The complexity of this algorithm remains the same as for binary images, and the algorithm is both efficient and optimal.
Results for the 256-grey-level DARPA Image Understanding Benchmark image shown in Figure 2 are given in Figure 10 for p = 16 to 128 processors on the CM-5, and for a wide range of configurations on the SP-1 and Meiko CS-2 parallel machines.