The technique of dynamically redistributing data so that all processors have a uniform workload is an essential primitive for many irregular problems, such as computational adaptive graph (grid) problems ([30], [19], [13]) including finite element calculations, molecular dynamics [24], particle dynamics [18], plasma particle-in-cell [20], raytraced volume rendering [22], region growing and computer vision [33], and statistical physics [7], and, as we will show, the selection problem. The running time of these parallel algorithms is characterized by the maximum running time over any of the p processors' subproblems. Equalizing the amount of work assigned to each processor attempts to minimize this maximum single-processor running time and thus reduce the overall execution time. Here, the input is distributed across the p processors with an irregular distribution that is not known a priori. We present two methods for the dynamic redistribution of data which remap the data such that no processor contains more than the average number of data elements. The first method is similar to one presented in ([26], [27]), and only a brief sketch of it will be given. The second method, which we show to be superior, will be presented in greater detail.
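To make the redistribution goal concrete, the following sketch (our illustration, not either of the paper's two methods; the function names `balanced_layout` and `transfer_plan` are hypothetical) computes, from an irregular vector of per-processor element counts, a target layout in which no processor holds more than the average (⌈n/p⌉) number of elements, together with the segment transfers that realize it via prefix sums over the global index space:

```python
from itertools import accumulate

def balanced_layout(counts):
    """Target per-processor counts: no processor exceeds ceil(n/p)."""
    n, p = sum(counts), len(counts)
    base, extra = divmod(n, p)
    # The first `extra` processors receive one extra element, so every
    # processor ends with base or base+1 = ceil(n/p) elements.
    return [base + (1 if i < extra else 0) for i in range(p)]

def transfer_plan(counts):
    """List of (src, dst, k) transfers realizing the balanced layout.

    Prefix sums give each processor's interval of global indices in the
    source and target layouts; overlapping intervals between distinct
    processors become block transfers.
    """
    targets = balanced_layout(counts)
    src_start = [0] + list(accumulate(counts))
    dst_start = [0] + list(accumulate(targets))
    plan = []
    for s in range(len(counts)):
        for d in range(len(targets)):
            lo = max(src_start[s], dst_start[d])
            hi = min(src_start[s + 1], dst_start[d + 1])
            if hi > lo and s != d:
                plan.append((s, d, hi - lo))
    return plan
```

For example, with p = 4 and counts [9, 1, 2, 8] (n = 20), the balanced layout is 5 elements per processor, and the plan moves 4 elements from processor 0 to 1 and 3 elements from processor 3 to 2. On a real machine each processor would compute the same plan independently from the (shared) count vector and then exchange only the blocks it owns.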