Pipeline parallelism splits a model "vertically" by layer. It is also possible to "horizontally" split certain operations within a layer, which is usually called Tensor Parallel training. For many modern models (such as the Transformer), the computation bottleneck is multiplying an activation batch matrix with a large weight matrix. Matrix multiplication can be thought of as dot products between pairs of rows and columns; it is possible to compute independent dot products on different GPUs, or to compute parts of each dot product on different GPUs and sum up the results. With either strategy, we can slice the weight matrix into even-sized "shards", host each shard on a different GPU, and use that shard to compute the relevant part of the overall matrix product before later communicating to combine the results.
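To make the two strategies concrete, here is a minimal NumPy sketch (the shapes, names, and shard count are assumptions, and the per-shard computations stand in for work that would run on separate GPUs): a column-wise split, where independent dot products are concatenated, and a row-wise split, where partial dot products are summed.

```python
# Illustrative NumPy sketch of the two weight-sharding schemes described above.
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out, n_shards = 4, 8, 8, 2

X = rng.normal(size=(batch, d_in))   # activation batch
W = rng.normal(size=(d_in, d_out))   # large weight matrix

# Column-wise sharding: each "GPU" holds a slice of W's output columns;
# the independent results are concatenated (an all-gather in practice).
col_shards = np.split(W, n_shards, axis=1)
Y_col = np.concatenate([X @ shard for shard in col_shards], axis=1)

# Row-wise sharding: each "GPU" holds a slice of W's input rows plus the
# matching slice of X, producing partial products that must be summed
# (an all-reduce in practice).
row_shards = np.split(W, n_shards, axis=0)
X_shards = np.split(X, n_shards, axis=1)
Y_row = sum(x @ w for x, w in zip(X_shards, row_shards))

# Both schemes reproduce the unsharded matrix product.
assert np.allclose(Y_col, X @ W) and np.allclose(Y_row, X @ W)
```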
One example is Megatron-LM, which parallelizes matrix multiplications within the Transformer's self-attention and MLP layers. PTD-P uses tensor, data, and pipeline parallelism; its pipeline schedule assigns multiple non-consecutive layers to each device, reducing bubble overhead at the cost of more network communication.
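Below is a hedged NumPy sketch of the column-then-row split commonly associated with Megatron-LM-style MLP parallelism (the shapes, GELU approximation, and shard count are assumptions, not the library's actual code): the first linear layer is sharded by columns and the second by rows, so each shard applies the nonlinearity locally and only one sum over partial outputs, an all-reduce across GPUs in practice, is needed at the end.

```python
# Illustrative sketch of a tensor-parallel Transformer MLP block.
import numpy as np

rng = np.random.default_rng(1)
batch, d_model, d_ff, n_shards = 4, 8, 16, 2

X  = rng.normal(size=(batch, d_model))
W1 = rng.normal(size=(d_model, d_ff))    # expanding projection
W2 = rng.normal(size=(d_ff, d_model))    # contracting projection

def gelu(x):
    # Tanh approximation of GELU, applied elementwise.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

W1_shards = np.split(W1, n_shards, axis=1)  # column-parallel first layer
W2_shards = np.split(W2, n_shards, axis=0)  # row-parallel second layer

# Each shard computes its slice end-to-end; the final sum stands in for
# the single all-reduce that would combine results across GPUs.
partials = [gelu(X @ w1) @ w2 for w1, w2 in zip(W1_shards, W2_shards)]
Y = sum(partials)

assert np.allclose(Y, gelu(X @ W1) @ W2)
```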
Sometimes the input to the network can be parallelized across a dimension with a high degree of parallel computation relative to cross-communication. Sequence parallelism is one such idea, where an input sequence is split across time into multiple sub-examples, proportionally decreasing peak memory consumption by allowing the computation to proceed with more granularly-sized examples, as in the sketch below.
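Here is a minimal NumPy sketch of that idea (names and shapes are assumptions, not any particular library's API): for a position-wise layer, the sequence axis can be split into chunks that are processed as smaller sub-examples, so peak activation memory scales with the chunk length rather than the full sequence length.

```python
# Illustrative sketch of splitting the time dimension into sub-examples.
import numpy as np

rng = np.random.default_rng(2)
seq_len, d_model, n_chunks = 12, 8, 4

X = rng.normal(size=(seq_len, d_model))   # one input sequence
W = rng.normal(size=(d_model, d_model))   # position-wise weight

# Whole sequence at once: peak activation memory covers all seq_len positions.
full = X @ W

# Chunked along time: each sub-example only needs seq_len / n_chunks positions
# in memory at a time before results are reassembled.
chunked = np.concatenate(
    [chunk @ W for chunk in np.split(X, n_chunks, axis=0)], axis=0)

assert np.allclose(full, chunked)
```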