"Match" is the wrong word; they don't match, they are duals. The residual stream, aka the identity mapping, needs to act as the identity for the attention mechanism while the attention mechanism learns.
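For what it's worth, here's a minimal sketch of what I mean (assuming PyTorch and a pre-norm block; the class name `ResidualAttentionBlock` is just made up for illustration). The residual path passes `x` through unchanged, so attention only has to learn an update on top of the identity:

```python
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    """Sketch: a residual (identity) path around self-attention.

    The skip connection carries x through unchanged, so the block
    computes identity-plus-learned-update; attention only needs to
    learn the update, not reproduce x itself.
    """
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        a, _ = self.attn(h, h, h)  # attention's learned contribution
        return x + a               # residual stream: identity + update

x = torch.randn(2, 8, 64)
block = ResidualAttentionBlock(d_model=64, n_heads=4)
y = block(x)
# How big is attention's update relative to the identity path?
print((y - x).norm() / x.norm())
```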
But this is the same for all residual streams, not just those in transformers.
Gradient descent is just how neural networks (including autoencoders) optimize their parameters to minimize the loss function: they use derivatives to descend the slope of the loss. Autodiff is one way to compute those derivatives. Maybe we're saying the same thing.
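To make the division of labor concrete: autodiff computes the derivatives, gradient descent uses them to step downhill. A toy sketch in PyTorch (the target weights `[1.0, -2.0, 0.5]` are made up for the example):

```python
import torch

# Fit a linear model to synthetic data by gradient descent.
w = torch.randn(3, requires_grad=True)
x = torch.randn(100, 3)
y = x @ torch.tensor([1.0, -2.0, 0.5])  # hypothetical target weights

lr = 0.1
for step in range(200):
    loss = ((x @ w - y) ** 2).mean()  # loss function to minimize
    loss.backward()                   # autodiff: computes d(loss)/dw
    with torch.no_grad():
        w -= lr * w.grad              # gradient descent: step down the slope
        w.grad.zero_()

print(loss.item())  # should be close to zero after training
```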
Join my Discord to discuss this further: https://discord.gg/mr9TAhpyBW