where $ F(x, \{W_i\}) $ represents the residual mapping to be learned (typically two or three convolutional layers with batch normalization and ReLU activations). The addition is element-wise and requires that $ F $ and $ x $ have the same dimensions. When dimensions differ (e.g., at downsampling stages), a linear projection $ W_s $ is applied to the shortcut:
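The two cases, identity shortcut and projection shortcut, can be sketched with plain matrices. This is a minimal illustration, not the paper's implementation: fully connected layers stand in for the convolution/BN/ReLU stack, and the weight shapes and the name `residual_block` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2, Ws=None):
    """Sketch of y = F(x, {W1, W2}) + shortcut(x).

    F is two linear layers with a ReLU in between (stand-in for the
    paper's stacked conv layers). If F changes the dimension, a linear
    projection Ws is applied to the shortcut so the element-wise
    addition is well defined.
    """
    f = W2 @ relu(W1 @ x)                      # residual mapping F(x, {W_i})
    shortcut = x if Ws is None else Ws @ x     # identity or projection W_s x
    return relu(f + shortcut)                  # element-wise addition, then ReLU

# Identity shortcut: F preserves the dimension (64 -> 64)
x = rng.standard_normal(64)
W1 = rng.standard_normal((64, 64)) * 0.1
W2 = rng.standard_normal((64, 64)) * 0.1
y = residual_block(x, W1, W2)
assert y.shape == (64,)

# Projection shortcut: F changes the dimension (64 -> 128),
# so Ws (128 x 64) maps x into the output space
W1d = rng.standard_normal((128, 64)) * 0.1
W2d = rng.standard_normal((128, 128)) * 0.1
Ws = rng.standard_normal((128, 64)) * 0.1
yd = residual_block(x, W1d, W2d, Ws)
assert yd.shape == (128,)
```

In a convolutional network the projection is typically a 1×1 convolution with stride matching the downsampling, which plays exactly the role of $ W_s $ here.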