1 write to LayerNorm
  Microsoft.ML.TorchSharp (1)
    Roberta\Modules\AttentionOutput.cs (1)
      26: LayerNorm = torch.nn.LayerNorm(new long[] { hiddenSize });

2 references to LayerNorm
  Microsoft.ML.TorchSharp (2)
    Roberta\Modules\AttentionOutput.cs (2)
      35: hiddenStates = LayerNorm.forward(hiddenStates + inputTensor);
      47: LayerNorm.Dispose();
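Taken together, the listing traces the full lifecycle of the LayerNorm field in AttentionOutput.cs: it is assigned once in the constructor (line 26), applied after the residual connection in forward (line 35), and released in Dispose (line 47). Below is a minimal TorchSharp sketch of how those three statements might fit into one module. Only the three quoted LayerNorm lines come from the listing; the class shape, the hiddenSize parameter, and the Dispose pattern are assumptions for illustration, not the actual Microsoft.ML.TorchSharp source.

```csharp
using TorchSharp;
using static TorchSharp.torch;

// Hypothetical reconstruction around the three LayerNorm references above.
internal sealed class AttentionOutput : nn.Module<Tensor, Tensor, Tensor>
{
    private readonly nn.Module<Tensor, Tensor> LayerNorm;

    public AttentionOutput(long hiddenSize) : base(nameof(AttentionOutput))
    {
        // Write (line 26): the field is assigned once, in the constructor.
        LayerNorm = torch.nn.LayerNorm(new long[] { hiddenSize });
        RegisterComponents();
    }

    public override Tensor forward(Tensor hiddenStates, Tensor inputTensor)
    {
        // Reference (line 35): residual connection followed by normalization.
        return LayerNorm.forward(hiddenStates + inputTensor);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Reference (line 47): release the native resources the submodule holds.
            LayerNorm.Dispose();
        }
        base.Dispose(disposing);
    }
}
```

This write-once, read-twice pattern is typical for TorchSharp submodules: construct in the constructor, call RegisterComponents() so the parameters are tracked, and dispose explicitly because the module wraps native tensors.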