1 write to self
  Microsoft.ML.TorchSharp (1)
    Roberta\Modules\Attention.cs (1)
      Line 22: self = new AttentionSelf(numAttentionHeads, hiddenSize, layerNormEps, attentionDropoutRate);
2 references to self
  Microsoft.ML.TorchSharp (2)
    Roberta\Modules\Attention.cs (2)
      Line 30: var x = self.forward(hiddenStates, attentionMask);
      Line 41: self.Dispose();
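The listing covers the full lifecycle of the `self` field in Attention.cs: one write in the constructor (line 22), one read in `forward` (line 30), and one read in `Dispose` (line 41). Below is a minimal sketch of how those three references fit together, assuming TorchSharp's generic `nn.Module` base class. The `AttentionSelf` stub is a placeholder for the real sibling type in the same file, and the real Attention module also wires up an output sublayer that is omitted here.

```csharp
using TorchSharp;
using static TorchSharp.torch;

// Stub standing in for the real AttentionSelf defined in the same file;
// its internals (query/key/value projections, dropout, layer norm) are omitted.
internal sealed class AttentionSelf : nn.Module<Tensor, Tensor, Tensor>
{
    public AttentionSelf(long numAttentionHeads, long hiddenSize,
                         double layerNormEps, double attentionDropoutRate)
        : base(nameof(AttentionSelf)) { }

    public override Tensor forward(Tensor hiddenStates, Tensor attentionMask)
        => hiddenStates.alias(); // placeholder: the real code computes self-attention
}

internal sealed class Attention : nn.Module<Tensor, Tensor, Tensor>
{
    private readonly AttentionSelf self;

    public Attention(long numAttentionHeads, long hiddenSize,
                     double layerNormEps, double attentionDropoutRate)
        : base(nameof(Attention))
    {
        // The single write (Attention.cs, line 22).
        self = new AttentionSelf(numAttentionHeads, hiddenSize,
                                 layerNormEps, attentionDropoutRate);
        RegisterComponents(); // registers the submodule with TorchSharp
    }

    public override Tensor forward(Tensor hiddenStates, Tensor attentionMask)
    {
        // First read (line 30): delegate to the self-attention sublayer.
        var x = self.forward(hiddenStates, attentionMask);
        return x;
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            // Second read (line 41): submodules own native tensors,
            // so they must be disposed explicitly.
            self.Dispose();
        }
        base.Dispose(disposing);
    }
}
```

Counting both the `forward` call and the `Dispose` call as reads is why the pane reports two references against a single write: TorchSharp modules wrap native LibTorch memory, so every submodule created in a constructor needs a matching `Dispose` in the owner's dispose path.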