1 write to attention_dropout
Microsoft.ML.TorchSharp (1)
Roberta\Modules\AttentionSelf.cs (1)
Line 40: attention_dropout = torch.nn.Dropout(attentionDropoutRate);
2 references to attention_dropout
Microsoft.ML.TorchSharp (2)
Roberta\Modules\AttentionSelf.cs (2)
Line 65: attentionProbs = attention_dropout.forward(attentionProbs);
Line 95: attention_dropout.Dispose();
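The three reference sites trace the full lifecycle of the dropout module: constructed once, applied to the attention probabilities in the forward pass, and released on disposal. Below is a minimal sketch of that pattern using TorchSharp's standard nn.Module API; the class name, constructor shape, and RegisterComponents call are illustrative assumptions, not the actual AttentionSelf source.

```csharp
using TorchSharp;
using TorchSharp.Modules;
using static TorchSharp.torch;

// Illustrative sketch of the attention_dropout lifecycle shown by the
// references above (construct -> forward -> dispose). Not the real
// AttentionSelf class; only the field usage mirrors the listing.
internal sealed class AttentionSelfSketch : nn.Module<Tensor, Tensor>
{
    private readonly Dropout attention_dropout;

    public AttentionSelfSketch(double attentionDropoutRate)
        : base(nameof(AttentionSelfSketch))
    {
        // Write site (cf. line 40): the dropout module is created once.
        attention_dropout = torch.nn.Dropout(attentionDropoutRate);
        RegisterComponents();
    }

    public override Tensor forward(Tensor attentionProbs)
    {
        // Reference site (cf. line 65): dropout is applied to the
        // attention probabilities during the forward pass.
        return attention_dropout.forward(attentionProbs);
    }

    protected override void Dispose(bool disposing)
    {
        // Reference site (cf. line 95): the submodule's native handle
        // is released when the parent module is disposed.
        if (disposing)
        {
            attention_dropout.Dispose();
        }
        base.Dispose(disposing);
    }
}
```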