PyTorch Adam Optimizer: Principles, Formula, and Application
2024-07-08 14:18:56
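Before the code, a brief recap of the update rule Adam applies at each step (the formulation from Kingma & Ba, 2015, which torch.optim.Adam implements with default \beta_1 = 0.9, \beta_2 = 0.999, \epsilon = 10^{-8}):

g_t = \nabla_\theta f_t(\theta_{t-1})
m_t = \beta_1 m_{t-1} + (1 - \beta_1) g_t
v_t = \beta_2 v_{t-1} + (1 - \beta_2) g_t^2
\hat{m}_t = m_t / (1 - \beta_1^t), \quad \hat{v}_t = v_t / (1 - \beta_2^t)
\theta_t = \theta_{t-1} - \alpha \, \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)

In words: Adam keeps exponential moving averages of the gradient (first moment) and of its elementwise square (second moment), corrects both for their initialization bias, and scales each parameter's step by the inverse square root of its second-moment estimate, so the effective learning rate adapts per parameter.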
import torch

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use Adam; the optim package contains many other
# optimization algorithms. The first argument to the Adam constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters.
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters.
    optimizer.step()
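To connect optimizer.step() with the formula above, here is a minimal sketch of a single Adam update written by hand. The function adam_step and its arguments are illustrative, not part of the PyTorch API; the real torch.optim.Adam also handles parameter groups, weight decay, and the amsgrad variant, which this sketch omits.

import torch

def adam_step(param, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    # Update biased first- and second-moment estimates in place:
    #   m_t = beta1 * m_{t-1} + (1 - beta1) * g_t
    #   v_t = beta2 * v_{t-1} + (1 - beta2) * g_t^2
    m.mul_(beta1).add_(grad, alpha=1 - beta1)
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)

    # Bias correction (t is the 1-based step count).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)

    # Parameter update: theta_t = theta_{t-1} - lr * m_hat / (sqrt(v_hat) + eps)
    param.add_(-lr * m_hat / (v_hat.sqrt() + eps))

# Example usage on a single weight tensor; m and v start at zero,
# just as torch.optim.Adam initializes its state.
w = torch.zeros(3)
m = torch.zeros_like(w)
v = torch.zeros_like(w)
grad = torch.randn(3)
adam_step(w, grad, m, v, t=1)

In the training loop above, optimizer.step() performs this same kind of update for every tensor returned by model.parameters(), using the gradients that loss.backward() stored in each parameter's .grad field.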