The nn package in PyTorch provides high-level abstractions for building neural networks. In this post we will build a simple neural network using the PyTorch nn package.
Import torch and define layer dimensions
import torch
batch_size, input_dim, hidden_dim, out_dim = 32, 100, 100, 10
Create input and output tensors
input_tensor = torch.randn(batch_size, input_dim)
output_tensor = torch.randn(batch_size, out_dim)
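Since this is a toy example, torch.randn stands in for a real dataset: it draws samples from a standard normal distribution with the requested shape. A quick sanity check of the shapes (not part of the original post):

```python
import torch

batch_size, input_dim, out_dim = 32, 100, 10

# torch.randn draws from a standard normal distribution, so these
# tensors are random placeholders for a real dataset.
input_tensor = torch.randn(batch_size, input_dim)
output_tensor = torch.randn(batch_size, out_dim)

print(tuple(input_tensor.shape))   # (32, 100)
print(tuple(output_tensor.shape))  # (32, 10)
```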
Define the model using the nn package
model = torch.nn.Sequential(
    torch.nn.Linear(input_dim, hidden_dim),
    torch.nn.Tanh(),
    torch.nn.Linear(hidden_dim, out_dim),
)
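torch.nn.Sequential chains its modules in order, so each Linear layer owns a weight matrix and a bias vector that model.parameters() exposes to the optimizer. As a quick sanity check (not part of the original post), we can count those parameters:

```python
import torch

# Same architecture as above, with the dimensions written out.
model = torch.nn.Sequential(
    torch.nn.Linear(100, 100),  # weight: 100x100, bias: 100
    torch.nn.Tanh(),            # no trainable parameters
    torch.nn.Linear(100, 10),   # weight: 100x10, bias: 10
)

total_params = sum(p.numel() for p in model.parameters())
print(total_params)  # 100*100 + 100 + 100*10 + 10 = 11110
```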
Define the loss function, learning rate, and optimizer
loss_function = torch.nn.MSELoss(reduction='sum')
lr = 1e-5
sgd_optimizer = torch.optim.SGD(model.parameters(), lr=lr)
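With reduction='sum', MSELoss adds the squared errors over every element of the batch, whereas the default 'mean' averages them; that is why the printed losses later in this post are in the hundreds. A small sketch of the difference:

```python
import torch

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.0, 2.0, 5.0])  # one element off by 2

sum_loss = torch.nn.MSELoss(reduction='sum')(pred, target)
mean_loss = torch.nn.MSELoss(reduction='mean')(pred, target)

print(sum_loss.item())   # 4.0 -> 0 + 0 + 2**2
print(mean_loss.item())  # ~1.3333 -> 4 / 3
```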
Train the model
for i in range(200):
    predicted_value = model(input_tensor)
    loss = loss_function(predicted_value, output_tensor)
    print(i, loss.item())
    sgd_optimizer.zero_grad()
    loss.backward()
    sgd_optimizer.step()
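Each iteration follows the standard pattern: forward pass, compute the loss, zero_grad() to clear stale gradients, backward() to accumulate new ones, and step() to update the weights. To confirm the loop actually learns, we can compare the loss before and after training (a self-contained sketch of the same setup, with a fixed seed for reproducibility):

```python
import torch

torch.manual_seed(0)
batch_size, input_dim, hidden_dim, out_dim = 32, 100, 100, 10
x = torch.randn(batch_size, input_dim)
y = torch.randn(batch_size, out_dim)

model = torch.nn.Sequential(
    torch.nn.Linear(input_dim, hidden_dim),
    torch.nn.Tanh(),
    torch.nn.Linear(hidden_dim, out_dim),
)
loss_fn = torch.nn.MSELoss(reduction='sum')
opt = torch.optim.SGD(model.parameters(), lr=1e-5)

initial_loss = loss_fn(model(x), y).item()
for _ in range(200):
    opt.zero_grad()                # clear gradients from the last step
    loss = loss_fn(model(x), y)    # forward pass + loss
    loss.backward()                # backpropagate
    opt.step()                     # SGD weight update
final_loss = loss_fn(model(x), y).item()

print(final_loss < initial_loss)  # the loss decreased during training
```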
Complete Code
import torch
batch_size, input_dim, hidden_dim, out_dim = 32, 100, 100, 10
input_tensor = torch.randn(batch_size, input_dim)
output_tensor = torch.randn(batch_size, out_dim)
model = torch.nn.Sequential(
    torch.nn.Linear(input_dim, hidden_dim),
    torch.nn.Tanh(),
    torch.nn.Linear(hidden_dim, out_dim),
)
loss_function = torch.nn.MSELoss(reduction='sum')
lr = 1e-5
sgd_optimizer = torch.optim.SGD(model.parameters(), lr=lr)
for i in range(200):
    predicted_value = model(input_tensor)
    loss = loss_function(predicted_value, output_tensor)
    print(i, loss.item())
    sgd_optimizer.zero_grad()
    loss.backward()
    sgd_optimizer.step()
Output
0 314.1348571777344
1 313.5540466308594
2 312.9750061035156
3 312.3976135253906
4 311.8220520019531
5 311.2481689453125
6 310.67596435546875
7 310.10546875
8 309.53668212890625
9 308.96954345703125
10 308.40411376953125
11 307.84033203125
....