
# !pip install nbdev
# !nbdev-install-quarto
# !pip install jupyterlab-quarto
!git clone https://github.com/yowashi23/miniai.git
Cloning into 'miniai'...
warning: You appear to have cloned an empty repository.
!cd miniai
!nbdev-new --user yowashi23 --author yowashi23 --author_email shin.yongwhan23@gmail.com
pyproject.toml created.
No git repo found. Run: gh repo create yowashi23/content --public --source=.
Output created: _docs/README.md
from pathlib import Path

MNIST_URL='https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/data/mnist.pkl.gz?raw=true'
path_data = Path('data')
path_data.mkdir(exist_ok=True)
path_gz = path_data/'mnist.pkl.gz'

from urllib.request import urlretrieve
if not path_gz.exists(): urlretrieve(MNIST_URL, path_gz)
import gzip, pickle
import torch, matplotlib as mpl, matplotlib.pyplot as plt
from torch import tensor, nn
import torch.nn.functional as F
from fastcore.test import test_close

torch.set_printoptions(precision=2, linewidth=140, sci_mode=False)
torch.manual_seed(1)
mpl.rcParams['image.cmap'] = 'gray'

path_data = Path('data')
path_gz = path_data/'mnist.pkl.gz'
with gzip.open(path_gz, 'rb') as f: ((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
x_train, y_train, x_valid, y_valid = map(tensor, [x_train, y_train, x_valid, y_valid])

Initial setup

Data

n,m = x_train.shape
c = y_train.max()+1
nh = 50
class Model(nn.Module):
    def __init__(self, n_in, nh, n_out):
        super().__init__()
        # layers kept in a plain Python list, so they are NOT registered as sub-modules
        # (we come back to this in "Registering modules" below)
        self.layers = [nn.Linear(n_in,nh), nn.ReLU(), nn.Linear(nh,n_out)]

    def __call__(self, x):
        for l in self.layers: x = l(x)
        return x
model = Model(m, nh, 10)
pred = model(x_train)
pred.shape
torch.Size([50000, 10])

Cross entropy loss

First, we will need to compute the softmax of our activations. This is defined by:

\[\hbox{softmax(x)}_{i} = \frac{e^{x_{i}}}{e^{x_{0}} + e^{x_{1}} + \cdots + e^{x_{n-1}}}\]

or more concisely:

\[\hbox{softmax(x)}_{i} = \frac{e^{x_{i}}}{\sum\limits_{0 \leq j \lt n} e^{x_{j}}}\]

In practice, we will need the log of the softmax when we calculate the loss.

def log_softmax(x): return (x.exp()/(x.exp().sum(-1,keepdim=True))).log()
log_softmax(pred)
tensor([[-2.37, -2.49, -2.36,  ..., -2.31, -2.28, -2.22],
        [-2.37, -2.44, -2.44,  ..., -2.27, -2.26, -2.16],
        [-2.48, -2.33, -2.28,  ..., -2.30, -2.30, -2.27],
        ...,
        [-2.33, -2.52, -2.34,  ..., -2.31, -2.21, -2.16],
        [-2.38, -2.38, -2.33,  ..., -2.29, -2.26, -2.17],
        [-2.33, -2.55, -2.36,  ..., -2.29, -2.27, -2.16]], grad_fn=<LogBackward0>)

Note that the formula

\[\log \left ( \frac{a}{b} \right ) = \log(a) - \log(b)\]

gives a simplification when we compute the log softmax:

def log_softmax(x): return x - x.exp().sum(-1,keepdim=True).log()

Then, there is a way to compute the log of the sum of exponentials in a more stable way, called the LogSumExp trick. The idea is to use the following formula:

\[\log \left ( \sum_{j=1}^{n} e^{x_{j}} \right ) = \log \left ( e^{a} \sum_{j=1}^{n} e^{x_{j}-a} \right ) = a + \log \left ( \sum_{j=1}^{n} e^{x_{j}-a} \right )\]

where a is the maximum of the \(x_{j}\).

def logsumexp(x):
    m = x.max(-1)[0]
    return m + (x-m[:,None]).exp().sum(-1).log()

This way, we will avoid an overflow when taking the exponential of a big activation. In PyTorch, this is already implemented for us.

def log_softmax(x): return x - x.logsumexp(-1,keepdim=True)
test_close(logsumexp(pred), pred.logsumexp(-1))
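
As a quick sanity check (my own example, not part of the original notebook), the LogSumExp form stays finite for a very large activation where the naive formula overflows:

big = tensor([[1000., 0., -1000.]])
naive  = (big.exp()/big.exp().sum(-1, keepdim=True)).log()  # exp(1000) overflows, producing nan/-inf
stable = big - big.logsumexp(-1, keepdim=True)              # finite: [0., -1000., -2000.]
naive, stable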
sm_pred = log_softmax(pred)
sm_pred
tensor([[-2.37, -2.49, -2.36,  ..., -2.31, -2.28, -2.22],
        [-2.37, -2.44, -2.44,  ..., -2.27, -2.26, -2.16],
        [-2.48, -2.33, -2.28,  ..., -2.30, -2.30, -2.27],
        ...,
        [-2.33, -2.52, -2.34,  ..., -2.31, -2.21, -2.16],
        [-2.38, -2.38, -2.33,  ..., -2.29, -2.26, -2.17],
        [-2.33, -2.55, -2.36,  ..., -2.29, -2.27, -2.16]], grad_fn=<SubBackward0>)

The cross entropy loss for some target \(x\) and some prediction \(p(x)\) is given by:

\[ -\sum x\, \log p(x) \]

But since our targets are effectively 1-hot encoded (in practice they are stored as integer class indices), this can be rewritten as \(-\log(p_{i})\) where \(i\) is the index of the desired target.

This can be done using NumPy-style integer array indexing. Note that PyTorch supports all the tricks described in NumPy's advanced indexing documentation.

y_train[:3]
tensor([5, 0, 4])
y_train.shape
torch.Size([50000])
sm_pred[0,5],sm_pred[1,0],sm_pred[2,4]
(tensor(-2.20, grad_fn=<SelectBackward0>),
 tensor(-2.37, grad_fn=<SelectBackward0>),
 tensor(-2.36, grad_fn=<SelectBackward0>))
sm_pred.shape
torch.Size([50000, 10])
sm_pred[[0,1,2], y_train[:3]]
tensor([-2.20, -2.37, -2.36], grad_fn=<IndexBackward0>)
def nll(input, target): return -input[range(target.shape[0]), target].mean()
loss = nll(sm_pred, y_train)
loss
tensor(2.30, grad_fn=<NegBackward0>)

Then use PyTorch’s implementation.

test_close(F.nll_loss(F.log_softmax(pred, -1), y_train), loss, 1e-3)

In PyTorch, F.log_softmax and F.nll_loss are combined in one optimized function, F.cross_entropy.

test_close(F.cross_entropy(pred, y_train), loss, 1e-3)

Basic training loop

Basically the training loop repeats over the following steps:

- get the output of the model on a batch of inputs
- compare the output to the labels we have and compute a loss
- calculate the gradients of the loss with respect to every parameter of the model
- update said parameters with those gradients to make them a little bit better

loss_func = F.cross_entropy
bs=50                  # batch size

xb = x_train[0:bs]     # a mini-batch from x
preds = model(xb)      # predictions
preds[0], preds.shape
(tensor([-0.09, -0.21, -0.08,  0.10, -0.04,  0.08, -0.04, -0.03,  0.01,  0.06], grad_fn=<SelectBackward0>),
 torch.Size([50, 10]))
yb = y_train[0:bs]
yb
tensor([5, 0, 4, 1, 9, 2, 1, 3, 1, 4, 3, 5, 3, 6, 1, 7, 2, 8, 6, 9, 4, 0, 9, 1, 1, 2, 4, 3, 2, 7, 3, 8, 6, 9, 0, 5, 6, 0, 7, 6, 1, 8, 7, 9,
        3, 9, 8, 5, 9, 3])
loss_func(preds, yb)
tensor(2.30, grad_fn=<NllLossBackward0>)
preds.argmax(dim=1)
tensor([3, 9, 3, 8, 5, 9, 3, 9, 3, 9, 5, 3, 9, 9, 3, 9, 9, 5, 8, 7, 9, 5, 3, 8, 9, 5, 9, 5, 5, 9, 3, 5, 9, 7, 5, 7, 9, 9, 3, 9, 3, 5, 3, 8,
        3, 5, 9, 5, 9, 5])

source

accuracy

def accuracy(out, yb): return (out.argmax(dim=1)==yb).float().mean()

accuracy(preds, yb)
tensor(0.08)
lr = 0.5   # learning rate
epochs = 3 # how many epochs to train for

source

report

def report(loss, preds, yb): print(f'{loss:.2f}, {accuracy(preds, yb):.2f}')

xb,yb = x_train[:bs],y_train[:bs]
preds = model(xb)
report(loss_func(preds, yb), preds, yb)
2.30, 0.08
for epoch in range(epochs):
    for i in range(0, n, bs):
        s = slice(i, min(n,i+bs))
        xb,yb = x_train[s],y_train[s]
        preds = model(xb)
        loss = loss_func(preds, yb)
        loss.backward()
        with torch.no_grad():
            for l in model.layers:
                if hasattr(l, 'weight'):
                    l.weight -= l.weight.grad * lr
                    l.bias   -= l.bias.grad   * lr
                    l.weight.grad.zero_()
                    l.bias  .grad.zero_()
    report(loss, preds, yb)
0.12, 0.98
0.12, 0.94
0.08, 0.96

Using parameters and optim

Parameters

m1 = nn.Module()
m1.foo = nn.Linear(3,4)
m1
Module(
  (foo): Linear(in_features=3, out_features=4, bias=True)
)
list(m1.named_children())
[('foo', Linear(in_features=3, out_features=4, bias=True))]
m1.named_children()
<generator object Module.named_children>
list(m1.parameters())
[Parameter containing:
 tensor([[ 0.57,  0.43, -0.30],
         [ 0.13, -0.32, -0.24],
         [ 0.51,  0.04,  0.22],
         [ 0.13, -0.17, -0.24]], requires_grad=True),
 Parameter containing:
 tensor([-0.01, -0.51, -0.39,  0.56], requires_grad=True)]
class MLP(nn.Module):
    def __init__(self, n_in, nh, n_out):
        super().__init__()
        self.l1 = nn.Linear(n_in,nh)
        self.l2 = nn.Linear(nh,n_out)
        self.relu = nn.ReLU()

    def forward(self, x): return self.l2(self.relu(self.l1(x)))
model = MLP(m, nh, 10)
model.l1
Linear(in_features=784, out_features=50, bias=True)
model
MLP(
  (l1): Linear(in_features=784, out_features=50, bias=True)
  (l2): Linear(in_features=50, out_features=10, bias=True)
  (relu): ReLU()
)
for name,l in model.named_children(): print(f"{name}: {l}")
l1: Linear(in_features=784, out_features=50, bias=True)
l2: Linear(in_features=50, out_features=10, bias=True)
relu: ReLU()
for p in model.parameters(): print(p.shape)
torch.Size([50, 784])
torch.Size([50])
torch.Size([10, 50])
torch.Size([10])
def fit():
    for epoch in range(epochs):
        for i in range(0, n, bs):
            s = slice(i, min(n,i+bs))
            xb,yb = x_train[s],y_train[s]
            preds = model(xb)
            loss = loss_func(preds, yb)
            loss.backward()
            with torch.no_grad():
                for p in model.parameters(): p -= p.grad * lr
                model.zero_grad()
        report(loss, preds, yb)
fit()
0.19, 0.96
0.11, 0.96
0.04, 1.00

Behind the scenes, PyTorch overrides the __setattr__ function in nn.Module so that the submodules you define are properly registered as parameters of the model.

class MyModule:
    def __init__(self, n_in, nh, n_out):
        self._modules = {}
        self.l1 = nn.Linear(n_in,nh)
        self.l2 = nn.Linear(nh,n_out)

    def __setattr__(self,k,v):
        if not k.startswith("_"): self._modules[k] = v
        super().__setattr__(k,v)

    def __repr__(self): return f'{self._modules}'

    def parameters(self):
        for l in self._modules.values(): yield from l.parameters()
mdl = MyModule(m,nh,10)
mdl
{'l1': Linear(in_features=784, out_features=50, bias=True), 'l2': Linear(in_features=50, out_features=10, bias=True)}
for p in mdl.parameters(): print(p.shape)
torch.Size([50, 784])
torch.Size([50])
torch.Size([10, 50])
torch.Size([10])

Registering modules

from functools import reduce

We can use the original layers approach, but we have to register the modules.

layers = [nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10)]
class Model(nn.Module):
    def __init__(self, layers):
        super().__init__()
        self.layers = layers
        for i,l in enumerate(self.layers): self.add_module(f'layer_{i}', l)

    def forward(self, x): return reduce(lambda val,layer: layer(val), self.layers, x)
model = Model(layers)
model
Model(
  (layer_0): Linear(in_features=784, out_features=50, bias=True)
  (layer_1): ReLU()
  (layer_2): Linear(in_features=50, out_features=10, bias=True)
)
model(xb).shape
torch.Size([50, 10])

nn.ModuleList

nn.ModuleList does this for us.

class SequentialModel(nn.Module):
    def __init__(self, layers):
        super().__init__()
        self.layers = nn.ModuleList(layers)

    def forward(self, x):
        for l in self.layers: x = l(x)
        return x
model = SequentialModel(layers)
model
SequentialModel(
  (layers): ModuleList(
    (0): Linear(in_features=784, out_features=50, bias=True)
    (1): ReLU()
    (2): Linear(in_features=50, out_features=10, bias=True)
  )
)
fit()
0.12, 0.96
0.11, 0.96
0.07, 0.98

nn.Sequential

nn.Sequential is a convenient class which does the same as the above:

model = nn.Sequential(nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10))
fit()
loss_func(model(xb), yb), accuracy(model(xb), yb)
0.15, 0.96
0.11, 0.96
0.09, 0.94
(tensor(0.02, grad_fn=<NllLossBackward0>), tensor(1.))
model
Sequential(
  (0): Linear(in_features=784, out_features=50, bias=True)
  (1): ReLU()
  (2): Linear(in_features=50, out_features=10, bias=True)
)

optim

class Optimizer():
    def __init__(self, params, lr=0.5): self.params,self.lr=list(params),lr

    def step(self):
        with torch.no_grad():
            for p in self.params: p -= p.grad * self.lr

    def zero_grad(self):
        for p in self.params: p.grad.data.zero_()
model = nn.Sequential(nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10))
opt = Optimizer(model.parameters())
for epoch in range(epochs):
    for i in range(0, n, bs):
        s = slice(i, min(n,i+bs))
        xb,yb = x_train[s],y_train[s]
        preds = model(xb)
        loss = loss_func(preds, yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
    report(loss, preds, yb)
0.18, 0.94
0.13, 0.96
0.11, 0.94

PyTorch already provides this exact functionality in optim.SGD (it also handles stuff like momentum, which we’ll look at later)

from torch import optim
def get_model():
    model = nn.Sequential(nn.Linear(m,nh), nn.ReLU(), nn.Linear(nh,10))
    return model, optim.SGD(model.parameters(), lr=lr)
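
For reference (my own illustration, not used in this notebook's training runs), momentum is passed straight to the same constructor:

optim.SGD(model.parameters(), lr=lr, momentum=0.9)   # momentum itself is covered later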
model,opt = get_model()
loss_func(model(xb), yb)
tensor(2.33, grad_fn=<NllLossBackward0>)
for epoch in range(epochs):
    for i in range(0, n, bs):
        s = slice(i, min(n,i+bs))
        xb,yb = x_train[s],y_train[s]
        preds = model(xb)
        loss = loss_func(preds, yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
    report(loss, preds, yb)
0.12, 0.98
0.09, 0.98
0.07, 0.98

Dataset and DataLoader

Dataset

It’s clunky to iterate through minibatches of x and y values separately:

    xb = x_train[s]
    yb = y_train[s]

Instead, let’s do these two steps together, by introducing a Dataset class:

    xb,yb = train_ds[s]

source

Dataset

class Dataset():
    def __init__(self, x, y): self.x,self.y = x,y
    def __len__(self): return len(self.x)
    def __getitem__(self, i): return self.x[i],self.y[i]

train_ds,valid_ds = Dataset(x_train, y_train),Dataset(x_valid, y_valid)
assert len(train_ds)==len(x_train)
assert len(valid_ds)==len(x_valid)
xb,yb = train_ds[0:5]
assert xb.shape==(5,28*28)
assert yb.shape==(5,)
xb,yb
(tensor([[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.]]),
 tensor([5, 0, 4, 1, 9]))
model,opt = get_model()
for epoch in range(epochs):
    for i in range(0, n, bs):
        xb,yb = train_ds[i:min(n,i+bs)]
        preds = model(xb)
        loss = loss_func(preds, yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
    report(loss, preds, yb)
0.17, 0.96
0.11, 0.94
0.09, 0.96

DataLoader

Previously, our loop iterated over batches (xb, yb) like this:

for i in range(0, n, bs):
    xb,yb = train_ds[i:min(n,i+bs)]
    ...

Let’s make our loop much cleaner, using a data loader:

for xb,yb in train_dl:
    ...
class DataLoader():
    def __init__(self, ds, bs): self.ds,self.bs = ds,bs
    def __iter__(self):
        for i in range(0, len(self.ds), self.bs): yield self.ds[i:i+self.bs]
train_dl = DataLoader(train_ds, bs)
valid_dl = DataLoader(valid_ds, bs)
xb,yb = next(iter(valid_dl))
xb.shape
torch.Size([50, 784])
yb
tensor([3, 8, 6, 9, 6, 4, 5, 3, 8, 4, 5, 2, 3, 8, 4, 8, 1, 5, 0, 5, 9, 7, 4, 1, 0, 3, 0, 6, 2, 9, 9, 4, 1, 3, 6, 8, 0, 7, 7, 6, 8, 9, 0, 3,
        8, 3, 7, 7, 8, 4])
plt.imshow(xb[0].view(28,28))
yb[0]
tensor(3)

model,opt = get_model()
def fit():
    for epoch in range(epochs):
        for xb,yb in train_dl:
            preds = model(xb)
            loss = loss_func(preds, yb)
            loss.backward()
            opt.step()
            opt.zero_grad()
        report(loss, preds, yb)
fit()
loss_func(model(xb), yb), accuracy(model(xb), yb)
0.11, 0.98
0.09, 0.98
0.06, 1.00
(tensor(0.03, grad_fn=<NllLossBackward0>), tensor(1.))

Random sampling

We want our training set to be in a random order, and that order should differ each iteration. But the validation set shouldn’t be randomized.

import random
class Sampler():
    def __init__(self, ds, shuffle=False): self.n,self.shuffle = len(ds),shuffle
    def __iter__(self):
        res = list(range(self.n))
        if self.shuffle: random.shuffle(res)
        return iter(res)
from itertools import islice
ss = Sampler(train_ds)
it = iter(ss)
for o in range(5): print(next(it))
0
1
2
3
4
list(islice(ss, 5))
[0, 1, 2, 3, 4]
ss = Sampler(train_ds, shuffle=True)
list(islice(ss, 5))
[9479, 15594, 48548, 36621, 15204]
import fastcore.all as fc
class BatchSampler():
    def __init__(self, sampler, bs, drop_last=False): fc.store_attr()
    def __iter__(self): yield from fc.chunked(iter(self.sampler), self.bs, drop_last=self.drop_last)
batchs = BatchSampler(ss, 4)
list(islice(batchs, 5))
[[37946, 23040, 42943, 49787],
 [19388, 29565, 4599, 22490],
 [1415, 23053, 5454, 35732],
 [46409, 36545, 48757, 12439],
 [15242, 42724, 42850, 36010]]
def collate(b):
    xs,ys = zip(*b)
    return torch.stack(xs),torch.stack(ys)
class DataLoader():
    def __init__(self, ds, batchs, collate_fn=collate): fc.store_attr()
    def __iter__(self): yield from (self.collate_fn(self.ds[i] for i in b) for b in self.batchs)
train_samp = BatchSampler(Sampler(train_ds, shuffle=True ), bs)
valid_samp = BatchSampler(Sampler(valid_ds, shuffle=False), bs)
train_dl = DataLoader(train_ds, batchs=train_samp)
valid_dl = DataLoader(valid_ds, batchs=valid_samp)
xb,yb = next(iter(valid_dl))
plt.imshow(xb[0].view(28,28))
yb[0]
tensor(3)

xb.shape,yb.shape
(torch.Size([50, 784]), torch.Size([50]))
model,opt = get_model()
fit()
0.12, 0.94
0.15, 0.92
0.16, 0.92

Multiprocessing DataLoader

import torch.multiprocessing as mp
from fastcore.basics import store_attr
train_ds[[3,6,8,1]]
(tensor([[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.]]),
 tensor([1, 1, 1, 0]))
train_ds.__getitem__([3,6,8,1])
(tensor([[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.]]),
 tensor([1, 1, 1, 0]))
for o in map(train_ds.__getitem__, ([3,6],[8,1])): print(o)
(tensor([[0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.]]), tensor([1, 1]))
(tensor([[0., 0., 0.,  ..., 0., 0., 0.],
        [0., 0., 0.,  ..., 0., 0., 0.]]), tensor([1, 0]))
class DataLoader():
    def __init__(self, ds, batchs, n_workers=1, collate_fn=collate): fc.store_attr()
    def __iter__(self):
        with mp.Pool(self.n_workers) as ex: yield from ex.map(self.ds.__getitem__, iter(self.batchs))
train_dl = DataLoader(train_ds, batchs=train_samp, n_workers=2)
it = iter(train_dl)
xb,yb = next(it)
xb.shape,yb.shape
(torch.Size([50, 784]), torch.Size([50]))

PyTorch DataLoader

from torch.utils.data import DataLoader, RandomSampler, SequentialSampler

train_samp = BatchSampler(RandomSampler(train_ds),     bs, drop_last=False)
valid_samp = BatchSampler(SequentialSampler(valid_ds), bs, drop_last=False)
train_dl = DataLoader(train_ds, batch_sampler=train_samp, collate_fn=collate)
valid_dl = DataLoader(valid_ds, batch_sampler=valid_samp, collate_fn=collate)
model,opt = get_model()
fit()
loss_func(model(xb), yb), accuracy(model(xb), yb)
0.10, 0.94
0.10, 0.96
0.27, 0.98
(tensor(0.01, grad_fn=<NllLossBackward0>), tensor(1.))

PyTorch can auto-generate the BatchSampler for us:

train_dl = DataLoader(train_ds, bs, sampler=RandomSampler(train_ds), collate_fn=collate)
valid_dl = DataLoader(valid_ds, bs, sampler=SequentialSampler(valid_ds), collate_fn=collate)

PyTorch can also generate the Sequential/RandomSamplers too:

train_dl = DataLoader(train_ds, bs, shuffle=True, drop_last=True, num_workers=2)
valid_dl = DataLoader(valid_ds, bs, shuffle=False, num_workers=2)
model,opt = get_model()
fit()

loss_func(model(xb), yb), accuracy(model(xb), yb)
0.21, 0.92
0.15, 0.94
0.05, 0.98
(tensor(0.05, grad_fn=<NllLossBackward0>), tensor(0.98))

Our dataset actually already knows how to sample a batch of indices all at once:

train_ds[[4,6,7]]
(tensor([[0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.],
         [0., 0., 0.,  ..., 0., 0., 0.]]),
 tensor([9, 1, 3]))

…that means that we can actually skip the batch_sampler and collate_fn entirely:

train_dl = DataLoader(train_ds, sampler=train_samp)
valid_dl = DataLoader(valid_ds, sampler=valid_samp)
xb,yb = next(iter(train_dl))
xb.shape,yb.shape
(torch.Size([1, 50, 784]), torch.Size([1, 50]))

(The extra leading dimension of 1 comes from the DataLoader's default batch_size=1 collation: each "sample" it fetches is already a whole batch from our Dataset, and it stacks these single samples along a new axis.)

Validation

You should always also have a validation set, in order to identify whether you are overfitting.

We will calculate and print the validation loss at the end of each epoch.

(Note that we always call model.train() before training, and model.eval() before inference, because these are used by layers such as nn.BatchNorm2d and nn.Dropout to ensure appropriate behaviour for these different phases.)
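
As a tiny illustration of that point (my own example, not from the notebook), a Dropout layer zeroes activations in train mode and becomes the identity in eval mode:

drop = nn.Dropout(0.5)
t = torch.ones(8)
drop.train()
drop(t)   # roughly half the entries zeroed, the rest scaled by 1/(1-0.5)=2
drop.eval()
drop(t)   # identical to the input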


source

fit

def fit(
    epochs, model, loss_func, opt, train_dl, valid_dl
)

Train the model on train_dl for the given number of epochs, evaluating on valid_dl and printing the validation loss and accuracy after each epoch.


source

get_dls

def get_dls(
    train_ds, valid_ds, bs, **kwargs
)

Return training and validation DataLoaders for the given datasets and batch size.
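
The bodies of fit and get_dls are not shown by the documentation above. A minimal sketch consistent with the surrounding description and with the per-epoch output below (a shuffled training loader, and the validation loss and accuracy printed each epoch) might look like the following; the exact validation batch size is an assumption.

def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
    for epoch in range(epochs):
        model.train()
        for xb,yb in train_dl:
            loss = loss_func(model(xb), yb)
            loss.backward()
            opt.step()
            opt.zero_grad()

        model.eval()
        with torch.no_grad():
            tot_loss,tot_acc,count = 0.,0.,0
            for xb,yb in valid_dl:
                pred = model(xb)
                n = len(xb)
                count += n
                tot_loss += loss_func(pred, yb).item()*n
                tot_acc  += accuracy(pred, yb).item()*n
        # print epoch number, validation loss, validation accuracy
        print(epoch, tot_loss/count, tot_acc/count)
    return tot_loss/count, tot_acc/count

def get_dls(train_ds, valid_ds, bs, **kwargs):
    # shuffled training loader; validation batch size of 2*bs is an assumption
    return (DataLoader(train_ds, batch_size=bs, shuffle=True, **kwargs),
            DataLoader(valid_ds, batch_size=bs*2, **kwargs))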

Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code:

train_dl,valid_dl = get_dls(train_ds, valid_ds, bs)
model,opt = get_model()
%time loss,acc = fit(5, model, loss_func, opt, train_dl, valid_dl)
0 0.14236383073031902 0.958100004196167
1 0.12564024675637483 0.9632000041007995
2 0.1306914600916207 0.9645000052452087
3 0.1098845548601821 0.9670000064373017
4 0.11636366279795766 0.9678000068664551
CPU times: user 6.58 s, sys: 7.61 ms, total: 6.58 s
Wall time: 6.66 s