zamba.pytorch.layers¶
Classes¶
TimeDistributed¶
Bases: torch.nn.Module
Applies `module` over `tdim` identically for each step; use `low_mem` to compute one step at a time.
NOTE: vendored (with minor adaptations) from fastai: https://github.com/fastai/fastai/blob/4b0785254fdece1a44859956b6e54eedb167a97e/fastai/layers.py#L510-L544
Updates:
- `super().__init__()` is called in `__init__`
- attributes are assigned in `__init__`
- inherits from `torch.nn.Module` rather than `fastai.Module`
Source code in zamba/pytorch/layers.py
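A minimal usage sketch (an assumption for illustration, not taken from the zamba docs): wrapping a small per-frame encoder so the same weights are applied to every frame along `tdim=1`. The `Sequential` encoder here is a hypothetical stand-in for any module that accepts inputs of shape `(bs, channels, width, height)`.

```python
import torch
from zamba.pytorch.layers import TimeDistributed

# Hypothetical per-frame encoder: any module taking (bs, channels, width, height) works.
frame_encoder = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
)

# Apply the same encoder to every step along tdim=1; set low_mem=True to
# process one step at a time instead of folding all steps into the batch.
td = TimeDistributed(frame_encoder, low_mem=False, tdim=1)

x = torch.randn(2, 16, 3, 64, 64)  # (bs, seq_len, channels, width, height)
out = td(x)
print(out.shape)  # torch.Size([2, 16, 8])
```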
Attributes¶
low_mem = low_mem instance-attribute¶
module = module instance-attribute¶
tdim = tdim instance-attribute¶
Functions¶
__init__(module, low_mem=False, tdim=1)¶
Source code in zamba/pytorch/layers.py
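A minimal sketch of the constructor based on the Updates noted above (calling `super().__init__()` and assigning the attributes); see the linked source for the exact code.

```python
def __init__(self, module, low_mem=False, tdim=1):
    super().__init__()
    # Store the wrapped module and configuration as instance attributes.
    self.module = module
    self.low_mem = low_mem
    self.tdim = tdim
```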
format_output(out, bs, seq_len)¶
Unstack outputs from the flattened batch dimension back into `(bs, seq_len, ...)`.
Source code in zamba/pytorch/layers.py
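A sketch of the reshaping this method performs, assuming `out` is a single tensor whose leading dimension is `bs * seq_len` (the fastai original also handles tuple outputs):

```python
def format_output(self, out, bs, seq_len):
    # Undo the (bs * seq_len, ...) flattening from forward,
    # restoring the time dimension: (bs, seq_len, ...).
    return out.view(bs, seq_len, *out.shape[1:])
```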
forward(*tensors, **kwargs)¶
Input `x` with shape `(bs, seq_len, channels, width, height)`.
Source code in zamba/pytorch/layers.py
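A sketch of the fast path, following the fastai original linked above (not necessarily verbatim): when `low_mem` is `False` and `tdim == 1`, all steps are folded into the batch dimension, `module` runs once, and `format_output` restores the time dimension; otherwise the call is delegated to `low_mem_forward`.

```python
def forward(self, *tensors, **kwargs):
    if self.low_mem or self.tdim != 1:
        return self.low_mem_forward(*tensors, **kwargs)
    bs, seq_len = tensors[0].shape[0], tensors[0].shape[1]
    # Fold time into the batch: (bs, seq_len, ...) -> (bs * seq_len, ...).
    flat = [x.view(bs * seq_len, *x.shape[2:]) for x in tensors]
    out = self.module(*flat, **kwargs)
    return self.format_output(out, bs, seq_len)
```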
low_mem_forward(*tensors, **kwargs)¶
Input `x` with shape `(bs, seq_len, channels, width, height)`.
Source code in zamba/pytorch/layers.py
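A sketch of the memory-saving path, assuming each step produces a single tensor (the fastai original also handles tuple outputs): inputs are split along `tdim`, `module` runs once per step, and the per-step outputs are stacked back along `tdim`.

```python
import torch

def low_mem_forward(self, *tensors, **kwargs):
    seq_len = tensors[0].shape[self.tdim]
    # Split each input into individual time steps along tdim.
    steps = [torch.unbind(x, dim=self.tdim) for x in tensors]
    # Run the module one step at a time to keep peak memory low.
    out = [self.module(*[s[i] for s in steps], **kwargs) for i in range(seq_len)]
    return torch.stack(out, dim=self.tdim)
```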