Data Cyclers
Adapted from sinzlab/neuralpredictors/training/cyclers.py
LongCycler
Bases: IterableDataset
Cycles through a dictionary of data loaders until the loader with the largest size is exhausted. In practice, it takes one batch from each loader in every iteration. Necessary when dataloaders have unequal sizes. Note: iterable dataloaders like this one can yield duplicate data when used with multiprocessing.
Source code in openretina/data_io/cyclers.py
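The round-robin behavior described above can be illustrated with a minimal pure-Python sketch. This is not the library's implementation (which subclasses IterableDataset and lives in openretina/data_io/cyclers.py); it only demonstrates the cycling logic: shorter loaders are restarted until the longest loader has been consumed once.

```python
def cycle(iterable):
    """Re-iterate the source forever, without itertools.cycle's caching."""
    while True:
        for item in iterable:
            yield item


class LongCycler:
    """Round-robin over a dict of loaders until the longest one is exhausted.

    Minimal sketch: yields (key, batch) pairs, one batch per loader per pass.
    """

    def __init__(self, loaders):
        self.loaders = loaders
        # Number of passes is set by the largest loader.
        self.max_batches = max(len(loader) for loader in loaders.values())

    def __len__(self):
        return len(self.loaders) * self.max_batches

    def __iter__(self):
        # Endless iterators per loader, so short loaders repeat.
        cycles = {key: cycle(loader) for key, loader in self.loaders.items()}
        for _ in range(self.max_batches):
            for key, it in cycles.items():
                yield key, next(it)


# Loader "a" has 3 batches, loader "b" only 1, so "b" repeats.
loaders = {"a": [1, 2, 3], "b": [10]}
print(list(LongCycler(loaders)))
# → [('a', 1), ('b', 10), ('a', 2), ('b', 10), ('a', 3), ('b', 10)]
```

The total number of yielded batches is `len(loaders) * max_batches`, which is why the cycler stops exactly when the largest loader runs out.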
ShortCycler
Bases: IterableDataset
Cycles through the elements of each dataloader without repeating any element.
Source code in openretina/data_io/cyclers.py
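A sketch of the exhaustive, no-repetition behavior. Again this is an illustrative simplification, not the actual IterableDataset subclass from openretina/data_io/cyclers.py: each loader is drained once, in order, so every element appears exactly one time.

```python
class ShortCycler:
    """Yield every element of every loader exactly once.

    Minimal sketch: drains each loader in turn, tagging batches with
    the loader's key so the consumer knows which session they came from.
    """

    def __init__(self, loaders):
        self.loaders = loaders

    def __iter__(self):
        for key, loader in self.loaders.items():
            for batch in loader:
                yield key, batch


loaders = {"a": [1, 2], "b": [3]}
print(list(ShortCycler(loaders)))
# → [('a', 1), ('a', 2), ('b', 3)]
```

Unlike LongCycler, the total length here is simply the sum of the loader lengths, which makes ShortCycler the natural choice for validation and test passes where no sample should be seen twice.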
cycle(iterable)
itertools.cycle without caching. See: https://github.com/pytorch/pytorch/issues/23900
Source code in openretina/data_io/cyclers.py
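The cache-free cycle generator is small enough to sketch in full. itertools.cycle stores a copy of every element it has seen, which is wasteful (and, per the linked PyTorch issue, problematic) when the iterable is a dataloader producing large batches; re-iterating the source instead avoids that memory cost.

```python
from itertools import islice


def cycle(iterable):
    """Like itertools.cycle, but restarts iteration over the source on
    each pass instead of caching every element in memory."""
    while True:
        for item in iterable:
            yield item


# Take 7 items from an endless cycle over a 3-element list.
print(list(islice(cycle([1, 2, 3]), 7)))
# → [1, 2, 3, 1, 2, 3, 1]
```

Note that this only works with re-iterable sources (lists, dataloaders); a plain one-shot iterator would be exhausted after the first pass and the loop would then yield nothing.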