Add support for subshards
Created by: stephenroller
🚀 Feature Request
Fast-forwarding our on-the-fly tokenizer can be very slow when our data shards are very large, taking over an hour in some cases.
One easy solution is to just chop the data into more shards, but that requires manual labor, and since our corpus is now composed of many hundreds of files, it gets annoying. So let's achieve the same effect in the data loader instead.
Sketch
- Add a new `--data-subshards <int>` flag to StreamingLanguageModel
- When we load the data, use the epoch variable to skip documents: assuming 10 subshards, on epoch 0 you'll take documents 0, 10, 20, ...; if the epoch is 1, you want documents 1, 11, 21, ... (see the sketch after this list)
- You'll need to modify JsonlDataset to be aware of this
- If epoch >= subshards, you'll need to wrap around (i.e., use epoch % subshards as the offset)
- The effect will be roughly the same as if we had round-robin distributed our datasets across different shards.
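Here's a minimal sketch of the epoch-based skipping, assuming a hypothetical `SubshardedJsonlDataset` wrapper and an in-memory list of documents for illustration; the real JsonlDataset reads from a .jsonl file and its constructor looks different:

```python
# Sketch only: class name, constructor, and the plain-list backing store are
# assumptions for illustration, not the actual JsonlDataset implementation.
from torch.utils.data import Dataset


class SubshardedJsonlDataset(Dataset):
    def __init__(self, documents, subshards: int, epoch: int):
        # Wrap around once we've cycled through every subshard
        # (e.g. epoch 10 with 10 subshards behaves like epoch 0).
        offset = epoch % subshards
        # Take every `subshards`-th document starting at `offset`, which is
        # equivalent to round-robin splitting the shard into `subshards`
        # pieces and reading only piece `offset` this epoch.
        self.documents = documents[offset::subshards]

    def __len__(self):
        return len(self.documents)

    def __getitem__(self, idx):
        return self.documents[idx]
```

With 10 subshards, epoch 0 yields documents 0, 10, 20, ...; epoch 1 yields 1, 11, 21, ...; and epoch 10 wraps back around to the same selection as epoch 0.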
In practice, setting `--data-subshards` to 10 or 20 should sort us out.
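For completeness, registering the flag could look roughly like this; the `add_args` hook shown here is an assumed integration point, not the actual StreamingLanguageModel API:

```python
# Hypothetical argument registration; where this hook lives is an assumption.
def add_args(parser):
    parser.add_argument(
        "--data-subshards",
        type=int,
        default=1,
        help="virtually split each data shard into this many subshards; "
        "on epoch e, read every Nth document starting at offset e modulo N",
    )
```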