Full Dataset

#3
by JamshidJDMY - opened

May I kindly ask when the complete version will be ready?

Hi there @JamshidJDMY - absolutely, and happy to ping you when it's complete. Should be up in the next few days (Ai2 is currently on holiday break, but working to get this finished). For now, I can point you to our data mix for 32B, which is complete: https://huggingface.co/datasets/allenai/dolma3_mix-5.5T-1125. Thanks for your patience!

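(For anyone else following along, here is a minimal sketch of pulling that mix locally with `huggingface_hub`; the `local_dir` and the commented `allow_patterns` are placeholders, not part of any official instructions.)

```python
# Minimal sketch (not an official recipe): download the 32B mix with huggingface_hub.
# local_dir and allow_patterns are placeholders; adjust to your storage layout.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="allenai/dolma3_mix-5.5T-1125",
    repo_type="dataset",
    local_dir="./dolma3_mix-5.5T-1125",    # hypothetical local path
    # allow_patterns=["*.jsonl.gz"],       # optionally restrict what you pull
)
```
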
Hey @JamshidJDMY , just wanted to give an update. This mix is complete now, except for olmocr_science_pdfs. Due to redactions that we had to make on some of these olmOCR PDF texts, we are still working out how best to represent them in this data mix. For this reason, the exact full set of olmOCR PDFs that we trained on will not be available to the public, as the redactions happened after training and we cannot release them. I should be able to provide the set of non-redacted PDFs in this mix (probably next week, after Ai2 returns from break). However, I'd recommend using the 32B mix PDFs as a supplement so that the mix is more complete with all sources. Thank you!

Hi @baileyk , thanks for the update! Just to confirm—would it be okay to use the 32B mix PDFs as a supplement alongside the other 7B sources to make the dataset more complete?

JamshidJDMY changed discussion status to closed
JamshidJDMY changed discussion status to open

Hi, may I kindly ask if the dataset for the 7B model has been completed? I just saw some updates 2 days ago on both 7B and 32B.

@JamshidJDMY actually, I think it would be even easier to just use the 32B mix directly (all sources), unless you are specifically trying to reproduce the 7B model. For all other use cases (e.g. training from scratch, etc.), I would recommend using the 32B mix, as it's actually the same as our 7B mix except for the PDFs!

Hi @Jianxiao0203 , the messages above in this discussion might help guide you. If you specifically want to reproduce our 7B model, then I can ping you in the next day or two when our olmOCR PDFs are complete for this mix -- all other sources for the 7B mix are complete except that one (see comment above for context). For all other use cases, you can simply use our 32B mix. I will be adding some notes about this to the README of this repo so that it's clear!

Thank you! I have actually downloaded the 7B dataset, but I found that there are only 1.6B docs, which is roughly about 2T tokens (I thought it should be around 5.7T). I also found that the shard index sequences under most common-crawl subfolders are non-continuous; does that match your expectations?

@Jianxiao0203 thanks for the info! It is expected that the shards are non-continuous for common crawl due to the way we filter / upsample. I'm currently double-checking on the docs though - it should be closer to 3B documents after upsampling, so I'm checking how the upsampled documents were reconstructed. Will keep you posted, but the 32B repo should be good to use.

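(In case it helps with sanity checks like this, here is a rough sketch of counting documents and listing shard indices in a local copy; the folder layout and filename format are assumptions on my part, so adjust the paths and pattern to the actual structure.)

```python
# Rough sketch for sanity-checking a local copy: count documents and list the
# shard indices present under one common-crawl subfolder. Directory layout,
# glob pattern, and filename format are assumptions.
import gzip
import re
from pathlib import Path

root = Path("dolma3_mix/common_crawl")  # hypothetical local path
total_docs = 0
shard_ids = []

for shard in sorted(root.rglob("*.jsonl.gz")):
    match = re.search(r"(\d+)", shard.stem)
    if match:
        shard_ids.append(int(match.group(1)))
    with gzip.open(shard, "rt", encoding="utf-8") as f:
        total_docs += sum(1 for _ in f)

print(f"{total_docs:,} documents across {len(shard_ids)} shards")
print("shard indices present:", sorted(shard_ids)[:20], "...")
```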

Hi @baileyk , just checking in to see if there are any updates on how the upsampled documents were reconstructed. We noticed the total should be around ~3B documents after upsampling, and this part is pretty important for us since we’re trying to faithfully reproduce the 7B results and want to make sure our data setup is aligned with yours.

Hi @Jianxiao0203 - thanks for checking in! I was able to confirm that upsampling is all that's needed on the current set, and I currently have that updated reconstruction running! Unfortunately it just takes some time because of the size of the dataset (we also significantly process the jsonls after reconstruction to ensure a clean / unified schema). I have the projected common crawl counts for upsampling at a little over 3B, around 3.1B. If all goes well, should have them up by tomorrow. Please note though, because of the olmOCR redactions mentioned above, exact reproducibility will not be fully possible. Additionally, there is some (though relatively small) margin of error with the upsampling reconstruction process. We are planning to fix this in future releases by handling upsampling at the data phase rather than later in the process. If you'd like, you can reach out to me via my allenai email address to see if we can find the best solution for your team and stay up to date: baileyk@allenai.org. Thank you!

Thanks a lot for the detailed update, this is super helpful!

Good to know that the reconstructed Common Crawl will come to around ~3.1B documents after upsampling. We understand the limitations due to the olmOCR redactions and the small margin of error in the reconstruction.

We’ll keep an eye out for the updated release and will try to proceed with that for our 7B reproduction. Really appreciate you taking the time to double-check this and keep us posted!

@Jianxiao0203 no problem, and my sincere apologies for the time it has taken to get this out to you! We made the decision that we wanted to include the fully upsampled documents for ease of our users, and that has taken a bit of extra time. Thanks for your patience and interest :)

Hi @Jianxiao0203 just wanted to update you. Common Crawl data should now be all set, fully upsampled! It came out to ~3.34B documents. The dataset should be good to use except for olmOCR science PDFs, as mentioned. Your options there will be to either pull them from the 32B mix, or, if you can wait until Monday (possibly Tuesday - waiting on some assistance from a colleague), I'll have the upsampled 7B set good to go, with redacted documents indicated by [REMOVED] in the text field.

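(Once that lands, skipping the redacted documents should be straightforward; here's a rough sketch, assuming the redaction marker appears literally in the `text` field as described above. The file path and filter-whole-document choice are mine, not an official recommendation.)

```python
# Rough sketch: skip documents whose text was redacted, assuming redactions are
# marked by the literal string [REMOVED] in the "text" field of the JSONL records.
import gzip
import json
from pathlib import Path

def iter_unredacted(shard_path):
    """Yield documents from a gzipped JSONL shard whose text is not redacted."""
    with gzip.open(shard_path, "rt", encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            if "[REMOVED]" not in doc.get("text", ""):
                yield doc

# Example usage (the shard path is hypothetical):
shard = Path("olmocr_science_pdfs/part_0000.jsonl.gz")
kept = sum(1 for _ in iter_unredacted(shard))
print(f"{kept} unredacted documents in {shard.name}")
```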

That’s great to hear, thanks a lot for the update! We’ll wait for the upsampled 7B set with the science PDFs and the [REMOVED] indicators. Really appreciate you putting this together and keeping us in the loop.
