For those interested, the reference dataset has been posted in the Data section. This file contains the ground truth 'chunks' used to evaluate the retrieval quality of the proposed solutions. By comparing the contents of this file with the output of your solutions, you can identify the tasks where your RAG pipeline performed well and those where it fell short. Once again, we extend our gratitude to everyone who participated in the challenge!
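If it helps, here is a minimal sketch of such a comparison. The file names, the JSON layout (a mapping from task id to a list of chunk ids), and the per-task recall/precision metrics are all assumptions for illustration, not part of the posted dataset, so adapt the loading code to the actual schema of the reference file:

```python
import json

# Assumed (hypothetical) file names and structure: both files map a task id
# to the list of chunk ids, either ground truth or retrieved by your pipeline.
with open("ground_truth_chunks.json") as f:
    ground_truth = json.load(f)   # {task_id: [chunk_id, ...]}
with open("my_retrieved_chunks.json") as f:
    retrieved = json.load(f)      # {task_id: [chunk_id, ...]}

# Compare retrieved chunks against ground truth, task by task.
for task_id, gt_chunks in ground_truth.items():
    gold = set(gt_chunks)
    preds = set(retrieved.get(task_id, []))
    recall = len(preds & gold) / len(gold) if gold else 0.0
    precision = len(preds & gold) / len(preds) if preds else 0.0
    print(f"{task_id}: recall={recall:.2f} precision={precision:.2f}")
```

Low-recall tasks point to where your retrieval step missed the relevant chunks and may need improvement.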
Thanks.