intfloat/simlm-base-msmarco-finetuned

SimLM: Pre-training with Representation Bottleneck for Dense Passage Retrieval

Paper: https://arxiv.org/pdf/2207.02578
Code: https://github.com/microsoft/unilm/tree/master/simlm


Paper abstract

In this paper, we propose SimLM (Similarity matching with Language Model pre-training), a simple yet effective pre-training method for dense passage retrieval. It employs a simple bottleneck architecture that learns to compress the passage information into a dense vector through self-supervised pre-training. We use a replaced language modeling objective, inspired by ELECTRA, to improve sample efficiency and reduce the mismatch between the input distributions of pre-training and fine-tuning. SimLM only requires access to an unlabeled corpus, and is more broadly applicable when no labeled data or queries are available. We conduct experiments on several large-scale passage retrieval datasets, and show substantial improvements over strong baselines under various settings. Remarkably, SimLM even outperforms multi-vector approaches such as ColBERTv2, which incur significantly higher storage costs.
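
The fine-tuned checkpoint can be loaded as a standard Hugging Face encoder and used as a biencoder: queries and passages are embedded separately and scored by vector similarity. Below is a minimal sketch of that pattern; it assumes [CLS] pooling with L2 normalization and a generic maximum sequence length, and the `encode` helper and example texts are illustrative rather than the repository's official recipe.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Load the fine-tuned SimLM checkpoint as a plain encoder.
model_name = "intfloat/simlm-base-msmarco-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def encode(texts):
    # Tokenize a batch of strings; max_length here is an assumption,
    # not the value used in the official training/evaluation scripts.
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=192, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**batch)
    # Take the [CLS] token representation and L2-normalize it.
    emb = outputs.last_hidden_state[:, 0, :]
    return torch.nn.functional.normalize(emb, p=2, dim=-1)

query_emb = encode(["how long is a passport valid"])
passage_embs = encode([
    "A passport issued to an adult is typically valid for ten years.",
    "Dense retrieval encodes queries and passages into vectors.",
])
# Dot product of unit vectors = cosine similarity; higher is better.
scores = query_emb @ passage_embs.T
print(scores)
```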


Results on MS-MARCO passage ranking task

| Model | dev MRR@10 | dev R@50 | dev R@1k | TREC DL 2019 nDCG@10 | TREC DL 2020 nDCG@10 |
|---|---|---|---|---|---|
| RocketQAv2 | 38.8 | 86.2 | 98.1 | – | – |
| coCondenser | 38.2 | 86.5 | 98.4 | 71.7 | 68.4 |
| ColBERTv2 | 39.7 | 86.8 | 98.4 | – | – |
| SimLM (this model) | 41.1 | 87.8 | 98.7 | 71.4 | 69.7 |
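
For reference, MRR@10 (the headline dev metric above) is the mean over all queries of the reciprocal rank of the first relevant passage among the top 10 retrieved results. A minimal sketch follows; the variable names and data layout are hypothetical:

```python
def mrr_at_10(ranked_ids, relevant):
    """ranked_ids: {query_id: [passage_id, ...]} ordered by score;
    relevant: {query_id: set of relevant passage_ids}."""
    total = 0.0
    for qid, ranking in ranked_ids.items():
        for rank, pid in enumerate(ranking[:10], start=1):
            if pid in relevant.get(qid, set()):
                # Credit only the first relevant hit, weighted by 1/rank.
                total += 1.0 / rank
                break
    return total / len(ranked_ids)
```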
