prithivida/bert-for-patents-64d



Motivation

This model is based on anferico/bert-for-patents, a BERT-LARGE model (see the next section for details). By default, the pre-trained model outputs embeddings of size 768 (base models) or 1024 (large models). However, when you store millions of embeddings, this can require quite a lot of memory/storage. So I reduced the embedding dimension to 64, i.e. 1/16th of 1024, using Principal Component Analysis (PCA), and it still gives comparable performance. Yes, PCA gave better performance than NMF. Note: this process improves neither the runtime nor the memory requirement for running the model. It only reduces the space needed to store embeddings, for example for semantic search with a vector database.
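The reduction can be reproduced in a few lines of Python. The sketch below is illustrative rather than the author's exact pipeline: it assumes the base checkpoint loads through sentence-transformers with its default mean pooling, uses a synthetic placeholder corpus (in practice you would use a large set of real patent texts), fits scikit-learn's PCA, and bakes the 64-d projection into the model as a final Dense layer so that encode() returns reduced embeddings directly.

```python
import torch
from sentence_transformers import SentenceTransformer, models
from sklearn.decomposition import PCA

# Load the 1024-d base model; sentence-transformers wraps the raw
# checkpoint with mean pooling by default.
base = SentenceTransformer("anferico/bert-for-patents")

# Placeholder corpus: PCA needs at least n_components (= 64) samples,
# and a real reduction should use many thousands of patent sentences.
corpus = [
    f"Patent claim text number {i} describing an illustrative invention."
    for i in range(256)
]
embeddings = base.encode(corpus, convert_to_numpy=True)

# Fit PCA to project the 1024-d embeddings down to 64 dimensions.
pca = PCA(n_components=64)
pca.fit(embeddings)

# Bake the projection into the model as a final Dense layer
# (weight shape (64, 1024) matches pca.components_).
dense = models.Dense(
    in_features=base.get_sentence_embedding_dimension(),
    out_features=64,
    bias=False,
    activation_function=torch.nn.Identity(),
)
dense.linear.weight = torch.nn.Parameter(
    torch.tensor(pca.components_, dtype=torch.float32)
)
base.add_module("dense", dense)

reduced = base.encode(corpus)  # now shape (len(corpus), 64)
```

As the note above says, this only shrinks the stored vectors: the full BERT-LARGE forward pass still runs at inference time, with a cheap 1024-to-64 linear projection appended at the end.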


BERT for Patents

BERT for Patents is a model trained by Google on 100M+ patents (not just US patents).
If you want to learn more about the model, check out the blog post, the white paper, and the GitHub page containing the original TensorFlow checkpoint.



Projects using this model (or variants of it):

  • Patents4IPPC (carried out by Pi School and commissioned by the Joint Research Centre (JRC) of the European Commission)
