klotz: tuning*

  1. In this tutorial, learn how to improve the performance of large language models (LLMs) with a proxy-tuning approach, which enables more efficient fine-tuning and easier integration with an existing model (a minimal sketch of the idea follows the list).
    2024-05-11, by klotz
  2. Learn how to build an efficient pipeline with Hydra (configuration management) and MLflow (experiment tracking); a minimal sketch follows the list.
  3. So, if you only need word-vectors, just use Word2Vec. If you only need doc-vectors, use Doc2Vec in a mode that doesn't create word-vectors (pure PV-DBOW: dm=0, dbow_words=0), or a Doc2Vec mode that also happens to create word-vectors and simply ignore them. If you need both from the same data, use a Doc2Vec mode that also creates word-vectors (PV-DM, dm=1, or PV-DBOW with interleaved skip-gram word training, dm=0, dbow_words=1). If you need both but train them in two separate steps, you'll spend more time training, and the vectors won't be inherently compatible. – gojomo, Nov 29 '18 at 12:54 (a minimal gensim sketch follows the list)
    2021-09-27, by klotz
  4. 2019-03-02, by klotz
  5. 2018-09-21, by klotz
  6. set hive.exec.orc.split.strategy=ETL; -- ETL helps only when the query scans for specific values; if a full table scan is required anyway, use the default (HYBRID) or BI.
    2017-06-12, by klotz
  7. 2017-03-09, by klotz
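
A note on entry 1: the sketch below shows the logit arithmetic usually meant by proxy tuning, where a large base model's next-token logits are shifted at decoding time by the difference between a small fine-tuned "expert" and its untuned counterpart. The vocabulary size and all logit values here are made up for illustration, and the linked tutorial may differ in its details.

    import numpy as np

    def softmax(x: np.ndarray) -> np.ndarray:
        x = x - x.max()
        e = np.exp(x)
        return e / e.sum()

    # Hypothetical next-token logits over a tiny 4-token vocabulary.
    base_logits   = np.array([2.0, 1.0, 0.5, -1.0])   # large, untuned base model
    expert_logits = np.array([1.5, 2.5, 0.0, -1.0])   # small model after fine-tuning
    anti_logits   = np.array([1.8, 1.0, 0.2, -0.8])   # the same small model, untuned

    # Proxy tuning: steer the base model at decoding time by adding the
    # tuned-minus-untuned logit difference of the small "proxy" pair.
    steered_logits = base_logits + (expert_logits - anti_logits)
    print(softmax(steered_logits))  # distribution to sample the next token from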
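
A note on entry 2: a minimal sketch of a Hydra-configured training loop that logs to MLflow. It assumes a conf/config.yaml next to the script with lr and epochs keys (shown in the comment), and the training step is a placeholder.

    # Assumes conf/config.yaml, e.g.:
    #   lr: 0.001
    #   epochs: 3
    import hydra
    import mlflow
    from omegaconf import DictConfig, OmegaConf

    @hydra.main(version_base=None, config_path="conf", config_name="config")
    def train(cfg: DictConfig) -> None:
        with mlflow.start_run():
            # Record the resolved Hydra config so each run is reproducible.
            mlflow.log_params(OmegaConf.to_container(cfg, resolve=True))
            for epoch in range(cfg.epochs):
                loss = 1.0 / (epoch + 1)  # placeholder for a real training step
                mlflow.log_metric("loss", loss, step=epoch)

    if __name__ == "__main__":
        train()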
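
A note on entry 3: a minimal gensim sketch of the modes gojomo describes. PV-DM (dm=1) trains doc-vectors and word-vectors from the same data; the commented-out line shows PV-DBOW with interleaved skip-gram word training (dm=0, dbow_words=1). The toy corpus is invented, and real training needs far more data.

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Toy corpus: two tokenized documents, each tagged with an integer id.
    docs = [
        TaggedDocument(words=["tuning", "word", "vectors", "with", "gensim"], tags=[0]),
        TaggedDocument(words=["doc", "vectors", "and", "word", "vectors", "together"], tags=[1]),
    ]

    # PV-DM (dm=1): trains doc-vectors and word-vectors on the same data.
    model = Doc2Vec(docs, vector_size=50, dm=1, min_count=1, epochs=40)
    # PV-DBOW with interleaved skip-gram word training:
    # model = Doc2Vec(docs, vector_size=50, dm=0, dbow_words=1, min_count=1, epochs=40)

    doc_vec = model.dv[0]           # doc-vector for the first document
    word_vec = model.wv["vectors"]  # word-vector learned in the same training run
    print(doc_vec.shape, word_vec.shape)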
