Optuna is an open-source framework that automates hyperparameter search for machine learning models. It integrates with libraries such as TensorFlow, Keras, Scikit-Learn, XGBoost, and LightGBM, and offers define-by-run ("eager") search spaces, state-of-the-art sampling and pruning algorithms, and easy parallelization.
This paper analyzes the inference performance of 20 large language models (LLMs) under two inference libraries, vLLM and HuggingFace Pipelines. The study examines how hyperparameters influence inference throughput and finds that the throughput landscapes are irregular, underscoring the need for systematic hyperparameter optimization.