DeepScientist is a goal-oriented, fully autonomous scientific discovery system. It uses Bayesian Optimization and a hierarchical 'hypothesize, verify, and analyze' process with a Findings Memory to balance exploration and exploitation. It generated and validated thousands of scientific ideas, surpassing human SOTA on three AI tasks.
A comprehensive guide covering the most critical machine learning equations, including probability, linear algebra, optimization, and advanced concepts, with Python implementations.
This article explains how derivatives, gradients, Jacobians, and Hessians fit together and shows examples of what they are used for, including optimization and rendering.
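As a quick illustration of how a gradient generalizes the derivative, a central-difference approximation can be sketched in plain Python; the function `f` below is a made-up example, not one from the article:

```python
def numerical_gradient(f, point, h=1e-6):
    """Approximate the gradient of f at `point` with central differences."""
    grad = []
    for i in range(len(point)):
        forward, backward = list(point), list(point)
        forward[i] += h
        backward[i] -= h
        # Partial derivative along coordinate i.
        grad.append((f(forward) - f(backward)) / (2 * h))
    return grad

# Example: f(x, y) = x**2 + 3*y has gradient (2x, 3).
f = lambda p: p[0] ** 2 + 3 * p[1]
g = numerical_gradient(f, [1.0, 2.0])
print(g)  # close to [2.0, 3.0]
```

The same finite-difference idea, applied column by column, also approximates Jacobians of vector-valued functions.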
Optuna is an open-source hyperparameter optimization framework designed to automate the hyperparameter search process for machine learning models. It supports various frameworks like TensorFlow, Keras, Scikit-Learn, XGBoost, and LightGBM, offering features like eager search spaces, state-of-the-art algorithms, and easy parallelization.
"An example of simultaneously optimizing two policies for two adversarial agents, looking specifically at the cat and mouse game."
The article explores developing strategies for two players with conflicting goals, using methods such as game trees, reinforcement learning, and hill-climbing optimization. The focus is on determining an optimal policy for each player, to either catch the opponent or evade capture, while accounting for board configurations and turn order. The article further details how hill climbing improves strategies incrementally: small variations of a policy are evaluated, and improvements are kept over many iterations.
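The incremental improvement loop described above (propose a small variation, keep it if it scores better) can be sketched generically in Python. The one-dimensional objective below is a placeholder for a policy-evaluation score, not the article's actual cat-and-mouse evaluation:

```python
import random

def hill_climb(score, start, step=0.5, iterations=2000, seed=0):
    """Repeatedly perturb the current solution and keep strict improvements."""
    rng = random.Random(seed)
    best = start
    best_score = score(best)
    for _ in range(iterations):
        candidate = best + rng.uniform(-step, step)  # small random variation
        candidate_score = score(candidate)
        if candidate_score > best_score:  # keep only if it scores better
            best, best_score = candidate, candidate_score
    return best

# Placeholder objective with a single peak at x = 3.
best = hill_climb(lambda x: -(x - 3.0) ** 2, start=0.0)
print(best)  # converges near 3.0
```

In the article's setting, the "candidate" would be a perturbed policy and `score` would come from simulating games against the opponent's current policy.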
This article provides an overview of feature selection in machine learning, detailing methods that maximize model accuracy and minimize computational cost, and introduces a novel method called History-based Feature Selection (HBFS).
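As a point of reference for the methods surveyed, a wrapper-style greedy forward search can be sketched in a few lines; `score_fn` stands in for whatever model-evaluation metric is used, and HBFS itself is not reproduced here:

```python
def greedy_forward_select(features, score_fn, max_features):
    """Greedily add the feature that most improves score_fn, stopping
    when no single addition helps or max_features is reached."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < max_features:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        if score_fn(selected + [best]) <= score_fn(selected):
            break  # no candidate improves the current subset
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: reward features in a hypothetical "useful" set, with a
# small per-feature penalty to mimic computational cost.
useful = {"age", "income"}
score = lambda subset: sum(f in useful for f in subset) - 0.1 * len(subset)
print(greedy_forward_select(["age", "income", "zip", "id"], score, 3))
# -> ['age', 'income']
```

The feature names and scoring function are invented for illustration; in practice `score_fn` would fit and validate a model on the candidate subset.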
This article explains the Support Vector Machine (SVM) algorithm with a focus on classification tasks, using a simple 2D dataset for illustration. It covers key concepts such as hard and soft margins, support vectors, the kernel trick, and the underlying optimization problem.
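For intuition about the soft-margin objective, a minimal linear SVM trained with a Pegasos-style subgradient method can be sketched in plain Python. The six 2D points are an invented toy dataset, and this is a sketch of one possible training scheme, not the article's implementation:

```python
# Toy 2D data with labels +1 / -1; a constant 1.0 is appended to each
# point so the bias is learned as part of the weight vector.
points = [(2, 2), (3, 3), (2, 3), (-2, -2), (-3, -3), (-2, -3)]
labels = [1, 1, 1, -1, -1, -1]
X = [(x, y, 1.0) for x, y in points]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def train_linear_svm(X, labels, lam=0.01, steps=1000):
    """Pegasos-style subgradient descent on the hinge-loss objective."""
    w = [0.0] * len(X[0])
    for t in range(1, steps + 1):
        i = t % len(X)          # cycle through samples deterministically
        eta = 1.0 / (lam * t)   # decreasing step size
        shrink = 1.0 - eta * lam
        if labels[i] * dot(w, X[i]) < 1:  # point is inside the soft margin
            w = [shrink * wj + eta * labels[i] * xj
                 for wj, xj in zip(w, X[i])]
        else:
            w = [shrink * wj for wj in w]
    return w

w = train_linear_svm(X, labels)
predictions = [1 if dot(w, x) > 0 else -1 for x in X]
print(predictions)  # matches labels on this separable toy set
```

Only margin violations pull the weights toward a sample; points comfortably outside the margin contribute nothing beyond the regularization shrinkage, which is the mechanism behind support vectors.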
Elbow curves and Silhouette plots are both useful techniques for finding the optimal k for k-means clustering.
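Both diagnostics can be computed by hand on a tiny 1-D dataset (an invented example with three obvious groups); a real analysis would typically use a library implementation of k-means and the silhouette score instead:

```python
def kmeans_1d(data, k, iterations=20):
    """Lloyd's algorithm on 1-D data with evenly spaced initial centroids."""
    data = sorted(data)
    centroids = [data[i * len(data) // k] for i in range(k)]
    for _ in range(iterations):
        labels = [min(range(k), key=lambda j: abs(x - centroids[j]))
                  for x in data]
        centroids = [sum(x for x, l in zip(data, labels) if l == j)
                     / max(1, labels.count(j)) for j in range(k)]
    inertia = sum((x - centroids[l]) ** 2 for x, l in zip(data, labels))
    return labels, inertia

def silhouette(data, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point."""
    scores = []
    for i, x in enumerate(data):
        same = [y for j, y in enumerate(data)
                if labels[j] == labels[i] and j != i]
        if not same:                 # convention: singleton cluster scores 0
            scores.append(0.0)
            continue
        a = sum(abs(x - y) for y in same) / len(same)
        b = min(sum(abs(x - y) for j, y in enumerate(data) if labels[j] == c)
                / labels.count(c)
                for c in set(labels) if c != labels[i])
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

data = [1, 2, 3, 10, 11, 12, 20, 21, 22]
for k in (2, 3, 4):
    labels, inertia = kmeans_1d(data, k)
    print(k, round(inertia, 1), round(silhouette(data, labels), 3))
```

On this data the inertia drops sharply from k=2 to k=3 and only slightly afterwards (the elbow), and the silhouette score peaks at k=3, so both diagnostics agree.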