Hyperparameter Optimisation (HPO), also known as hyperparameter tuning, is the process of searching for a hyperparameter configuration that yields good performance for a model or learning algorithm. This process is challenging because each candidate configuration must be trained and evaluated, and many machine learning algorithms are expensive to run.
In this lab, you will get hands-on experience with different HPO methods. Starting with simple grid search and random search, you will learn how to implement and analyse HPO from scratch. The lab then introduces more sophisticated model-free and model-based approaches, such as Bayesian optimisation. Throughout, you will focus on writing clean, robust, and efficient code.
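To give a flavour of the kind of implementation work involved, the sketch below shows a minimal random search over a small hyperparameter space. The `toy_loss` objective, the `space` dictionary, and all function names are illustrative assumptions, not part of the lab material; in practice the objective would be an expensive model training-and-validation run.

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Evaluate n_trials randomly sampled configurations; return the best."""
    rng = random.Random(seed)
    best_config, best_score = None, float("inf")
    for _ in range(n_trials):
        # Sample one value per hyperparameter from its candidate list.
        config = {name: rng.choice(values) for name, values in space.items()}
        score = objective(config)  # lower is better (e.g. validation loss)
        if score < best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Hypothetical stand-in for an expensive model evaluation.
def toy_loss(config):
    return (config["lr"] - 0.1) ** 2 + (config["depth"] - 4) ** 2

space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [2, 3, 4, 5, 6]}
best, loss = random_search(toy_loss, space, n_trials=30)
```

Grid search would instead enumerate every combination in `space`; random search often finds good configurations with far fewer evaluations when only a few hyperparameters matter.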
The lab will be graded according to the following elements:
The lab is fully conducted in English.
Good knowledge of the Python programming language. Familiarity with basic ML models (e.g. linear regression, decision trees, random forests, SVMs).
Registration to the seminar is handled via the SuPra system.
E-mail: hh[at]aim[dot]rwth-aachen[dot]de
Phone: +49 241 80 21451