Machine Learning Database Auto Tuning
One common approach to tuning a DBMS is for the DBA to copy the database to another machine and manually measure the performance of a sample workload from the real application. Based on the outcome of this test, they then tweak the DBMS's configuration accordingly and repeat the process. AutoTiKV is a machine-learning-based tuning tool that helps decrease tuning costs and make life easier for DBAs. This post shows AutoTiKV's design, its machine learning model, and the automatic tuning workflow.
APPLIES TO: Basic edition, Enterprise edition (Upgrade to Enterprise edition)
Efficiently tune hyperparameters for your model using Azure Machine Learning. Hyperparameter tuning includes the following steps:
- Define the parameter search space
- Specify a primary metric to optimize
- Specify early termination criteria for poorly performing runs
- Allocate resources for hyperparameter tuning
- Launch an experiment with the above configuration
- Visualize the training runs
- Select the best performing configuration for your model
What are hyperparameters?
Hyperparameters are adjustable parameters you choose to train a model that govern the training process itself. For example, to train a deep neural network, you decide the number of hidden layers in the network and the number of nodes in each layer prior to training the model. These values usually stay constant during the training process.
In deep learning / machine learning scenarios, model performance depends heavily on the hyperparameter values selected. The goal of hyperparameter exploration is to search across various hyperparameter configurations to find a configuration that results in the best performance. Typically, the hyperparameter exploration process is painstakingly manual, given that the search space is vast and evaluation of each configuration can be expensive.
Azure Machine Learning allows you to automate hyperparameter exploration in an efficient manner, saving you significant time and resources. You specify the range of hyperparameter values and a maximum number of training runs. The system then automatically launches multiple simultaneous runs with different parameter configurations and finds the configuration that results in the best performance, measured by the metric you choose. Poorly performing training runs are automatically early terminated, reducing wastage of compute resources. These resources are instead used to explore other hyperparameter configurations.
Define search space
Automatically tune hyperparameters by exploring the range of values defined for each hyperparameter.
Types of hyperparameters
Each hyperparameter can be either discrete or continuous and has a distribution of values described by a parameter expression.
Discrete hyperparameters
Discrete hyperparameters are specified as a choice among discrete values. choice can be:
- one or more comma-separated values
- a range object
- any arbitrary list object
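For instance, a minimal sketch of such a search space, assuming the choice expression exported by the azureml.train.hyperdrive package (exact import paths may vary by SDK version):
```python
from azureml.train.hyperdrive import choice

# Each hyperparameter maps to a parameter expression describing its candidate values.
param_space = {
    "batch_size": choice(16, 32, 64, 128),          # one of four discrete values
    "number_of_hidden_layers": choice(range(1, 5))  # one of 1, 2, 3, 4
}
```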
In this case, batch_size takes on one of the values [16, 32, 64, 128] and number_of_hidden_layers takes on one of the values [1, 2, 3, 4].
Advanced discrete hyperparameters can also be specified using a distribution. The following distributions are supported:
- quniform(low, high, q) - Returns a value like round(uniform(low, high) / q) * q
- qloguniform(low, high, q) - Returns a value like round(exp(uniform(low, high)) / q) * q
- qnormal(mu, sigma, q) - Returns a value like round(normal(mu, sigma) / q) * q
- qlognormal(mu, sigma, q) - Returns a value like round(exp(normal(mu, sigma)) / q) * q
Continuous hyperparameters
Continuous hyperparameters are specified as a distribution over a continuous range of values. Supported distributions include:
- uniform(low, high) - Returns a value uniformly distributed between low and high
- loguniform(low, high) - Returns a value drawn according to exp(uniform(low, high)) so that the logarithm of the return value is uniformly distributed
- normal(mu, sigma) - Returns a real value that's normally distributed with mean mu and standard deviation sigma
- lognormal(mu, sigma) - Returns a value drawn according to exp(normal(mu, sigma)) so that the logarithm of the return value is normally distributed
An example of a parameter space definition:
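The following is a sketch of one possible definition, again assuming the azureml.train.hyperdrive parameter expressions:
```python
from azureml.train.hyperdrive import normal, uniform

param_space = {
    "learning_rate": normal(10, 3),         # normal distribution: mean 10, std dev 3
    "keep_probability": uniform(0.05, 0.1)  # uniform distribution between 0.05 and 0.1
}
```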
This code defines a search space with two parameters, learning_rate and keep_probability. learning_rate has a normal distribution with a mean of 10 and a standard deviation of 3, and keep_probability has a uniform distribution with a minimum value of 0.05 and a maximum value of 0.1.
Sampling the hyperparameter space
You can also specify the parameter sampling method to use over the hyperparameter space definition. Azure Machine Learning supports random sampling, grid sampling, and Bayesian sampling.
Picking a sampling method
- Grid sampling can be used if your hyperparameter space can be defined as a choice among discrete values and if you have sufficient budget to exhaustively search over all values in the defined search space. Additionally, one can use automated early termination of poorly performing runs, which reduces wastage of resources.
- Random sampling allows the hyperparameter space to include both discrete and continuous hyperparameters. In practice it produces good results most of the time and also allows the use of automated early termination of poorly performing runs. Some users perform an initial search using random sampling and then iteratively refine the search space to improve results.
- Bayesian sampling leverages knowledge of previous samples when choosing hyperparameter values, effectively trying to improve the reported primary metric. Bayesian sampling is recommended when you have sufficient budget to explore the hyperparameter space - for best results with Bayesian Sampling we recommend using a maximum number of runs greater than or equal to 20 times the number of hyperparameters being tuned. Note that Bayesian sampling does not currently support any early termination policy.
Random sampling
In random sampling, hyperparameter values are randomly selected from the defined search space. Random sampling allows the search space to include both discrete and continuous hyperparameters.
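As an illustration, random sampling over the space defined earlier might be configured as follows (a sketch; the RandomParameterSampling class name assumes the azureml.train.hyperdrive API):
```python
from azureml.train.hyperdrive import RandomParameterSampling, normal, uniform

# Values are drawn at random from the distribution defined for each hyperparameter.
param_sampling = RandomParameterSampling({
    "learning_rate": normal(10, 3),
    "keep_probability": uniform(0.05, 0.1)
})
```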
Grid sampling
Grid sampling performs a simple grid search over all feasible values in the defined search space. It can only be used with hyperparameters specified using choice. For example, the following space has a total of six samples:
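A sketch of such a space, assuming the azureml.train.hyperdrive GridParameterSampling class; three values for num_hidden_layers times two values for batch_size yields the six samples:
```python
from azureml.train.hyperdrive import GridParameterSampling, choice

# 3 x 2 = 6 combinations are enumerated exhaustively.
param_sampling = GridParameterSampling({
    "num_hidden_layers": choice(1, 2, 3),
    "batch_size": choice(16, 32)
})
```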
Bayesian sampling
Bayesian sampling is based on the Bayesian optimization algorithm and makes intelligent choices on the hyperparameter values to sample next. It picks the sample based on how the previous samples performed, such that the new sample improves the reported primary metric.
When you use Bayesian sampling, the number of concurrent runs has an impact on the effectiveness of the tuning process. Typically, a smaller number of concurrent runs can lead to better sampling convergence, since the smaller degree of parallelism increases the number of runs that benefit from previously completed runs.
Bayesian sampling only supports choice, uniform, and quniform distributions over the search space.
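A sketch of a Bayesian sampling configuration, assuming the azureml.train.hyperdrive BayesianParameterSampling class:
```python
from azureml.train.hyperdrive import BayesianParameterSampling, choice, uniform

# Bayesian sampling accepts only choice, uniform, and quniform expressions.
param_sampling = BayesianParameterSampling({
    "learning_rate": uniform(0.05, 0.1),
    "batch_size": choice(16, 32, 64, 128)
})
```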
Note
Bayesian sampling does not support any early termination policy (see Specify an early termination policy). When using Bayesian parameter sampling, set early_termination_policy = None, or leave off the early_termination_policy parameter.
Specify primary metric
Specify the primary metric you want the hyperparameter tuning experiment to optimize. Each training run is evaluated for the primary metric. Poorly performing runs (where the primary metric does not meet criteria set by the early termination policy) will be terminated. In addition to the primary metric name, you also specify the goal of the optimization - whether to maximize or minimize the primary metric.
- primary_metric_name: The name of the primary metric to optimize. The name of the primary metric needs to exactly match the name of the metric logged by the training script. See Log metrics for hyperparameter tuning.
- primary_metric_goal: It can be either PrimaryMetricGoal.MAXIMIZE or PrimaryMetricGoal.MINIMIZE and determines whether the primary metric will be maximized or minimized when evaluating the runs.
Optimize the runs to maximize 'accuracy'. Make sure to log this value in your training script.
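For example, these two settings could be defined as follows and passed to the tuning configuration later (a sketch; parameter names follow the azureml.train.hyperdrive HyperDriveConfig API shown in Configure experiment below):
```python
from azureml.train.hyperdrive import PrimaryMetricGoal

# Optimize the runs to maximize "accuracy"; the training script must log a
# metric with exactly this name.
primary_metric_name = "accuracy"
primary_metric_goal = PrimaryMetricGoal.MAXIMIZE
```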
Log metrics for hyperparameter tuning
The training script for your model must log the relevant metrics during model training. When you configure the hyperparameter tuning, you specify the primary metric to use for evaluating run performance. (See Specify a primary metric to optimize.) In your training script, you must log this metric so it is available to the hyperparameter tuning process.
Log this metric in your training script with the following sample snippet:
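A sketch of the logging call, assuming the azureml.core Run API; the val_accuracy value is a placeholder for whatever your validation loop computes:
```python
from azureml.core.run import Run

run_logger = Run.get_context()  # handle to the currently executing run

val_accuracy = 0.85  # placeholder; in practice computed by your validation loop
run_logger.log("accuracy", float(val_accuracy))  # report the primary metric
```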
The training script calculates the val_accuracy and logs it as 'accuracy', which is used as the primary metric. Each time the metric is logged, it is received by the hyperparameter tuning service. It is up to the model developer to determine how frequently to report this metric.
Specify early termination policy
Terminate poorly performing runs automatically with an early termination policy. Termination reduces wastage of resources and instead uses these resources for exploring other parameter configurations.
When using an early termination policy, you can configure the following parameters that control when a policy is applied:
- evaluation_interval: the frequency for applying the policy. Each time the training script logs the primary metric counts as one interval. Thus an evaluation_interval of 1 will apply the policy every time the training script reports the primary metric, and an evaluation_interval of 2 will apply the policy every other time. If not specified, evaluation_interval is set to 1 by default.
- delay_evaluation: delays the first policy evaluation for a specified number of intervals. It is an optional parameter that allows all configurations to run for an initial minimum number of intervals, avoiding premature termination of training runs. If specified, the policy applies every multiple of evaluation_interval that is greater than or equal to delay_evaluation.
Azure Machine Learning supports the following Early Termination Policies.
Bandit policy
Bandit is a termination policy based on slack factor/slack amount and evaluation interval. The policy early terminates any runs where the primary metric is not within the specified slack factor / slack amount with respect to the best performing training run. It takes the following configuration parameters:
- slack_factor or slack_amount: the slack allowed with respect to the best performing training run. slack_factor specifies the allowable slack as a ratio; slack_amount specifies it as an absolute amount instead. For example, consider a Bandit policy applied at interval 10, and assume the best performing run at that interval reported a primary metric of 0.8 with a goal to maximize the metric. If the policy was specified with a slack_factor of 0.2, any training run whose best metric at interval 10 is less than 0.66 (0.8 / (1 + slack_factor)) will be terminated. If instead the policy was specified with a slack_amount of 0.2, any training run whose best metric at interval 10 is less than 0.6 (0.8 - slack_amount) will be terminated.
- evaluation_interval: the frequency for applying the policy (optional parameter).
- delay_evaluation: delays the first policy evaluation for a specified number of intervals (optional parameter).
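A sketch of the configuration described in the next paragraph, assuming the azureml.train.hyperdrive BanditPolicy class:
```python
from azureml.train.hyperdrive import BanditPolicy

# Applied every time metrics are reported, starting at interval 5, terminating
# runs that fall outside a 10% slack of the best run so far.
early_termination_policy = BanditPolicy(slack_factor=0.1,
                                        evaluation_interval=1,
                                        delay_evaluation=5)
```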
In this example, the early termination policy is applied at every interval when metrics are reported, starting at evaluation interval 5. Any run whose best metric is less than 1/(1 + 0.1), or roughly 91%, of the best performing run's metric will be terminated.
Median stopping policy
Median stopping is an early termination policy based on running averages of primary metrics reported by the runs. This policy computes running averages across all training runs and terminates runs whose performance is worse than the median of the running averages. This policy takes the following configuration parameters:
- evaluation_interval: the frequency for applying the policy (optional parameter).
- delay_evaluation: delays the first policy evaluation for a specified number of intervals (optional parameter).
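A sketch of such a policy, assuming the azureml.train.hyperdrive MedianStoppingPolicy class:
```python
from azureml.train.hyperdrive import MedianStoppingPolicy

# Evaluated at every interval, with the first evaluation delayed until interval 5.
early_termination_policy = MedianStoppingPolicy(evaluation_interval=1,
                                                delay_evaluation=5)
```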
In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run will be terminated at interval 5 if its best primary metric is worse than the median of the running averages over intervals 1:5 across all training runs.
Truncation selection policy
Truncation selection cancels a given percentage of lowest performing runs at each evaluation interval. Runs are compared based on their performance on the primary metric and the lowest X% are terminated. It takes the following configuration parameters:
- truncation_percentage: the percentage of lowest performing runs to terminate at each evaluation interval. Specify an integer value between 1 and 99.
- evaluation_interval: the frequency for applying the policy (optional parameter).
- delay_evaluation: delays the first policy evaluation for a specified number of intervals (optional parameter).
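A sketch of such a policy, assuming the azureml.train.hyperdrive TruncationSelectionPolicy class:
```python
from azureml.train.hyperdrive import TruncationSelectionPolicy

# At each interval (starting at interval 5), cancel the lowest-performing 20% of runs.
early_termination_policy = TruncationSelectionPolicy(truncation_percentage=20,
                                                     evaluation_interval=1,
                                                     delay_evaluation=5)
```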
In this example, the early termination policy is applied at every interval starting at evaluation interval 5. A run will be terminated at interval 5 if its performance at interval 5 is in the lowest 20% of performance of all runs at interval 5.
No termination policy
If you want all training runs to run to completion, set policy to None. This will have the effect of not applying any early termination policy.
Default policy
If no policy is specified, the hyperparameter tuning service will let all training runs execute to completion.
Picking an early termination policy
- If you are looking for a conservative policy that provides savings without terminating promising jobs, you can use a Median Stopping Policy with evaluation_interval 1 and delay_evaluation 5. These are conservative settings that can provide approximately 25%-35% savings with no loss on the primary metric (based on our evaluation data).
- If you are looking for more aggressive savings from early termination, you can either use a Bandit Policy with a stricter (smaller) allowable slack or a Truncation Selection Policy with a larger truncation percentage.
Allocate resources
Control your resource budget for your hyperparameter tuning experiment by specifying the maximum total number of training runs. Optionally specify the maximum duration for your hyperparameter tuning experiment.
- max_total_runs: Maximum total number of training runs that will be created. This is an upper bound; there may be fewer runs, for instance, if the hyperparameter space is finite and has fewer samples. Must be a number between 1 and 1000.
- max_duration_minutes: Maximum duration in minutes of the hyperparameter tuning experiment. This parameter is optional, and if present, any runs that would be running after this duration are automatically canceled.
Note
If both max_total_runs and max_duration_minutes are specified, the hyperparameter tuning experiment terminates when the first of these two thresholds is reached.
Additionally, specify the maximum number of training runs to run concurrently during your hyperparameter tuning search.
- max_concurrent_runs: Maximum number of runs to run concurrently at any given moment. If not specified, all max_total_runs will be launched in parallel. If specified, must be a number between 1 and 100.
Note
The number of concurrent runs is gated on the resources available in the specified compute target. Hence, you need to ensure that the compute target has the available resources for the desired concurrency.
Allocate resources for hyperparameter tuning:
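A sketch of the resource settings, defined here as variables and passed to the tuning configuration later (parameter names follow the azureml.train.hyperdrive HyperDriveConfig API shown in Configure experiment below):
```python
# Resource budget for the hyperparameter tuning experiment.
max_total_runs = 20      # at most 20 training runs in total
max_concurrent_runs = 4  # run four configurations at a time
```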
This code configures the hyperparameter tuning experiment to use a maximum of 20 total runs, running four configurations at a time.
Configure experiment
Configure your hyperparameter tuning experiment using the defined hyperparameter search space, early termination policy, primary metric, and resource allocation from the sections above. Additionally, provide an estimator that will be called with the sampled hyperparameters. The estimator describes the training script you run, the resources per job (single or multi-GPU), and the compute target to use. Since concurrency for your hyperparameter tuning experiment is gated on the resources available, ensure that the compute target specified in the estimator has sufficient resources for your desired concurrency. (For more information on estimators, see how to train models.)
Configure your hyperparameter tuning experiment:
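A sketch that pulls the earlier pieces together, assuming the azureml.train.hyperdrive HyperDriveConfig class; estimator, param_sampling, and early_termination_policy are assumed to be defined as in the previous sections:
```python
from azureml.train.hyperdrive import HyperDriveConfig, PrimaryMetricGoal

hyperdrive_run_config = HyperDriveConfig(
    estimator=estimator,                          # training script, resources, compute target
    hyperparameter_sampling=param_sampling,       # search space and sampling method
    policy=early_termination_policy,              # early termination policy (or None)
    primary_metric_name="accuracy",
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20,
    max_concurrent_runs=4)
```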
Submit experiment
Once you define your hyperparameter tuning configuration, submit an experiment:
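A sketch of the submission step, assuming the azureml.core Experiment API and the hyperdrive_run_config defined above:
```python
from azureml.core.experiment import Experiment

experiment = Experiment(workspace, experiment_name)        # workspace and name you choose
hyperdrive_run = experiment.submit(hyperdrive_run_config)  # launches the tuning parent run
```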
experiment_name is the name you assign to your hyperparameter tuning experiment, and workspace is the workspace in which you want to create the experiment. (For more information on experiments, see How does Azure Machine Learning work?)
Warm start your hyperparameter tuning experiment (optional)
Often, finding the best hyperparameter values for your model can be an iterative process, needing multiple tuning runs that learn from previous hyperparameter tuning runs. Reusing knowledge from these previous runs will accelerate the hyperparameter tuning process, thereby reducing the cost of tuning the model and will potentially improve the primary metric of the resulting model. When warm starting a hyperparameter tuning experiment with Bayesian sampling, trials from the previous run will be used as prior knowledge to intelligently pick new samples, to improve the primary metric. Additionally, when using Random or Grid sampling, any early termination decisions will leverage metrics from the previous runs to determine poorly performing training runs.
Azure Machine Learning allows you to warm start your hyperparameter tuning run by leveraging knowledge from up to 5 previously completed / cancelled hyperparameter tuning parent runs. You can specify the list of parent runs you want to warm start from using this snippet:
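A sketch, assuming the azureml.train.hyperdrive HyperDriveRun class and the experiment object from the previous section; the run IDs are placeholders:
```python
from azureml.train.hyperdrive import HyperDriveRun

# Previously completed/cancelled parent runs to learn from (IDs are placeholders).
warmstart_parent_1 = HyperDriveRun(experiment, "<warmstart_parent_run_id_1>")
warmstart_parent_2 = HyperDriveRun(experiment, "<warmstart_parent_run_id_2>")
warmstart_parents_to_resume_from = [warmstart_parent_1, warmstart_parent_2]
```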
Additionally, there may be occasions when individual training runs of a hyperparameter tuning experiment are cancelled due to budget constraints or fail due to other reasons. It is now possible to resume such individual training runs from the last checkpoint (assuming your training script handles checkpoints). Resuming an individual training run will use the same hyperparameter configuration and mount the outputs folder used for that run. The training script should accept the resume-from argument, which contains the checkpoint or model files from which to resume the training run. You can resume individual training runs using the following snippet:
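A sketch, assuming the azureml.core Run class; again, the run IDs are placeholders:
```python
from azureml.core.run import Run

# Individual training runs to resume from their last checkpoint (IDs are placeholders).
resume_child_run_1 = Run(experiment, "<resume_child_run_id_1>")
resume_child_run_2 = Run(experiment, "<resume_child_run_id_2>")
child_runs_to_resume = [resume_child_run_1, resume_child_run_2]
```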
You can configure your hyperparameter tuning experiment to warm start from a previous experiment or resume individual training runs using the optional parameters resume_from and resume_child_runs in the config:
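A sketch of the combined configuration, reusing the objects from the two snippets above and the estimator, sampling, and policy objects from earlier sections (parameter names assume the HyperDriveConfig API):
```python
from azureml.train.hyperdrive import HyperDriveConfig, PrimaryMetricGoal

hyperdrive_run_config = HyperDriveConfig(
    estimator=estimator,
    hyperparameter_sampling=param_sampling,
    policy=early_termination_policy,
    resume_from=warmstart_parents_to_resume_from,  # warm start from previous parent runs
    resume_child_runs=child_runs_to_resume,        # resume individual child runs
    primary_metric_name="accuracy",
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20,
    max_concurrent_runs=4)
```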
Visualize experiment
The Azure Machine Learning SDK provides a Notebook widget that visualizes the progress of your training runs. The following snippet visualizes all your hyperparameter tuning runs in one place in a Jupyter notebook:
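A sketch, assuming the azureml.widgets RunDetails widget and the hyperdrive_run submitted above:
```python
from azureml.widgets import RunDetails

# Renders an interactive table and charts of all child runs inside the notebook.
RunDetails(hyperdrive_run).show()
```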
This code displays a table with details about the training runs for each of the hyperparameter configurations.
You can also visualize the performance of each of the runs as training progresses.
Additionally, you can visually identify the correlation between performance and values of individual hyperparameters using a Parallel Coordinates Plot.
You can visualize all your hyperparameter tuning runs in the Azure web portal as well. For more information on how to view an experiment in the web portal, see how to track experiments.
Find the best model
Once all of the hyperparameter tuning runs have completed, identify the best performing configuration and the corresponding hyperparameter values:
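A sketch, assuming the HyperDriveRun API of the hyperdrive_run submitted earlier; the exact structure of the returned run details may differ:
```python
# Retrieve the child run with the best value of the primary metric.
best_run = hyperdrive_run.get_best_run_by_primary_metric()
best_run_metrics = best_run.get_metrics()
parameter_values = best_run.get_details()["runDefinition"]["arguments"]

print("Best run id:", best_run.id)
print("Accuracy:", best_run_metrics["accuracy"])
print("Arguments (hyperparameter values):", parameter_values)
```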
Sample notebook
Refer to train-hyperparameter-* notebooks in this folder:
Learn how to run notebooks by following the article Use Jupyter notebooks to explore this service.