
To illustrate the algorithm, we use sampling from a two-dimensional space as an example. In order to obtain a Latin hypercube sample of size T from the two-dimensional space, we devise the following algorithm:

1. Divide the interval of each dimension into T equally spaced subintervals.

2. Randomly sample two points, Eta and Nu, within each subinterval. Call them Eta(i) and Nu(i) when they are sampled from interval(i), that is, from [(i-1)/T, i/T).

3. Pick an integer j at random such that 1 <= j <= T and let x(i) = (Eta(i), Nu(j)) be the next point in the Latin hypercube sample. Now, refrain j from further consideration. Increase i by 1 and repeat step 3 until all the Eta(i) are considered. This gives us a Latin hypercube sample that we label as x(1).

SAS Implementation

1) Create a design table to specify the name, minimum and maximum of each hyperparameter. In this example table, there are 5 hyperparameters.

The core of the implementation is the macro %macro latinHyperCubeSamp; its key SAS/IML statements are excerpted below:

start latinHyperCubeSamp(seed=0, sampleSize, designTable, resultTable);
    intervals = do(minValue, maxValue, (maxValue-minValue)/sampleSize);
    Eta = intervals + uniform(seed)*(maxValue-minValue)/sampleSize;
    SelectedFlag = j(dimension, sampleSize, 0);
    SelectedFlag = 1; /* refrain j from further consideration */
    call TableWriteToDataSet(tbl, resultTable);
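The sampling steps can be sketched in Python as well. This is an illustrative translation of the two-dimensional algorithm on the unit square, not the SAS/IML macro itself; the function and variable names are my own:

```python
import random

def latin_hypercube_2d(T, seed=0):
    """Latin hypercube sample of size T from the unit square.
    Step 1: split each axis into T equal subintervals.
    Step 2: draw Eta(i) and Nu(i) from the i-th subinterval of each axis.
    Step 3: pair Eta(i) with an unused Nu(j); "refrain j from further
    consideration" amounts to drawing j without replacement."""
    rng = random.Random(seed)
    eta = [(i + rng.random()) / T for i in range(T)]  # Eta(i) in [(i-1)/T, i/T)
    nu = [(i + rng.random()) / T for i in range(T)]   # Nu(i) likewise
    js = list(range(T))
    rng.shuffle(js)  # a fresh random j for each i, never reused
    return [(eta[i], nu[js[i]]) for i in range(T)]

sample = latin_hypercube_2d(5)
```

The defining property of the result is that projecting the T points onto either axis yields exactly one point per subinterval.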

The algorithm I used is from the paper "Bayes Factors for Variance Components in the Mixed Linear Model", but in my implementation I extended it to support sampling in any number of dimensions, and the value space of the hyperparameters is not limited to the unit hypercube.
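A generalized version along these lines can be sketched in Python. This is a hypothetical illustration under my own naming, not the author's SAS macro: `design` maps each hyperparameter name to its (min, max) range, playing the role of a design table.

```python
import random

def latin_hypercube(design, sample_size, seed=0):
    """Latin hypercube sample over any number of hyperparameters,
    each with its own [min, max) range (hypothetical helper)."""
    rng = random.Random(seed)
    columns = {}
    for name, (lo, hi) in design.items():
        width = (hi - lo) / sample_size
        # one random draw inside each of the sample_size equal subintervals
        vals = [lo + (i + rng.random()) * width for i in range(sample_size)]
        rng.shuffle(vals)  # independent random pairing per dimension
        columns[name] = vals
    # row i collects the i-th draw from every dimension
    return [{name: columns[name][i] for name in design}
            for i in range(sample_size)]

grid = {"learningRate": (0.001, 0.1), "maxDepth": (2, 10)}
samples = latin_hypercube(grid, sample_size=5)
```

Each returned row is one hyperparameter configuration, and each dimension still receives exactly one draw per subinterval of its own range.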
There are several ways to solve the cold start problem; the most often used methods include random search and Latin Hypercube sampling. Some research shows that Latin Hypercube sampling performs better than random search, and SAS Visual Data Mining and Machine Learning uses Latin Hypercube sampling to generate the initial set of hyperparameter configurations. In this article, I will show you how to implement the Latin Hypercube sampling method with SAS/IML.

In my previous post "Bayesian Optimization", I demonstrated the optimization procedure based on the Bayesian method. However, Bayesian optimization has the issue of determining the initial samples needed to kick off the optimization process; we call this issue "cold start".
