
Stochastic Local Search Based CSP Solver
Tutorial 4: Stochastic Local Search Algorithms

The SLS applet implements several stochastic local search algorithms for solving CSPs. The algorithm to use while solving the CSP can be selected, and the appropriate options for that algorithm set, in the "Algorithm Options" dialog box, opened by clicking on "Algorithm Options..." under the "Hill Options" menu. The dialog box will look like the picture below.

Before you begin solving a CSP in this applet, the CSP must be initialized: each variable must be assigned a value from its domain. The assignment can be done randomly or by lowest value, by selecting the appropriate radio button next to "Initialization:".

The SLS applet attempts to find a solution to the CSP within a specified number of steps of the local search algorithm. If a solution hasn't been found within that many steps, the search halts. This number can be specified in the field next to "Termination:"; the default is 100 steps.

Next to "Local Search Steps:" there is a drop-down list of search algorithms from which you can select the desired search algorithm. For each algorithm selected, you can view a description of it under the "Description" tab. Some of the algorithms also have options for the search, which can be set under the "Options" tab. Below is a list of the available local search algorithms, with a brief description of each.

Random Sampling
At each step this algorithm randomly creates a new solution: for each variable, a new value is chosen at random.

Random Walk
At each step this algorithm randomly chooses a new solution from the set of neighbouring solutions. In this applet, the neighbouring solutions are defined as those in which a single variable has a different value. Options: you may choose between a one-stage and a two-stage heuristic. In the first case, the variable and value are chosen together; in the second case, the variable is chosen first and then the value.

Greedy Descent
At each step this algorithm chooses a value which improves the graph. Options: you may choose between a one-stage and a two-stage heuristic. In a one-stage heuristic, the variable and value are chosen together. In a two-stage heuristic, the variable with the greatest number of conflicts in its edges is chosen first, and then the value which resolves the highest number of conflicts is chosen from that variable's set of values.

Greedy Descent with the Min-Conflict Heuristic
The min-conflict heuristic is one of the first heuristics used for constraint satisfaction problems (Minton et al., 1992). It chooses a variable randomly from the set of variables with conflicts (red edges), then chooses the value which results in the fewest conflicts from the possible assignments for that variable. This heuristic is almost identical to Greedy Descent with Random Walk using a two-stage heuristic that chooses any red node 100% of the time and the best value 100% of the time; the only difference is that MCH may return to the previous assignment.

Greedy Descent with Random Walk
At each step this algorithm probabilistically decides whether to take a greedy step (change to the best neighbour) or a random walk (change to any neighbour). Options: you may choose between a one-stage and a two-stage heuristic. In the first case, the variable and value are chosen together; in the second case, the variable is chosen first and then the value. To increase the random walk percentage, increase the "random ..." values.

Greedy Descent with Random Restart
This algorithm proceeds like the Greedy Descent algorithm, but resets the variables to a new random solution at regular intervals. Options: "Reset Graph" specifies the number of steps between random resets of the graph.
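To make the two-stage min-conflict idea concrete, here is a minimal sketch in Python. The CSP representation is an assumption for illustration only (variables with list domains, binary constraints as (variable, variable, predicate) triples); it is not the applet's internal data structure, and the 100-step cap mirrors the applet's default termination setting.

```python
import random

def conflicts(assignment, var, value, constraints):
    """Count the constraints on `var` that would be violated if it took `value`."""
    return sum(1 for (x, y, ok) in constraints
               if (x == var and not ok(value, assignment[y]))
               or (y == var and not ok(assignment[x], value)))

def min_conflicts(domains, constraints, max_steps=100, seed=0):
    """Two-stage min-conflict search: pick a conflicted variable at random
    (stage 1), then give it the value with the fewest conflicts (stage 2)."""
    rng = random.Random(seed)
    # Random initialization, as with the "random" radio button.
    assignment = {v: rng.choice(d) for v, d in domains.items()}
    for _ in range(max_steps):
        conflicted = [v for v in domains
                      if conflicts(assignment, v, assignment[v], constraints) > 0]
        if not conflicted:
            return assignment                  # no red edges: solved
        var = rng.choice(conflicted)           # stage 1: random conflicted variable
        assignment[var] = min(                 # stage 2: best value for that variable
            domains[var],
            key=lambda val: conflicts(assignment, var, val, constraints))
    return None                                # termination limit reached

# Tiny example CSP: three variables that must all take different values.
domains = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]}
neq = lambda a, b: a != b
constraints = [("A", "B", neq), ("B", "C", neq), ("A", "C", neq)]
solution = min_conflicts(domains, constraints)
```

On a problem this small the heuristic converges almost immediately; on harder CSPs it can plateau, which is what the random walk and random restart variants below are designed to escape.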
Greedy Descent with all Options
This algorithm provides all the options associated with Random Restart, Random Walk and Greedy Descent, enabling free experimentation with these algorithms.

Simulated Annealing
Simulated annealing makes random guesses in the set of neighbours. If the temperature is very high, it accepts most of the guesses irrespective of how much "better" they are than the current value. The temperature is decreased according to a schedule; as the temperature decreases, the chance of accepting a guess which does not improve the solution decreases. Options:
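The temperature-dependent acceptance described above is usually implemented with the Metropolis criterion. The sketch below assumes a geometric cooling schedule and the same illustrative CSP representation as before; the applet's actual schedule and data structures may differ.

```python
import math
import random

def anneal(domains, constraints, t0=10.0, cooling=0.95, max_steps=100, seed=0):
    """Simulated annealing sketch: propose a random neighbour (one variable
    changed) and accept it with the Metropolis criterion."""
    rng = random.Random(seed)
    assignment = {v: rng.choice(d) for v, d in domains.items()}

    def cost(a):
        # Number of violated constraints (red edges) under assignment `a`.
        return sum(1 for (x, y, ok) in constraints if not ok(a[x], a[y]))

    t = t0
    for _ in range(max_steps):
        if cost(assignment) == 0:
            return assignment                  # solved
        var = rng.choice(list(domains))        # random neighbour: one variable,
        value = rng.choice(domains[var])       # one new value
        delta = cost({**assignment, var: value}) - cost(assignment)
        # Always accept improvements; accept a worsening move with
        # probability exp(-delta / t), which shrinks as t cools.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            assignment[var] = value
        t *= cooling                           # assumed geometric schedule
    return assignment                          # best effort after max_steps

domains = {"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]}
neq = lambda a, b: a != b
constraints = [("A", "B", neq), ("B", "C", neq), ("A", "C", neq)]
result = anneal(domains, constraints)
```

At high temperature the exponential is close to 1 and nearly any move is accepted, which matches the behaviour described above; as the schedule cools, the search behaves more and more like greedy descent.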
