If you would like to explore the basic AI framework in more detail before learning how we alter it to suit the criteria of the problem we’re dealing with, it would be a good idea to read the post titled “Exploring the Basic AI Framework in More Detail” before reading this one.

A good way to understand how a computer solves a problem is to consider how a human would solve it. Take the classic puzzle many of us were given as children: the problem of the fox, the bag of grain and the chicken, where each needs to be moved to the other side of the river, but the rules attached to each one make moving them tricky (the fox cannot be left alone with the chicken, and the chicken cannot be left alone with the grain).

The way we would do this is by exploring each possibility in our minds and performing the best move at each point. Each time we make a move in real life, in our computer analogy we are creating a node. If at any point we get stuck, we backtrack to a known “OK” node (the last point at which all three items were safe) and try again, this time taking a different object, until all three items are safely on the other side. Eventually, through a process of trial and error, we arrive at the one sequence of events needed to achieve the task, and it is this sequence (the steps that we took) that is our solution.

What kind of problem is this? In the last post, What is AI?, we discussed the three types of problem we deal with: optimisation, modelling and prediction. In this specific problem, the possible moves we can make are dictated by the rules attached to each object, so these rules form the model of the system. We also know what the final output needs to be, so we can think of this as an optimisation problem, which we previously defined as “When given a model finds the inputs so that the output is minimized / maximized.”

The way a human approaches this problem (as we’ve just described) is a method called constructive depth-first search: the solution is built up move by move, which means unproductive branches can be ruled out before a complete solution is reached. Possible actions are taken one after another until a dead end is reached, and the move operator is defined (restricted) by the rules (or conditions) attached to each object. At a dead end we backtrack to a previously known good position and try another option, until a dead end is reached again. This process of trial and error, while keeping track of past failures, is what ultimately leads us to the successful sequence of events needed to achieve the goal: the one path that gets everything done in the shortest time and easiest way possible.
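To make this concrete, below is a minimal Python sketch of such a constructive depth-first search applied to the puzzle. The state representation and helper names (is_safe, neighbours, depth_first) are just one possible way to set it up, not something taken from the referenced lecture notes; the point is simply that the search builds the sequence of moves one step at a time and backtracks from dead ends.

# A constructive depth-first search for the fox / grain / chicken puzzle.
# State is (farmer, fox, chicken, grain), where 0 = near bank and 1 = far bank.

def is_safe(state):
    # A bank is unsafe if the fox is left with the chicken, or the chicken
    # with the grain, without the farmer present.
    farmer, fox, chicken, grain = state
    if fox == chicken and farmer != fox:
        return False
    if chicken == grain and farmer != chicken:
        return False
    return True

def neighbours(state):
    # The move operator: the farmer crosses alone or with one item from his bank.
    farmer, fox, chicken, grain = state
    other = 1 - farmer
    moves = [(other, fox, chicken, grain)]            # farmer crosses alone
    if fox == farmer:
        moves.append((other, other, chicken, grain))  # take the fox
    if chicken == farmer:
        moves.append((other, fox, other, grain))      # take the chicken
    if grain == farmer:
        moves.append((other, fox, chicken, other))    # take the grain
    return [m for m in moves if is_safe(m)]           # rule out infeasible moves

def depth_first(state, path, visited):
    # Explore one branch fully; return to the caller (backtrack) at a dead end.
    if state == (1, 1, 1, 1):                         # everything on the far bank
        return path
    for nxt in neighbours(state):
        if nxt not in visited:                        # keep track of past failures
            result = depth_first(nxt, path + [nxt], visited | {nxt})
            if result:
                return result
    return None                                       # dead end: backtrack

start = (0, 0, 0, 0)
print(depth_first(start, [start], {start}))           # prints the sequence of states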

An example of a perturbative depth-first search, which is used when you can only test complete solutions, is an algorithm that cracks a password or combination lock: we only know the problem is solved when all of the conditions are satisfied at once. The perturbative approach makes the smallest possible change at each step (in the case of cracking a lock the number would be incremented by 1 each time): 0000, 0001, 0002, 0003 and so on as an example of depth-first, or 0000, 0001, 0011, 0111 as an example of breadth-first (when considering the lock example).
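As a rough sketch of that perturbative approach (assuming a made-up four-digit combination purely for illustration), the search below tests one complete candidate at a time, incrementing by one on each attempt:

# A perturbative search over complete solutions: every candidate is a full
# combination, and the smallest possible change (adding one) is made each time.
SECRET = "0042"                           # hypothetical combination, for illustration only

def crack_lock(digits=4):
    for n in range(10 ** digits):         # 0000, 0001, 0002, 0003, ...
        candidate = str(n).zfill(digits)  # a complete solution every time
        if candidate == SECRET:           # we only know we're done when everything matches
            return candidate
    return None

print(crack_lock())                       # -> "0042"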

Depth & Breadth-First Pseudocode

Using the framework previously discussed in the post What is AI?, we can implement both depth-first and breadth-first search by making one small adjustment to the pseudocode below.

Variables: workingCandidate, openList, closedList
SET workingCandidate = StartSolution
Evaluate(workingCandidate)
SET goalReached = CheckIfAtGoal(workingCandidate)
WHILE (NOT goalReached) DO
	FOREACH (1-step neighbour)
		SET neighbour = ApplyMoveOperator(workingCandidate)
		Evaluate(neighbour)
		IF (neighbour is feasible) // we can move there
			ADD (neighbour to end of openList)
		ELSE // we're at a dead end
			ADD (neighbour to end of closedList)
	COPY (workingCandidate to closedList)
	IF (openList is empty) BREAK // no options left to try
	// depth-first uses "last" and breadth-first uses "first" below
	MOVE ("last" or "first" item from openList into workingCandidate)
	SET goalReached = CheckIfAtGoal(workingCandidate)
OUTPUT (goalReached, workingCandidate)
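For readers who prefer runnable code, here is one way that pseudocode could be rendered in Python. The problem-specific pieces (apply_move_operator, is_feasible, at_goal) are placeholders you would supply for your own problem, and apply_move_operator is assumed here to return all 1-step neighbours of the working candidate; the only thing that changes between depth-first and breadth-first is which end of the open list the next working candidate is taken from.

# A Python sketch of the framework above. Swap between depth-first and
# breadth-first by changing which item is taken from the open list.

def search(start, apply_move_operator, is_feasible, at_goal, depth_first=True):
    working_candidate = start
    open_list, closed_list = [], []
    goal_reached = at_goal(working_candidate)

    while not goal_reached:
        for neighbour in apply_move_operator(working_candidate):
            if is_feasible(neighbour):       # we can move there
                open_list.append(neighbour)
            else:                            # dead end
                closed_list.append(neighbour)
        closed_list.append(working_candidate)

        if not open_list:                    # nothing left to try
            break
        # depth-first uses the last item (stack), breadth-first the first (queue)
        working_candidate = open_list.pop() if depth_first else open_list.pop(0)
        goal_reached = at_goal(working_candidate)

    return goal_reached, working_candidate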

Pros & Cons of Breadth and Depth-First Search

Depth-first search is efficient: it can find solutions quickly and doesn’t require much working memory (just enough to store the current solution, the best seen so far and the path followed). It is efficient but neither optimal nor complete, and could even get caught endlessly exploring a particularly deep branch of the tree, which is hard to avoid when carrying out a constructive search. This type of search is implemented as a stack, which operates as LIFO (last in, first out).

Breadth-first search, on the other hand, is complete (it is guaranteed to find a solution if one exists) and optimal, because it will find the solution closest to the start. It isn’t very efficient, especially when a solution lies very deep, and because of the branching factor more memory is needed to solve the problem. The demands on memory depend on the number of branches in the system (chess is a good example of when a lot of working memory is needed): every node at the current level has to be stored, along with the paths that led to them (needed for retracing steps). Breadth-first search is implemented as a queue, which operates as FIFO (first in, first out).
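A quick illustration of the difference between those two data structures, using Python's collections.deque (the items here are arbitrary labels):

# Depth-first treats the open list as a stack (LIFO); breadth-first as a queue (FIFO).
from collections import deque

open_list = deque(["A", "B", "C"])   # items added in the order A, B, C
print(open_list.pop())               # stack / LIFO  -> "C"  (depth-first)
print(open_list.popleft())           # queue / FIFO  -> "A"  (breadth-first)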

Depth-first is often quicker but is prone to wasting time in deep, unproductive branches. To fix this we could apply a depth limit, but in doing so we may never find a solution. Depth-first will also return as soon as a solution is found, regardless of the quality of that solution.
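As a sketch of that depth limit, the depth_first function from the river-crossing example earlier could be given a limit parameter (reusing the neighbours function from that sketch). Branches deeper than the limit are abandoned, which prevents runaway exploration but means a solution lying deeper than the limit will never be found:

def depth_limited(state, path, visited, limit):
    if state == (1, 1, 1, 1):                     # goal reached
        return path
    if len(path) > limit:                         # too deep: give up on this branch
        return None
    for nxt in neighbours(state):
        if nxt not in visited:
            result = depth_limited(nxt, path + [nxt], visited | {nxt}, limit)
            if result:
                return result
    return None                                   # dead end (or limit hit below)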

Breadth-first will take a lot longer and use more storage, but we know the search is “complete” and the solution it returns is “optimal”, because the shallowest solution is found first.

Both methods are “tentative” in that they allow backtracking, and both can be applied to either constructive or perturbative search. They are also not very clever in how they make decisions while navigating the landscape, because they don’t account for any kind of quality value; instead they brute-force their way through the process until a conclusion is reached. By informing our search we can begin to map out far more intelligent paths through each problem we face, taking into account additional information that may be available to us. Read the post “Informing Our AI Search Algorithms” if you’re interested.

Reference: Information for this post was collected from lecture notes written by Jim Smith, Professor in Interactive Artificial Intelligence (AI) in the Department of Computer Science and Creative Technologies at the University of the West of England.
