After competitively filtering user module information, the user model manager passes it on to the search engine, whose job it is to generate a set of points from the feature space which represent the range of current user models.

The complexity of the search engine depends on the domain Pic1 is working in. When working with dynamically constructed images, such as Chernoff-like cartoon faces, the feature space is dense--an image can be constructed for any arbitrary point in the feature space. Databases of real images are not so flexible; images exist at certain points and may not reasonably be generated for arbitrary points.

In the first case, the engine may merely generate
a random set of points within the area of vector space
currently being considered by the user model.
In the second case, the engine must either explicitly
select a subset of the existing points in the feature space
that approximates the desired distribution,
or generate a random set of points and then choose the images
closest to those points.
The second of these approaches seems
much more tractable and is the one that Pic1 currently adopts.
It leaves, however, the problem of finding the existing point
closest to an arbitrary point in a *k*-dimensional space.
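In the dense case, the random generation step is straightforward. A minimal sketch, assuming the region under consideration is represented as one (low, high) pair per dimension (the function name and this representation are illustrative, not Pic1's actual interface):

```python
import random

def sample_region(bounds, count):
    """Sample `count` random points uniformly from a hyper-rectangular
    region of the feature space; `bounds` holds one (low, high) pair
    per dimension."""
    return [tuple(random.uniform(lo, hi) for lo, hi in bounds)
            for _ in range(count)]

# A 4-dimensional feature space, restricted along each axis to the
# region currently being considered by the user model.
bounds = [(0.0, 1.0), (0.2, 0.8), (0.0, 0.5), (0.4, 0.6)]
points = sample_region(bounds, 10)
```

In the dense case these sampled points can be rendered directly; in the database case they become query points for the nearest-image search described below.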

A variety of approaches are possible here. One potentially well-suited one involves storing a list of the points sorted along each dimension, permitting existing points close to an arbitrary point in any given dimension to be located easily. Such an approach would make the application of a weighted distance metric, discussed above, viable.
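One way to realize the per-dimension sorted lists is with binary search over each dimension's sorted order: the stored points nearest the query along any single axis are found in logarithmic time, and the (possibly weighted) distance metric is then applied only to that candidate set. The sketch below is a hypothetical rendering of that idea, not Pic1's implementation; note that restricting attention to per-dimension neighbors is a heuristic and is not guaranteed to find the exact nearest point.

```python
import bisect

class SortedDimensionIndex:
    """Index a set of k-dimensional points by keeping, for each
    dimension, the point indices sorted by their coordinate there."""

    def __init__(self, points):
        self.points = points
        k = len(points[0])
        self.order = [sorted(range(len(points)), key=lambda i: points[i][d])
                      for d in range(k)]
        self.keys = [[points[i][d] for i in self.order[d]] for d in range(k)]

    def near_in_dimension(self, d, value, width=3):
        """Indices of up to `width` stored points on either side of
        `value` along dimension `d`, via binary search."""
        pos = bisect.bisect_left(self.keys[d], value)
        lo, hi = max(0, pos - width), min(len(self.points), pos + width)
        return [self.order[d][i] for i in range(lo, hi)]

    def nearest(self, query, weights):
        """Gather candidates from every dimension, then apply a weighted
        Manhattan distance to pick the closest candidate."""
        candidates = {i for d in range(len(query))
                      for i in self.near_in_dimension(d, query[d])}
        return min(candidates,
                   key=lambda i: sum(w * abs(a - b) for w, a, b
                                     in zip(weights, self.points[i], query)))
```

The `weights` argument is where the weighted distance metric discussed above would plug in: dimensions the user model considers more important simply receive larger weights.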

The prototype implementation uses a search engine
that merely receives a range of the feature space
and generates a random set of points within that range.
Mapping to real points in the database
is done with a Manhattan distance metric and simple linear search.
This algorithm is on the order of *kn* and clearly does not scale to large *n*.
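The mapping step can be sketched as follows (the names are illustrative, not Pic1's actual code). Each query point forces a scan of all *n* stored feature vectors, and each distance computation sums over *k* dimensions, which is where the *kn* cost comes from:

```python
def manhattan(p, q):
    """Manhattan (L1) distance between two k-dimensional feature vectors."""
    return sum(abs(a - b) for a, b in zip(p, q))

def map_to_database(targets, database):
    """For each generated point, linearly scan the database and keep the
    image whose feature vector minimizes the Manhattan distance.
    Cost is O(kn) per target point."""
    return [min(database, key=lambda vec: manhattan(vec, t))
            for t in targets]
```

Replacing the linear scan with an index such as the per-dimension sorted lists discussed above would remove the factor of *n* from each query at the cost of extra bookkeeping when images are added.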