Pic1 begins with a large database of existing images. We shall assume these to be real photographs, though they could also be Chernoff faces or other artificial visual data. Pic1's microfeature extractor then maps each image to a point in a k-dimensional feature space. Microfeature extraction is presumed to be computationally expensive and is therefore done in advance, with the results stored; the feature space needs to change only when the images themselves change.
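A minimal sketch of this offline stage, assuming a toy feature extractor over flat pixel lists (the function names, the k-way chunk averaging, and the dictionary-based store are illustrative assumptions, not Pic1's actual implementation):

```python
def extract_microfeatures(image, k=4):
    # Stand-in for Pic1's expensive microfeature extractor: derive k
    # numeric features by averaging every k-th pixel of the image.
    pixels = list(image)
    feats = []
    for i in range(k):
        chunk = pixels[i::k] or [0]
        feats.append(sum(chunk) / len(chunk))
    return feats

def build_feature_space(images, k=4):
    # Run once, in advance; the stored result is reused at query time
    # and recomputed only when the image collection itself changes.
    return {name: extract_microfeatures(img, k) for name, img in images.items()}

# Example: a toy "database" of images represented as flat pixel lists.
db = {"img_a": [10, 20, 30, 40], "img_b": [5, 5, 5, 5]}
space = build_feature_space(db, k=2)
```

Each image is now a point (a list of k numbers) that the run-time modules can compare geometrically without ever reprocessing the raw pixels.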
The remainder of Pic1's modules constitute the run-time system used to query the database. At its heart is a cycle through three key modules: the user interface, the search engine, and the user model manager. The search engine locates a set of points in the feature space that seem plausible given Pic1's current user model and passes them to the user interface, which presents the corresponding images and lets the user select one. The selected image is passed to the user model manager, which hands it to one or more user-modeling heuristics; these produce an updated user model, which is passed to the search engine for the next round of image selection.
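One round of this cycle can be sketched as follows. Everything here is an assumption for illustration: the nearest-neighbour search standing in for the search engine, the centroid-style update standing in for a user-modeling heuristic, and all the names.

```python
import math

def distance(p, q):
    # Euclidean distance between two points in the feature space.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def search(feature_space, user_model, n=2):
    # Search-engine stand-in: return the n images whose points lie
    # closest to the current user-model point.
    ranked = sorted(feature_space,
                    key=lambda name: distance(feature_space[name], user_model))
    return ranked[:n]

def update_model(user_model, selected_point, rate=0.5):
    # One simple user-modeling heuristic: move the model point part of
    # the way toward the point of the image the user selected.
    return [m + rate * (s - m) for m, s in zip(user_model, selected_point)]

# One turn of the cycle over a toy feature space.
space = {"img_a": [0.0, 0.0], "img_b": [1.0, 1.0], "img_c": [4.0, 4.0]}
model = [3.0, 3.0]
candidates = search(space, model, n=2)   # search engine -> user interface
selected = candidates[0]                 # user selects an image
model = update_model(model, space[selected])  # user model manager updates
```

After the update, the next call to `search` starts from the revised model point, which is what closes the loop between the three modules.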