While it is actively servicing users, Pic1 generally does not have time to adapt its future behavior to the current query. Further, Pic1 cannot know how successful a session was until the user has found the target image.
Once the image is found, however, Pic1 can go back and perform a simple post-hoc examination of the session, attempting to determine which microfeatures (i.e., which dimensions in the feature space) would have been most useful in accounting for each user selection. The idea is simple, but integrating this new information into the system is more complex; a variety of approaches are possible, two of which are currently under test:
For example, if Pic1's analysis suggests that the user chose an image by virtue of its redness, `redness' becomes a more significant feature of that object, and when users select it in the future Pic1 might weight that feature more highly. (Note, though, that Pic1 itself has no notion of `redness'; it just happens that several image analyzers capture the dominant color of images in various ways.)
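One way to realize this post-hoc step is sketched below. The function name, the update rule, and the "deviation from the other candidates" heuristic are all assumptions for illustration, not Pic1's actual mechanism: dimensions where the chosen image stands out most from the other candidates are treated as the microfeatures that drove the selection, and their weights are boosted.

```python
import numpy as np

def update_weights(weights, candidates, chosen, rate=0.1):
    """Hypothetical post-hoc reweighting sketch (not Pic1's actual code).

    Dimensions where the chosen image deviates most from the other
    candidates are assumed to be the microfeatures that drove the
    selection, so their weights are boosted proportionally.
    """
    others = np.delete(candidates, chosen, axis=0)
    # How far the chosen image sits from the other candidates' mean,
    # per feature-space dimension.
    salience = np.abs(candidates[chosen] - others.mean(axis=0))
    if salience.max() > 0:
        salience = salience / salience.max()
    new_w = weights * (1.0 + rate * salience)
    return new_w / new_w.sum()  # keep the weights normalized

# Toy session: 4 images, 3 microfeatures (say redness, texture, brightness).
candidates = np.array([
    [0.9, 0.2, 0.5],   # the chosen image: strongly "red"
    [0.1, 0.3, 0.5],
    [0.2, 0.2, 0.4],
    [0.1, 0.1, 0.6],
])
w = np.full(3, 1 / 3)
w = update_weights(w, candidates, chosen=0)
print(w)  # the first (redness) dimension gains weight
```

Repeating this update after each selection within a session is one way the "within sessions" adaptation mentioned below could operate.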
For example, `redness' might become a more important feature overall, since users have been making selections based on it. The current prototype does this only within sessions; working across sessions should yield more effective long-term adaptation. Current work centers on computing a singular-value decomposition of the vector space from run to run, and on integrating the weightings from post-hoc analysis of run-time behavior, to generate better weightings for the dimensions.
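The cross-session SVD idea might look something like the following sketch. The function, the centering step, and the way singular values are projected back onto the original dimensions are assumptions for illustration; the source does not specify how the decomposition is turned into weights.

```python
import numpy as np

def svd_dimension_weights(selections):
    """Hypothetical cross-session reweighting sketch.

    `selections` stacks the feature vectors of images users actually
    chose, one row per session.  A singular-value decomposition of that
    matrix exposes which directions of the feature space carry the most
    selection variance; projecting the singular values back onto the
    original dimensions yields a long-term weighting for each one.
    """
    # Center so the SVD reflects variation among choices,
    # not the mean chosen image.
    X = selections - selections.mean(axis=0)
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    # Weight each original dimension by its loading on the singular
    # directions, scaled by the corresponding singular values.
    w = (s[:, None] * np.abs(vt)).sum(axis=0)
    return w / w.sum()

# Chosen-image vectors from several sessions; the choices vary almost
# entirely along dimension 0 (say, redness).
chosen = np.array([
    [0.9, 0.5, 0.50],
    [0.1, 0.5, 0.52],
    [0.8, 0.5, 0.49],
    [0.2, 0.5, 0.51],
])
w = svd_dimension_weights(chosen)
print(w)  # dimension 0, where choices varied most, dominates
```

Dimensions along which user choices never vary contribute little to the decomposition and so receive near-zero weight, which is the long-term adaptation the text is after.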
A previous report on this work appeared in (Rawlins & VanHeyningen, 1994).