Several contemporary classifiers, including some developed in Rochester, incorporate detectors of semantic materials and objects. Classification performance improves because the gap between low-level features and image semantics is narrowed. Spatial relationships between the detected objects or materials can help further, both by distinguishing between certain types of scenes and by mitigating the effects of detector failures. While past work on spatial modeling has used logic- or rule-based models, we propose that a probabilistic framework can handle the loose spatial relationships that exist in many scene types.
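As a sketch of the kind of factorization we have in mind (this formulation is illustrative, not the exact model), the scene posterior can combine material and spatial evidence as

$$P(S \mid M, R) \propto P(S)\, P(M \mid S)\, P(R \mid M, S),$$

where $S$ is the scene class, $M$ the set of detected materials, and $R$ the observed spatial relations among them. In a tree-structured network, $P(M \mid S)$ further factorizes over the individual materials, which keeps inference simple while still allowing spatial evidence to be attached.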
To this end, we have developed MASSES, an experimental testbed that generates virtual scenes. MASSES can be used to experiment with different spatial models, different detector characteristics, and different learning parameters. Using a tree-structured Bayesian network for inference on a series of simulated natural scenes, we have shown that the presence of key materials can effectively distinguish certain scene types. However, spatial relationships are needed to disambiguate other scene types; adding them produced a gain of 7% in one case.
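To make the tree-structured inference concrete, the following Python sketch scores scene hypotheses from material-presence evidence. The scene classes, material names, and probability tables are invented for illustration and are not the parameters learned in MASSES.

```python
# Sketch of tree-structured Bayesian network inference for scene
# classification. The scene node is the root; each material detector is
# a leaf, so the likelihood factorizes over the materials.
# All values below are hypothetical, chosen only to illustrate the idea.

SCENES = ["beach", "field"]
PRIOR = {"beach": 0.5, "field": 0.5}

# P(material detected | scene): one conditional table per leaf.
P_DETECT = {
    "beach": {"sky": 0.9, "water": 0.8, "sand": 0.7, "grass": 0.1},
    "field": {"sky": 0.9, "water": 0.1, "sand": 0.1, "grass": 0.8},
}

def scene_posterior(detections):
    """Compute P(scene | detections).

    `detections` maps material name -> True/False (whether the
    detector fired). Each leaf contributes one likelihood factor.
    """
    unnorm = {}
    for scene in SCENES:
        likelihood = PRIOR[scene]
        for material, present in detections.items():
            p = P_DETECT[scene][material]
            likelihood *= p if present else (1.0 - p)
        unnorm[scene] = likelihood
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

if __name__ == "__main__":
    # Water and sand present, grass absent: evidence favors "beach".
    print(scene_posterior(
        {"sky": True, "water": True, "sand": True, "grass": False}))
```

Because the network is a tree, inference is linear in the number of material leaves; the trade-off, noted below, is that such a structure cannot represent detector faults at the level of individual image regions.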
However, our simple Bayes net is not expressive enough to model faulty detection at the level of individual regions. As future work, we propose first to evaluate full (DAG-structured) Bayesian networks and Markov random fields as candidate probabilistic frameworks, and then to extend the chosen framework to our problem. Finally, we will compare our results on real and simulated image sets with those obtained by other systems that represent spatial features implicitly.