Our second project concerned people's abilities to transfer object category knowledge across visual and haptic domains. If a person learns to categorize objects based on inputs from one sensory modality, can the person categorize those same objects when they are perceived through another modality? Our work makes three contributions. First, by fabricating Fribbles (3-D, multi-part objects with a categorical structure), we developed visual-haptic stimuli that are highly complex and realistic. Based on these stimuli, we created the See & Grasp data set, which contains both visual and haptic features of the Fribbles and is available online. Second, we conducted an experiment evaluating whether people transfer object category knowledge across these domains. Our data clearly indicate that they do. Third, we developed a computational model that learns amodal representations of prototypical 3-D shape. Similar to previous work, the model uses shape primitives to represent parts and spatial relations among primitives to represent multi-part objects. However, it is distinct in its use of a Bayesian inference algorithm that allows it to acquire amodal representations, and of sensory-specific forward models that allow it to predict visual or haptic features from those representations. The model, combined with the experimental data, illustrates the potential importance of amodal representations and sensory-specific forward models to multisensory perception.
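To make the architecture concrete, the following is a minimal sketch, not the actual model, of how an amodal shape representation combined with sensory-specific forward models can support cross-modal transfer. All names here (AmodalShape, visual_forward, haptic_forward, the linear forward maps, and the Gaussian likelihood) are illustrative assumptions rather than the original model's components: category knowledge is inferred from visual observations alone, yet haptic features can still be predicted through the haptic forward model.

```python
# Illustrative sketch (not the authors' implementation) of cross-modal transfer
# via an amodal shape representation and sensory-specific forward models.
import numpy as np

rng = np.random.default_rng(0)

# --- Amodal representation: part shape parameters plus spatial relations ---
class AmodalShape:
    def __init__(self, part_params, relations):
        self.part_params = np.asarray(part_params)  # one parameter vector per part primitive
        self.relations = np.asarray(relations)      # spatial-relation parameters among parts

    def flat(self):
        return np.concatenate([self.part_params.ravel(), self.relations.ravel()])

# --- Sensory-specific forward models (here: fixed random linear maps) ---
D_LATENT, D_VIS, D_HAP = 12, 20, 15
W_vis = rng.normal(size=(D_VIS, D_LATENT))
W_hap = rng.normal(size=(D_HAP, D_LATENT))

def visual_forward(shape, noise=0.0):
    """Predict visual features from the amodal representation."""
    return W_vis @ shape.flat() + noise * rng.normal(size=D_VIS)

def haptic_forward(shape, noise=0.0):
    """Predict haptic features from the amodal representation."""
    return W_hap @ shape.flat() + noise * rng.normal(size=D_HAP)

# --- A few candidate category prototypes (amodal) ---
def random_shape():
    return AmodalShape(rng.normal(size=(3, 3)), rng.normal(size=(3,)))

prototypes = [random_shape() for _ in range(4)]

# --- Bayesian inference over category given a *visual* observation ---
def gaussian_loglik(obs, pred, sigma=0.5):
    return -0.5 * np.sum((obs - pred) ** 2) / sigma**2

true_category = 2
visual_obs = visual_forward(prototypes[true_category], noise=0.3)

logliks = np.array([gaussian_loglik(visual_obs, visual_forward(p)) for p in prototypes])
posterior = np.exp(logliks - logliks.max())
posterior /= posterior.sum()

# --- Cross-modal transfer: predict haptic features with no haptic observation ---
haptic_prediction = sum(w * haptic_forward(p) for w, p in zip(posterior, prototypes))

print("posterior over categories:", np.round(posterior, 3))
print("inferred category:", int(np.argmax(posterior)), "(true:", true_category, ")")
```

The design point the sketch is meant to convey is that the latent representation is shared: nothing in the inference step depends on which modality supplied the observation, so knowledge acquired visually transfers to haptic prediction simply by swapping the forward model.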