Embodiment matters because an intelligent system needs to acquire sensorimotor experience by interacting with the world. Our solution is to attach multiple sensors to a real agent, so that the intelligent system can see as the agent sees, hear as the agent hears and speaks, and experience the agent's environment from a first-person perspective. The function of those sensors is similar to that of the human sensory system, and they allow the intelligent system to collect multisensory data to simulate the learning process in humans. Specifically, we are developing an intelligent system that can ground semantics in visual perception and action. Spoken names of objects are associated with their perceptually grounded meanings, and action verbs are grounded in the body movements of the agent. To achieve this goal, a multimodal learning algorithm will be developed based on the spatio-temporal constraints among co-occurring data from different modalities. As a result of grounded word learning, the system can demonstrate its intelligence by describing visual events verbally.
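
To illustrate one way such spatio-temporal co-occurrence constraints could be exploited, the following toy sketch associates time-stamped spoken words with concurrently observed visual object labels. It is not the proposed algorithm; the streams, window size, and scoring rule are all hypothetical placeholders.

```python
# Toy sketch (hypothetical data and parameters): associate spoken words with
# visual object labels that co-occur within a short temporal window.
from collections import defaultdict

# Hypothetical time-stamped streams: (time in seconds, token/label).
speech_stream = [(1.2, "cup"), (3.5, "ball"), (6.1, "cup"), (9.0, "ball")]
vision_stream = [(1.0, "mug_object"), (3.4, "ball_object"),
                 (6.3, "mug_object"), (8.8, "ball_object")]

WINDOW = 0.5  # seconds; a word and an object within this window count as co-occurring

# Count co-occurrences between each spoken word and each visual object label.
cooccur = defaultdict(lambda: defaultdict(int))
word_counts = defaultdict(int)

for t_word, word in speech_stream:
    word_counts[word] += 1
    for t_obj, obj in vision_stream:
        if abs(t_word - t_obj) <= WINDOW:
            cooccur[word][obj] += 1

# Score each (word, object) pair by a simple conditional frequency
# P(object | word); a full system would use stronger statistical criteria.
for word, objs in cooccur.items():
    best_obj = max(objs, key=lambda o: objs[o] / word_counts[word])
    print(f"'{word}' is most strongly associated with {best_obj}")
```

In this sketch, temporal proximity alone drives the word-to-object mapping; the actual learning algorithm would additionally exploit spatial constraints and body-movement data to ground action verbs.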