Abstract: To enable human-level artificial intelligence, we want machines to have access to the same kind of commonsense knowledge about the world that people have. This knowledge needs to be available in large volumes, at high quality, and in a form that supports reasoning. While the Web offers a vast quantity of text on a breadth of topics, it also presents the problem of dealing with casual, unedited writing, which can lead to low-quality knowledge. Ongoing work with the KNEXT system separates good commonsense factoids from bad ones and sharpens them into stronger, quantified claims for use in the Epilog reasoning engine. This talk presents the state of the knowledge base produced by this large-scale automatic extraction and sharpening.
The talk will cover some of the methods and findings from Gordon & Schubert, Quantificational Sharpening of Commonsense Knowledge (http://cl.ly/4UeV), and Gordon, Van Durme, & Schubert, Learning from the Web (http://cl.ly/4UWZ), as well as background on our line of research and more recent results from applying the sharpening method to Web-scale text.
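As a rough illustration of what sharpening aims to do, the idea is to strengthen a weak, possibilistic factoid (e.g., "A song may have a melody") into a quantified claim (e.g., "Most songs have a melody"). The toy sketch below uses a single string-rewriting rule; the factoid wording, the naive pluralization, and the template are illustrative assumptions, not the actual KNEXT output format or the Epilog logical form.

```python
import re

def sharpen(factoid: str) -> str:
    """Toy rule: turn 'A(n) X may VP' into 'Most Xs VP' (naive plural)."""
    match = re.match(r"^An? (\w+) may (.+)$", factoid, re.IGNORECASE)
    if not match:
        return factoid  # leave factoids this toy rule can't parse unchanged
    noun, verb_phrase = match.groups()
    return f"Most {noun}s {verb_phrase}"

print(sharpen("A song may have a melody"))
# -> Most songs have a melody
```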