Invited Speakers

Project Aristo: Towards Machines that Capture and Reason with Science

Peter Clark, Allen Institute for AI


AI2's Project Aristo seeks to build a system that has a deep understanding of science, using knowledge captured mainly from large-scale text. Recently, Aristo achieved surprising success on the Grade 8 New York Regents Science Exams, scoring over 90% on the exam's non-diagram, multiple-choice (NDMC) questions, where even three years ago the best systems scored less than 60%. In this talk, I will describe Aristo's journey through the various knowledge capture technologies that have helped it over the course of its development, including acquiring if/then rules, tables, knowledge graphs, and latent neural representations. I will also discuss the growing tension between capturing structured knowledge and capturing knowledge latently in neural models, the latter proving highly effective but hard to interpret. Finally, I will speculate on the larger quest towards knowledgeable machines that can reason, explain, and interact, and how knowledge captured in both structured and latent forms might help reach this goal.
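For readers unfamiliar with the structured end of this spectrum, the following toy sketch (in Python, entirely hypothetical and not Aristo's actual machinery) shows how a hand-written if/then rule base can be applied by forward chaining to score a multiple-choice science question. The rules and facts are invented for illustration; real systems use far richer representations.

# Toy rule base: each rule derives its conclusion once all premises are known.
# Facts are opaque strings here; there is no real variable unification.
RULES = [
    ({"is_metal"}, "conducts_electricity"),
    ({"conducts_electricity", "touches_both_terminals"}, "completes_circuit"),
]

def forward_chain(facts):
    """Apply rules repeatedly until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# "Which object completes the circuit?" -- facts asserted for each option.
options = {
    "A) iron nail": {"is_metal", "touches_both_terminals"},
    "B) rubber band": {"is_insulator", "touches_both_terminals"},
}
for label, facts in options.items():
    print(label, "completes_circuit" in forward_chain(facts))
# Prints: A) iron nail True / B) rubber band False

The appeal of this style is that the derivation itself is the explanation; the tension the abstract describes is that latent neural representations answer far more questions but offer no comparable proof trace.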

Peter Clark is a Senior Research Manager and founding member of AI2, and has led Project Aristo since its inception in 2014. His research focuses on natural language processing, machine inference, and commonsense reasoning, and the interplay among these three areas. He has researched these topics for 30 years, with more than 120 refereed publications, 8,000 citations, and three best paper awards. He is a Senior Member of AAAI and co-chaired K-CAP in 2005.


ConceptNet at Twenty: Reflections on structured common sense in an era of machine learning

Catherine Havasi, MIT Media Lab


Crowdsourcing common sense training data began twenty years ago. It started with the idea to "harness the power of bored people on the Internet" to collect "what everyone knows but no one writes down". This was an era when we were all just starting to learn how to search the web; before people learned the dismal art of keywords, they simply typed in their wants and needs. Search engines were woefully unequipped for these kinds of queries, and it was in that climate that we started ConceptNet, which we originally called Open Mind Common Sense (OMCS).

Over the years, the effort and its methods evolved to address new applications. Today, we explore how structured common sense is becoming more relevant in NLP and the role it can play in helping solve problems of explainability, scalability, low-resource languages, domain transfer, and AI bias. We look at how structured common sense and transformers are complementary, and how we can combine them to keep the best of both worlds. Additionally, how does common sense need to evolve as we tackle larger, more complex, and cross-domain problems?
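As a concrete, purely illustrative example of the kind of pairing the abstract alludes to, the sketch below fetches a few assertions from ConceptNet's public API at api.conceptnet.io (the endpoint shape and the surfaceText field are assumed from the ConceptNet 5 documentation) and prepends them as plain-text context to a masked-language-model query via the Hugging Face transformers pipeline. This is a minimal sketch of one possible combination, not a method presented in the talk.

import requests
from transformers import pipeline  # pip install requests transformers

def conceptnet_facts(concept, limit=5):
    """Fetch a few English surface-form assertions about `concept`."""
    url = f"https://api.conceptnet.io/c/en/{concept}"
    edges = requests.get(url, params={"limit": 50}, timeout=10).json()["edges"]
    # surfaceText looks like "[[a knife]] is used for [[cutting]]"; strip brackets.
    facts = [e["surfaceText"].replace("[", "").replace("]", "")
             for e in edges if e.get("surfaceText")]
    return facts[:limit]

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Serialize graph assertions into text the transformer consumes natively.
context = " ".join(conceptnet_facts("knife"))
for candidate in fill_mask(f"{context} A knife is used for [MASK].")[:3]:
    print(candidate["token_str"], round(candidate["score"], 3))

Swapping in a different concept or model is a one-line change; the point is only that structured assertions can be serialized into text that a transformer consumes directly.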

Dr. Catherine Havasi is an AI researcher and entrepreneur. In the late 90s, she was a crowdsourcing pioneer, co-founding the Common Sense Computing Initiative (ConceptNet), the first crowdsourced project for artificial intelligence and the largest open knowledge graph for language understanding. Her work has spanned computational creativity, transfer and meta-learning, natural language understanding, and educational outreach, resulting in several startup companies: Dalang, Luminoso, and Learning Unlimited. She is currently a visiting scientist at the MIT Media Lab, where she previously directed the Digital Intuition group.