The recent widespread success of representation learning raises the question of which tasks are amenable to statistical machine learning, which tasks require symbolic knowledge representation, and whether we can benefit from a tight integration of both. In this talk, we will describe our latest efforts towards their synthesis, with a main focus on how logical circuits can serve as the bridge between the two.
We start with a practical question: in domains where we have symbolic knowledge in the form of logical constraints, how can we enforce it on deep neural networks? We derive a semantic loss for this purpose that captures how close the outputs of the neural network are to satisfying the constraints. Once the knowledge is compiled into a logical circuit, this loss can be computed in time linear in the size of the circuit.
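As a concrete illustration, here is a minimal sketch (the function name and direct enumeration are my own, not from the talk) of the semantic loss for the "exactly one output is true" constraint: the loss is the negative log of the weighted model count of the constraint under the network's output probabilities. For exactly-one, the satisfying assignments are the one-hot vectors, so the count can be enumerated directly; compiling the constraint into a circuit is what makes the same computation linear-time for general constraints.

```python
import math

def semantic_loss_exactly_one(probs):
    """Semantic loss -log WMC(constraint) for the exactly-one constraint,
    where probs[i] is the network's predicted probability that X_i is true.
    The satisfying assignments are the one-hot vectors, so
    WMC = sum_i p_i * prod_{j != i} (1 - p_j)."""
    wmc = 0.0
    for i, p in enumerate(probs):
        term = p
        for j, q in enumerate(probs):
            if j != i:
                term *= 1.0 - q
        wmc += term
    return -math.log(wmc)
```

For a confident one-hot prediction such as `[1.0, 0.0, 0.0]` the loss is zero, and it grows as the prediction moves away from satisfying the constraint; in practice this term is added to the usual training loss of the network.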
Next, we explain how the compiled logical circuits themselves can be converted into statistical learning models. When deployed on assorted machine learning tasks, including density estimation and image classification, these probabilistic/logistic circuits yield (near-)state-of-the-art results on multiple benchmark datasets. Moreover, parameter learning in these circuits reduces to either simple model counting or convex optimization, an efficiency that is beyond the reach of many alternative representations. Building on this, simple local search algorithms can induce strong circuit structures from data.
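To make the circuit representation concrete, the following is a minimal sketch (the node encoding and `evaluate` function are my own illustration, not the talk's implementation): a tiny probabilistic circuit over Boolean variables, where leaves are literals, product nodes multiply their children, and sum nodes take a weighted mixture. Evaluating the probability of a complete assignment is a single bottom-up pass over the nodes, which is what makes inference and parameter learning in these circuits tractable.

```python
def evaluate(node, assignment):
    """Bottom-up evaluation of a probabilistic circuit on a complete
    assignment (a dict mapping variable name -> bool)."""
    kind = node[0]
    if kind == "lit":        # ("lit", var, polarity)
        _, var, polarity = node
        return 1.0 if assignment[var] == polarity else 0.0
    if kind == "prod":       # ("prod", [children])
        result = 1.0
        for child in node[1]:
            result *= evaluate(child, assignment)
        return result
    if kind == "sum":        # ("sum", [(weight, child), ...])
        return sum(w * evaluate(c, assignment) for w, c in node[1])
    raise ValueError("unknown node kind: %r" % kind)

# Toy distribution: P(X1, X2) = 0.3 * [X1 and X2] + 0.7 * [not X1 and X2]
circuit = ("sum", [
    (0.3, ("prod", [("lit", "X1", True), ("lit", "X2", True)])),
    (0.7, ("prod", [("lit", "X1", False), ("lit", "X2", True)])),
])
```

Here `evaluate(circuit, {"X1": True, "X2": True})` returns 0.3. The mixture weights on sum nodes are the parameters; under the structural properties these circuits satisfy, fitting them from data reduces to counting or convex optimization, as described above.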