The advent of big data promises to revolutionize medicine by making it more personalized and effective, but big data also presents a grand challenge of information overload. Machine reading can play a key role in precision medicine by helping structure medical data for interpretation. Deep learning has emerged as a powerful tool for machine reading, thanks to its superior capacity for representation learning, but its applicability is limited by its reliance on annotated examples, which are difficult to produce at scale. Self-supervision is a promising direction for addressing this bottleneck, either by introducing labeling functions that automatically generate noisy examples from unlabeled text, or by imposing constraints over interdependent label decisions. Probabilistic logic offers a unifying language for representing self-supervision, but end-to-end modeling with probabilistic logic is often infeasible. In this talk, I will present deep probabilistic logic (DPL), a general framework for self-supervision that composes probabilistic logic with deep learning. DPL models label decisions as latent variables, represents prior knowledge about their relations using weighted first-order logical formulas, and uses variational EM to alternate between learning a deep neural network for the end task and refining uncertain formula weights for self-supervision. This enables us to train accurate machine readers without labeled examples and to extract knowledge from millions of publications to support precision oncology.
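The alternating scheme sketched above can be illustrated in miniature. The following is a hypothetical toy sketch, not the DPL implementation: a one-parameter logistic model stands in for the deep neural network, two hand-written labeling functions stand in for the weighted logical formulas, and the E-step/M-step alternation follows the structure the abstract describes. All names, data, and hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of a DPL-style variational EM loop (illustrative only).
# A logistic model stands in for the deep network; labeling functions stand
# in for weighted first-order formulas over latent label decisions.
import math
import random

random.seed(0)

# Synthetic unlabeled data: x = (feature,); the "true" rule is y = 1 iff x > 0.
data = [(random.uniform(-1, 1),) for _ in range(200)]

# Self-supervision: labeling functions emit a noisy vote (1, 0, or abstain).
def lf_positive(x):            # fires on clearly positive features
    return 1 if x[0] > 0.2 else None

def lf_negative(x):            # fires on clearly negative features
    return 0 if x[0] < -0.2 else None

labeling_fns = [lf_positive, lf_negative]
lf_weights = [1.0 for _ in labeling_fns]   # uncertain formula weights

# "Deep network": a one-feature logistic model trained on soft labels.
w, b = 0.0, 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(w * x[0] + b)

for em_iter in range(20):
    # E-step: variational posterior q(y=1|x) combines the network's logit
    # with the weighted votes of the labeling functions.
    posteriors = []
    for x in data:
        logit = w * x[0] + b
        for weight, lf in zip(lf_weights, labeling_fns):
            vote = lf(x)
            if vote == 1:
                logit += weight
            elif vote == 0:
                logit -= weight
        posteriors.append(sigmoid(logit))

    # M-step (network): gradient steps toward the soft labels q.
    for _ in range(50):
        gw = gb = 0.0
        for x, q in zip(data, posteriors):
            err = predict(x) - q
            gw += err * x[0]
            gb += err
        w -= 0.5 * gw / len(data)
        b -= 0.5 * gb / len(data)

    # M-step (formulas): refine each weight toward the log-odds of its
    # empirical agreement with the current posterior.
    for i, lf in enumerate(labeling_fns):
        agree, total = 0.0, 0.0
        for x, q in zip(data, posteriors):
            vote = lf(x)
            if vote is None:
                continue
            total += 1
            agree += q if vote == 1 else (1 - q)
        if total:
            acc = min(max(agree / total, 0.05), 0.95)
            lf_weights[i] = math.log(acc / (1 - acc))

# With no labeled examples, the model should recover the sign rule.
accuracy = sum((predict(x) > 0.5) == (x[0] > 0) for x in data) / len(data)
print(f"unsupervised accuracy: {accuracy:.2f}")
```

The sketch omits everything that makes DPL practical at scale (factor-graph inference over interdependent labels, a real neural reader, relational formulas), but it shows the division of labor: the formulas supply noisy supervision, the network generalizes beyond where they fire, and EM reconciles the two.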