Many knowledge bases constitute large collections of multi-relational data, providing machines with a shared understanding of human knowledge across different domains. In machine learning, there are two critical problems related to multi-relational data. One is to develop effective representation learning approaches that transform such relational knowledge into quantifiable and predictable latent representations. The other is to effectively extract relational knowledge from other modalities of data, such as text and other forms of sequence data. My presentation will be divided into two parts. In the first part, I will present new representation learning approaches for complex multi-relational data. Our research has comprehensively extended representation learning models to capture various properties of multi-relational data, preserving multilinguality, uncertainty, multi-modality, and logical properties of structured knowledge in addition to semantics. We incorporate several critical learning strategies for capturing different types of structured knowledge, including semi-supervised co-training, multi-view learning, multi-task learning, and learning with probabilistic soft logic rules. In the second part, I will briefly introduce our work on deep learning for knowledge acquisition from sequence data, focusing on two applications in NLP and bioinformatics.
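The abstract does not name a specific embedding model, but a common way to turn multi-relational triples (head, relation, tail) into latent representations is a translational embedding in the style of TransE. The sketch below is purely illustrative, assuming randomly initialized vectors and hypothetical entity and relation names; it only shows the scoring idea, not the talk's actual methods.

```python
import numpy as np

# Illustrative sketch (not from the talk): a TransE-style score treats a
# relation as a translation vector, so for a true triple (h, r, t) the
# distance ||h + r - t|| should be small after training.
rng = np.random.default_rng(0)
dim = 8
entities = {"Paris": rng.normal(size=dim), "France": rng.normal(size=dim)}
relations = {"capital_of": rng.normal(size=dim)}

def score(head, rel, tail):
    """Lower is better: distance between translated head and tail."""
    return float(np.linalg.norm(entities[head] + relations[rel] - entities[tail]))

print(score("Paris", "capital_of", "France"))
```

In practice such vectors would be learned by minimizing this distance for observed triples against corrupted negatives; the random initialization here stands in for that training step.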