The vulnerability of neural networks to adversarial examples has become a crucial issue for safety-critical applications, including autonomous driving, security cameras, and aircraft control systems. Verifying the robustness of neural networks is therefore an important task for guaranteeing safety in these settings. In this talk, I will present two linear-relaxation-based verification methods developed during my PhD, CROWN and Fast-Lin, which efficiently compute relatively tight linear upper and lower bounds on neural network outputs with respect to the input. Furthermore, I will discuss a unified convex-relaxation-based framework that covers many of the neural network verification algorithms proposed so far, as well as the fundamental limitations of these methods.
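To give a flavor of the linear relaxation idea behind these methods, the following minimal sketch (illustrative only, not the authors' implementation) shows how an "unstable" ReLU neuron, whose pre-activation is known to lie in an interval [l, u] with l < 0 < u, can be sandwiched between two linear functions of its input. The specific lower-bound slope choice here is one common heuristic; CROWN chooses such slopes adaptively per neuron.

```python
import numpy as np

def relu_linear_relaxation(l, u):
    """Return (a_low, b_low, a_up, b_up) such that, for all z in [l, u],
    a_low * z + b_low <= relu(z) <= a_up * z + b_up.
    Illustrative choice only; it does not reproduce the exact bounds of CROWN/Fast-Lin."""
    if u <= 0:    # ReLU is identically zero on [l, u]
        return 0.0, 0.0, 0.0, 0.0
    if l >= 0:    # ReLU is the identity on [l, u]
        return 1.0, 0.0, 1.0, 0.0
    # Unstable case: the tightest linear upper bound is the chord through (l, 0) and (u, u).
    slope = u / (u - l)
    a_up, b_up = slope, -slope * l
    # A simple lower bound: pick whichever of the lines y = z or y = 0 is tighter on [l, u].
    a_low, b_low = (1.0, 0.0) if u >= -l else (0.0, 0.0)
    return a_low, b_low, a_up, b_up

# Example: a neuron whose pre-activation lies in [-1, 2]
print(relu_linear_relaxation(-1.0, 2.0))  # upper line (2/3)(z + 1); lower line z
```

Propagating such per-neuron linear bounds backward through the layers of a network yields linear upper and lower bounds on each output as a function of the input, which is the core computation in this family of verification methods.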