Most recent work on multilingual QA (MLQA) has focused on zero-shot transfer learning. In this talk I'll first present strategies that bring multilingual embeddings closer together in a shared semantic space for effective cross-lingual transfer. I'll then demonstrate vulnerabilities in recent MLQA models by successfully attacking a system trained on multilingual BERT.

In the second half of the talk, I'll focus on domain adaptation of QA models and show how QA models trained on open-domain datasets like Natural Questions can be transferred to new, unseen domains. Specifically, I'll show how current SOTA neural IR models like Dense Passage Retrieval (DPR) lag behind traditional term-matching approaches such as BM25 in more specific and specialized target domains such as COVID-19, and how synthetically generated QA examples can improve performance on closed-domain retrieval and machine reading comprehension (MRC).
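The abstract doesn't name the alignment method for pulling multilingual embeddings into a shared space; one widely used baseline for this is orthogonal Procrustes alignment over a bilingual seed dictionary. A minimal NumPy sketch under that assumption (the seed lexicon below is random placeholder data):

```python
import numpy as np

def procrustes_align(src, tgt):
    """Orthogonal map W minimising ||src @ W.T - tgt||_F (Schoenemann, 1966).

    src, tgt: (n, d) embedding matrices for a bilingual seed lexicon,
    where row i of src is a translation of row i of tgt.
    """
    u, _, vt = np.linalg.svd(tgt.T @ src)
    return u @ vt

# Map every source-language vector into the target space; cross-lingual
# transfer then works by nearest-neighbour search in that shared space.
rng = np.random.default_rng(0)
src_lex, tgt_lex = rng.normal(size=(5000, 300)), rng.normal(size=(5000, 300))
W = procrustes_align(src_lex, tgt_lex)
aligned_src = src_lex @ W.T
```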
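To make the DPR-vs-BM25 comparison concrete, here is a small sketch that scores the same passages with both retrievers, assuming the `rank_bm25` package and the off-the-shelf NQ-trained DPR checkpoints from Hugging Face `transformers`; the toy passages and question are illustrative only:

```python
import torch
from rank_bm25 import BM25Okapi
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

passages = [
    "Remdesivir was evaluated in hospitalised COVID-19 patients.",
    "BM25 is a bag-of-words ranking function based on term frequency.",
]
question = "Which drug was trialled for COVID-19?"

# Sparse, term-matching relevance scores from BM25.
bm25 = BM25Okapi([p.lower().split() for p in passages])
bm25_scores = bm25.get_scores(question.lower().split())

# Dense relevance scores from DPR encoders trained on Natural Questions.
ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

with torch.no_grad():
    ctx_emb = ctx_enc(**ctx_tok(passages, padding=True, truncation=True,
                                return_tensors="pt")).pooler_output
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output

# DPR ranks by inner product between question and passage embeddings.
dpr_scores = (q_emb @ ctx_emb.T).squeeze(0)
print(list(zip(passages, bm25_scores, dpr_scores.tolist())))
```

On specialized vocabulary (drug names, gene identifiers), the NQ-trained dense encoders may have never seen the key terms, while BM25 matches them exactly, which is one intuition for the gap the talk describes.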
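Finally, a sketch of the synthetic-QA-generation idea: given in-domain passages, generate questions for candidate answer spans and use the resulting triples as training data. The checkpoint name and the `<hl>` highlight input format below are assumptions, not the speaker's setup; substitute any answer-aware question-generation model:

```python
from transformers import pipeline

# "some-org/t5-question-generation" is a placeholder checkpoint name;
# the <hl>-marked-answer input format also varies between QG models.
qg = pipeline("text2text-generation", model="some-org/t5-question-generation")

passage = ("The first COVID-19 vaccines received emergency-use "
           "authorisation in December 2020.")
answer = "December 2020"
prompt = passage.replace(answer, f"<hl> {answer} <hl>")

question = qg(prompt)[0]["generated_text"]
# (question, passage, answer) is one synthetic example; generating many such
# triples from in-domain text yields data to fine-tune both the retriever
# (e.g. DPR) and the MRC reader on the closed domain.
```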