Neural architecture search (NAS) is a promising research direction with the potential to replace expert-designed networks with learned, task-specific architectures. In this talk, I present our recent work on grounding the empirical results in this field, building on two observations: (i) NAS is a specialized hyperparameter optimization problem; and (ii) random search is a competitive baseline for hyperparameter optimization. Leveraging these observations, we evaluate simple random search baselines on two standard NAS benchmarks, PTB and CIFAR-10. Our results show that random search with efficient evaluation strategies is a competitive NAS method on both benchmarks. Finally, we examine reproducibility issues with published NAS results. We note the lack of source material needed to reproduce these results exactly, and we further discuss the robustness of published results given the various sources of variability in NAS experimental setups.
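
As a rough illustration of the kind of random-search baseline the talk refers to, below is a minimal Python sketch of random search over an architecture search space. The names `search_space`, `sample_architecture`, and `evaluate` are hypothetical placeholders, not code from the work described, and the efficient evaluation strategies mentioned above (e.g., weight sharing or partial training) are abstracted behind the `evaluate` call.

```python
import random

# Minimal sketch of random search as a NAS / hyperparameter optimization
# baseline. `search_space` maps each decision (e.g., an edge in a cell) to
# its candidate operations; `evaluate` returns a validation score for one
# sampled architecture. Both are hypothetical placeholders.

def sample_architecture(search_space, rng):
    """Draw one architecture by choosing a random operation per decision."""
    return {edge: rng.choice(ops) for edge, ops in search_space.items()}

def random_search(search_space, evaluate, num_trials=100, seed=0):
    """Evaluate independently sampled architectures and keep the best one."""
    rng = random.Random(seed)
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture(search_space, rng)
        score = evaluate(arch)  # e.g., validation accuracy after (partial) training
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```

In this framing, the only difference from generic hyperparameter search is that each "configuration" is an architecture, which is why treating NAS as a specialized hyperparameter optimization problem makes random search a natural baseline.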