Humans possess a vast base of sensorimotor knowledge that can be leveraged to accelerate robot policy learning. While existing demonstration methods generally require humans to directly manipulate the robot's state, this work uses a more natural demonstration-capture platform (i.e., mouse and keyboard) to train deep learning networks on a motion planning task. Through qualitative and quantitative analysis, we show that such demonstrations can successfully supervise imitation learning policies.
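The imitation-learning setup above amounts to supervised regression from observed states to demonstrated actions. A minimal, hypothetical behavior-cloning sketch (not the paper's actual architecture): a linear policy fit to synthetic demonstrations standing in for mouse-and-keyboard input; all names and dimensions here are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of behavior cloning: learn a policy from
# (state, action) demonstration pairs. The "expert" below is a synthetic
# linear policy standing in for captured human demonstrations.
rng = np.random.default_rng(0)

state_dim, action_dim, n_demos = 4, 2, 500
W_expert = rng.normal(size=(state_dim, action_dim))  # unknown expert policy

states = rng.normal(size=(n_demos, state_dim))
actions = states @ W_expert + 0.01 * rng.normal(size=(n_demos, action_dim))

# Fit a linear policy by least squares -- the simplest imitation learner.
W_learned, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy should closely reproduce the expert's actions.
test_state = rng.normal(size=(1, state_dim))
print(np.allclose(test_state @ W_learned, test_state @ W_expert, atol=0.05))
```

In practice the linear map would be replaced by a deep network trained with the same regression objective over the captured demonstrations.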