We present several experiments demonstrating the efficiency and scalability of a biologically inspired spatial representation on navigation tasks using artificial neural networks. Specifically, we show that encoding coordinates with Spatial Semantic Pointers (SSPs) outperforms six other proposed encoding methods when training a neural network to navigate to arbitrary goals in a 2D environment. The SSP representation generalizes naturally to larger spaces because, unlike most other methods, it does not require a boundary to be defined. Additionally, we show how this navigation policy can be integrated into a larger system that combines memory retrieval and self-localization to produce a behavioural agent capable of finding cued goal objects. We further demonstrate that explicitly incorporating a hexagonal, grid-cell-like structure in the generation of SSPs can improve performance. SSPs have previously been used to build spiking neural models of spatial cognition, and their link to higher-level cognition allows models using this representation to be integrated seamlessly into larger neural models to elicit complex behaviour.
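As background for the encoding referred to above: an SSP for a 2D coordinate is commonly constructed by raising two fixed unitary base vectors to real-valued powers (fractional binding) and combining them with circular convolution, all of which is convenient to compute in the Fourier domain. The sketch below is a minimal NumPy illustration of that construction, not the exact implementation used in the experiments; the helper names (`make_unitary`, `power`, `encode_point`) and the dimensionality are our own assumptions.

```python
import numpy as np

def make_unitary(d, rng):
    """A random unitary vector: every Fourier coefficient has magnitude 1.
    (Helper name and construction are illustrative, not from the paper.)"""
    phases = rng.uniform(-np.pi, np.pi, size=(d - 1) // 2)
    fv = np.zeros(d // 2 + 1, dtype=complex)
    fv[0] = 1.0                        # DC component fixed at 1
    fv[1:1 + len(phases)] = np.exp(1j * phases)
    if d % 2 == 0:
        fv[-1] = 1.0                   # Nyquist component must be real
    return np.fft.irfft(fv, n=d)

def power(v, exponent):
    """Fractional binding: raise v to a real-valued power in the Fourier domain."""
    return np.fft.irfft(np.fft.rfft(v) ** exponent, n=len(v))

def encode_point(x, y, X, Y):
    """SSP for coordinate (x, y): circular convolution of X^x and Y^y,
    computed as an element-wise product of Fourier coefficients."""
    fs = np.fft.rfft(power(X, x)) * np.fft.rfft(power(Y, y))
    return np.fft.irfft(fs, n=len(X))

rng = np.random.default_rng(seed=0)
d = 256                                # illustrative dimensionality
X, Y = make_unitary(d, rng), make_unitary(d, rng)
ssp = encode_point(1.5, -2.0, X, Y)    # unit-norm vector representing (1.5, -2.0)
```

Because the base vectors are unitary, every encoded point is a unit vector, and the dot product between SSPs for nearby coordinates is high while it falls off for distant ones, which is the property the navigation network exploits.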