The "backprop" algorithm has led to incredible successes for machines on object recognition tasks (among others), but how similar forms of supervised learning might occur in the brain remains unclear. We present a fully spiking, biologically plausible supervised learning algorithm that extends the Feedback Alignment (FA) algorithm to run in spiking leaky integrate-and-fire (LIF) neurons. This entirely spiking learning algorithm is a novel hypothesis about how biological systems may perform deep supervised learning. It addresses several of the key problems with the biological plausibility of backprop: 1) it does not use the transpose of the forward weight matrix to propagate errors backwards, but rather a fixed random weight matrix; 2) it does not use the derivative of the hidden unit activation function, but rather a function of the hidden neurons' filtered spiking outputs. We test the algorithm on a simple input-output function learning task with a two-hidden-layer deep network. The algorithm learns at both hidden layers, and performs much better than shallow learning. Future work includes extending the algorithm to more challenging datasets, and comparing it with other candidate algorithms for more biologically plausible learning.
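To illustrate the core FA idea referenced above, the following is a minimal rate-based sketch (not the paper's spiking LIF implementation): errors are projected backwards through a fixed random matrix `B` rather than through the transpose of the forward weights. The toy sine-fitting task, layer sizes, ReLU nonlinearity, and learning rate are all illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an input-output function learning task: y = sin(x).
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

n_in, n_hid, n_out = 1, 64, 1
W1 = rng.normal(0, 0.5, (n_in, n_hid))   # forward weights, layer 1
W2 = rng.normal(0, 0.5, (n_hid, n_out))  # forward weights, layer 2
B = rng.normal(0, 0.5, (n_out, n_hid))   # FIXED random feedback weights,
                                         # used in place of W2.T

def relu(a):
    return np.maximum(a, 0.0)

mse0 = np.mean((relu(X @ W1) @ W2 - Y) ** 2)  # loss before training

lr = 0.01
for epoch in range(500):
    # Forward pass (rate-based here; the paper's version is fully spiking).
    h = relu(X @ W1)
    y = h @ W2

    e = y - Y                  # output error
    # Feedback Alignment step: backpropagate e through random B, not W2.T.
    dh = (e @ B) * (h > 0)     # (h > 0) is ReLU's derivative

    W2 -= lr * h.T @ e / len(X)
    W1 -= lr * X.T @ dh / len(X)

mse = np.mean((relu(X @ W1) @ W2 - Y) ** 2)  # loss after training
```

Despite the backward weights never being trained or tied to the forward weights, the forward weights adapt so that the random feedback still conveys useful error information, which is the property the spiking algorithm builds on.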