Neural Network (NN)-based learning-to-rank algorithms, such as LambdaRank, are widely used by web search engine companies to improve search relevance. Because i) the cost function of the ranking problem is much more complex than that of traditional back-propagation (BP) NNs, and ii) no coarse-grained parallelism exists, LambdaRank is hard to accelerate efficiently on computer clusters or GPUs. This paper presents an FPGA-based accelerator that provides high computing performance with low power consumption. A compact deep pipeline is proposed to handle the complex computation in batch updating, and its area scales linearly with the number of hidden nodes in the algorithm. We also carefully design a data format that enables streaming consumption of the training data from the host computer. The accelerator achieves up to 17.9x (with PCIe x4) and 24.6x (with PCIe x8) speedup over the pure software implementation on datasets from a commercial search engine.
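To illustrate why the ranking cost function is more complex than a standard BP objective, the following is a minimal sketch of the LambdaRank pairwise lambda-gradient computation for a single query. It assumes the common formulation where each pair of documents with different relevance labels contributes a RankNet-style gradient scaled by the |ΔNDCG| of swapping the pair; the function names and DCG gain formula here are illustrative, not taken from the paper:

```python
import math

def dcg_gain(label, rank):
    # Standard DCG contribution of a document at 0-based rank:
    # (2^label - 1) / log2(rank + 2)
    return (2 ** label - 1) / math.log2(rank + 2)

def lambda_gradients(scores, labels, sigma=1.0):
    """Per-document lambda gradients for one query (illustrative sketch).

    scores: current model scores, labels: relevance grades (higher = better).
    Assumes at least one nonzero label so the ideal DCG is positive.
    """
    n = len(scores)
    # Rank documents by current model score, descending.
    order = sorted(range(n), key=lambda i: -scores[i])
    rank = {doc: r for r, doc in enumerate(order)}
    # Ideal DCG (best possible ordering) for NDCG normalization.
    ideal = sum(dcg_gain(l, r) for r, l in enumerate(sorted(labels, reverse=True)))
    lambdas = [0.0] * n
    for i in range(n):
        for j in range(n):
            if labels[i] <= labels[j]:
                continue  # consider only pairs where i is more relevant than j
            # |ΔNDCG| if documents i and j exchanged rank positions.
            delta = abs(dcg_gain(labels[i], rank[j]) + dcg_gain(labels[j], rank[i])
                        - dcg_gain(labels[i], rank[i]) - dcg_gain(labels[j], rank[j])) / ideal
            # RankNet pairwise gradient scaled by |ΔNDCG|.
            lam = -sigma / (1.0 + math.exp(sigma * (scores[i] - scores[j]))) * delta
            lambdas[i] += lam
            lambdas[j] -= lam
    return lambdas
```

Note that every gradient depends on the scores of all documents for the query (through both the pairwise terms and the ranking itself), which is why the cost lacks the coarse-grained, per-example parallelism of ordinary BP training.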