We wrote our core inference engine in C++11 and made heavy use of Boost.Python so that all experiments and analysis could be performed from within Python, via Continuum's Anaconda Python Distribution on OS X. All large-scale compute jobs were run via PySpark on Amazon Web Services.
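As a rough illustration of what that C++/Python boundary looks like, the sketch below shows how a C++ class might be exposed through Boost.Python. The names here (InferenceEngine, run, and the module name inference_engine) are hypothetical stand-ins for illustration, not the actual API of our engine.

```cpp
// Minimal Boost.Python binding sketch -- names are illustrative only.
#include <boost/python.hpp>
#include <string>

// Stand-in for the C++11 inference engine.
class InferenceEngine {
public:
    explicit InferenceEngine(unsigned seed) : seed_(seed) {}

    // Placeholder: a real implementation would load dataset_path,
    // run inference, and return e.g. a log-likelihood.
    double run(const std::string& dataset_path) const {
        return static_cast<double>(seed_ + dataset_path.size());
    }

private:
    unsigned seed_;
};

// Exposes the class to Python as a module named `inference_engine`,
// so Python code can write:
//   import inference_engine
//   engine = inference_engine.InferenceEngine(42)
//   score = engine.run("data/dataset.csv")
BOOST_PYTHON_MODULE(inference_engine) {
    using namespace boost::python;
    class_<InferenceEngine>("InferenceEngine", init<unsigned>())
        .def("run", &InferenceEngine::run);
}
```

Once compiled against Boost.Python into a shared library, a module like this imports like any other Python module, which is what lets the experiment-running and analysis code stay entirely on the Python side.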
There are three repositories associated with this paper:
one with the canonical datasets,
one with the code of the core inference engine,
and one with the paper text itself and much of the experiment-running infrastructure.
This split grew out of our wanting a cleaner, self-contained repository for the code, and wanting to be able to reuse the inference engine without having to download all of the figures, etc., for the paper.