It is probably fair to say that Anaconda has become the most popular Python distribution in scientific circles. This should come as no surprise, as it offers many features that set it apart from its competitors:
- a convenient installer
- support for installing several versions of the Python interpreter side by side using environments
- access to the most recent versions of many important (scientific) packages
- no admin privileges required!
Though Linux and macOS ship with their own Python interpreters, these are in many cases insufficient for development: the default package managers offer only outdated packages, and installing the latest releases with pip is often difficult, especially when binary extensions are required that depend on non-Python libraries.
With Anaconda, it is quite easy to write and build new recipes for Python and non-Python software packages and upload them to a personal distribution channel. Conda-forge makes this even easier: just write the recipe, open a pull request on GitHub, and wait for the continuous-integration build bots to generate binary packages for Linux (CircleCI), macOS (Travis CI), and Windows (AppVeyor). After the pull request is merged, the binary packages are automatically uploaded to the conda-forge channel, at which point they become available to all Anaconda users by executing the command:
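For a package published on conda-forge, that command has the following general shape (the package name here is only a placeholder):

```shell
# pull a package from the conda-forge channel rather than the default channel
conda install -c conda-forge <package-name>
```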
This procedure works fine as long as the installation script for the Python package uses distutils or setuptools: these ensure that all files are installed in the appropriate locations and that any binary extensions are compiled with the same compiler that was used to build the Python interpreter. But what happens when the Python package uses a different build system?
Enter my personal flagship project xraylib, a library providing convenient access to physical databases relevant to the field of X-ray physics. It consists of a core shared library written in ANSI C and comes with bindings for about a dozen other languages, such as Perl, IDL, Ruby, Lua, Fortran, and of course Python. In fact, there are two Python bindings: one extension module generated with SWIG (which deals with scalar arguments) and another generated with Cython (which deals with NumPy array arguments). In order to support building the core library as well as the many bindings with a single build and installation system (thereby ignoring the recommended build systems each of these language bindings has for binary extensions), I use Autotools, which consists of Autoconf, Automake, and Libtool. Libtool is key in this setup, as it enables building both shared libraries (the core C library and the Fortran bindings) and dynamically loadable plugins (the Python, Perl, Lua, Ruby, and IDL bindings) in a platform-independent manner! See also this old blogpost if you are interested in reading how I accomplished this for the Python bindings.
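To illustrate what the Libtool plugin approach can look like, here is a sketch of a Makefile.am fragment for a SWIG-generated Python module. The target, source, and library names are hypothetical, and it assumes that configure.ac calls Automake's AM_PATH_PYTHON macro (for pyexecdir) plus something like the autoconf-archive AX_PYTHON_DEVEL macro (for the Python header flags):

```makefile
# install the module into Python's site-packages; pyexecdir is
# defined by Automake's AM_PATH_PYTHON macro
pyexec_LTLIBRARIES = _xraylib.la

# the SWIG-generated wrapper source (hypothetical filename)
_xraylib_la_SOURCES = xraylib_wrap.c

# include path for Python.h (e.g. from the AX_PYTHON_DEVEL macro)
_xraylib_la_CPPFLAGS = $(PYTHON_CPPFLAGS)

# -module: build a dynamically loadable plugin without the "lib" prefix
# -avoid-version: do not append version numbers to the module filename
_xraylib_la_LDFLAGS = -module -avoid-version

# link against the core C library (assumed Libtool target name)
_xraylib_la_LIBADD = libxrl.la
```

The `-module -avoid-version` combination is what makes Libtool produce a loadable `_xraylib.so` rather than a versioned shared library.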
A typical Autotools project may be built and installed through the following three commands (after unpacking the source tarball):

    ./configure
    make
    make install
The xraylib configure script figures out where the Python extensions need to be installed, ensuring that the last command places them where they will be picked up by the interpreter without any fiddling with the PYTHONPATH environment variable. This works well in a conda recipe on both macOS and Linux, and several people have uploaded xraylib packages for these two platforms to their personal conda channels.
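In a conda recipe, those three commands translate into a short build script. A sketch of what such a build.sh might contain, assuming the $PREFIX variable that conda-build sets to the environment's install prefix:

```shell
#!/bin/sh
set -e

# configure the build to install into the conda environment;
# $PREFIX is provided by conda-build
./configure --prefix="$PREFIX"

make
make install
```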
Windows however is an entirely different beast.
Over the years, many people have asked me to provide better Python support for xraylib on Windows, in particular via pip and conda, as the Python binary extensions I ship in my xraylib Windows SDKs do not integrate easily with Python interpreters, mainly due to their dependency on specific NumPy versions.