Dask tutorials for Big Data Analysis and Machine Learning as Jupyter notebooks
MIT License
This is a collection of Jupyter notebooks intended to train the reader on different Dask concepts, from basic to advanced.
Dask provides multicore and distributed parallel execution on larger-than-memory datasets.
We can think of Dask at a high and a low level:

- **High-level collections:** Dask provides collections that mimic NumPy, lists, and Pandas but can operate in parallel on datasets that don't fit into memory. Dask's high-level collections are alternatives to NumPy and Pandas for large datasets.
- **Low-level schedulers:** Dask provides dynamic task schedulers that execute task graphs in parallel. These power the high-level collections but can also run custom, user-defined workloads. They are an alternative to direct use of the `threading` or `multiprocessing` libraries in complex cases, or to other task scheduling systems like Luigi or IPython parallel.

Different users operate at different levels, but it is useful to understand both.
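To make the high-level/low-level split concrete, here is a minimal sketch using `dask.array`, the collection that mirrors the NumPy API (the array sizes here are arbitrary examples, not from the notebooks):

```python
import dask.array as da

# A large array split into 1000x1000 chunks; each chunk is an ordinary
# NumPy array that the scheduler can process in parallel.
x = da.random.random((10_000, 10_000), chunks=(1_000, 1_000))

# Operations build a lazy task graph instead of executing immediately...
y = (x + x.T).mean(axis=0)

# ...and compute() hands that graph to a scheduler to execute.
result = y.compute()
print(result.shape)  # (10000,)
```

The high-level collection builds the task graph; the low-level scheduler decides how to run it.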
The basics of Dask can be summarized as follows:
Dask comes with four available schedulers:

- `dask.threaded.get`: a scheduler backed by a thread pool
- `dask.multiprocessing.get`: a scheduler backed by a process pool
- `dask.get`: a synchronous scheduler, good for debugging
- `distributed.Client.get`: a distributed scheduler for executing graphs on multiple machines

To select one of these for computation, you can specify it at the time of asking for a result:
```python
myvalue.compute(get=dask.get)  # synchronous scheduler, for debugging
```
or set the current default, either temporarily or globally
```python
with dask.set_options(get=dask.multiprocessing.get):
    # set temporarily for this block only
    myvalue.compute()

dask.set_options(get=dask.multiprocessing.get)
# set until further notice
```
For single-machine use, the threaded and multiprocessing schedulers are fine choices. However, for scaling out work across a cluster, the distributed scheduler is required. Indeed, this is now generally preferred for all work, because it gives you additional monitoring information not available in the other schedulers. (Some of this monitoring is also available with an explicit progress bar and profiler, see here.)
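Note that newer Dask releases replaced the `get=` keyword with `scheduler=` and `dask.set_options` with `dask.config.set`. A small runnable sketch of scheduler selection in the current API (the bag of squares here is a made-up example):

```python
import dask
import dask.bag as db

b = db.from_sequence(range(8))
squares = b.map(lambda n: n * n)

# Select the scheduler per call; "synchronous" is single-threaded,
# which makes it the best choice for debugging with pdb or print.
print(squares.compute(scheduler="synchronous"))  # [0, 1, 4, 9, 16, 25, 36, 49]

# Or set a default temporarily for a block...
with dask.config.set(scheduler="threads"):
    print(squares.sum().compute())  # 140

# ...or globally, until further notice.
dask.config.set(scheduler="threads")
```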
A good way of using these notebooks is by first cloning the repo, and then starting your own Jupyter notebook after installing all necessary packages.
```
git clone https://github.com/andersy005/dask-notebooks.git
```
and then install necessary packages.
You will need the following core libraries:

```
conda install numpy pandas h5py Pillow matplotlib scipy toolz pytables snakeviz dask distributed
```
You may find the following libraries helpful for some notebooks:

```
pip install graphviz cachey
```
In the repo directory, create the environment:

```
conda env create -f environment.yml
```

and then activate it on OSX/Linux:

```
source activate dask-tutorial
```

or on Windows:

```
activate dask-tutorial
```
You can build a docker image out of the provided Dockerfile.
Windows users can install graphviz with the following conda commands (one installs graphviz and the other installs the Python bindings for graphviz):

```
conda install -c conda-forge graphviz
conda install -c conda-forge python-graphviz
```
We will be using datasets from the KDD Cup 1999. The results of this competition can be found here.
The following notebooks can be examined individually, although they follow a more or less linear 'story' when read in sequence. They all use the same dataset to solve a related set of tasks.
- `map`
- `filter`
- `compute`
- `persist`
- `flatten`
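The functions above can be seen together in a small `dask.bag` sketch (the nested lists here are made-up stand-ins for the KDD Cup records the notebooks load from files):

```python
import dask.bag as db

# Tiny stand-in data; the real notebooks load KDD Cup records from files.
records = db.from_sequence([[1, 2, 3], [4, 5], [6]])

flat = records.flatten()                   # un-nest into a flat bag
evens = flat.filter(lambda n: n % 2 == 0)  # keep even values only
evens = evens.persist()                    # materialize and keep in memory
doubled = evens.map(lambda n: n * 10)      # transform each element

print(doubled.compute())  # [20, 40, 60]
```

`persist` is most useful with the distributed scheduler, where it keeps intermediate results in cluster memory so later steps don't recompute them.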
Contributions are welcome! For bug reports or requests please submit an issue.
This section is for myself, but feel free to fork the repo and add your contributions!