This is a collection of IPython notebooks intended to train the reader in Spark concepts, from basic to advanced, using the Python language.
A good way of using these notebooks is by first cloning the GitHub repo and then starting your own IPython notebook server in pySpark mode. For example, if we have a standalone Spark installation running on localhost with a maximum of 6GB per node assigned to IPython:

```
MASTER="spark://127.0.0.1:7077" SPARK_EXECUTOR_MEMORY="6G" IPYTHON_OPTS="notebook --pylab inline" ~/spark-1.2.1-bin-hadoop2.4/bin/pyspark
```
Notice that the path to the pyspark command will depend on your specific installation. As a requirement, you need to have Spark installed on the same machine where you are going to start the IPython notebook server.
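Once the server is up, pySpark pre-creates a SparkContext named `sc` inside the notebook, so a minimal sanity check looks like this:

```python
# `sc` is the SparkContext that pySpark pre-creates in notebook mode.
rdd = sc.parallelize(range(100))  # distribute a small local collection
print(rdd.count())                # should print 100
```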
For more Spark options see here. In general, the rule is that an option described in the form spark.executor.memory is passed as the environment variable SPARK_EXECUTOR_MEMORY when calling IPython/pySpark.
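As an illustration of that rule, here is a hypothetical sketch (not taken from the notebooks) that sets the same option programmatically through SparkConf, as you would in a standalone script where no context exists yet; the master URL and app name are illustrative:

```python
from pyspark import SparkConf, SparkContext

# Programmatic equivalent of SPARK_EXECUTOR_MEMORY="6G": the underlying
# Spark option is spark.executor.memory.
conf = (SparkConf()
        .setMaster("spark://127.0.0.1:7077")
        .setAppName("spark-notebooks")
        .set("spark.executor.memory", "6g"))
sc = SparkContext(conf=conf)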
We will be using datasets from the KDD Cup 1999.
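As a sketch of how a notebook might load the data (Python 2, matching the Spark 1.2 setup above; the reduced 10-percent file and its URL are assumptions based on the usual KDD Cup 1999 mirror):

```python
import urllib

# Download the reduced (10%) KDD Cup 1999 dataset; URL assumed.
urllib.urlretrieve(
    "http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz",
    "kddcup.data_10_percent.gz")

# Spark reads gzipped text files transparently; `sc` is the notebook's context.
raw_data = sc.textFile("./kddcup.data_10_percent.gz")
print(raw_data.count())  # number of network-connection records
```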
The following notebooks can be examined individually, although there is a more or less linear ‘story’ when they are followed in sequence. All of them use the same dataset to solve a related set of tasks with it; a combined sketch of the operations they cover follows the list.
- About reading files and parallelize.
- A look at basic RDD operations: map, filter, and collect.
- RDD sampling methods explained.
- A brief introduction to some of the RDD pseudo-set operations.
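The following toy sketch strings these operations together on illustrative data (not the KDD set) to show roughly what the notebooks cover:

```python
# Toy data; in the notebooks the RDDs come from the KDD Cup 1999 files.
data = sc.parallelize(range(10))                # RDD creation from a collection

squares = data.map(lambda x: x * x)             # basics: map...
evens = squares.filter(lambda x: x % 2 == 0)    # ...filter...
print(evens.collect())                          # ...and collect: [0, 4, 16, 36, 64]

sample = data.sample(False, 0.5, 42)            # sampling: no replacement,
print(sample.count())                           # ~50% fraction, fixed seed

a = sc.parallelize([1, 2, 3, 4])
b = sc.parallelize([3, 4, 5, 6])
print(a.union(b).distinct().collect())          # pseudo-set ops: union + distinct
print(a.subtract(b).collect())                  # subtract: elements only in a
```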
This is an ongoing project, and new notebooks will be available soon. The best way to stay up to date is to watch our GitHub repo.