Jupyter Notebook Python, Scala, R, Spark, Mesos Stack

[GitHub] https://github.com/jupyter/docker-stacks/tree/master/all-spark-notebook

 

What it Gives You


  • Jupyter Notebook 4.2.x
  • Conda Python 3.x and Python 2.7.x environments
  • Conda R 3.2.x environment
  • Scala 2.10.x
  • pyspark, pandas, matplotlib, scipy, seaborn, scikit-learn pre-installed for Python
  • ggplot2, rcurl preinstalled for R
  • Spark 1.6.0 for use in local mode or to connect to a cluster of Spark workers
  • Mesos client 0.22 binary that can communicate with a Mesos master
  • Unprivileged user jovyan (uid=1000, configurable, see options) in group users (gid=100) with ownership over /home/jovyan and /opt/conda
  • tini as the container entrypoint and start-notebook.sh as the default command
  • A start-singleuser.sh script for use as an alternate command that runs a single-user instance of the Notebook server, as required by JupyterHub
  • Options for HTTPS, password auth, and passwordless sudo

Basic Use

The following command starts a container with the Notebook server listening for HTTP connections on port 8888 without authentication configured.

docker run -d -p 8888:8888 jupyter/all-spark-notebook

Using Spark Local Mode

 

This configuration is nice for using Spark on small, local data.

In a Python Notebook

  1. Run the container as shown above.
  2. Open a Python 2 or 3 notebook.
  3. Create a SparkContext configured for local mode.

For example, the first few cells in a notebook might read:

import pyspark
sc = pyspark.SparkContext('local[*]')

# do something to prove it works
rdd = sc.parallelize(range(1000))
rdd.takeSample(False, 5)

In an R Notebook

  1. Run the container as shown above.
  2. Open an R notebook.
  3. Initialize sparkR for local mode.
  4. Initialize sparkRSQL.

For example, the first few cells in an R notebook might read:

library(SparkR)

sc <- sparkR.init("local[*]")
sqlContext <- sparkRSQL.init(sc)

# do something to prove it works
data(iris)
df <- createDataFrame(sqlContext, iris)
head(filter(df, df$Petal_Width > 0.2))

 

In an Apache Toree (Scala) Notebook

  1. Run the container as shown above.
  2. Open an Apache Toree (Scala) notebook.
  3. Use the pre-configured SparkContext in variable sc.

 

For example:

val rdd = sc.parallelize(0 to 999)
rdd.takeSample(false, 5)

Connecting to a Spark Cluster on Mesos

This configuration allows your compute cluster to scale with your data.

  1. Deploy Spark on Mesos.
  2. Configure each slave with the --no-switch_user flag or create the jovyan user on every slave node.
  3. Run the Docker container with --net=host in a location that is network addressable by all of your Spark workers. (This is a Spark networking requirement; see the example command after this list.)
  4. Follow the language-specific instructions below.
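As a sketch of step 3, the container might be started with host networking like so (no -p port mapping is needed when --net=host is used, since the container shares the host's network stack):

docker run -d --net=host jupyter/all-spark-notebook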

In a Python Notebook

  1. Open a Python 2 or 3 notebook.
  2. Create a SparkConf instance in a new notebook pointing to your Mesos master node (or Zookeeper instance) and Spark binary package location.
  3. Create a SparkContext using this configuration.

For example, the first few cells in a Python 3 notebook might read:

 

import os
# make sure pyspark tells workers to use python3 not 2 if both are installed
os.environ['PYSPARK_PYTHON'] = '/usr/bin/python3'

import pyspark
conf = pyspark.SparkConf()

# point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos)
conf.setMaster("mesos://10.10.10.10:5050")
# point to spark binary package in HDFS or on local filesystem on all slave
# nodes (e.g., file:///opt/spark/spark-1.6.0-bin-hadoop2.6.tgz)
conf.set("spark.executor.uri", "hdfs://10.10.10.10/spark/spark-1.6.0-bin-hadoop2.6.tgz")
# set other options as desired
conf.set("spark.executor.memory", "8g")
conf.set("spark.core.connection.ack.wait.timeout", "1200")

# create the context
sc = pyspark.SparkContext(conf=conf)

# do something to prove it works
rdd = sc.parallelize(range(100000000))
rdd.sumApprox(3)

To use Python 2 in the notebook and on the workers, change the PYSPARK_PYTHON environment variable to point to the location of the Python 2.x interpreter binary. If you leave this environment variable unset, it defaults to python.
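As a sketch of one way to do this, the variable can also be set when the container is started so that every kernel inherits it; the interpreter path below is an assumption and should point to wherever the Python 2.x binary lives on your workers:

docker run -d --net=host -e PYSPARK_PYTHON=/usr/bin/python2.7 jupyter/all-spark-notebook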

Of course, all of this can be hidden in an IPython kernel startup script, but "explicit is better than implicit." :)

In an R Notebook

  1. Run the container as shown above.
  2. Open an R notebook.
  3. Initialize sparkR with the Mesos master node (or Zookeeper instance) and the Spark binary package location.
  4. Initialize sparkRSQL.

For example, the first few cells in an R notebook might read:

library(SparkR)

# point to mesos master or zookeeper entry (e.g., zk://10.10.10.10:2181/mesos)
# as the first argument
# point to spark binary package in HDFS or on local filesystem on all slave
# nodes (e.g., file:///opt/spark/spark-1.6.0-bin-hadoop2.6.tgz) in sparkEnvir
# set other options in sparkEnvir
sc <- sparkR.init("mesos://10.10.10.10:5050", sparkEnvir=list(
    spark.executor.uri="hdfs://10.10.10.10/spark/spark-1.6.0-bin-hadoop2.6.tgz",
    spark.executor.memory="8g"
    )
)
sqlContext <- sparkRSQL.init(sc)

# do something to prove it works
data(iris)
df <- createDataFrame(sqlContext, iris)
head(filter(df, df$Petal_Width > 0.2))

 

In an Apache Toree (Scala) Notebook

  1. Open a terminal via New -> Terminal in the notebook interface.
  2. Add information about your cluster to the SPARK_OPTS environment variable when running the container.
  3. Open an Apache Toree (Scala) notebook.
  4. Use the pre-configured SparkContext in variable sc.

The Apache Toree kernel automatically creates a SparkContext when it starts based on configuration information from its command line arguments and environment variables. You can pass information about your Mesos cluster via the SPARK_OPTS environment variable when you spawn a container.

For instance, to pass information about a Mesos master, the Spark binary location in HDFS, and executor options, you could start the container like so:

docker run -d -p 8888:8888 -e SPARK_OPTS='--master=mesos://10.10.10.10:5050 --spark.executor.uri=hdfs://10.10.10.10/spark/spark-1.6.0-bin-hadoop2.6.tgz --spark.executor.memory=8g' jupyter/all-spark-notebook

Note that this is the same information expressed in a notebook in the Python case above. Once the kernel spec has your cluster information, you can test your cluster in an Apache Toree notebook like so:

// should print the value of --master in the kernel spec
println(sc.master)

// do something to prove it works
val rdd = sc.parallelize(0 to 99999999)
rdd.sum()

Connecting to a Spark Cluster in Standalone Mode

 

Connecting to a Spark cluster in standalone mode requires the following steps:

  1. Verify that the Docker image (check the Dockerfile) and the Spark cluster being deployed run the same version of Spark.
  2. Deploy Spark in standalone mode.
  3. Run the Docker container with --net=host in a location that is network addressable by all of your Spark workers. (This is a Spark networking requirement.)
  4. The language-specific instructions are almost the same as those above for Mesos; only the master URL changes, to something like spark://10.10.10.10:7077 (see the sketch below).
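For example, a sketch of the Toree variant that reuses the SPARK_OPTS pattern from the Mesos section with a standalone master URL (the address and memory setting are placeholders):

docker run -d -p 8888:8888 -e SPARK_OPTS='--master=spark://10.10.10.10:7077 --spark.executor.memory=8g' jupyter/all-spark-notebook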

Notebook Options

You can pass Jupyter command line options through the start-notebook.sh command when launching the container. For example, to set the base URL of the notebook server you might do the following:

docker run -d -p 8888:8888 jupyter/all-spark-notebook start-notebook.sh --NotebookApp.base_url=/some/path

 

You can sidestep the start-notebook.sh script entirely by specifying a command other than start-notebook.sh. If you do, the NB_UID and GRANT_SUDO features documented below will not work. See the Docker Options section for details.
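For instance, a minimal sketch that bypasses the script by running an interactive shell instead of the Notebook server (assuming you only want to inspect the image's contents):

docker run -it --rm jupyter/all-spark-notebook bash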

 

Docker Options

You may customize the execution of the Docker container and the Notebook server it contains with the following optional arguments; a combined example follows the list.

  • -e PASSWORD="YOURPASS" - Configures Jupyter Notebook to require the given password. Should be combined with USE_HTTPS on untrusted networks.
  • -e USE_HTTPS=yes - Configures Jupyter Notebook to accept encrypted HTTPS connections. If a pem file containing an SSL certificate and key is not provided (see below), the container will generate a self-signed certificate for you.
  • -e NB_UID=1000 - Specify the uid of the jovyan user. Useful to mount host volumes with specific file ownership. For this option to take effect, you must run the container with --user root. (The start-notebook.sh script will su jovyan after adjusting the user id.)
  • -e GRANT_SUDO=yes - Gives the jovyan user passwordless sudo capability. Useful for installing OS packages. For this option to take effect, you must run the container with --user root. (The start-notebook.sh script will su jovyan after adding jovyan to sudoers.) You should only enable sudo if you trust the user or if the container is running on an isolated host.
  • -v /some/host/folder/for/work:/home/jovyan/work - Host mounts the default working directory on the host to preserve work even when the container is destroyed and recreated (e.g., during an upgrade).
  • -v /some/host/folder/for/server.pem:/home/jovyan/.local/share/jupyter/notebook.pem - Mounts an SSL certificate plus key for USE_HTTPS. Useful if you have a real certificate for the domain under which you are running the Notebook server.
  • -p 4040:4040 - Opens the port for the Spark Monitoring and Instrumentation UI. Note that every new Spark context is put onto an incrementing port (i.e., 4040, 4041, 4042, etc.), so it might be necessary to open multiple ports. For example: docker run -d -p 8888:8888 -p 4040:4040 -p 4041:4041 jupyter/all-spark-notebook
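A sketch combining several of the options above (password, HTTPS, a custom uid, passwordless sudo, and a mounted work directory; all values are placeholders):

docker run -d -p 8888:8888 \
  --user root -e NB_UID=1100 -e GRANT_SUDO=yes \
  -e PASSWORD="YOURPASS" -e USE_HTTPS=yes \
  -v /some/host/folder/for/work:/home/jovyan/work \
  jupyter/all-spark-notebook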

SSL Certificates

The notebook server configuration in this Docker image expects the notebook.pem file mentioned above to contain a base64 encoded SSL key and at least one base64 encoded SSL certificate. The file may contain additional certificates (e.g., intermediate and root certificates).

If you have your key and certificate(s) as separate files, you must concatenate them together into the single expected PEM file. Alternatively, you can build your own configuration and Docker image in which you pass the key and certificate separately.
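As a sketch, assuming the key and certificate live in server.key and server.crt in the current directory, the concatenation and mount might look like this:

cat server.key server.crt > notebook.pem
docker run -d -p 8888:8888 -e USE_HTTPS=yes \
  -v "$PWD/notebook.pem":/home/jovyan/.local/share/jupyter/notebook.pem \
  jupyter/all-spark-notebook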

For additional information about using SSL, see the Jupyter Notebook documentation on running a notebook server.

Conda Environments

The default Python 3.x Conda environment resides in /opt/conda. A second Python 2.x Conda environment exists in /opt/conda/envs/python2. You can switch to the python2 environment in a shell by entering the following:

source activate python2

 

You can return to the default environment with this command:

source deactivate

 

The commands jupyter, ipython, python, pip, easy_install, and conda (among others) are available in both environments. For convenience, you can install packages into either environment regardless of what environment is currently active using commands like the following:

# install a package into the python2 environment
pip2 install some-package
conda install -n python2 some-package
# install a package into the default (python 3.x) environment
pip3 install some-package
conda install -n python3 some-package

 

JupyterHub


JupyterHub requires a single-user instance of the Jupyter Notebook server per user. To use this stack with JupyterHub and DockerSpawner, you must specify the container image name and override the default container run command in your jupyterhub_config.py:

# Spawn user containers from this image
c.DockerSpawner.container_image = 'jupyter/all-spark-notebook'

# Have the Spawner override the Docker run command
c.DockerSpawner.extra_create_kwargs.update({
    'command': '/usr/local/bin/start-singleuser.sh'
})

Installing Spark Using a Docker Hub Image

 

Apache Spark Image

Install the Spark image below, which has received the most stars among the Spark images on Docker Hub.

[Docker Hub] https://hub.docker.com/r/jupyter/all-spark-notebook/

[GitHub] https://github.com/jupyter/docker-stacks

 

Pull the image from Docker Hub with the following command.


sudo docker pull jupyter/all-spark-notebook  


When you run the command above, you can watch the image being downloaded and extracted.

 

 

Once the image has finished downloading, you can check the downloaded Docker image with the following command.

 

sudo docker images


 

If the image was pulled correctly, it should appear in the list printed by the command below.

 

sudo docker images

 

 

To create and run a container, use the following command.

 

sudo docker run -d -p [host port]:[container port] -e GRANT_SUDO=yes --name [container name] jupyter/all-spark-notebook

ex) sudo docker run -d -p 8888:8888 -e GRANT_SUDO=yes --name test_spark jupyter/all-spark-notebook

ex) sudo docker run -d -p 8888:8888 jupyter/all-spark-notebook


 

Use the docker ps command to check that the container is running properly.

 

sudo docker ps


If the container you just created appears in the list, the service is up and running.

Now open the machine's IP address and port (e.g., 8888) in a web browser and try out the IPython notebook.

 

[Docker Hub] https://hub.docker.com/r/sequenceiq/spark/

Apache Spark on Docker

 

This repository contains a Dockerfile to build a Docker image with Apache Spark. This Docker image depends on our previous Hadoop Docker image, available at the SequenceIQ GitHub page. The base Hadoop Docker image is also available as an official Docker image.

Pulling the image from the Docker repository

 

docker pull sequenceiq/spark:1.6.0

Building the image

docker build --rm -t sequenceiq/spark:1.6.0 .

Running the image

  • if using boot2docker make sure your VM has more than 2GB memory
  • in your /etc/hosts file add $(boot2docker ip) as host 'sandbox' to make it easier to access your sandbox UI
  • open yarn UI ports when running container
    docker run -it -p 8088:8088 -p 8042:8042 -h sandbox sequenceiq/spark:1.6.0 bash
    
    or
    docker run -d -h sandbox sequenceiq/spark:1.6.0

Versions

Hadoop 2.6.0 and Apache Spark v1.6.0 on CentOS

 

Testing

There are two deploy modes that can be used to launch Spark applications on YARN.

YARN-client mode

In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

# run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1

# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()

 

YARN-cluster mode

In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.

Estimating Pi (yarn-cluster mode):

# execute the following command, which should write "Pi is roughly 3.1418" into the logs
# note you must specify --files argument in cluster mode to enable metrics
spark-submit \
--class org.apache.spark.examples.SparkPi \
--files $SPARK_HOME/conf/metrics.properties \
--master yarn-cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar

Estimating Pi (yarn-client mode):

# execute the following command, which should print "Pi is roughly 3.1418" to the screen
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar

Installing Docker and Basic Usage

This post explains a simple installation procedure and the basic usage of Docker.

 

Create an Amazon Web Services (AWS) EC2 instance (Ubuntu)

(The docker package is available by default on AWS EC2.)


Install Docker

$ sudo apt-get install docker.io


Register the service

$ sudo update-rc.d docker.io defaults


Check the Docker Engine information

$ sudo docker.io info

Containers: 0

Images: 0

Storage Driver: devicemapper

 Pool Name: docker-202:1-18845-pool

 Data file: /var/lib/docker/devicemapper/devicemapper/data

 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata

 Data Space Used: 291.5 Mb

 Data Space Total: 102400.0 Mb

 Metadata Space Used: 0.7 Mb

 Metadata Space Total: 2048.0 Mb

Execution Driver: native-0.1

Kernel Version: 3.13.0-24-generic

WARNING: No swap limit support



Create a symbolic link so docker.io can be used as the docker command

$ sudo ln -sf /usr/bin/docker.io /usr/local/bin/docker


Grant permission to use the docker command (so the ubuntu user can use it too)

$ sudo usermod -G docker ubuntu


Log out and log back in as the ubuntu user, then run the docker info command

$ docker info

Containers: 0

Images: 0

Storage Driver: devicemapper

 Pool Name: docker-202:1-18845-pool

 Data file: /var/lib/docker/devicemapper/devicemapper/data

 Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata

 Data Space Used: 291.5 Mb

 Data Space Total: 102400.0 Mb

 Metadata Space Used: 0.7 Mb

 Metadata Space Total: 2048.0 Mb

Execution Driver: native-0.1

Kernel Version: 3.13.0-24-generic

WARNING: No swap limit support



Check the Docker client / server versions

$ docker version

Client version: 0.9.1

Go version (client): go1.2.1

Git commit (client): 3600720

Server version: 0.9.1

Git commit (server): 3600720

Go version (server): go1.2.1

Last stable version: 1.0.0, please update docker




Download the ubuntu image from the Docker Hub Registry

$ docker pull ubuntu:latest

Pulling repository ubuntu

ad892dd21d60: Download complete

511136ea3c5a: Download complete

e465fff03bce: Download complete

23f361102fae: Download complete

9db365ecbcbb: Download complete 



Check the downloaded image

$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE

ubuntu              latest              ad892dd21d60        7 days ago          275.4 MB 



Create and run a Docker container

$ docker run [options] [--name {container name}] {image name}[:{tag}] [command to run in the container] [arguments]


Key options

-d : run the container in the background.

-i : keep the container's standard input open. Specify this when interacting with the container via /bin/bash, etc.

-t : allocate a tty (terminal device). Specify this when interacting with the container via /bin/bash, etc.

-p {host port}:{container port} : map a port on the Docker host to a port in the container



Create an ubuntu1 container from the ubuntu image

$ docker run -it --name ubuntu1 ubuntu /bin/bash

root@b5c2f7a3f4de:/# 



Install and verify nginx

root@b5c2f7a3f4de:/# apt-get update

root@b5c2f7a3f4de:/# apt-get install -y nginx


root@b5c2f7a3f4de:/# dpkg -l nginx

Desired=Unknown/Install/Remove/Purge/Hold

| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend

|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)

||/ Name                                                 Version                         Architecture                    Description

+++-====================================================-===============================-===============================-==============================================================================================================


ii  nginx                                                1.4.6-1ubuntu3                  all                             small, powerful, scalable web/proxy server

root@b5c2f7a3f4de:/# 


Exit bash with [Ctrl]+[d]

- When you exit, the ubuntu1 container goes into the stopped state.



List the Docker containers

$ docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

b5c2f7a3f4de        ubuntu:latest       /bin/bash           12 minutes ago      Exit 0                                ubuntu1  


STATUS column - running = Up {uptime} / stopped = Exit {exit code}



Create a Docker image

$ docker commit {container name}|{container ID} [{user name}/]{image name}


$ docker commit ubuntu1 park/nginx

948a317510591c8af2ca49e205cc0558141ce5e18acc98c0e834cc8e93cf86cd



Check the newly created image

$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE

park/nginx          latest              948a31751059        2 minutes ago       295.9 MB

ubuntu              latest              ad892dd21d60        7 days ago          275.4 MB



Run the Docker container in the background

$ docker run -d -p 80:80 --name nginx1 park/nginx /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf

06f09849c046f5a949e7d82f52b140cfecd85b2d5f813a12c0adb431a5ea7a52




Check the container's running state

$ docker ps

CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS                NAMES

06f09849c046        park/nginx:latest   /usr/sbin/nginx -g d   27 seconds ago      Up 25 seconds       0.0.0.0:80->80/tcp   nginx1




Check that the web service is up

$ curl localhost:80



Stop a Docker container

$ docker stop {container name}|{container ID}


$ docker stop nginx1

nginx1


$ docker ps

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES



Delete a Docker container and a Docker image

$ docker rm {container name}|{container ID} => delete a container

$ docker rmi {image name}|{image ID} => delete an image


$ docker ps -a

CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES

06f09849c046        park/nginx:latest   /usr/sbin/nginx -g d   17 minutes ago      Exit 0                                  nginx1

b5c2f7a3f4de        ubuntu:latest       /bin/bash              40 minutes ago      Exit 0                                  ubuntu1



Delete the container

$ docker rm nginx1

nginx1


$ docker ps -a

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

b5c2f7a3f4de        ubuntu:latest       /bin/bash           40 minutes ago      Exit 0                                  ubuntu1


$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE

park/nginx          latest              948a31751059        25 minutes ago      295.9 MB

ubuntu              latest              ad892dd21d60        7 days ago          275.4 MB



Delete the image

$ docker rmi park/nginx

Untagged: park/nginx:latest

Deleted: 948a317510591c8af2ca49e205cc0558141ce5e18acc98c0e834cc8e93cf86cd



$ docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE

ubuntu              latest              ad892dd21d60        7 days ago          275.4 MB



Start a Docker container

$ docker start [-i] {container name}|{container ID}

$ docker start -i ubuntu1

ubuntu1

       root@b5c2f7a3f4de:/#

