Installing Spark Using a Docker Hub Image

 

Apache Spark Image

Install the Spark image below, which has the most stars among the Spark images on Docker Hub.

[Docker Hub] https://hub.docker.com/r/jupyter/all-spark-notebook/

[GitHub] https://github.com/jupyter/docker-stacks

 

Pull the image from Docker Hub with the following command.


sudo docker pull jupyter/all-spark-notebook  


When you run the command above, you can see the image layers being downloaded and extracted.

 

 

Once the image has finished downloading, you can check the downloaded Docker image with the following command.

 

sudo docker images


 

If the image was installed correctly, a listing similar to the one below should appear.

 

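The listing should look roughly like the sketch below (the exact columns depend on your Docker version; the ID, age, and size shown here are placeholders, not real values):

REPOSITORY                   TAG      IMAGE ID      CREATED       SIZE
jupyter/all-spark-notebook   latest   <image id>    <time ago>    <size>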

 

 

Use the following command to create and run a container.

 

sudo docker run -d -p [host port]:[container port] -e GRANT_SUDO=yes --name [container name] jupyter/all-spark-notebook

ex) sudo docker run -d -p 8888:8888 -e GRANT_SUDO=yes --name test_spark jupyter/all-spark-notebook

ex) sudo docker run -d -p 8888:8888 jupyter/all-spark-notebook


 

Use the docker ps command to check that the container is running properly.

 

sudo docker ps


If the container you just created appears in the list, the service is up and running.

Now open a web browser, enter the machine's IP and the port you mapped (e.g. 8888), and try out the IPython notebook.
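If the notebook page asks for a token, or you are not sure which URL to use, the container's startup log usually shows it. A minimal sketch, assuming the container was named test_spark as in the example above:

# print the container's startup log; it contains the notebook URL (and an access token on newer images)
sudo docker logs test_spark

# then browse to http://<host ip>:8888 (replace <host ip> with your machine's IP)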

 

[Docker Hub] https://hub.docker.com/r/sequenceiq/spark/

Apache Spark on Docker

 

This repository contains a Dockerfile to build a Docker image with Apache Spark. This Docker image depends on our previous Hadoop Docker image, available at the SequenceIQ GitHub page.
The base Hadoop Docker image is also available as an official Docker image.

Pull the image from Docker Repository

 

docker pull sequenceiq/spark:1.6.0

Building the image

docker build --rm -t sequenceiq/spark:1.6.0 .
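The build command above assumes the Dockerfile is in the current directory. A minimal sketch of getting it there, assuming the source lives in the SequenceIQ docker-spark repository on GitHub:

# clone the repository that contains the Dockerfile, then build the image from it
git clone https://github.com/sequenceiq/docker-spark.git
cd docker-spark
docker build --rm -t sequenceiq/spark:1.6.0 .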

Running the image

  • if using boot2docker make sure your VM has more than 2GB memory
  • in your /etc/hosts file add $(boot2docker ip) as host 'sandbox' to make it easier to access your sandbox UI (see the sketch after this list)
  • open yarn UI ports when running container
    docker run -it -p 8088:8088 -p 8042:8042 -h sandbox sequenceiq/spark:1.6.0 bash
    
    or
    docker run -d -h sandbox sequenceiq/spark:1.6.0
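A minimal sketch of the /etc/hosts step mentioned in the list above, assuming you run it on the boot2docker host (editing /etc/hosts requires root):

# append the boot2docker VM's IP as host 'sandbox'
echo "$(boot2docker ip) sandbox" | sudo tee -a /etc/hosts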

Versions

Hadoop 2.6.0 and Apache Spark v1.6.0 on CentOS

 

Testing

There are two deploy modes that can be used to launch Spark applications on YARN.

YARN-client mode

In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.

# run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1

# execute the following command, which should return 1000
scala> sc.parallelize(1 to 1000).count()

 

YARN-cluster mode

In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application.

Estimating Pi (yarn-cluster mode):

# execute the following command, which should write "Pi is roughly 3.1418" into the logs
# note you must specify --files argument in cluster mode to enable metrics
spark-submit \
--class org.apache.spark.examples.SparkPi \
--files $SPARK_HOME/conf/metrics.properties \
--master yarn-cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar
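
In yarn-cluster mode the driver output ends up in YARN's logs rather than on your terminal. A minimal sketch of looking it up, assuming the application id printed by spark-submit (the id below is a placeholder):

# fetch the aggregated logs for the finished application and search for the result
yarn logs -applicationId <application id> | grep "Pi is roughly"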

Estimating Pi (yarn-client mode):

# execute the following command, which should print "Pi is roughly 3.1418" to the screen
spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1 \
$SPARK_HOME/lib/spark-examples-1.6.0-hadoop2.6.0.jar