Build Spark on your local machine (this is only necessary if you are using PySpark; otherwise, building on a remote machine works fine) (http://spark.apache.org/docs/latest/building-with-maven.html)
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package
Copy the assembly/target/scala-2.10/...jar to the corresponding directory on the cluster node and also to a location in HDFS.
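For example (the hostname and local paths below are placeholders; the HDFS destination matches the SPARK_JAR setting below):
# from your local machine: copy the jar to the same relative location on the cluster node (hypothetical host "mynode")
scp assembly/target/scala-2.10/spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar mynode:spark/assembly/target/scala-2.10/
# on the cluster node: also put a copy into HDFS
hadoop fs -mkdir -p /user/laserson/tmp
hadoop fs -put spark/assembly/target/scala-2.10/spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar /user/laserson/tmp/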
Set the Spark JAR HDFS location
export SPARK_JAR=hdfs:///user/laserson/tmp/spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar
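To double-check that the jar is actually at the path SPARK_JAR points to:
hadoop fs -ls /user/laserson/tmp/spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar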
On the cluster node, start up the shell
# set JAVA_HOME by following where `which java` actually points (resolve the symlinks)
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/jre
export HADOOP_CONF_DIR=/etc/hadoop/conf
export IPYTHON=1
bin/pyspark --master yarn-client --num-executors 6 --executor-memory 4g --executor-cores 12
If you just want to use the Scala spark-shell, you can build Spark on the cluster too.
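In that case the launch looks the same as above, just with spark-shell instead of pyspark (no IPYTHON needed; the executor settings simply mirror the PySpark example and should be adjusted for your cluster):
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/jre
export HADOOP_CONF_DIR=/etc/hadoop/conf
bin/spark-shell --master yarn-client --num-executors 6 --executor-memory 4g --executor-cores 12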