The complete script is as follows:
echo "Starting dfs!"
/opt/bmr/hadoop/sbin/start-dfs.sh
echo "
*******************************************************************"
echo "Starting copy!"
# Seed Spark's slaves file from the bundled template.
cp /opt/bmr/spark/conf/slaves.template /opt/bmr/spark/conf/slaves
echo "Copy finished!"
echo "Writing!"
sed -i '/localhost/d' /opt/bmr/spark/conf/slaves
for slaves_home in $(cat /opt/bmr/hadoop/etc/hadoop/slaves)
do
    echo "$slaves_home" >> /opt/bmr/spark/conf/slaves
done
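# Note: the unquoted $(cat ...) above relies on word splitting, which is fine
# here because the Hadoop slaves file lists one hostname per line.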
echo "
*******************************************************************"
echo "Starting spark!"
/opt/bmr/spark/sbin/start-all.sh
echo "
*******************************************************************"
echo "Watching the threads"
jps
If the Master process shows up in the jps output, everything is up and running!
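If you would rather have the script verify this itself, a minimal check along the following lines could be appended at the end (a sketch, assuming jps is on the PATH):

# Minimal sketch: fail loudly if the Spark Master JVM is not listed by jps
if jps | grep -qw "Master"; then
    echo "Spark Master is running."
else
    echo "Spark Master not found - check the Spark logs." >&2
    exit 1
fi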
Conclusion
Just save the code above into a .sh file, give it executable permission, and you are done. In theory the Spark paths on Baidu BMR are identical across clusters, so the script should work on any of them; hopefully it spares you the hassle of redoing this configuration every time.
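For example, assuming the script was saved as setup-spark.sh (the filename here is just a placeholder):

# Make the script executable, then run it once per cluster setup.
chmod +x setup-spark.sh
./setup-spark.sh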