Using Spark SQL: the Thrift JDBC Server

Thrift JDBC Server overview

The Thrift JDBC Server is based on the HiveServer2 implementation from Hive 0.12. You can interact with it using the beeline script shipped with Spark or with Hive 0.12. By default, the Thrift JDBC Server listens on port 10000.

Before using the Thrift JDBC Server, note the following:

1. Copy the hive-site.xml configuration file into the $SPARK_HOME/conf directory;

2. Add the JDBC driver jar to SPARK_CLASSPATH in $SPARK_HOME/conf/spark-env.sh:

export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/Hadoop/software/mysql-connector-java-5.1.27-bin.jar
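Taken together, the two preparation steps can be sketched as follows. Note this is a sketch, not part of the original setup: the Hive configuration path is a placeholder you must adjust to your own install, and the driver jar path is the one used above.

```shell
# Placeholder path: point HIVE_CONF at your own Hive conf directory.
HIVE_CONF=/home/Hadoop/software/hive/conf
# Step 1: make the Hive metastore configuration visible to Spark
cp "$HIVE_CONF/hive-site.xml" "$SPARK_HOME/conf/"
# Step 2: put the metastore's JDBC driver on Spark's classpath
echo 'export SPARK_CLASSPATH=$SPARK_CLASSPATH:/home/Hadoop/software/mysql-connector-java-5.1.27-bin.jar' \
  >> "$SPARK_HOME/conf/spark-env.sh"
```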

Thrift JDBC Server command-line help:

cd $SPARK_HOME/sbin
./start-thriftserver.sh --help


Spark assembly has been built with Hive, including Datanucleus jars on classpath
Usage: ./sbin/start-thriftserver [options] [thrift server options]
Options:
  --master MASTER_URL         spark://host:port, mesos://host:port, yarn, or local.
  --deploy-mode DEPLOY_MODE   Whether to launch the driver program locally ("client") or
                              on one of the worker machines inside the cluster ("cluster")
                              (Default: client).
  --class CLASS_NAME          Your application's main class (for Java / Scala apps).
  --name NAME                 A name of your application.
  --jars JARS                 Comma-separated list of local jars to include on the driver
                              and executor classpaths.
  --py-files PY_FILES         Comma-separated list of .zip, .egg, or .py files to place
                              on the PYTHONPATH for Python apps.
  --files FILES               Comma-separated list of files to be placed in the working
                              directory of each executor.

  --conf PROP=VALUE           Arbitrary Spark configuration property.
  --properties-file FILE      Path to a file from which to load extra properties. If not
                              specified, this will look for conf/spark-defaults.conf.

  --driver-memory MEM         Memory for driver (e.g. 1000M, 2G) (Default: 512M).
  --driver-java-options       Extra Java options to pass to the driver.
  --driver-library-path       Extra library path entries to pass to the driver.
  --driver-class-path         Extra class path entries to pass to the driver. Note that
                              jars added with --jars are automatically included in the
                              classpath.

  --executor-memory MEM       Memory per executor (e.g. 1000M, 2G) (Default: 1G).

  --help, -h                  Show this help message and exit
  --verbose, -v               Print additional debug output

 Spark standalone with cluster deploy mode only:
  --driver-cores NUM          Cores for driver (Default: 1).
  --supervise                 If given, restarts the driver on failure.

 Spark standalone and Mesos only:
  --total-executor-cores NUM  Total cores for all executors.

 YARN-only:
  --executor-cores NUM        Number of cores per executor (Default: 1).
  --queue QUEUE_NAME          The YARN queue to submit to (Default: "default").
  --num-executors NUM         Number of executors to launch (Default: 2).
  --archives ARCHIVES         Comma separated list of archives to be extracted into the
                              working directory of each executor.

Thrift server options:
    --hiveconf <property=value>  Use value for given property

The meaning of --master is the same as for the Spark SQL CLI.
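As a concrete illustration, a start command might combine the generic Spark options with a Thrift-server-specific --hiveconf override. This is a sketch: the master URL and the alternative port below are placeholders, not values from this article.

```shell
cd $SPARK_HOME/sbin
# Placeholder master URL; hive.server2.thrift.port overrides the default port 10000.
./start-thriftserver.sh \
  --master spark://hadoop000:7077 \
  --driver-memory 1G \
  --hiveconf hive.server2.thrift.port=10001
```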

beeline command-line help:

cd $SPARK_HOME/bin
./beeline --help
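Once the server is up, beeline connects with a HiveServer2-style JDBC URL. A minimal sketch of building that URL, assuming the server runs locally on the default port 10000 mentioned above (host and username are placeholders):

```shell
# Placeholder host; 10000 is the Thrift JDBC Server's default listen port.
THRIFT_HOST=localhost
THRIFT_PORT=10000
JDBC_URL="jdbc:hive2://${THRIFT_HOST}:${THRIFT_PORT}"
echo "$JDBC_URL"   # jdbc:hive2://localhost:10000
# Connect with, e.g.: ./beeline -u "$JDBC_URL" -n <username>
```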
