Druid 0.15.0 Installation Guide

1 Cluster Planning
Master nodes run the Coordinator and Overlord: 4 cores / 16 GB RAM × 2;
Data nodes run the Historical and MiddleManager: 16 cores / 64 GB RAM × 3;
Query nodes run the Broker and Router: 4 cores / 16 GB RAM × 1.
1.1 Hadoop Configuration Files

This installation uses HDFS as deep storage. On each of the 3 data nodes, go to /data1/druid/druid-0.15.0/conf/druid/cluster/_common and create a symlink to the node's Hadoop configuration directory. This step is required for Druid to recognize Hadoop HA mode; without it, deep storage on HDFS cannot resolve the HA nameservice paths.
ln -s /usr/hdp/2.6.5.0-292/hadoop/conf hadoop-xml

1.2 JDK 1.8 installation (omitted here).

1.3 Each data node also serves as an HDFS DataNode (omitted here).

1.4 common configuration: log4j2.xml
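The symlinked Hadoop files only handle path resolution; the deep-storage settings themselves go in conf/druid/cluster/_common/common.runtime.properties. A minimal sketch of the HDFS-related entries, assuming default segment and task-log directories (your loadList will typically contain other extensions as well; adjust paths to your cluster):

```properties
# Load the HDFS deep-storage extension
druid.extensions.loadList=["druid-hdfs-storage"]

# Store segments in HDFS (paths here are illustrative)
druid.storage.type=hdfs
druid.storage.storageDirectory=/druid/segments

# Store indexing task logs in HDFS as well
druid.indexer.logs.type=hdfs
druid.indexer.logs.directory=/druid/indexing-logs
```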
<Configuration status="WARN">
  <Properties>
    <Property name="log.path">/data1/druid/log</Property>
  </Properties>
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/>
    </Console>
    <File name="log" fileName="${log.path}/one.log" append="false">
      <PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss:SSS}] [%p] - %l - %m%n"/>
    </File>
    <RollingFile name="RollingFileInfo" fileName="${log.path}/druid-data.log"
                 filePattern="${log.path}/druid-data-%d{yyyy-MM-dd}-%i.out">
      <ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY"/>
      <PatternLayout pattern="[%d{yyyy-MM-dd HH:mm:ss:SSS}] [%p] - %l - %m%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy modulate="true" interval="1"/>
        <SizeBasedTriggeringPolicy size="100 MB"/>
      </Policies>
    </RollingFile>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
      <AppenderRef ref="RollingFileInfo"/>
      <AppenderRef ref="log"/>
    </Root>
  </Loggers>
</Configuration>
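A mistyped appender name in log4j2.xml fails silently at startup, so it is worth sanity-checking the file before rolling it out to all nodes. The sketch below (the helper name check_log4j2 and the trimmed config copy are illustrative, not part of the install) parses the XML and reports any appender reference under Loggers that does not match an appender defined under Appenders:

```python
# Sanity-check a log4j2.xml: every appender reference under <Loggers>
# must point at an appender defined under <Appenders>.
import xml.etree.ElementTree as ET

def check_log4j2(xml_text):
    """Return the list of unresolved appender refs (empty list = OK)."""
    root = ET.fromstring(xml_text)
    appenders = {el.get("name") for el in root.find("Appenders")}
    # Accept both <AppenderRef> and legacy <appender-ref> spellings
    refs = [el.get("ref")
            for logger in root.find("Loggers")
            for el in logger
            if el.tag.lower().replace("-", "") == "appenderref"]
    return [r for r in refs if r not in appenders]

# Trimmed copy of this install's config (attributes shortened for the check)
config = """<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT"/>
    <File name="log" fileName="/data1/druid/log/one.log"/>
    <RollingFile name="RollingFileInfo"
                 fileName="/data1/druid/log/druid-data.log"
                 filePattern="/data1/druid/log/druid-data-%d{yyyy-MM-dd}-%i.out"/>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>
      <AppenderRef ref="RollingFileInfo"/>
      <AppenderRef ref="log"/>
    </Root>
  </Loggers>
</Configuration>"""

print(check_log4j2(config))  # [] means all references resolve
```

In practice you would read the real file with open("log4j2.xml").read() instead of the embedded string and run the check on each node before restarting the Druid services.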