Version: 3.3-SNAPSHOT

Notice

This manual covers deployment of both development and production environments. When deploying a development environment, append the _dev suffix to every database name involved (for example, nexus_datavs becomes nexus_datavs_dev).

I. System Architecture Diagram

II. System Deployment Diagram


III. Server Requirements

| No. | Key Item | Test / Production | Hardware Scaling | Installed Content | Quantity | Specification |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Big data cluster | Can be shared with the data middle platform | Configurable | Big data cluster | 4 | 4 × 16-core CPU, 64 GB RAM, 4 × 1 TB storage, Linux servers |
| 2 | Application servers (production) | Production | Configurable | Data development platform DVS; customer data platform CDP; real-time event platform EDT; marketing automation platform ME; community operations SCRM | 4 | 4 × 8-core CPU, 32 GB RAM, 4 × 500 GB storage, Linux servers |
| 3 | Database server (production) | Production | Configurable | Customer operations platform | 1 | 8-core CPU, 32 GB RAM, 2 TB storage, Linux server |
| 4 | Application servers (test) | Test (can be shut down when idle) | Configurable | Data development platform DVS; customer data platform CDP; real-time event platform EDT; marketing automation platform ME; community operations SCRM | 3 | 3 × 8-core CPU, 32 GB RAM, 3 × 500 GB storage, Linux servers |

IV. Operating System Requirements

  • Recommended Linux distributions:
    • International: CentOS 7 Stream
    • Domestic (China): UnionTech UOS

V. Component Installation List

| Base Component | Version |
| --- | --- |
| jdk | 1.8+ |
| mysql | 5.7+ |
| redis | 5.0.13 |
| kafka | 2.13-3.5.1 |
| nginx | 1.20.1 |
| postgresql | 15.6 |
| mariadb | 10.4.12 |
| minio | RELEASE.2023-09-07T02-05-02Z |
| Big Data Component | Version | Shared Big Data Resources | Resource Estimate |
| --- | --- | --- | --- |
| zookeeper | 3.6.3 | | |
| hadoop | 3.3.6 | | 240 GB RAM, 4 TB disk |
| hive | 3.1.2 | Uses Hadoop cluster resources | |
| hbase | 2.5.5 | | 500 GB disk |
| flink | 1.17.1 | | 64 GB RAM |
| spark | 3.4.1 | Uses Hadoop cluster resources | |
| yarn | | Uses Hadoop cluster resources, with its own separate configuration | |
| clickhouse | 23.2.1 | | 64 GB RAM, 500 GB disk |
| scala | 2.12.17 | | |
| Database Name | Database Type | Database Version | Notes |
| --- | --- | --- | --- |
| nacos_prod | mysql | 10.4.12-MariaDB-log / 5.7.3 | Nacos configuration |
| nexus_datavs | mysql | 10.4.12-MariaDB-log / 5.7.3 | dvs |
| nexus_port | mysql | 10.4.12-MariaDB-log / 5.7.3 | dvs |
| xxl_job | mysql | 10.4.12-MariaDB-log / 5.7.3 | dvs |
| nexus_cdp_web | postgresql | PostgreSQL 15.4 | cdp |
| nexus_edt | postgresql | PostgreSQL 15.4 | edt |
| nexus_job | postgresql | PostgreSQL 15.4 | cdp |
| nexus_me | postgresql | PostgreSQL 15.4 | cdp |
| nexus_midware | postgresql | PostgreSQL 15.4 | cdp |

VI. System Deployment Procedure

All services run as jar packages, one jar per service, and each jar is launched by its own script. An example of a deployed application follows:

Location: /data/standard/dvs/dataverse-admin-service.jar

Start script: sh start.sh

```bash
echo 'start dataverse admin'

cd `pwd`

# Kill any running instance before restarting
kill -9 `ps -ef | grep dataverse-admin-service | grep -v grep | awk '{print $2}'`

# JVM system properties (-D...) must precede -jar, otherwise the JVM
# passes them to the application as plain program arguments
nohup java -Xms512m -Xmx1024m -Djava.security.egd=file:/dev/./urandom -jar ./dataverse-admin-service.jar > nohup.log 2>&1 &
```
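The package ships only a start script; if a plain stop action is needed, the same process-lookup pattern can be reused. A minimal sketch (the stop.sh name and the SERVICE variable are assumptions, not part of the delivered package):

```bash
#!/bin/bash
# stop.sh -- hypothetical companion to start.sh; reuses the same
# ps/grep pattern the start script uses to find the service process
SERVICE=dataverse-admin-service
PID=`ps -ef | grep $SERVICE | grep -v grep | awk '{print $2}'`
if [ -n "$PID" ]; then
  echo "stopping $SERVICE (pid $PID)"
  kill -9 $PID
else
  echo "$SERVICE is not running"
fi
```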
| Application | Deployment Path (Jar) | Project | Database | Machine | Heap | Notes |
| --- | --- | --- | --- | --- | --- | --- |
| dvs-admin | /data/standard/dvs/dataverse-admin-service.jar | dvs | mysql | M1 | 1G | |
| dvs-manage | /data/standard/dvs/dataverse-manage-service.jar | dvs | mysql | M1 | 1G | |
| dvs-port | /data/standard/dvs/dataverse-port-server.jar, dataverse-spark-engine.jar | dvs | mysql | M1 | 1G | Only start dataverse-port-server.jar |
| dvs-gateway | /data/standard/dvs/dataverse-gateway.jar | dvs | mysql | M1 | 1G | |
| xxl-job | /data/standard/dvs/xxl-job-admin.jar | dvs | mysql | M1 | 1G | |
| midware-upms | /data/standard/cdp/midware/nexus-midware-upms-biz.jar | cdp | postgresql | M1 | 1G | |
| midware-auth | /data/standard/cdp/midware/nexus-midware-auth.jar | cdp | postgresql | M1 | 1G | |
| midware-gateway | /data/standard/cdp/midware/nexus-midware-gateway.jar | cdp | postgresql | M1 | 1G | |
| midware-file | /data/standard/cdp/midware/nexus-midware-file.jar | cdp | postgresql | M1 | 1G | |
| midware-i18n | /data/standard/cdp/midware/nexus-midware-i18n.jar | cdp | postgresql | M1 | 1G | |
| midware-tenant | /data/standard/cdp/midware/nexus-midware-tenant-biz.jar | cdp | postgresql | M1 | 1G | |
| midware-job-admin | /data/standard/cdp/midware/nexus-midware-job-admin.jar | cdp | postgresql | M1 | 1G | |
| behavior | /data/standard/cdp/behavior/nexus-behavior-service.jar, nexus-behavior-worker.jar | cdp | postgresql | M1 | 1G | Only start nexus-behavior-service.jar |
| box | /data/standard/cdp/box/nexus-proxy-server.jar | cdp | postgresql | M1 | 1G | |
| data | /data/standard/cdp/data/nexus-data-service.jar | cdp | postgresql | M1 | 1G | |
| data-auth | /data/standard/cdp/data-auth/nexus-data-auth-service.jar | cdp | postgresql | M1 | 1G | |
| label | /data/standard/cdp/label/nexus-label-service.jar, nexus-label-worker.jar | cdp | postgresql | M1 | 1G | Only start nexus-label-service.jar |
| label-open | /data/standard/cdp/label-open/nexus-label-open.jar | cdp | postgresql | M1 | 1G | |
| nexus3-me-audience-service | /data/standard/me/nexus3-me-audience-service/nexus-me-audience-service.jar | me | postgresql | M2 | 1G | |
| nexus3-me-automarketing-service | /data/standard/me/nexus3-me-automarketing-service/nexus-me-automarketing-service.jar | me | postgresql | M2 | 1G | |
| nexus3-me-channel-biz | /data/standard/me/nexus3-me-channel-biz/nexus-me-channel-biz.jar | me | postgresql | M2 | 1G | |
| nexus3-me-channel-gateway | /data/standard/me/nexus3-me-channel-gateway/nexus-me-channel-gateway.jar | me | postgresql | M2 | 1G | |
| nexus3-me-channel-weixin | /data/standard/me/nexus3-me-channel-weixin/nexus-me-channel-weixin-open.jar | me | postgresql | M2 | 1G | |
| nexus3-me-event-biz | /data/standard/me/nexus3-me-event-biz/nexus-me-event-biz.jar | me | postgresql | M2 | 1G | |
| nexus3-me-flowrecord-service | /data/standard/me/nexus3-me-flowrecord-service/nexus-me-flowrecord-service.jar | me | postgresql | M2 | 1G | |
| nexus3-me-release-service | /data/standard/me/nexus3-me-release-service/nexus-me-release-service.jar | me | postgresql | M2 | 1G | |
| nexus3-me-sms-service | /data/standard/me/nexus3-me-sms-service/nexus-me-sms-service.jar | me | postgresql | M2 | 1G | |
| nexus-event-open | /data/standard/me/nexus-event-open/nexus-event-open.jar | me | postgresql | M2 | 1G | |
| stream-acceptor | /data/standard/me/stream/stream-acceptor.jar | me | postgresql | M2 | 1G | |
| stream-executor-ack | /data/standard/me/stream/stream-executor-ack.jar | me | postgresql | M2 | 1G | |
| stream-executor-coupon | /data/standard/me/stream/stream-executor-coupon.jar | me | postgresql | M2 | 1G | |
| stream-executor-integral | /data/standard/me/stream/stream-executor-integral.jar | me | postgresql | M2 | 1G | |
| stream-executor-label | /data/standard/me/stream/stream-executor-label.jar | me | postgresql | M2 | 1G | |
| stream-executor-logger | /data/standard/me/stream/stream-executor-logger.jar | me | postgresql | M2 | 1G | |
| stream-executor-realevent | /data/standard/me/stream/stream-executor-realevent.jar | me | postgresql | M2 | 1G | |
| stream-executor-sealer | /data/standard/me/stream/stream-executor-sealer.jar | me | postgresql | M2 | 1G | |
| stream-executor-sms | /data/standard/me/stream/stream-executor-sms.jar | me | postgresql | M2 | 1G | |
| stream-executor-timer | /data/standard/me/stream/stream-executor-timer.jar | me | postgresql | M2 | 1G | |
| stream-executor-wechat | /data/standard/me/stream/stream-executor-wechat.jar | me | postgresql | M2 | 1G | |
| stream-route-dispenser | /data/standard/me/stream/stream-route-dispenser.jar | me | postgresql | M2 | 1G | |
| nexus3-fdre | /data/standard/me/stream/fdre-apps.jar | edt | postgresql | M3 | 1G | |
| edt-management-service | /data/standard/edt/edt-management-service/edt-management-service.jar | edt | postgresql | M3 | 1G | |
| edt-report-api | /data/standard/edt/edt-report-api/edt-report-api.jar | edt | postgresql | M3 | 1G | |
| edt-event-process-pipeline | /data/standard/edt/edt-event-process-pipeline/ | edt | postgresql | M3 | 1G | |
| lw-event-task | /data/standard/scrm/lw-event-task/lw-event-task.jar | SCRM (Phase 3) | mysql | M3 | 1G | |
| lw-wx | /data/standard/scrm/lw-wx/lw-wx.jar | SCRM (Phase 3) | mysql | M3 | 1G | |
| lw-auth | /data/standard/scrm/lw-auth/lw-auth.jar | SCRM (Phase 3) | mysql | M3 | 1G | |
| lw-api | /data/standard/scrm/lw-api/lw-api.jar | SCRM (Phase 3) | mysql | M3 | 1G | |

VII. Starting via Script

Example:

```bash
echo 'start nexus-label-service'

cd `pwd`

# Kill any running instance before restarting
kill -9 `ps -ef | grep nexus-label-service | grep -v grep | awk '{print $2}'`

# JVM system properties (-D...) must precede -jar; Spring-style
# options (--...) follow the jar as program arguments
nohup java -Xms1024m -Xmx1568m $SKYWORKING_OPTS \
  -Dspring.cloud.nacos.config.namespace=323220b1-b5cf-4d12-a704-1e3a0235d12b \
  -Dspring.cloud.nacos.discovery.namespace=323220b1-b5cf-4d12-a704-1e3a0235d12b \
  -Dspring.cloud.nacos.config.server-addr=10.25.19.3:8848 \
  -Dspring.cloud.nacos.discovery.server-addr=10.25.19.3:8848 \
  -Dspring.cloud.nacos.discovery.group=standard \
  -Dspring.cloud.nacos.config.group=standard \
  -Djava.security.egd=file:/dev/./urandom \
  -jar ./nexus-label-service.jar --spring.profiles.active=dev > nohup.log 2>&1 &
```
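After launching, it is worth confirming that the process came up and watching the startup log; a quick check using the script's own nohup.log location:

```bash
# Confirm the service process is running
ps -ef | grep nexus-label-service | grep -v grep

# Watch the startup log for Nacos registration messages or stack traces
tail -f nohup.log
```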

VIII. Installing Components

1. Install the JDK and configure environment variables

```bash
# Extract the archive:
tar -zxvf jdk1.8.0_181.tar.gz
# Create a symlink:
ln -s jdk1.8.0_181 /data/jdk
# Configure environment variables:
touch /etc/profile.d/jdk_env.sh
# Add the following content:
export JAVA_HOME=/data/jdk
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$JAVA_HOME/lib
export JRE_HOME=$JAVA_HOME/jre
```

Verify the result:
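A quick verification that the environment took effect (the exact version string depends on the build installed):

```bash
# Reload the environment and confirm the JDK is resolvable
source /etc/profile.d/jdk_env.sh
java -version   # expect: java version "1.8.0_181"
```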

2. Install MySQL

```bash
# Install the MySQL server
sudo yum install mysql-server
# Start the MySQL service
sudo systemctl start mysqld
# Enable MySQL at boot
sudo systemctl enable mysqld
# Run the security script to set a root password and harden defaults
sudo mysql_secure_installation
# Log in to MySQL
mysql -u root -p
```
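With the server running, the MySQL-type databases from the list in Section V can be created up front. A sketch, assuming utf8mb4 as the character set (append the _dev suffix for a development environment):

```bash
# Create the MySQL-backed databases listed in Section V
mysql -u root -p -e "
  CREATE DATABASE IF NOT EXISTS nacos_prod   DEFAULT CHARACTER SET utf8mb4;
  CREATE DATABASE IF NOT EXISTS nexus_datavs DEFAULT CHARACTER SET utf8mb4;
  CREATE DATABASE IF NOT EXISTS nexus_port   DEFAULT CHARACTER SET utf8mb4;
  CREATE DATABASE IF NOT EXISTS xxl_job      DEFAULT CHARACTER SET utf8mb4;
"
```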

3. Install PostgreSQL

```bash
# Install the repository RPM:
sudo yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
# Install PostgreSQL:
sudo yum install -y postgresql15-server
# Optionally initialize the database and enable automatic start:
sudo /usr/pgsql-15/bin/postgresql-15-setup initdb
sudo systemctl enable postgresql-15
sudo systemctl start postgresql-15
```
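Likewise, the PostgreSQL databases from Section V can be created once the cluster is initialized; a sketch, run as the postgres superuser (append the _dev suffix for a development environment):

```bash
# Create the PostgreSQL databases listed in Section V
for db in nexus_cdp_web nexus_edt nexus_job nexus_me nexus_midware; do
  sudo -u postgres psql -c "CREATE DATABASE $db;"
done
```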

4. Install Redis

```bash
# Extract (the official tarball is named redis-5.0.13.tar.gz)
tar -zxvf redis-5.0.13.tar.gz
# Compile
cd redis-5.0.13
make
# Install
sudo make install
# In redis.conf, run as a background daemon:
#   daemonize yes
# Start Redis
./bin/redis-server ./bin/redis.conf
```
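Verify the server responds:

```bash
redis-cli ping   # expected reply: PONG
```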

5. Install Kafka

```bash
# Extract
tar -zxvf kafka_2.13-3.5.1.tar.gz
# Create a symlink:
ln -s kafka_2.13-3.5.1 /data/kafka
# Configure Kafka environment variables:
export KAFKA_HOME=/data/kafka
export PATH=$PATH:$KAFKA_HOME/bin
```
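The steps above stop at environment variables; a start-and-smoke-test sequence, assuming server.properties has zookeeper.connect pointed at the ensemble from step 7:

```bash
# Start the broker in the background
kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties

# Smoke test: create and list a throwaway topic
kafka-topics.sh --bootstrap-server localhost:9092 --create \
  --topic smoke-test --partitions 1 --replication-factor 1
kafka-topics.sh --bootstrap-server localhost:9092 --list
```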

6. Install Nginx

Nginx reverse proxy.

```bash
# Extract
tar -zxvf nginx-1.20.1.tar.gz
# Compile and install
cd nginx-1.20.1
./configure --prefix=/usr/local/nginx
make
sudo make install
# Start
cd /usr/local/nginx/sbin && ./nginx
```
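Validate the configuration and confirm Nginx answers locally:

```bash
/usr/local/nginx/sbin/nginx -t   # configuration syntax check
curl -I http://localhost         # expect an HTTP response header block
```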

7. Install ZooKeeper

Cluster coordination. Assume a three-node ZooKeeper cluster.

```bash
# Extract
tar zxvf apache-zookeeper-3.6.3-bin.tar.gz
# Create a symlink
ln -s apache-zookeeper-3.6.3-bin /data/zk
# Configure environment variables in /etc/profile.d/zk_env.sh:
export ZK_HOME=/data/zk
export PATH=$ZK_HOME/bin:$PATH
# Copy the sample configuration:
cp zoo_sample.cfg zoo.cfg
```

Edit zoo.cfg:

```
dataDir=/data/.zk/data
dataLogDir=/data/.zk/logs
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
```

Create the myid file and write this node's id into it:

```bash
touch /data/.zk/data/myid
echo 1 > /data/.zk/data/myid
```

Distribute the configuration above to the other nodes and adjust each node's myid accordingly.

Start:

```bash
./zkServer.sh start
```
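On each node, confirm the ensemble has formed:

```bash
./zkServer.sh status   # one node reports Mode: leader, the others Mode: follower
```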

8. Install Hadoop

Hadoop cluster.

```bash
# Extract
tar -zxvf hadoop-3.3.6.tar.gz
# Create a symlink
ln -s hadoop-3.3.6 /data/hadoop
# Configure environment variables in /etc/profile.d/hadoop_env.sh:
export HADOOP_HOME=/data/hadoop
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export LD_LIBRARY_PATH=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_CLASSPATH=`hadoop classpath`
```

Reference configuration files:
- mapred-site.xml
- hadoop-env.sh
- yarn-env.sh
- core-site.xml
- hdfs-site.xml
- yarn-site.xml

Configuring multiple namespaces (nameservices) in Hadoop, in hdfs-site.xml:
```xml
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<!-- ns1 and ns2 are the two namespaces; configure the address for each -->
<property>
  <name>dfs.namenode.rpc-address.ns1</name>
  <value>nn-host1:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns1</name>
  <value>nn-host1:50070</value>
</property>
<!-- Configuration for the second namespace -->
<property>
  <name>dfs.namenode.rpc-address.ns2</name>
  <value>nn-host2:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.ns2</name>
  <value>nn-host2:50070</value>
</property>
```

Start:

```bash
./start-all.sh
```
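A quick health check once the daemons are up (the ns1 path assumes the nameservice configuration above has been distributed to the client):

```bash
hdfs dfsadmin -report      # lists live datanodes and capacity
hdfs dfs -ls hdfs://ns1/   # confirms the ns1 nameservice resolves
```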

9. Install Hive

Hive database.

```bash
# Extract
tar -zxvf apache-hive-3.1.3-bin.tar.gz
# Create a symlink
ln -s apache-hive-3.1.3-bin /data/hive
# Configure environment variables in /etc/profile.d/hive_env.sh:
export HIVE_HOME=/data/hive
export PATH=$HIVE_HOME/bin:$PATH
```
Reference configuration files:
- [hive-site.xml](https://apexproduct.yuque.com/attachments/yuque/0/2024/xml/40401554/1719563958363-646eebaa-0531-4430-b195-72c21eea4bd8.xml)
- [hive-env.sh](https://apexproduct.yuque.com/attachments/yuque/0/2024/sh/40401554/1719563958482-dc1fc2d6-b937-44c7-9a3f-4455fb89e819.sh)
Create the warehouse directories:

```bash
$HADOOP_HOME/bin/hadoop fs -mkdir /tmp
$HADOOP_HOME/bin/hadoop fs -mkdir -p /user/hive/warehouse
$HADOOP_HOME/bin/hadoop fs -chmod g+w /tmp
$HADOOP_HOME/bin/hadoop fs -chmod g+w /user/hive/warehouse
```

Start:

```bash
# Send both stdout and stderr to /dev/null (the original 2>&1 >> /dev/null
# ordering left stderr on the terminal)
nohup hive --service metastore > /dev/null 2>&1 &
nohup $HIVE_HOME/bin/hiveserver2 > /dev/null 2>&1 &
```
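To confirm HiveServer2 accepts connections (default port 10000; adjust host and credentials to your hive-site.xml):

```bash
beeline -u jdbc:hive2://localhost:10000 -e "show databases;"
```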

10. Install HBase

HBase database.

```bash
# Extract
tar -zxvf hbase-2.5.5-hadoop3-bin.tar.gz
# Create a symlink
ln -s hbase-2.5.5-hadoop3-bin /data/hbase
# Configure environment variables in /etc/profile.d/hbase_env.sh
# (HBASE_HOME must match the symlink created above):
export HBASE_HOME=/data/hbase
export PATH=$PATH:$HBASE_HOME/bin
```

Reference configuration files:
- hbase-env.sh
- hbase-site.xml

In cluster mode, also add a regionservers file listing the region-server hosts:

```
dn1
dn2
dn3
dn4
dn5
```

Start:

```bash
./start-hbase.sh
```
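Confirm the master and region servers registered:

```bash
echo "status" | hbase shell   # reports the active master and region server count
```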

11. Install Flink

```bash
# Extract
tar -zxvf flink-1.17.1-bin-scala_2.12.tgz
# Create a symlink
ln -s flink-1.17.1-bin-scala_2.12 /data/flink
# Configure environment variables in /etc/profile.d/flink_env.sh:
export FLINK_HOME=/data/flink
export PATH=$PATH:$FLINK_HOME/bin
# Start
./start-cluster.sh
```
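Smoke test with the example job bundled in the Flink distribution:

```bash
$FLINK_HOME/bin/flink run $FLINK_HOME/examples/streaming/WordCount.jar
```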

12. Install Spark

Spark engine.

```bash
# Extract
tar -zxvf spark-3.4.1-bin-hadoop3.tar.gz
# Create a symlink
ln -s spark-3.4.1-bin-hadoop3 /data/spark
# Configure environment variables in /etc/profile.d/spark_env.sh
# (SPARK_HOME must match the symlink created above):
export SPARK_HOME=/data/spark
export PATH=$PATH:$SPARK_HOME/bin
```

Reference configuration files:
- hbase-site.xml
- hdfs-site.xml
- hive-site.xml
- spark-env.sh
- core-site.xml
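Smoke test with the bundled SparkPi example (run locally here; switch to --master yarn once the YARN configuration above is in place):

```bash
$SPARK_HOME/bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master local[2] $SPARK_HOME/examples/jars/spark-examples_2.12-3.4.1.jar 10
```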

13. Install ClickHouse

ClickHouse database.
Download the packages from https://packages.clickhouse.com/rpm/stable/

Required packages:

```
clickhouse-client-23.2.1.2537.x86_64.rpm
clickhouse-common-static-23.2.1.2537.x86_64.rpm
clickhouse-common-static-dbg-23.2.1.2537.x86_64.rpm
clickhouse-server-23.2.1.2537.x86_64.rpm
```

Install:

```bash
rpm -ivh clickhouse-common-static-23.2.1.2537.x86_64.rpm
rpm -ivh clickhouse-common-static-dbg-23.2.1.2537.x86_64.rpm
# Note: newer versions prompt here for an initial password for the default user
rpm -ivh clickhouse-server-23.2.1.2537.x86_64.rpm
rpm -ivh clickhouse-client-23.2.1.2537.x86_64.rpm
```

Start:

```bash
sudo clickhouse start
```
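Connect with the client to confirm the server is up (use the default-user password set during installation):

```bash
clickhouse-client --password -q "SELECT version()"
```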

14. Install Scala

Scala environment.

```bash
# Extract:
tar -zxvf scala-2.12.18.tar.gz
# Create a symlink:
ln -s scala-2.12.18 /data/scala
# Create the Scala environment variable file:
touch /etc/profile.d/scala_env.sh
# Add the following content:
export SCALA_HOME=/data/scala
export PATH=$PATH:$SCALA_HOME/bin
```

Verify the result:
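A quick verification:

```bash
# Reload the environment and confirm Scala is on the PATH
source /etc/profile.d/scala_env.sh
scala -version   # expect: Scala code runner version 2.12.18
```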

15. Install Nacos

Nacos microservice management.

```bash
# Extract
tar -zxvf nacos-server-1.4.2.tar.gz
# Create a symlink
ln -s nacos-server-1.4.2 /data/nacos
# Start
./startup.sh
```
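By default startup.sh starts Nacos in cluster mode; for a single-node deployment, Nacos supports a standalone switch, after which the console is reachable on port 8848:

```bash
# Single-node deployment: start in standalone mode
sh /data/nacos/bin/startup.sh -m standalone
# Console: http://<server-ip>:8848/nacos
```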

IX. Deploying the Front End
