<?xml version="1.0" encoding="utf-8"?>
<search>
<entry>
<title><![CDATA[The OSI Network Reference Model]]></title>
<url>%2F2019%2F02%2F14%2FOSI%E7%BD%91%E7%BB%9C%E5%8F%82%E8%80%83%E6%A8%A1%E5%9E%8B%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== OSI七层模型OSI模型(Open System Interconnection Model)是一个由ISO提出得到概念模型,试图提供一个使各种不同的的计算机和网络在世界范围内实现互联的标准框架。 分层结构OSI参考模型采用分层结构,如图所示。 不得不说,这张图真的超经典呀。一张图搞定你你不懂的一切。主要分为以下七层(从下至上):物理层、数据链路层、网络层、传输层、会话层、表示层、应用层。 各层功能 物理层简单的说,物理层(Physical Layer)确保原始的数据可在各种物理媒体上传输。在这一层上面规定了激活、维持、关闭通信端点之间的机械特性、电气特性、功能特性以及过程特性,为上层协议提供了一个传输数据的物理媒体。这一层传输的是bit流。 数据链路层数据链路层(Data Link Layer)在不可靠的物理介质上提供可靠的传输。该层的作用包括:物理地址寻址、数据的成帧、流量控制、数据的检错、重发等。这一层中将bit流封装成frame帧。 网络层网络层(Network Layer)负责对子网间的数据包进行路由选择。此外,网络层还可以实现拥塞控制、网际互连等功能。在这一层,数据的单位称为数据包(packet)。 传输层传输层是第一个端到端,即主机到主机的层次。传输层负责将上层数据分段并提供端到端的、可靠的或不可靠的传输。此外,传输层还要处理端到端的差错控制和流量控制问题。在这一层,数据的单位称为数据段(segment)。 会话层这一层管理主机之间的会话进程,即负责建立、管理、终止进程之间的会话。会话层还利用在数据中插入校验点来实现数据的同步,访问验证和会话管理在内的建立和维护应用之间通信的机制。如服务器验证用户登录便是由会话层完成的。使通信会话在通信失效时从校验点继续恢复通信。 表示层这一层主要解决用户信息的语法表示问题。它将欲交换的数据从适合于某一用户的抽象语法,转换为适合于OSI系统内部使用的传送语法。即提供格式化的表示和转换数据服务。数据的压缩和解压缩, 加密和解密等工作都由表示层负责。 应用层这一层为操作系统或网络应用程序提供访问网络服务的接口。 各层传输协议、传输单元、主要功能性设备比较 名称 传输协议 主要功能设备/接口 主要功能设备/接口 物理层 IEEE 802.1A、IEEE 802.2 bit-flow 比特流 光纤、双绞线、中继器和集线器 & RJ-45(网线接口) 数据链路层 ARP、MAC、 FDDI、Ethernet、Arpanet、PPP、PDN frame 帧 网桥、二层交换机 网络层 IP、ICMP、ARP、RARP 数据包(packet) 路由器、三层交换机 传输层 TCP、UDP Segment/Datagram 四层交换机 会话层 SMTP、DNS 报文 QoS 表示层 Telnet、SNMP 报文 - 应用层 FTP、TFTP、Telnet、HTTP、DNS 报文 - 关于协议你应该知道这些以上通过图表、文字向大家阐述了七层模型每一层的具体功能及其相关协议,但知道了这些还不够,你还要知道以下这些。 TCP/UDP TCP/UDP是什么?TCP — Transmission Control Protocol,传输控制协议。UDP — User Data Protocol,用户数据报协议。 TCP/UDP的区别(优缺点)?(1)、TCP是面向连接的,UDP是面向无连接的。TCP在通信之前必须通过三次握手机制与对方建立连接,而UDP通信不必与对方建立连接,不管对方的状态就直接把数据发送给对方(2)、TCP连接过程耗时,UDP不耗时(3)、TCP连接过程中出现的延时增加了被攻击的可能,安全性不高,而UDP不需要连接,安全性较高(4)、TCP是可靠的,保证数据传输的正确性,不易丢包;UDP是不可靠的,易丢包(5)、tcp传输速率较慢,实时性差,udp传输速率较快。tcp建立连接需要耗时,并且tcp首部信息太多,每次传输的有用信息较少,实时性差。(6)、tcp是流模式,udp是数据包模式。tcp只要不超过缓冲区的大小就可以连续发送数据到缓冲区上,接收端只要缓冲区上有数据就可以读取,可以一次读取多个数据包,而udp一次只能读取一个数据包,数据包之间独立 TCP三次握手过程 STEP 1: 主机A通过向主机B发送一个含有同步序列号的标志位的数据段给主机B,向主机B请求建立连接,通过这个数据段,主机A告诉主机B两件事:我想要和你通信;你可以用哪个序列号作为起始数据段来回应我。STEP 2: 
主机B收到主机A的请求后,用一个带有确认应答(ACK)和同步序列号(SYN)标志位的数据段响应主机A,也告诉主机A两件事:我已经收到你的请求了,你可以传输数据了;你要用哪佧序列号作为起始数据段来回应我。STEP 3: 主机A收到这个数据段后,再发送一个确认应答,确认已收到主机B的数据段:”我已收到回复,我现在要开始传输实际数据了。这样3次握手就完成了,主机A和主机B就可以传输数据了。 注意此时需要注意的是,TCP建立连接要进行3次握手,而断开连接要进行4次。 名词解释 ACK: TCP报头的控制位之一,对数据进行确认,确认由目的端发出,用它来告诉发送端这个序列号之前的数据段都收到了。比如,确认号为X,则表示前X-1个数据段都收到了,只有当ACK=1时,确认号才有效,当ACK=0时,确认号无效,这时会要求重传数据,保证数据的完整性。SYN: 同步序列号,TCP建立连接时将这个位置1。FIN : 发送端完成发送任务位,当TCP完成数据传输需要断开时,提出断开连接的一方将这位置1。 TCP可靠性的四大手段(1)、顺序编号: tcp在传输文件的时候,会将文件拆分为多个tcp数据包,每个装满的数据包大小大约在1k左右,tcp协议为保证可靠传输,会将这些数据包顺序编号(2)、确认机制: 当数据包成功的被发送方发送给接收方,接收方会根据tcp协议反馈给发送方一个成功接收的ACK信号,信号中包含了当前包的序号(3)、超时重传: 当发送方发送数据包给接收方时,会为每一个数据包设置一个定时器,当在设定的时间内,发送方仍没有收到接收方的ACK信号,会再次发送该数据包,直到收到接收方的ACK信号或者连接已断开(4)、校验信息: tcp首部校验信息较多,udp首部校验信息较少。上文部分协议简单讲 IEEE 802.1A、IEEE 802.2IEEE是英文Institute of Electrical and Electronics Engineers的简称,其中文译名是电气和电子工程师协会。IEEE 802规范定义了网卡如何访问传输介质(如光缆、双绞线、无线等),以及如何在传输介质上传输数据的方法,还定义了传输信息的网络设备之间连接建立、维护和拆除的途径。遵循IEEE 802标准的产品包括网卡、桥接器、路由器以及其他一些用来建立局域网络的组件。IEEE802.1A —— 局域网体系结构IEEE802.2 ——- 逻辑链路控制(LLC) FDDI光纤分布式数据接口(Fiber Distributed Data Interface) PPP点对点协议(Point to Point Protocol),为在点对点连接上传输多协议数据包提供了一个标准方法。 IP互联网协议(Internet Protocol),为计算机网络相互连接进行通信而设计的协议。任何厂家生产的计算机系统,只要遵守IP协议就可以与因特网互连互通。IP地址具有唯一性,根据用户性质的不同,可以分为5类。 ICMP控制报文协议(Internet Control Message Protocol)。TCP/IP设计了ICMP协议,当某个网关发现传输错误时,立即向信源主机发送ICMP报文,报告出错信息,让信源主机采取相应处理措施,它是一种差错和控制报文协议,不仅用于传输差错报文,还传输控制报文。 ARP/RARPARP (Address Resolution Protocol) 地址解析协议RARP (Reverse Address Resolution Protocol) 反向地址解析协议 SMTP简单邮件传输协议(Simple Mail Transfer Protocol),它是一组用于由源地址到目的地址传送邮件的规则,由它来控制信件的中转方式。SMTP协议属于TCP/IP协议簇,它帮助每台计算机在发送或中转信件时找到下一个目的地。通过SMTP协议所指定的服务器,就可以把E-mail寄到收信人的服务器上了。 SNMP简单网络管理协议(Simple Network Management Protocol ),该协议能够支持网络管理系统,用以监测连接到网络上的设备是否有任何引起管理上关注的情况。 DNS域名系统(Domain Name System),因特网上作为域名和IP地址相互映射的一个分布式数据库,能够使用户更方便的访问互联网,而不用去记住能够被机器直接读取的IP数串。通过主机名,最终得到该主机名对应的IP地址的过程叫做域名解析(或主机名解析)。DNS协议运行在UDP协议之上,使用端口号53。 FTP文本传输协议(File Transfer 
Protocol),用于Internet上的控制文件的双向传输。同时,它也是一个应用程序Application)。基于不同的操作系统有不同的FTP应用程序,而所有这些应用程序都遵守该协议以传输文件。在FTP的使用当中,用户经常“下载”(Download)和“上载”(Upload)。“下载”文件就是从远程主机拷贝文件至自己的计算机上;“上载”文件就是将文件从自己的计算机中拷贝至远程主机上。 HTTP超文本传输协议(HyperText Transfer Protocol),是互联网上应用最为广泛的一种网络协议。所有的WWW文件都必须遵守这个标准。它可以使浏览器更加高效,使网络传输减少。它不仅保证计算机正确快速地传输超文本文档,还确定传输文档中的哪一部分,以及哪部分内容首先显示(如文本先于图形)等。HTTP是一个应用层协议,由请求和响应构成,是一个标准的客户端服务器模型,是一个无状态的协议。]]></content>
<categories>
<category>Linux基础</category>
</categories>
</entry>
<entry>
<title><![CDATA[Ops Security Management]]></title>
<url>%2F2019%2F02%2F13%2F%E8%BF%90%E7%BB%B4%E5%AE%89%E5%85%A8%E7%AE%A1%E7%90%86%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 运维安全的四个层次网络安全网络设备的安全 思科、华为等网络设备定期升级,修复bug和曝出的漏洞 公网防火墙,核心交换机等核心网络设备的管理 外网安全策略 IDC,防火墙策略,严把上行端口开放 公网上下行流量监控 对DDos攻击提高警惕,提前准备应急预案 临时提高流量,硬抗 启动流量清洗,将攻击流量引入黑洞,有可能误杀正常用户 专线安全策略 对涉及金融、支付等项目设立专线 VPN安全策略 IPsec VPN:site to site OpenVPN: peer to site 摒弃PPTP等不含加密算法的vpn服务 端口全禁止,需要通信的申请审批后,再由管理员开放 数据安全数据库用户权限 管理员权限限定,不允许远程root 定期更换管理员密码 应用权限最小化,专人管理 手动查询权限可审计 数据库审计设备 数据库主库不能开一般查询日志(为了性能) 交换机上镜像流量,接入审计设备,实现实时审计 不要设计串行在系统里,形成单点和瓶颈 数据库脱敏 姓名、身份证、手机号、银行卡号等敏感信息应脱敏处理 对程序脱敏协同系统架构部共同出规范 对手动查询权限脱敏,按列授权,录屏 备份策略 每周全备,每天增备 备份文件要每天利用内网流量低谷时间,推送到远程主机,有条件的应跨机房备份 一定要规划定期恢复测试,保证备份的可用性 应用安全操作系统安全 系统基础优化(内核优化,优化工具) 日期,时区同步 root密码复杂度足够高,需要在操作系统里做定时过期策略 每三个月使用脚本更新服务器的root密码和iDrac密码,并将更换后的密码加密打包发送给指定管理员邮箱,同时提交gitlab 对系统关键文件进行md5监控,例如/etc/passwd,~/.ssh/authorized_keys文件等,如有变更,触发报警 定期查毒,漏扫,定期安排更新操作系统 /etc/ssh/sshd_config里配置: PasswordAuthentication no PermitRootLogin without-password 使用saltstack等批量管理软件进行特权命令执行和备份脚本执行(避开ssh协议) 应用系统安全WEB应用防火墙(WAF) 防SQL注入 防CC攻击 防XSS跨站脚本 应用系统漏洞 关注0day漏洞新闻 及时整改并上线投产 组织技术力量测试,复现 日志收集和分析 完善日志收集方案,集中转储 通过应用系统日志分析,进行安全预警 DNS劫持 全站https,购买泛域名证书 有条件的可以自己维护公网DNS,上dnssec数字签名 采购基调、听云等第三方拨测服务,分布式监控网站质量 向ISP投诉,工信部举报 Basic Auth 在nginx上做,非常简单 对防脚本攻击有奇效 企业邮箱服务器安全推荐使用微软的Exchange功能强大,维护相对简单 投产反垃圾邮件网关投产梭子鱼反垃圾邮件网关,防伪造发信人 群发审核管控用好邮件组接入AD域控域名安全管理做好ICP备案 域名证书 域名实名认证(公司模板) 接入商处蓝色幕布拍照 法人身份证、管理员身份证 网站真实性核验单 公网解析 专人管理,邮件申请,审批 将业务解析至不同公网IP出口,双活机房 智能解析,解析至不同线路 如有条件,可购买公网解析套餐服务,安全服务等 内网安全80%以上的企业IT安全问题出自内网安全 堡垒机 一定要强制使用堡垒机登录服务器 ssh私钥通行短语机制,避免密钥失窃 定期审计堡垒机操作日志 如果有必要,可以上2FA(双因子验证) AD域控有条件一定要接入windows域控,要求密码复杂度和定期过期 邮箱 wifi vpn账号密码 内网系统账号 业务系统账号 网络设备等 办公网安全 专业的HelpDesk团队 企业级杀毒软件 办公电脑接入域控 上网行为管理 流量监控,mac地址绑定 有条件的可以在办公环境上一个小型的业务机房 wifi管控,单做guest接入点,不能访问业务核心网络]]></content>
<categories>
<category>运维技术管理</category>
</categories>
</entry>
<entry>
<title><![CDATA[ITIL-Based IT Operations Management]]></title>
<url>%2F2019%2F02%2F13%2F%E5%9F%BA%E4%BA%8EITIL%E7%9A%84IT%E8%BF%90%E7%BB%B4%E7%AE%A1%E7%90%86%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== IT管理中的PPT 人,流程,技术 服务是什么? 服务是向客户提供价值的一种手段,使客户不用承担特定的成本和风险就可以获得所期望的结果。 服务管理 服务管理是一套特定的组织能力,以服务的形式为客户提供价值。 ITIL简介是什么? Information Technology Infrastructure Library IT基础架构库,一个可以直接使用的标准,已于2005年12月15日被ISO接受为国际标准 – ISO20000 与ISO20000的区别 ITIL ISO20000 提供最佳实践指导 提供衡量ITSM的指标 没有固定的能力衡量指标 全球统一 对人员 对机构 目标 将IT管理工作标准化、模式化,减少人为误操作带来的隐患 通过服务目录,服务报告,告诉业务部门,我们可以做什么,做了什么 通过系列流程,知识库减轻对英雄式工程师的依赖。把经验积累下来 通过对流程的管控,减少成本,降低风险,提供客户满意度 IT Service CMM初始级个人英雄式工程师 可重复级潜规则 定义级 已将IT服务过程文档话,标准化,并综合成标准服务过程 根据客户需求调整服务产品和服务战略 适当的工具和信息报告 管理级 受监督、可测量的IT服务体系 根据业务战略调整服务体系 优化级 持续改进的IT服务体系 IT与业务指标建立关系 IT与业务协作改进流程 ITIL v3服务战略从组织能力和战略资产两个角度出发,为组织进行服务战略方面的决策和战略设计提供了一套结构化的方法 我们的业务是什么? 我们的客户是谁? 客户重视什么? 谁依赖我们的服务? 他们怎样使用我们的服务? 服务为什么对他们有价值? 4P观念面向其目标客户的业务定位或服务提供方式 定位描述了采纳和中立场的决策 计划描述了将蓝图转化为现实的手段 模式描述了一系列的稳定的决策和行动 服务设计对服务及服务管理流程设计和开发的指导 服务目录管理服务级别管理容量管理商业容量管理:吞吐量服务容量管理:响应时间资源容量管理:CPU可用性管理正常运行时间、宕机时间 IT服务持续性管理灾备 信息安全管理服务转换服务运营持续服务改进RACI模型谁负责,谁批准,咨询谁,通知谁 角色 服务所有者 流程所有者 流程 简单问题复杂化,多元化 效率、成本、质量、风险、稳定性、可持续性、用户体验 项目临时性 运营持续性]]></content>
<categories>
<category>运维技术管理</category>
</categories>
</entry>
<entry>
<title><![CDATA[Lab Document 1: Installing and Deploying a Kubernetes Cluster Step by Step]]></title>
<url>%2F2019%2F01%2F18%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A31%EF%BC%9A%E8%B7%9F%E6%88%91%E4%B8%80%E6%AD%A5%E6%AD%A5%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2kubernetes%E9%9B%86%E7%BE%A4%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 实验环境基础架构 主机名 角色 ip HDSS7-11.host.com k8s代理节点1 10.4.7.11 HDSS7-12.host.com k8s代理节点2 10.4.7.12 HDSS7-21.host.com k8s运算节点1 10.4.7.21 HDSS7-22.host.com k8s运算节点2 10.4.7.22 HDSS7-200.host.com k8s运维节点(docker仓库) 10.4.7.200 硬件环境 5台vm,每台至少2c2g 软件环境 OS: CentOS Linux release 7.6.1810 (Core) docker: v1.12.6 docker引擎官方下载地址docker引擎官方selinux包 kubernetes: v1.13.2 kubernetes官方下载地址 etcd: v3.1.18 etcd官方下载地址 flannel: v0.10.0 flannel官方下载地址 bind9: v9.9.4 bind9官方下载地址 harbor: v1.7.1 harbor官方下载地址 证书签发工具CFSSL: R1.2 cfssl下载地址cfssljson下载地址cfssl-certinfo下载地址 其他 其他可能用到的软件,均使用操作系统自带的yum源和epel源进行安装 前置准备工作DNS服务安装部署 创建主机域host.com 创建业务域od.com 主辅同步(10.4.7.11主、10.4.7.12辅) 客户端配置指向自建DNS 略 准备签发证书环境运维主机HDSS7-200.host.com上: 安装CFSSL 证书签发工具CFSSL: R1.2 cfssl下载地址cfssljson下载地址cfssl-certinfo下载地址 1234[root@hdss7-200 ~]# curl -s -L -o /usr/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 [root@hdss7-200 ~]# curl -s -L -o /usr/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 [root@hdss7-200 ~]# curl -s -L -o /usr/bin/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 [root@hdss7-200 ~]# chmod +x /usr/bin/cfssl* 创建生成CA证书的JSON配置文件/opt/certs/ca-config.json12345678910111213141516171819202122232425262728293031323334{ "signing": { "default": { "expiry": "175200h" }, "profiles": { "server": { "expiry": "175200h", "usages": [ "signing", "key encipherment", "server auth" ] }, "client": { "expiry": "175200h", "usages": [ "signing", "key encipherment", "client auth" ] }, "peer": { "expiry": "175200h", "usages": [ "signing", "key encipherment", "server auth", "client auth" ] } } }} 证书类型client certificate: 客户端使用,用于服务端认证客户端,例如etcdctl、etcd proxy、fleetctl、docker客户端server certificate: 服务端使用,客户端以此验证服务端身份,例如docker服务端、kube-apiserverpeer certificate: 双向证书,用于etcd集群成员间通信 创建生成CA证书签名请求(csr)的JSON配置文件/opt/certs/ca-csr.json123456789101112131415161718192021{ "CN": "kubernetes-ca", "hosts": [ ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { 
"C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ], "ca": { "expiry": "175200h" }} CN: Common Name,浏览器使用该字段验证网站是否合法,一般写的是域名。非常重要。浏览器使用该字段验证网站是否合法C: Country, 国家ST: State,州,省L: Locality,地区,城市O: Organization Name,组织名称,公司名称OU: Organization Unit Name,组织单位名称,公司部门 生成CA证书和私钥/opt/certs1234567[root@hdss7-200 certs]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca - 2019/01/18 09:31:19 [INFO] generating a new CA key and certificate from CSR2019/01/18 09:31:19 [INFO] generate received request2019/01/18 09:31:19 [INFO] received CSR2019/01/18 09:31:19 [INFO] generating key: rsa-20482019/01/18 09:31:19 [INFO] encoded CSR2019/01/18 09:31:19 [INFO] signed certificate with serial number 345276964513449660162382535043012874724976422200 生成ca.pem、ca.csr、ca-key.pem(CA私钥,需妥善保管) /opt/certs123456[root@hdss7-200 certs]# ls -l-rw-r--r-- 1 root root 836 Jan 16 11:04 ca-config.json-rw-r--r-- 1 root root 332 Jan 16 11:10 ca-csr.json-rw------- 1 root root 1675 Jan 16 11:17 ca-key.pem-rw-r--r-- 1 root root 1001 Jan 16 11:17 ca.csr-rw-r--r-- 1 root root 1354 Jan 16 11:17 ca.pem 部署docker环境HDSS7-200.host.com,HDSS7-21.host.com,HDSS7-22.host.com上: 安装 docker: v1.12.6 docker引擎官方下载地址docker引擎官方selinux包 1234# ls -l|grep docker-engine-rw-r--r-- 1 root root 20013304 Jan 16 18:16 docker-engine-1.12.6-1.el7.centos.x86_64.rpm-rw-r--r-- 1 root root 29112 Jan 16 18:15 docker-engine-selinux-1.12.6-1.el7.centos.noarch.rpm# yum localinstall *.rpm 配置/etc/docker/daemon.json 123456789# vi /etc/docker/daemon.json { "graph": "/data/docker", "storage-driver": "overlay", "insecure-registries": ["registry.access.redhat.com","quay.io","harbor.od.com"], "bip": "172.7.21.1/24", "exec-opts": ["native.cgroupdriver=systemd"], "live-restore": true} 注意:这里bip要根据宿主机ip变化 启动脚本/usr/lib/systemd/system/docker.service12345678910111213141516171819202122232425262728[Unit]Description=Docker Application Container EngineDocumentation=https://docs.docker.comAfter=network.target[Service]Type=notify# the default is not 
to use systemd for cgroups because the delegate issues still# exists and systemd currently does not support the cgroup feature set required# for containers run by dockerExecStart=/usr/bin/dockerdExecReload=/bin/kill -s HUP $MAINPID# Having non-zero Limit*s causes performance problems due to accounting overhead# in the kernel. We recommend using cgroups to do container-local accounting.LimitNOFILE=infinityLimitNPROC=infinityLimitCORE=infinity# Uncomment TasksMax if your systemd version supports it.# Only systemd 226 and above support this version.#TasksMax=infinityTimeoutStartSec=0# set delegate yes so that systemd does not reset the cgroups of docker containersDelegate=yes# kill only the docker process, not all processes in the cgroupKillMode=process[Install]WantedBy=multi-user.target 启动12# systemctl enable docker.service# systemctl start docker.service 部署docker镜像私有仓库harborHDSS7-200.host.com上: 下载软件二进制包并解压harbor下载地址 /opt/harbor123456[root@hdss7-200 harbor]# tar xf harbor-offline-installer-v1.7.1.tgz -C /opt[root@hdss7-200 harbor]# lltotal 583848drwxr-xr-x 3 root root 242 Jan 23 15:28 harbor-rw-r--r-- 1 root root 597857483 Jan 17 14:58 harbor-offline-installer-v1.7.1.tgz 配置/opt/harbor/harbor.cfg1hostname = harbor.od.com /opt/harbor/docker-compose.yml1234ports: - 180:80 - 1443:443 - 4443:4443 安装docker-compose123[root@hdss7-200 harbor]# yum install docker-compose -y[root@hdss7-200 harbor]# rpm -qa docker-composedocker-compose-1.18.0-2.el7.noarch 安装harbor/opt/harbor1[root@hdss7-200 harbor]# ./install.sh 检查harbor启动情况12345678910111213[root@hdss7-200 harbor]# docker-compose ps Name Command State Ports --------------------------------------------------------------------------------------------------------------------------------harbor-adminserver /harbor/start.sh Up harbor-core /harbor/start.sh Up harbor-db /entrypoint.sh postgres Up 5432/tcp harbor-jobservice /harbor/start.sh Up harbor-log /bin/sh -c /usr/local/bin/ ... 
Up 127.0.0.1:1514->10514/tcp harbor-portal nginx -g daemon off; Up 80/tcp nginx nginx -g daemon off; Up 0.0.0.0:1443->443/tcp, 0.0.0.0:4443->4443/tcp, 0.0.0.0:180->80/tcpredis docker-entrypoint.sh redis ... Up 6379/tcp registry /entrypoint.sh /etc/regist ... Up 5000/tcp registryctl /harbor/start.sh Up 配置harbor的dns内网解析/var/named/od.com.zone1harbor 60 IN A 10.4.7.200 检查 12[root@hdss7-200 harbor]# dig -t A harbor.od.com @10.4.7.11 +short10.4.7.200 安装nginx并配置安装123[root@hdss7-200 harbor]# yum install nginx -y[root@hdss7-200 harbor]# rpm -qa nginxnginx-1.12.2-2.el7.x86_64 配置/etc/nginx/conf.d/harbor.od.com.conf1234567891011121314151617181920212223242526server { listen 80; server_name harbor.od.com; client_max_body_size 1000m; location / { proxy_pass http://127.0.0.1:180; }}server { listen 443 ssl; server_name harbor.od.com; ssl_certificate "certs/harbor.od.com.pem"; ssl_certificate_key "certs/harbor.od.com-key.pem"; ssl_session_cache shared:SSL:1m; ssl_session_timeout 10m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; client_max_body_size 1000m; location / { proxy_pass http://127.0.0.1:180; }} 注意:这里需要自签ssl证书,自签过程略 (umask 077; openssl genrsa -out od.key 2048)openssl req -new -key od.key -out od.csr -subj “/CN=*.od.com/ST=Beijing/L=beijing/O=od/OU=ops”openssl x509 -req -in od.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out od.crt -days 365 启动12345[root@hdss7-200 harbor]# nginx[root@hdss7-200 harbor]# netstat -luntp|grep nginxtcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6590/nginx: master tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 6590/nginx: master 浏览器打开http://harbor.od.com 用户名:admin 密码: Harbor12345 部署Master节点服务部署etcd集群集群规划 主机名 角色 ip HDSS7-12.host.com etcd lead 10.4.7.12 HDSS7-21.host.com etcd follow 10.4.7.21 HDSS7-22.host.com etcd follow 10.4.7.22 注意:这里部署文档以HDSS7-12.host.com主机为例,另外两台主机安装部署方法类似 创建生成证书签名请求(csr)的JSON配置文件运维主机HDSS7-200.host.com上: /opt/certs/etcd-peer-csr.json12345678910111213141516171819202122{ "CN": "etcd-peer", "hosts": [ "10.4.7.11", "10.4.7.12", 
"10.4.7.21", "10.4.7.22" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ]} 生成etcd证书和私钥/opt/certs1234567891011[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssljson -bare etcd-peer2019/01/18 09:35:09 [INFO] generate received request2019/01/18 09:35:09 [INFO] received CSR2019/01/18 09:35:09 [INFO] generating key: rsa-20482019/01/18 09:35:09 [INFO] encoded CSR2019/01/18 09:35:10 [INFO] signed certificate with serial number 3241914913849289156052547640310960678721546490102019/01/18 09:35:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable forwebsites. For more information see the Baseline Requirements for the Issuance and Managementof Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);specifically, section 10.2.3 ("Information Requirements"). 检查生成的证书、私钥/opt/certs12345[root@hdss7-200 certs]# ls -l|grep etcd-rw-r--r-- 1 root root 387 Jan 18 12:32 etcd-peer-csr.json-rw------- 1 root root 1679 Jan 18 12:32 etcd-peer-key.pem-rw-r--r-- 1 root root 1074 Jan 18 12:32 etcd-peer.csr-rw-r--r-- 1 root root 1432 Jan 18 12:32 etcd-peer.pem 创建etcd用户HDSS7-12.host.com上: 1[root@hdss7-12 ~]# useradd -s /sbin/nologin -M etcd 下载软件,解压,做软连接etcd下载地址HDSS7-12.host.com上: /opt/src12345678910[root@hdss7-12 src]# ls -ltotal 9604-rw-r--r-- 1 root root 9831476 Jan 18 10:45 etcd-v3.1.18-linux-amd64.tar.gz[root@hdss7-12 src]# tar xf etcd-v3.1.18-linux-amd64.tar.gz -C /opt[root@hdss7-12 src]# ln -s /opt/etcd-v3.1.18-linux-amd64 /opt/etcd[root@hdss7-12 src]# ls -l /opttotal 0lrwxrwxrwx 1 root root 24 Jan 18 14:21 etcd -> etcd-v3.1.18-linux-amd64drwxr-xr-x 4 478493 89939 166 Jun 16 2018 etcd-v3.1.18-linux-amd64drwxr-xr-x 2 root root 45 Jan 18 14:21 src 创建目录,拷贝证书、私钥HDSS7-12.host.com上: 123[root@hdss7-12 src]# mkdir -p /data/etcd /data/logs/etcd-server [root@hdss7-12 src]# chown -R 
etcd.etcd /data/etcd /data/logs/etcd-server/[root@hdss7-12 src]# mkdir -p /opt/etcd/certs 将运维主机上生成的ca.pem、etcd-peer-key.pem、etcd-peer.pem拷贝到/opt/etcd/certs目录中,注意私钥文件权限600 /opt/etcd/certs1234567[root@hdss7-12 certs]# chmod 600 etcd-peer-key.pem[root@hdss7-12 certs]# chown -R etcd.etcd /opt/etcd/certs/[root@hdss7-12 certs]# ls -ltotal 12-rw-r--r-- 1 etcd etcd 1354 Jan 18 14:45 ca.pem-rw------- 1 etcd etcd 1679 Jan 18 17:00 etcd-peer-key.pem-rw-r--r-- 1 etcd etcd 1444 Jan 18 17:02 etcd-peer.pem 创建etcd服务启动脚本HDSS7-12.host.com上: /opt/etcd/etcd-server-startup.sh1234567891011121314151617181920#!/bin/sh./etcd --name etcd-server-7-12 \ --data-dir /data/etcd/etcd-server \ --listen-peer-urls https://10.4.7.12:2380 \ --listen-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \ --quota-backend-bytes 8000000000 \ --initial-advertise-peer-urls https://10.4.7.12:2380 \ --advertise-client-urls https://10.4.7.12:2379,http://127.0.0.1:2379 \ --initial-cluster etcd-server-7-12=https://10.4.7.12:2380,etcd-server-7-21=https://10.4.7.21:2380,etcd-server-7-22=https://10.4.7.22:2380 \ --ca-file ./certs/ca.pem \ --cert-file ./certs/etcd-peer.pem \ --key-file ./certs/etcd-peer-key.pem \ --client-cert-auth \ --trusted-ca-file ./certs/ca.pem \ --peer-ca-file ./certs/ca.pem \ --peer-cert-file ./certs/etcd-peer.pem \ --peer-key-file ./certs/etcd-peer-key.pem \ --peer-client-cert-auth \ --peer-trusted-ca-file ./certs/ca.pem \ --log-output stdout 注意:etcd集群各主机的启动脚本略有不同,部署其他节点时注意修改。 调整权限和目录HDSS7-12.host.com上: 12[root@hdss7-12 certs]# chmod +x /opt/etcd/etcd-server-startup.sh[root@hdss7-12 certs]# mkdir -p /data/logs/etcd-server 安装supervisor软件HDSS7-12.host.com上: 123[root@hdss7-12 certs]# yum install supervisor -y[root@hdss7-12 certs]# systemctl start supervisord[root@hdss7-12 certs]# systemctl enable supervisord 创建etcd-server的启动配置HDSS7-12.host.com上: /etc/supervisord.d/etcd-server.ini1234567891011121314151617181920212223[program:etcd-server-7-12]command=/opt/etcd/etcd-server-startup.sh ; the 
program (relative uses PATH, can take args)numprocs=1 ; number of processes copies to start (def 1)directory=/opt/etcd ; directory to cwd to before exec (def no cwd)autostart=true ; start at supervisord start (default: true)autorestart=true ; retstart at unexpected quit (default: true)startsecs=22 ; number of secs prog must stay running (def. 1)startretries=3 ; max # of serial start failures (default 3)exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)stopsignal=QUIT ; signal used to kill process (default TERM)stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)user=etcd ; setuid to this UNIX account to run the programredirect_stderr=false ; redirect proc stderr to stdout (default false)stdout_logfile=/data/logs/etcd-server/etcd.stdout.log ; stdout log path, NONE for none; default AUTOstdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stdout_events_enabled=false ; emit events on stdout writes (default false)stderr_logfile=/data/logs/etcd-server/etcd.stderr.log ; stderr log path, NONE for none; default AUTOstderr_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stderr_logfile_backups=4 ; # of stderr logfile backups (default 10)stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stderr_events_enabled=false ; emit events on stderr writes (default false) 注意:etcd集群各主机启动配置略有不同,配置其他节点时注意修改。 启动etcd服务并检查HDSS7-12.host.com上: 1234[root@hdss7-12 certs]# supervisorctl start alletcd-server-7-12: started[root@hdss7-12 certs]# supervisorctl status etcd-server-7-12 RUNNING pid 6692, uptime 0:00:05 安装部署启动检查所有集群规划主机上的etcd服务略 检查集群状态3台均启动后,检查集群状态 12345678910[root@hdss7-12 ~]# /opt/etcd/etcdctl cluster-healthmember 988139385f78284 is healthy: got healthy result from http://127.0.0.1:2379member 5a0ef2a004fc4349 is healthy: got healthy result from 
http://127.0.0.1:2379member f4a0cb0a765574a8 is healthy: got healthy result from http://127.0.0.1:2379cluster is healthy[root@hdss7-12 ~]# /opt/etcd/etcdctl member list988139385f78284: name=etcd-server-7-22 peerURLs=https://10.4.7.22:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.22:2379 isLeader=false5a0ef2a004fc4349: name=etcd-server-7-21 peerURLs=https://10.4.7.21:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.21:2379 isLeader=falsef4a0cb0a765574a8: name=etcd-server-7-12 peerURLs=https://10.4.7.12:2380 clientURLs=http://127.0.0.1:2379,https://10.4.7.12:2379 isLeader=true 部署kube-apiserver集群集群规划 主机名 角色 ip HDSS7-21.host.com kube-apiserver 10.4.7.21 HDSS7-22.host.com kube-apiserver 10.4.7.22 HDSS7-11.host.com 4层负载均衡 10.4.7.11 HDSS7-12.host.com 4层负载均衡 10.4.7.12 注意:这里10.4.7.11和10.4.7.12使用nginx做4层负载均衡器,用keepalived跑一个vip:10.4.7.10,代理两个kube-apiserver,实现高可用 这里部署文档以HDSS7-21.host.com主机为例,另外一台运算节点安装部署方法类似 下载软件,解压,做软连接HDSS7-21.host.com上:kubernetes下载地址 /opt/src123456789[root@hdss7-21 src]# ls -l|grep kubernetes-rw-r--r-- 1 root root 417761204 Jan 17 16:46 kubernetes-server-linux-amd64.tar.gz[root@hdss7-21 src]# tar xf kubernetes-server-linux-amd64.tar.gz -C /opt[root@hdss7-21 src]# mv /opt/kubernetes /opt/kubernetes-v1.13.2-linux-amd64[root@hdss7-21 src]# ln -s /opt/kubernetes-v1.13.2-linux-amd64 /opt/kubernetes[root@hdss7-21 src]# mkdir /opt/kubernetes/server/bin/{cert,conf}[root@hdss7-21 src]# ls -l /opt|grep kuberneteslrwxrwxrwx 1 root root 31 Jan 18 10:49 kubernetes -> kubernetes-v1.13.2-linux-amd64/drwxr-xr-x 4 root root 50 Jan 17 17:40 kubernetes-v1.13.2-linux-amd64 签发client证书运维主机HDSS7-200.host.com上: 创建生成证书签名请求(csr)的JSON配置文件/opt/certs/client-csr.json123456789101112131415161718{ "CN": "k8s-node", "hosts": [ ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ]} 生成client证书和私钥12345678910[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json 
-profile=client client-csr.json | cfssljson -bare client2019/01/18 14:02:50 [INFO] generate received request2019/01/18 14:02:50 [INFO] received CSR2019/01/18 14:02:50 [INFO] generating key: rsa-20482019/01/18 14:02:51 [INFO] encoded CSR2019/01/18 14:02:51 [INFO] signed certificate with serial number 4231086510402793002423668841006379741553708614482019/01/18 14:02:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable forwebsites. For more information see the Baseline Requirements for the Issuance and Managementof Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);specifically, section 10.2.3 ("Information Requirements"). 检查生成的证书、私钥1234[root@hdss7-200 certs]# ls -l|grep client-rw------- 1 root root 1679 Jan 21 11:13 client-key.pem-rw-r--r-- 1 root root 989 Jan 21 11:13 client.csr-rw-r--r-- 1 root root 1367 Jan 21 11:13 client.pem 签发kube-apiserver证书运维主机HDSS7-200.host.com上: 创建生成证书签名请求(csr)的JSON配置文件/opt/certs/apiserver-csr.json 12345678910111213141516171819202122232425262728{ "CN": "apiserver", "hosts": [ "127.0.0.1", "192.168.0.1", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local", "10.4.7.10", "10.4.7.21", "10.4.7.22", "10.4.7.23" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ]} 生成kube-apiserver证书和私钥12345678910[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssljson -bare apiserver 2019/01/18 14:05:44 [INFO] generate received request2019/01/18 14:05:44 [INFO] received CSR2019/01/18 14:05:44 [INFO] generating key: rsa-20482019/01/18 14:05:46 [INFO] encoded CSR2019/01/18 14:05:46 [INFO] signed certificate with serial number 6334066509606166245905105766856085804902186762272019/01/18 14:05:46 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable forwebsites. 
For more information see the Baseline Requirements for the Issuance and Managementof Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);specifically, section 10.2.3 ("Information Requirements"). 检查生成的证书、私钥123456[root@hdss7-200 certs]# ls -l|grep apiservertotal 72-rw-r--r-- 1 root root 406 Jan 21 14:10 apiserver-csr.json-rw------- 1 root root 1675 Jan 21 14:11 apiserver-key.pem-rw-r--r-- 1 root root 1082 Jan 21 14:11 apiserver.csr-rw-r--r-- 1 root root 1599 Jan 21 14:11 apiserver.pem 拷贝证书至各运算节点,并创建配置HDSS7-21.host.com上: 拷贝证书、私钥,注意私钥文件属性600/opt/kubernetes/server/bin/cert12345678[root@hdss7-21 cert]# ls -l /opt/kubernetes/server/bin/certtotal 40-rw------- 1 root root 1676 Jan 21 16:39 apiserver-key.pem-rw-r--r-- 1 root root 1599 Jan 21 16:36 apiserver.pem-rw------- 1 root root 1675 Jan 21 13:55 ca-key.pem-rw-r--r-- 1 root root 1354 Jan 21 13:50 ca.pem-rw------- 1 root root 1679 Jan 21 13:53 client-key.pem-rw-r--r-- 1 root root 1368 Jan 21 13:53 client.pem 创建配置/opt/kubernetes/server/bin/conf/audit.yaml1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950515253545556575859606162636465666768apiVersion: audit.k8s.io/v1beta1 # This is required.kind: Policy# Don't generate audit events for all requests in RequestReceived stage.omitStages: - "RequestReceived"rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" # Resource "pods" doesn't match requests to any subresource of pods, # which is consistent with the RBAC policy. 
resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. # The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata # Long-running requests like watches that fall under this rule will not # generate an audit event in RequestReceived. 
omitStages: - "RequestReceived" 创建启动脚本HDSS7-21.host.com上: /opt/kubernetes/server/bin/kube-apiserver.sh1234567891011121314151617181920212223#!/bin/bash./kube-apiserver \ --apiserver-count 2 \ --audit-log-path /data/logs/kubernetes/kube-apiserver/audit-log \ --audit-policy-file ./conf/audit.yaml \ --authorization-mode RBAC \ --client-ca-file ./cert/ca.pem \ --requestheader-client-ca-file ./cert/ca.pem \ --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \ --etcd-cafile ./cert/ca.pem \ --etcd-certfile ./cert/client.pem \ --etcd-keyfile ./cert/client-key.pem \ --etcd-servers https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \ --service-account-key-file ./cert/ca-key.pem \ --service-cluster-ip-range 192.168.0.0/16 \ --service-node-port-range 3000-29999 \ --target-ram-mb=1024 \ --kubelet-client-certificate ./cert/client.pem \ --kubelet-client-key ./cert/client-key.pem \ --log-dir /data/logs/kubernetes/kube-apiserver \ --tls-cert-file ./cert/apiserver.pem \ --tls-private-key-file ./cert/apiserver-key.pem \ --v 2 调整权限和目录HDSS7-21.host.com上: /opt/kubernetes/server/bin12[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-apiserver.sh[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-apiserver 创建supervisor配置HDSS7-21.host.com上: /etc/supervisord.d/kube-apiserver.ini1234567891011121314151617181920212223[program:kube-apiserver]command=/opt/kubernetes/server/bin/kube-apiserver.sh ; the program (relative uses PATH, can take args)numprocs=1 ; number of processes copies to start (def 1)directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)autostart=true ; start at supervisord start (default: true)autorestart=true ; retstart at unexpected quit (default: true)startsecs=22 ; number of secs prog must stay running (def. 
1)startretries=3 ; max # of serial start failures (default 3)exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)stopsignal=QUIT ; signal used to kill process (default TERM)stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)user=root ; setuid to this UNIX account to run the programredirect_stderr=false ; redirect proc stderr to stdout (default false)stdout_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stdout.log ; stdout log path, NONE for none; default AUTOstdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stdout_events_enabled=false ; emit events on stdout writes (default false)stderr_logfile=/data/logs/kubernetes/kube-apiserver/apiserver.stderr.log ; stderr log path, NONE for none; default AUTOstderr_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stderr_logfile_backups=4 ; # of stderr logfile backups (default 10)stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stderr_events_enabled=false ; emit events on stderr writes (default false) 启动服务并检查HDSS7-21.host.com上: 12345[root@hdss7-21 bin]# supervisorctl updatekube-apiserverr: added process group[root@hdss7-21 bin]# supervisorctl statusetcd-server-7-21 RUNNING pid 6661, uptime 1 day, 8:41:13kube-apiserver RUNNING pid 43765, uptime 2:09:41 安装部署启动检查所有集群规划主机上的kube-apiserver略 配4层反向代理HDSS7-11.host.com,HDSS7-12.host.com上: nginx配置/etc/nginx/nginx.conf123456789101112stream { upstream kube-apiserver { server 10.4.7.21:6443 max_fails=3 fail_timeout=30s; server 10.4.7.22:6443 max_fails=3 fail_timeout=30s; } server { listen 7443; proxy_connect_timeout 2s; proxy_timeout 900s; proxy_pass kube-apiserver; }} keepalived配置check_port.sh/etc/keepalived/check_port.sh 123456789101112131415161718#!/bin/bash#keepalived 监控端口脚本#使用方法:#在keepalived的配置文件中#vrrp_script check_port {#创建一个vrrp_script脚本,检查配置# 
script "/etc/keepalived/check_port.sh 6379" #配置监听的端口# interval 2 #检查脚本的频率,单位(秒)#}CHK_PORT=$1if [ -n "$CHK_PORT" ];then PORT_PROCESS=`ss -lt|grep $CHK_PORT|wc -l` if [ $PORT_PROCESS -eq 0 ];then echo "Port $CHK_PORT Is Not Used,End." exit 1 fielse echo "Check Port Cant Be Empty!"fi keepalived主HDSS7-11.host.com上 12[root@hdss7-11 ~]# rpm -qa keepalivedkeepalived-1.3.5-6.el7.x86_64 /etc/keepalived/keepalived.conf123456789101112131415161718192021222324252627282930313233! Configuration File for keepalivedglobal_defs { router_id 10.4.7.11}vrrp_script chk_nginx { script "/etc/keepalived/check_port.sh 7443" interval 2 weight -20}vrrp_instance VI_1 { state MASTER interface eth0 virtual_router_id 251 priority 100 advert_int 1 mcast_src_ip 10.4.7.11 nopreempt authentication { auth_type PASS auth_pass 11111111 } track_script { chk_nginx } virtual_ipaddress { 10.4.7.10 }} keepalived备HDSS7-12.host.com上 12[root@hdss7-12 ~]# rpm -qa keepalivedkeepalived-1.3.5-6.el7.x86_64 /etc/keepalived/keepalived.conf123456789101112131415161718192021222324252627! 
Configuration File for keepalivedglobal_defs { router_id 10.4.7.12}vrrp_script chk_nginx { script "/etc/keepalived/check_port.sh 7443" interval 2 weight -20}vrrp_instance VI_1 { state BACKUP interface eth0 virtual_router_id 251 mcast_src_ip 10.4.7.12 priority 90 advert_int 1 authentication { auth_type PASS auth_pass 11111111 } track_script { chk_nginx } virtual_ipaddress { 10.4.7.10 }} 启动代理并检查HDSS7-11.host.com,HDSS7-12.host.com上: 启动 1234567[root@hdss7-11 ~]# systemctl start keepalived[root@hdss7-11 ~]# systemctl enable keepalived[root@hdss7-11 ~]# nginx -s reload[root@hdss7-12 ~]# systemctl start keepalived[root@hdss7-12 ~]# systemctl enable keepalived[root@hdss7-12 ~]# nginx -s reload 检查 12345678[root@hdss7-11 ~]# netstat -luntp|grep 7443tcp 0 0 0.0.0.0:7443 0.0.0.0:* LISTEN 17970/nginx: master[root@hdss7-12 ~]# netstat -luntp|grep 7443tcp 0 0 0.0.0.0:7443 0.0.0.0:* LISTEN 17970/nginx: master[root@hdss7-11 ~]# ip add|grep 10.4.7.10 inet 10.4.7.10/32 scope global eth0[root@hdss7-12 ~]# ip add|grep 10.4.7.10 (空) 部署controller-manager集群规划 主机名 角色 ip HDSS7-21.host.com controller-manager 10.4.7.21 HDSS7-22.host.com controller-manager 10.4.7.22 注意:这里部署文档以HDSS7-21.host.com主机为例,另外一台运算节点安装部署方法类似 创建启动脚本HDSS7-21.host.com上: /opt/kubernetes/server/bin/kube-controller-manager.sh12345678910#!/bin/sh./kube-controller-manager \ --cluster-cidr 172.7.0.0/16 \ --leader-elect true \ --log-dir /data/logs/kubernetes/kube-controller-manager \ --master http://127.0.0.1:8080 \ --service-account-private-key-file ./cert/ca-key.pem \ --service-cluster-ip-range 192.168.0.0/16 \ --root-ca-file ./cert/ca.pem \ --v 2 调整文件权限,创建目录HDSS7-21.host.com上: /opt/kubernetes/server/bin12[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-controller-manager.sh[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-controller-manager 创建supervisor配置HDSS7-21.host.com上: 
/etc/supervisord.d/kube-controller-manager.ini1234567891011121314151617181920212223[program:kube-controller-manager]command=/opt/kubernetes/server/bin/kube-controller-manager.sh ; the program (relative uses PATH, can take args)numprocs=1 ; number of processes copies to start (def 1)directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)autostart=true ; start at supervisord start (default: true)autorestart=true ; restart at unexpected quit (default: true)startsecs=22 ; number of secs prog must stay running (def. 1)startretries=3 ; max # of serial start failures (default 3)exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)stopsignal=QUIT ; signal used to kill process (default TERM)stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)user=root ; setuid to this UNIX account to run the programredirect_stderr=false ; redirect proc stderr to stdout (default false)stdout_logfile=/data/logs/kubernetes/kube-controller-manager/controll.stdout.log ; stdout log path, NONE for none; default AUTOstdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stdout_events_enabled=false ; emit events on stdout writes (default false)stderr_logfile=/data/logs/kubernetes/kube-controller-manager/controll.stderr.log ; stderr log path, NONE for none; default AUTOstderr_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stderr_logfile_backups=4 ; # of stderr logfile backups (default 10)stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stderr_events_enabled=false ; emit events on stderr writes (default false) 启动服务并检查HDSS7-21.host.com上: 123456[root@hdss7-21 bin]# supervisorctl updatekube-controller-manager: added process group[root@hdss7-21 bin]# supervisorctl status etcd-server-7-21 RUNNING pid 6661, uptime 1 day, 8:41:13kube-apiserver RUNNING 
pid 43765, uptime 2:09:41kube-controller-manager RUNNING pid 44230, uptime 2:05:01 安装部署启动检查所有集群规划主机上的kube-controller-manager服务略 部署kube-scheduler集群规划 主机名 角色 ip HDSS7-21.host.com kube-scheduler 10.4.7.21 HDSS7-22.host.com kube-scheduler 10.4.7.22 注意:这里部署文档以HDSS7-21.host.com主机为例,另外一台运算节点安装部署方法类似 创建启动脚本HDSS7-21.host.com上: /opt/kubernetes/server/bin/kube-scheduler.sh123456#!/bin/sh./kube-scheduler \ --leader-elect \ --log-dir /data/logs/kubernetes/kube-scheduler \ --master http://127.0.0.1:8080 \ --v 2 调整文件权限,创建目录HDSS7-21.host.com上: /opt/kubernetes/server/bin12[root@hdss7-21 bin]# chmod +x /opt/kubernetes/server/bin/kube-scheduler.sh[root@hdss7-21 bin]# mkdir -p /data/logs/kubernetes/kube-scheduler 创建supervisor配置HDSS7-21.host.com上: /etc/supervisord.d/kube-scheduler.ini1234567891011121314151617181920212223[program:kube-scheduler]command=/opt/kubernetes/server/bin/kube-scheduler.sh ; the program (relative uses PATH, can take args)numprocs=1 ; number of processes copies to start (def 1)directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)autostart=true ; start at supervisord start (default: true)autorestart=true ; retstart at unexpected quit (default: true)startsecs=22 ; number of secs prog must stay running (def. 
1)startretries=3 ; max # of serial start failures (default 3)exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)stopsignal=QUIT ; signal used to kill process (default TERM)stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)user=root ; setuid to this UNIX account to run the programredirect_stderr=false ; redirect proc stderr to stdout (default false)stdout_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stdout.log ; stdout log path, NONE for none; default AUTOstdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stdout_events_enabled=false ; emit events on stdout writes (default false)stderr_logfile=/data/logs/kubernetes/kube-scheduler/scheduler.stderr.log ; stderr log path, NONE for none; default AUTOstderr_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stderr_logfile_backups=4 ; # of stderr logfile backups (default 10)stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stderr_events_enabled=false ; emit events on stderr writes (default false) 启动服务并检查HDSS7-21.host.com上: 1234567[root@hdss7-21 bin]# supervisorctl updatekube-scheduler: added process group[root@hdss7-21 bin]# supervisorctl statusetcd-server-7-21 RUNNING pid 6661, uptime 1 day, 8:41:13kube-apiserver RUNNING pid 43765, uptime 2:09:41kube-controller-manager RUNNING pid 44230, uptime 2:05:01kube-scheduler RUNNING pid 44779, uptime 2:02:27 安装部署启动检查所有集群规划主机上的kube-scheduler服务略 部署Node节点服务部署kubelet集群规划 主机名 角色 ip HDSS7-21.host.com kubelet 10.4.7.21 HDSS7-22.host.com kubelet 10.4.7.22 注意:这里部署文档以HDSS7-21.host.com主机为例,另外一台运算节点安装部署方法类似 签发kubelet证书运维主机HDSS7-200.host.com上: 创建生成证书签名请求(csr)的JSON配置文件kubelet-csr.json12345678910111213141516171819202122232425262728{ "CN": "kubelet-node", "hosts": [ "127.0.0.1", "10.4.7.10", "10.4.7.21", "10.4.7.22", "10.4.7.23", "10.4.7.24", "10.4.7.25", 
"10.4.7.26", "10.4.7.27", "10.4.7.28" ], "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ]} 生成kubelet证书和私钥/opt/certs12345678910[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssljson -bare kubelet2019/01/18 17:51:16 [INFO] generate received request2019/01/18 17:51:16 [INFO] received CSR2019/01/18 17:51:16 [INFO] generating key: rsa-20482019/01/18 17:51:17 [INFO] encoded CSR2019/01/18 17:51:17 [INFO] signed certificate with serial number 488702681574151336980677123951523215469749434702019/01/18 17:51:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable forwebsites. For more information see the Baseline Requirements for the Issuance and Managementof Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);specifically, section 10.2.3 ("Information Requirements"). 检查生成的证书、私钥/opt/certs123456[root@hdss7-200 certs]# ls -l|grep kubelettotal 88-rw-r--r-- 1 root root 415 Jan 22 16:58 kubelet-csr.json-rw------- 1 root root 1679 Jan 22 17:00 kubelet-key.pem-rw-r--r-- 1 root root 1086 Jan 22 17:00 kubelet.csr-rw-r--r-- 1 root root 1456 Jan 22 17:00 kubelet.pem 拷贝证书至各运算节点,并创建配置HDSS7-21.host.com上: 拷贝证书、私钥,注意私钥文件属性600/opt/kubernetes/server/bin/cert12345678910[root@hdss7-21 cert]# ls -l /opt/kubernetes/server/bin/certtotal 40-rw------- 1 root root 1676 Jan 21 16:39 apiserver-key.pem-rw-r--r-- 1 root root 1599 Jan 21 16:36 apiserver.pem-rw------- 1 root root 1675 Jan 21 13:55 ca-key.pem-rw-r--r-- 1 root root 1354 Jan 21 13:50 ca.pem-rw------- 1 root root 1679 Jan 21 13:53 client-key.pem-rw-r--r-- 1 root root 1368 Jan 21 13:53 client.pem-rw------- 1 root root 1679 Jan 22 17:00 kubelet-key.pem-rw-r--r-- 1 root root 1456 Jan 22 17:00 kubelet.pem 创建配置HDSS7-21.host.com上: 给kubectl创建软连接/opt/kubernetes/server/bin123[root@hdss7-21 bin]# ln -s /opt/kubernetes/server/bin/kubectl 
/usr/bin/kubectl[root@hdss7-21 bin]# which kubectl/usr/bin/kubectl set-cluster注意:在conf目录下 /opt/kubernetes/server/conf1234567[root@hdss7-21 conf]# kubectl config set-cluster myk8s \ --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \ --embed-certs=true \ --server=https://10.4.7.10:7443 \ --kubeconfig=kubelet.kubeconfigCluster "myk8s" set. set-credentials注意:在conf目录下 /opt/kubernetes/server/conf123[root@hdss7-21 conf]# kubectl config set-credentials k8s-node --client-certificate=/opt/kubernetes/server/bin/cert/client.pem --client-key=/opt/kubernetes/server/bin/cert/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig User "k8s-node" set. set-context注意:在conf目录下 /opt/kubernetes/server/conf123456[root@hdss7-21 conf]# kubectl config set-context myk8s-context \ --cluster=myk8s \ --user=k8s-node \ --kubeconfig=kubelet.kubeconfigContext "myk8s-context" created. use-context注意:在conf目录下 /opt/kubernetes/server/conf123[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfigSwitched to context "myk8s-context". k8s-node.yaml 创建资源配置文件 /opt/kubernetes/server/bin/conf/k8s-node.yaml123456789101112apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: k8s-noderoleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:nodesubjects:- apiGroup: rbac.authorization.k8s.io kind: User name: k8s-node 应用资源配置文件 /opt/kubernetes/server/conf123[root@hdss7-21 conf]# kubectl create -f k8s-node.yamlclusterrolebinding.rbac.authorization.k8s.io/k8s-node created 检查 /opt/kubernetes/server/conf123[root@hdss7-21 conf]# kubectl get clusterrolebinding k8s-nodeNAME AGEk8s-node 3m 准备infra_pod基础镜像运维主机HDSS7-200.host.com上: 下载12345678[root@hdss7-200 ~]# docker pull xplenty/rhel7-pod-infrastructure:v3.4Trying to pull repository docker.io/xplenty/rhel7-pod-infrastructure ... 
sha256:9314554780673b821cb7113d8c048a90d15077c6e7bfeebddb92a054a1f84843: Pulling from docker.io/xplenty/rhel7-pod-infrastructure615bc035f9f8: Pull complete 1c5fd9dfeaa8: Pull complete 7653a8c7f937: Pull complete Digest: sha256:9314554780673b821cb7113d8c048a90d15077c6e7bfeebddb92a054a1f84843Status: Downloaded newer image for docker.io/xplenty/rhel7-pod-infrastructure:v3.4 提交至私有仓库(harbor)中 配置主机登录私有仓库 /root/.docker/config.json1234567{ "auths": { "harbor.od.com": { "auth": "YWRtaW46SGFyYm9yMTIzNDU=" } }} 这里代表:用户名admin,密码Harbor12345[root@hdss7-200 ~]# echo YWRtaW46SGFyYm9yMTIzNDU=|base64 -dadmin:Harbor12345 注意:也可以在各运算节点使用docker login harbor.od.com,输入用户名,密码 给镜像打tag 123[root@hdss7-200 ~]# docker images|grep v3.4xplenty/rhel7-pod-infrastructure v3.4 34d3450d733b 2 years ago 205 MB[root@hdss7-200 ~]# docker tag 34d3450d733b harbor.od.com/k8s/pod:v3.4 push到harbor 123456[root@hdss7-200 ~]# docker push harbor.od.com/k8s/pod:v3.4The push refers to a repository [harbor.od.com/k8s/pod]ba3d4cbbb261: Pushed 0a081b45cb84: Pushed df9d2808b9a9: Pushed v3.4: digest: sha256:73cc48728e707b74f99d17b4e802d836e22d373aee901fdcaa781b056cdabf5c size: 948 创建kubelet启动脚本HDSS7-21.host.com上: /opt/kubernetes/server/bin/kubelet-721.sh123456789101112131415161718#!/bin/sh./kubelet \ --anonymous-auth=false \ --cgroup-driver systemd \ --cluster-dns 192.168.0.2 \ --cluster-domain cluster.local \ --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice \ --fail-swap-on="false" \ --client-ca-file ./cert/ca.pem \ --tls-cert-file ./cert/kubelet.pem \ --tls-private-key-file ./cert/kubelet-key.pem \ --hostname-override 10.4.7.21 \ --image-gc-high-threshold 20 \ --image-gc-low-threshold 10 \ --kubeconfig ./conf/kubelet.kubeconfig \ --log-dir /data/logs/kubernetes/kube-kubelet \ --pod-infra-container-image harbor.od.com/k8s/pod:v3.4 \ --root-dir /data/kubelet 注意:kubelet集群各主机的启动脚本略有不同,部署其他节点时注意修改。 检查配置,权限,创建日志目录HDSS7-21.host.com上: /opt/kubernetes/server/conf12345[root@hdss7-21 conf]# ls 
-l|grep kubelet.kubeconfig -rw------- 1 root root 6471 Jan 22 17:33 kubelet.kubeconfig[root@hdss7-21 conf]# chmod +x /opt/kubernetes/server/bin/kubelet-721.sh[root@hdss7-21 conf]# mkdir -p /data/logs/kubernetes/kube-kubelet /data/kubelet 创建supervisor配置HDSS7-21.host.com上: /etc/supervisord.d/kube-kubelet.ini1234567891011121314151617181920212223[program:kube-kubelet]command=/opt/kubernetes/server/bin/kubelet-721.sh ; the program (relative uses PATH, can take args)numprocs=1 ; number of processes copies to start (def 1)directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)autostart=true ; start at supervisord start (default: true)autorestart=true ; retstart at unexpected quit (default: true)startsecs=22 ; number of secs prog must stay running (def. 1)startretries=3 ; max # of serial start failures (default 3)exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)stopsignal=QUIT ; signal used to kill process (default TERM)stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)user=root ; setuid to this UNIX account to run the programredirect_stderr=false ; redirect proc stderr to stdout (default false)stdout_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stdout.log ; stdout log path, NONE for none; default AUTOstdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stdout_events_enabled=false ; emit events on stdout writes (default false)stderr_logfile=/data/logs/kubernetes/kube-kubelet/kubelet.stderr.log ; stderr log path, NONE for none; default AUTOstderr_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stderr_logfile_backups=4 ; # of stderr logfile backups (default 10)stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stderr_events_enabled=false ; emit events on stderr writes (default false) 启动服务并检查HDSS7-21.host.com上: 
12345678[root@hdss7-21 bin]# supervisorctl updatekube-kubelet: added process group[root@hdss7-21 bin]# supervisorctl statusetcd-server-7-21 RUNNING pid 9507, uptime 22:44:48kube-apiserver RUNNING pid 9770, uptime 21:10:49kube-controller-manager RUNNING pid 10048, uptime 18:22:10kube-kubelet STARTING kube-scheduler RUNNING pid 10041, uptime 18:22:13 检查运算节点HDSS7-21.host.com上: 123[root@hdss7-21 bin]# kubectl get nodeNAME STATUS ROLES AGE VERSION10.4.7.21 Ready <none> 3m v1.13.2 非常重要! 安装部署启动检查所有集群规划主机上的kubelet服务略 部署kube-proxy集群规划 主机名 角色 ip HDSS7-21.host.com kube-proxy 10.4.7.21 HDSS7-22.host.com kube-proxy 10.4.7.22 注意:这里部署文档以HDSS7-21.host.com主机为例,另外一台运算节点安装部署方法类似 签发kube-proxy证书运维主机HDSS7-200.host.com上: 创建生成证书签名请求(csr)的JSON配置文件/opt/certs/kube-proxy-csr.json12345678910111213141516{ "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "beijing", "L": "beijing", "O": "od", "OU": "ops" } ]} 生成kube-proxy证书和私钥/opt/certs12345678910[root@hdss7-200 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssljson -bare kube-proxy-client2019/01/18 18:14:23 [INFO] generate received request2019/01/18 18:14:23 [INFO] received CSR2019/01/18 18:14:23 [INFO] generating key: rsa-20482019/01/18 18:14:23 [INFO] encoded CSR2019/01/18 18:14:23 [INFO] signed certificate with serial number 3757971455886547140992587508738205281270283906812019/01/18 18:14:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable forwebsites. For more information see the Baseline Requirements for the Issuance and Managementof Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);specifically, section 10.2.3 ("Information Requirements"). 
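顺带补充一个与上面签发步骤相关的小技巧(非教程原文):证书签出来之后、分发到各运算节点之前,可以用 openssl 核对一下证书的 subject 是否符合预期(kube-proxy 对接 RBAC 依赖 CN 为 system:kube-proxy)。下面的示例为了自包含,用一张临时自签证书代替真实的 kube-proxy-client.pem,/tmp 下的文件名均为演示用途:

```shell
# 演示用:生成一张临时自签证书(代替真实的 kube-proxy-client.pem),
# 再用 openssl 查看其 subject,核对 CN 是否为 system:kube-proxy
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo.pem -days 1 \
  -subj "/CN=system:kube-proxy/O=od/OU=ops" 2>/dev/null
openssl x509 -in /tmp/demo.pem -noout -subject
```

实际操作时把 /tmp/demo.pem 换成 /opt/certs/kube-proxy-client.pem 即可,输出的 subject 中应能看到 CN 为 system:kube-proxy。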
检查生成的证书、私钥/opt/certs12345[root@hdss7-200 certs]# ls -l|grep kube-proxy-rw------- 1 root root 1679 Jan 22 17:31 kube-proxy-client-key.pem-rw-r--r-- 1 root root 1005 Jan 22 17:31 kube-proxy-client.csr-rw-r--r-- 1 root root 1383 Jan 22 17:31 kube-proxy-client.pem-rw-r--r-- 1 root root 268 Jan 22 17:23 kube-proxy-csr.json 拷贝证书至各运算节点,并创建配置HDSS7-21.host.com上: 拷贝证书、私钥,注意私钥文件属性600/opt/kubernetes/server/bin/cert123456789101112[root@hdss7-21 cert]# ls -l /opt/kubernetes/server/bin/certtotal 40-rw------- 1 root root 1676 Jan 21 16:39 apiserver-key.pem-rw-r--r-- 1 root root 1599 Jan 21 16:36 apiserver.pem-rw------- 1 root root 1675 Jan 21 13:55 ca-key.pem-rw-r--r-- 1 root root 1354 Jan 21 13:50 ca.pem-rw------- 1 root root 1679 Jan 21 13:53 client-key.pem-rw-r--r-- 1 root root 1368 Jan 21 13:53 client.pem-rw------- 1 root root 1679 Jan 22 17:00 kubelet-key.pem-rw-r--r-- 1 root root 1456 Jan 22 17:00 kubelet.pem-rw------- 1 root root 1679 Jan 22 17:31 kube-proxy-client-key.pem-rw-r--r-- 1 root root 1383 Jan 22 17:31 kube-proxy-client.pem 创建配置set-cluster注意:在conf目录下 /opt/kubernetes/server/bin/conf1234567[root@hdss7-21 conf]# kubectl config set-cluster myk8s \ --certificate-authority=/opt/kubernetes/server/bin/cert/ca.pem \ --embed-certs=true \ --server=https://10.4.7.10:7443 \ --kubeconfig=kube-proxy.kubeconfigCluster "myk8s" set. set-credentials注意:在conf目录下 /opt/kubernetes/server/bin/conf1234567[root@hdss7-21 conf]# kubectl config set-credentials kube-proxy \ --client-certificate=/opt/kubernetes/server/bin/cert/kube-proxy-client.pem \ --client-key=/opt/kubernetes/server/bin/cert/kube-proxy-client-key.pem \ --embed-certs=true \ --kubeconfig=kube-proxy.kubeconfigUser "kube-proxy" set. set-context注意:在conf目录下 /opt/kubernetes/server/bin/conf123456[root@hdss7-21 conf]# kubectl config set-context myk8s-context \ --cluster=myk8s \ --user=kube-proxy \ --kubeconfig=kube-proxy.kubeconfigContext "myk8s-context" created. 
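上面 set-cluster / set-credentials 两步里的 --embed-certs=true,作用是把 PEM 证书内容做 base64 编码后直接内嵌进 kubeconfig(对应 certificate-authority-data、client-certificate-data 等字段),这样单个 kubeconfig 文件即可分发。下面用一段占位字符串(非真实证书)示意这个编码过程是可逆的:

```shell
# 示意 --embed-certs 的本质:PEM 文本 base64 编码后写入 kubeconfig,
# 读取时再解码还原。此处用占位内容代替真实证书。
pem='-----BEGIN CERTIFICATE-----
ZGVtbyBvbmx5
-----END CERTIFICATE-----'
encoded=$(printf '%s' "$pem" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
# 编码可逆,成立则输出 roundtrip ok
[ "$decoded" = "$pem" ] && echo "roundtrip ok"
```

也因此,嵌入证书后的 kubeconfig 属于敏感文件,权限保持 600(与上文私钥文件的要求一致)。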
use-context注意:在conf目录下 /opt/kubernetes/server/bin/conf123[root@hdss7-21 conf]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfigSwitched to context "myk8s-context". 创建kube-proxy启动脚本HDSS7-21.host.com上: /opt/kubernetes/server/bin/kube-proxy-721.sh12345#!/bin/sh./kube-proxy \ --cluster-cidr 172.7.0.0/16 \ --hostname-override 10.4.7.21 \ --kubeconfig ./conf/kube-proxy.kubeconfig 注意:kube-proxy集群各主机的启动脚本略有不同,部署其他节点时注意修改。 检查配置,权限,创建日志目录HDSS7-21.host.com上: /opt/kubernetes/server/conf12345[root@hdss7-21 conf]# ls -l|grep kube-proxy.kubeconfig -rw------- 1 root root 6471 Jan 22 17:33 kube-proxy.kubeconfig[root@hdss7-21 conf]# chmod +x /opt/kubernetes/server/bin/kube-proxy-721.sh[root@hdss7-21 conf]# mkdir -p /data/logs/kubernetes/kube-proxy 创建supervisor配置HDSS7-21.host.com上: /etc/supervisord.d/kube-proxy.ini1234567891011121314151617181920212223[program:kube-proxy]command=/opt/kubernetes/server/bin/kube-proxy-721.sh ; the program (relative uses PATH, can take args)numprocs=1 ; number of processes copies to start (def 1)directory=/opt/kubernetes/server/bin ; directory to cwd to before exec (def no cwd)autostart=true ; start at supervisord start (default: true)autorestart=true ; retstart at unexpected quit (default: true)startsecs=22 ; number of secs prog must stay running (def. 
1)startretries=3 ; max # of serial start failures (default 3)exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)stopsignal=QUIT ; signal used to kill process (default TERM)stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)user=root ; setuid to this UNIX account to run the programredirect_stderr=false ; redirect proc stderr to stdout (default false)stdout_logfile=/data/logs/kubernetes/kube-proxy/proxy.stdout.log ; stdout log path, NONE for none; default AUTOstdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stdout_events_enabled=false ; emit events on stdout writes (default false)stderr_logfile=/data/logs/kubernetes/kube-proxy/proxy.stderr.log ; stderr log path, NONE for none; default AUTOstderr_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stderr_logfile_backups=4 ; # of stderr logfile backups (default 10)stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stderr_events_enabled=false ; emit events on stderr writes (default false) 启动服务并检查HDSS7-21.host.com上: 123456789[root@hdss7-21 bin]# supervisorctl updatekube-proxy: added process group[root@hdss7-21 bin]# supervisorctl statusetcd-server-7-21 RUNNING pid 9507, uptime 22:44:48kube-apiserver RUNNING pid 9770, uptime 21:10:49kube-controller-manager RUNNING pid 10048, uptime 18:22:10kube-kubelet RUNNING pid 14597, uptime 0:32:59kube-proxy STARTING kube-scheduler RUNNING pid 10041, uptime 18:22:13 安装部署启动检查所有集群规划主机上的kube-proxy服务略 部署addons插件验证kubernetes集群在任意一个运算节点,创建一个资源配置清单这里我们选择HDSS7-21.host.com主机 /root/nginx-ds.yaml12345678910111213141516171819202122232425262728293031323334apiVersion: v1kind: Servicemetadata: name: nginx-ds labels: app: nginx-dsspec: type: NodePort selector: app: nginx-ds ports: - name: http port: 80 targetPort: 80---apiVersion: extensions/v1beta1kind: 
DaemonSetmetadata: name: nginx-ds labels: addonmanager.kubernetes.io/mode: Reconcilespec: template: metadata: labels: app: nginx-ds spec: containers: - name: my-nginx image: nginx:1.7.9 ports: - containerPort: 80 应用资源配置,并检查/root12345[root@hdss7-21 ~]# kubectl create -f nginx-ds.yaml[root@hdss7-21 ~]# kubectl get podsNAME READY STATUS RESTARTS AGEnginx-ds-6hnc7 1/1 Running 0 99mnginx-ds-m5q6j 1/1 Running 0 18h 验证补 部署flannel集群规划 主机名 角色 ip HDSS7-21.host.com flannel 10.4.7.21 HDSS7-22.host.com flannel 10.4.7.22 注意:这里部署文档以HDSS7-21.host.com主机为例,另外一台运算节点安装部署方法类似 在各运算节点上增加iptables规则注意:iptables规则各主机的略有不同,其他运算节点上执行时注意修改。 优化SNAT规则,各运算节点之间的各POD之间的网络通信不再出网 12# iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE# iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE 10.4.7.21主机上的,来源是172.7.21.0/24段的docker的ip,目标ip不是172.7.0.0/16段,网络发包不从docker0桥设备出站的,才进行SNAT转换 各运算节点保存iptables规则1[root@hdss7-21 ~]# iptables-save > /etc/sysconfig/iptables 下载软件,解压,做软连接HDSS7-21.host.com上: /opt/src12345678[root@hdss7-21 src]# ls -l|grep flannel-rw-r--r-- 1 root root 417761204 Jan 17 18:46 flannel-v0.10.0-linux-amd64.tar.gz[root@hdss7-21 src]# mkdir -p /opt/flannel-v0.10.0-linux-amd64/cert[root@hdss7-21 src]# tar xf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/flannel-v0.10.0-linux-amd64[root@hdss7-21 src]# ln -s /opt/flannel-v0.10.0-linux-amd64 /opt/flannel[root@hdss7-21 src]# ls -l /opt|grep flannellrwxrwxrwx 1 root root 31 Jan 17 18:49 flannel -> flannel-v0.10.0-linux-amd64/drwxr-xr-x 4 root root 50 Jan 17 18:47 flannel-v0.10.0-linux-amd64 最终目录结构/opt123456789101112131415161718192021222324252627[root@hdss7-21 opt]# tree -L 2.|-- etcd -> etcd-v3.1.18-linux-amd64|-- etcd-v3.1.18-linux-amd64| |-- Documentation| |-- README-etcdctl.md| |-- README.md| |-- READMEv2-etcdctl.md| |-- certs| |-- etcd| |-- etcd-server-startup.sh| `-- etcdctl|-- flannel -> flannel-v0.10.0/|-- flannel-v0.10.0| |-- README.md| |-- cert| |-- flanneld| `-- mk-docker-opts.sh|-- 
kubernetes -> kubernetes-v1.13.2-linux-amd64/|-- kubernetes-v1.13.2-linux-amd64| |-- LICENSES| |-- addons| `-- server`-- src |-- etcd-v3.1.18-linux-amd64.tar.gz |-- flannel-v0.10.0-linux-amd64.tar.gz `-- kubernetes-server-linux-amd64.tar.gz 操作etcd,增加host-gwHDSS7-21.host.com上: /opt/etcd12[root@hdss7-21 etcd]# ./etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}} 创建配置HDSS7-21.host.com上: /opt/flannel/subnet.env1234FLANNEL_NETWORK=172.7.0.0/16FLANNEL_SUBNET=172.7.21.1/24FLANNEL_MTU=1500FLANNEL_IPMASQ=false 注意:flannel集群各主机的配置略有不同,部署其他节点时注意修改。 创建启动脚本HDSS7-21.host.com上: /opt/flannel/flanneld.sh12345678910#!/bin/sh./flanneld \ --public-ip=10.4.7.21 \ --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \ --etcd-keyfile=./cert/client-key.pem \ --etcd-certfile=./cert/client.pem \ --etcd-cafile=./cert/ca.pem \ --iface=eth0 \ --subnet-file=./subnet.env \ --healthz-port=2401 注意:flannel集群各主机的启动脚本略有不同,部署其他节点时注意修改。 检查配置,权限,创建日志目录HDSS7-21.host.com上: /opt/flannel123[root@hdss7-21 flannel]# chmod +x /opt/flannel/flanneld.sh [root@hdss7-21 flannel]# mkdir -p /data/logs/flanneld 创建supervisor配置HDSS7-21.host.com上: /etc/supervisord.d/flanneld.ini1234567891011121314151617181920212223[program:flanneld]command=/opt/flannel/flanneld.sh ; the program (relative uses PATH, can take args)numprocs=1 ; number of processes copies to start (def 1)directory=/opt/flannel ; directory to cwd to before exec (def no cwd)autostart=true ; start at supervisord start (default: true)autorestart=true ; retstart at unexpected quit (default: true)startsecs=22 ; number of secs prog must stay running (def. 
1)startretries=3 ; max # of serial start failures (default 3)exitcodes=0,2 ; 'expected' exit codes for process (default 0,2)stopsignal=QUIT ; signal used to kill process (default TERM)stopwaitsecs=10 ; max num secs to wait b4 SIGKILL (default 10)user=root ; setuid to this UNIX account to run the programredirect_stderr=false ; redirect proc stderr to stdout (default false)stdout_logfile=/data/logs/flanneld/flanneld.stdout.log ; stdout log path, NONE for none; default AUTOstdout_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stdout_logfile_backups=4 ; # of stdout logfile backups (default 10)stdout_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stdout_events_enabled=false ; emit events on stdout writes (default false)stderr_logfile=/data/logs/flanneld/flanneld.stderr.log ; stderr log path, NONE for none; default AUTOstderr_logfile_maxbytes=64MB ; max # logfile bytes b4 rotation (default 50MB)stderr_logfile_backups=4 ; # of stderr logfile backups (default 10)stderr_capture_maxbytes=1MB ; number of bytes in 'capturemode' (default 0)stderr_events_enabled=false ; emit events on stderr writes (default false) 启动服务并检查HDSS7-21.host.com上: 12345678910[root@hdss7-21 flanneld]# supervisorctl updateflanneld: added process group[root@hdss7-21 flanneld]# supervisorctl statusetcd-server-7-21 RUNNING pid 9507, uptime 1 day, 20:35:42flanneld STARTING kube-apiserver RUNNING pid 9770, uptime 1 day, 19:01:43kube-controller-manager RUNNING pid 37646, uptime 0:58:48kube-kubelet RUNNING pid 32640, uptime 17:16:36kube-proxy RUNNING pid 15097, uptime 17:55:36kube-scheduler RUNNING pid 37803, uptime 0:55:47 安装部署启动检查所有集群规划主机上的flannel服务略 再次验证集群部署k8s资源配置清单的内网http服务在运维主机HDSS7-200.host.com上,配置一个nginx虚拟主机,用以提供k8s统一的资源配置清单访问入口/etc/nginx/conf.d/k8s-yaml.od.com.conf12345678910server { listen 80; server_name k8s-yaml.od.com; location / { autoindex on; default_type text/plain; root /data/k8s-yaml; }} 配置内网DNS解析HDSS7-11.host.com上 /var/named/od.com.zone1k8s-yaml 60 
IN A 10.4.7.200 以后所有的资源配置清单统一放置在运维主机的/data/k8s-yaml目录下即可 1[root@hdss7-200 ~]# nginx -s reload 部署kube-dns(coredns)准备coredns-v1.3.1镜像运维主机HDSS7-200.host.com上: 12345678910111213[root@hdss7-200 ~]# docker pull coredns/coredns:1.3.11.3.1: Pulling from coredns/corednse0daa8927b68: Pull complete 3928e47de029: Pull complete Digest: sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4Status: Downloaded newer image for coredns/coredns:1.3.1[root@hdss7-200 ~]# docker tag eb516548c180 harbor.od.com/k8s/coredns:v1.3.1[root@hdss7-200 ~]# docker push harbor.od.com/k8s/coredns:v1.3.1The push refers to a repository [harbor.od.com/k8s/coredns]c6a5fc8a3f01: Pushed fb61a074724d: Pushed v1.3.1: digest: sha256:e077b9680c32be06fc9652d57f64aa54770dd6554eb87e7a00b97cf8e9431fda size: 739 任意一台运算节点上: 1[root@hdss7-21 ~]# kubectl create secret docker-registry harbor --docker-server=harbor.od.com --docker-username=admin --docker-password=Harbor12345 [email protected] -n kube-system 准备资源配置清单运维主机HDSS7-200.host.com上: 1[root@hdss7-200 ~]# mkdir -p /data/k8s-yaml/coredns && cd /data/k8s-yaml/coredns RBACConfigMapDeploymentServicevi /data/k8s-yaml/coredns/rbac.yaml 123456789101112131415161718192021222324252627282930313233343536373839404142434445apiVersion: v1kind: ServiceAccountmetadata: name: coredns namespace: kube-system labels: kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcile---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRolemetadata: labels: kubernetes.io/bootstrapping: rbac-defaults addonmanager.kubernetes.io/mode: Reconcile name: system:corednsrules:- apiGroups: - "" resources: - endpoints - services - pods - namespaces verbs: - list - watch---apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" labels: kubernetes.io/bootstrapping: rbac-defaults addonmanager.kubernetes.io/mode: EnsureExists name: 
system:corednsroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:corednssubjects:- kind: ServiceAccount name: coredns namespace: kube-systemvi /data/k8s-yaml/coredns/configmap.yaml 123456789101112131415apiVersion: v1kind: ConfigMapmetadata: name: coredns namespace: kube-systemdata: Corefile: | .:53 { errors log health kubernetes cluster.local 192.168.0.0/16 proxy . /etc/resolv.conf cache 30 }vi /data/k8s-yaml/coredns/deployment.yaml 12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455apiVersion: extensions/v1beta1kind: Deploymentmetadata: name: coredns namespace: kube-system labels: k8s-app: coredns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: replicas: 1 selector: matchLabels: k8s-app: coredns template: metadata: labels: k8s-app: coredns spec: serviceAccountName: coredns containers: - name: coredns image: harbor.od.com/k8s/coredns:v1.3.1 args: - -conf - /etc/coredns/Corefile volumeMounts: - name: config-volume mountPath: /etc/coredns ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP livenessProbe: httpGet: path: /health port: 8080 scheme: HTTP initialDelaySeconds: 60 timeoutSeconds: 5 successThreshold: 1 failureThreshold: 5 dnsPolicy: Default imagePullSecrets: - name: harbor volumes: - name: config-volume configMap: name: coredns items: - key: Corefile path: Corefilevi /data/k8s-yaml/coredns/svc.yaml 12345678910111213141516171819apiVersion: v1kind: Servicemetadata: name: coredns namespace: kube-system labels: k8s-app: coredns kubernetes.io/cluster-service: "true" kubernetes.io/name: "CoreDNS"spec: selector: k8s-app: coredns clusterIP: 192.168.0.2 ports: - name: dns port: 53 protocol: UDP - name: dns-tcp port: 53 依次执行创建浏览器打开:http://k8s-yaml.od.com/coredns 检查资源配置清单文件是否正确创建在任意运算节点上应用资源配置清单 12345678910[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yamlserviceaccount/coredns 
createdclusterrole.rbac.authorization.k8s.io/system:coredns createdclusterrolebinding.rbac.authorization.k8s.io/system:coredns created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/configmap.yamlconfigmap/coredns created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/deployment.yamldeployment.extensions/coredns created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/svc.yamlservice/coredns created 检查12345678910[root@hdss7-21 ~]# kubectl get pods -n kube-system -o wideNAME READY STATUS RESTARTS AGEcoredns-7ccccdf57c-5b9ch 1/1 Running 0 3m4s[root@hdss7-21 coredns]# kubectl get svc -n kube-systemNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEcoredns ClusterIP 192.168.0.2 <none> 53/UDP,53/TCP 29s[root@hdss7-21 ~]# dig -t A nginx-ds.default.svc.cluster.local. @192.168.0.2 +short192.168.0.3 部署traefik(ingress)准备traefik镜像运维主机HDSS7-200.host.com上: 12345678910111213141516[root@hdss7-200 ~]# docker pull traefik:v1.7-alpinev1.7-alpine: Pulling from library/traefikbdf0201b3a05: Pull complete 9dfd896cc066: Pull complete de06b5685128: Pull complete c4d82a21fa27: Pull complete Digest: sha256:0531581bde9da0670fc2c7a4e419e1cc38abff74e7ba06410bf2b1b55c70ef15Status: Downloaded newer image for traefik:v1.7-alpine[root@hdss7-200 ~]# docker tag 1930b7508541 harbor.od.com/k8s/traefik:v1.7 [root@hdss7-200 ~]# docker push harbor.od.com/k8s/traefik:v1.7The push refers to a repository [harbor.od.com/k8s/traefik]a3e3d574f6ae: Pushed a7c355c1a104: Pushed e89059911fc9: Pushed a464c54f93a9: Mounted from infra/apollo-portal v1.7: digest: sha256:8f92899f5feb08db600c89d3016145e838fa7ff0d316ee21ecd63d9623643410 size: 1157 准备资源配置清单运维主机HDSS7-200.host.com上: 1[root@hdss7-200 ~]# mkdir -p /data/k8s-yaml/traefik && cd /data/k8s-yaml/traefik RBACDaemonSetServiceIngressvi /data/k8s-yaml/traefik/rbac.yaml 123456789101112131415161718192021222324252627282930313233343536373839404142apiVersion: v1kind: ServiceAccountmetadata: name: traefik-ingress-controller 
namespace: kube-system\--\-apiVersion: rbac.authorization.k8s.io/v1beta1kind: ClusterRolemetadata: name: traefik-ingress-controllerrules: - apiGroups: - "" resources: - services - endpoints - secrets verbs: - get - list - watch - apiGroups: - extensions resources: - ingresses verbs: - get - list - watch\--\-kind: ClusterRoleBindingapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: traefik-ingress-controllerroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: traefik-ingress-controllersubjects:- kind: ServiceAccount name: traefik-ingress-controller namespace: kube-systemvi /data/k8s-yaml/traefik/daemonset.yaml 1234567891011121314151617181920212223242526272829303132333435363738394041424344apiVersion: extensions/v1beta1kind: DaemonSetmetadata: name: traefik-ingress-controller namespace: kube-system labels: k8s-app: traefik-ingress-lbspec: template: metadata: labels: k8s-app: traefik-ingress-lb name: traefik-ingress-lb spec: serviceAccountName: traefik-ingress-controller terminationGracePeriodSeconds: 60 containers: - image: harbor.od.com/k8s/traefik:v1.7 name: traefik-ingress-lb ports: - name: http containerPort: 80 hostPort: 81 - name: admin containerPort: 8080 securityContext: capabilities: drop: - ALL add: - NET_BIND_SERVICE args: - -\-api - -\-kubernetes - -\-logLevel=INFO - -\-insecureskipverify=true - -\-kubernetes.endpoint=https://10.4.7.10:7443 - -\-accesslog - -\-accesslog.filepath=/var/log/traefik_access.log - -\-traefiklog - -\-traefiklog.filepath=/var/log/traefik.log - -\-metrics.prometheus imagePullSecrets: - name: harborvi /data/k8s-yaml/traefik/svc.yaml 123456789101112131415kind: ServiceapiVersion: v1metadata: name: traefik-ingress-service namespace: kube-systemspec: selector: k8s-app: traefik-ingress-lb ports: - protocol: TCP port: 80 name: web - protocol: TCP port: 8080 name: adminvi /data/k8s-yaml/traefik/ingress.yaml 123456789101112131415apiVersion: extensions/v1beta1kind: Ingressmetadata: name: traefik-web-ui namespace: 
kube-system annotations: kubernetes.io/ingress.class: traefikspec: rules: - host: traefik.od.com http: paths: - backend: serviceName: traefik-ingress-service servicePort: 8080 解析域名HDSS7-11.host.com上 /var/named/od.com.zone1traefik 60 IN A 10.4.7.10 依次执行创建浏览器打开:http://k8s-yaml.od.com/traefik 检查资源配置清单文件是否正确创建在任意运算节点应用资源配置清单 12345678910111213[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/rbac.yaml serviceaccount/traefik-ingress-controller createdclusterrole.rbac.authorization.k8s.io/traefik-ingress-controller createdclusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/daemonset.yaml daemonset.extensions/traefik-ingress-controller created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/svc.yaml service/traefik-ingress-service created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/traefik/ingress.yaml ingress.extensions/traefik-web-ui created 配置反代HDSS7-11.host.com和HDSS7-12.host.com两台主机上的nginx均需要配置,这里可以考虑使用saltstack或者ansible进行统一配置管理 /etc/nginx/conf.d/od.com.conf12345678910111213upstream default_backend_traefik { server 10.4.7.21:81 max_fails=3 fail_timeout=10s; server 10.4.7.22:81 max_fails=3 fail_timeout=10s;}server { server_name *.od.com; location / { proxy_pass http://default_backend_traefik; proxy_set_header Host $http_host; proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for; }} 浏览器访问http://traefik.od.com 部署dashboard准备dashboard镜像运维主机HDSS7-200.host.com上: 1234567891011[root@hdss7-200 ~]# docker pull k8scn/kubernetes-dashboard-amd64:v1.8.3v1.8.3: Pulling from k8scn/kubernetes-dashboard-amd64a4026007c47e: Pull complete Digest: sha256:ebc993303f8a42c301592639770bd1944d80c88be8036e2d4d0aa116148264ffStatus: Downloaded newer image for k8scn/kubernetes-dashboard-amd64:v1.8.3[root@hdss7-200 ~]# docker tag 0c60bcf89900 harbor.od.com/k8s/dashboard:v1.8.3[root@hdss7-200 ~]# docker push harbor.od.com/k8s/dashboard:v1.8.3docker push 
harbor.od.com/k8s/dashboard:v1.8.3The push refers to a repository [harbor.od.com/k8s/dashboard]23ddb8cbb75a: Pushed v1.8.3: digest: sha256:e76c5fe6886c99873898e4c8c0945261709024c4bea773fc477629455631e472 size: 529 准备资源配置清单运维主机HDSS7-200.host.com上: 1[root@hdss7-200 ~]# mkdir -p /data/k8s-yaml/dashboard && cd /data/k8s-yaml/dashboard RBACSecretConfigMapServiceIngressDeploymentvi /data/k8s-yaml/dashboard/rbac.yaml 12345678910111213141516171819202122232425apiVersion: v1kind: ServiceAccountmetadata: labels: k8s-app: kubernetes-dashboard addonmanager.kubernetes.io/mode: Reconcile name: kubernetes-dashboard-admin namespace: kube-system\--\-apiVersion: rbac.authorization.k8s.io/v1kind: ClusterRoleBindingmetadata: name: kubernetes-dashboard-admin namespace: kube-system labels: k8s-app: kubernetes-dashboard addonmanager.kubernetes.io/mode: ReconcileroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: cluster-adminsubjects:- kind: ServiceAccount name: kubernetes-dashboard-admin namespace: kube-systemvi /data/k8s-yaml/dashboard/secret.yaml 123456789101112131415161718192021apiVersion: v1kind: Secretmetadata: labels: k8s-app: kubernetes-dashboard # Allows editing resource and makes sure it is created first. addonmanager.kubernetes.io/mode: EnsureExists name: kubernetes-dashboard-certs namespace: kube-systemtype: Opaque\--\-apiVersion: v1kind: Secretmetadata: labels: k8s-app: kubernetes-dashboard # Allows editing resource and makes sure it is created first. addonmanager.kubernetes.io/mode: EnsureExists name: kubernetes-dashboard-key-holder namespace: kube-systemtype: Opaquevi /data/k8s-yaml/dashboard/configmap.yaml 123456789apiVersion: v1kind: ConfigMapmetadata: labels: k8s-app: kubernetes-dashboard # Allows editing resource and makes sure it is created first. 
addonmanager.kubernetes.io/mode: EnsureExists name: kubernetes-dashboard-settings namespace: kube-systemvi /data/k8s-yaml/dashboard/svc.yaml 123456789101112131415apiVersion: v1kind: Servicemetadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcilespec: selector: k8s-app: kubernetes-dashboard ports: - port: 443 targetPort: 8443vi /data/k8s-yaml/dashboard/ingress.yaml 123456789101112131415apiVersion: extensions/v1beta1kind: Ingressmetadata: name: kubernetes-dashboard namespace: kube-system annotations: kubernetes.io/ingress.class: traefikspec: rules: - host: dashboard.od.com http: paths: - backend: serviceName: kubernetes-dashboard servicePort: 443vi /data/k8s-yaml/dashboard/deployment.yaml 12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061apiVersion: apps/v1kind: Deploymentmetadata: name: kubernetes-dashboard namespace: kube-system labels: k8s-app: kubernetes-dashboard kubernetes.io/cluster-service: "true" addonmanager.kubernetes.io/mode: Reconcilespec: selector: matchLabels: k8s-app: kubernetes-dashboard template: metadata: labels: k8s-app: kubernetes-dashboard annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: priorityClassName: system-cluster-critical containers: - name: kubernetes-dashboard image: harbor.od.com/k8s/dashboard:v1.8.3 resources: limits: cpu: 100m memory: 300Mi requests: cpu: 50m memory: 100Mi ports: - containerPort: 8443 protocol: TCP args: # PLATFORM-SPECIFIC ARGS HERE - -\-auto-generate-certificates volumeMounts: - name: kubernetes-dashboard-certs mountPath: /certs - name: tmp-volume mountPath: /tmp livenessProbe: httpGet: scheme: HTTPS path: / port: 8443 initialDelaySeconds: 30 timeoutSeconds: 30 volumes: - name: kubernetes-dashboard-certs secret: secretName: kubernetes-dashboard-certs - name: tmp-volume emptyDir: {} serviceAccountName: 
kubernetes-dashboard-admin tolerations: - key: "CriticalAddonsOnly" operator: "Exists" imagePullSecrets: - name: harbor 解析域名HDSS7-11.host.com上 /var/named/od.com.zone1dashboard 60 IN A 10.4.7.10 依次执行创建浏览器打开:http://k8s-yaml.od.com/dashboard 检查资源配置清单文件是否正确创建在任意运算节点应用资源配置清单 1234567891011121314151617181920[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/rbac.yaml serviceaccount/kubernetes-dashboard createdrole.rbac.authorization.k8s.io/kubernetes-dashboard-admin createdrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-admin created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/secret.yaml secret/kubernetes-dashboard-certs createdsecret/kubernetes-dashboard-key-holder created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/configmap.yaml configmap/kubernetes-dashboard-settings created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/svc.yaml service/kubernetes-dashboard created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/ingress.yaml ingress.extensions/kubernetes-dashboard created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/deployment.yaml deployment.apps/kubernetes-dashboard created 浏览器访问http://dashboard.od.com 配置认证 下载新版dashboard 123[root@hdss7-200 ~]# docker pull hexun/kubernetes-dashboard-amd64:v1.10.1[root@hdss7-200 ~]# docker tag f9aed6605b81 harbor.od.com/k8s/dashboard:v1.10.1[root@hdss7-200 ~]# docker push harbor.od.com/k8s/dashboard:v1.10.1 应用新版dashboard 修改nginx配置,走https /etc/nginx/conf.d/dashboard.od.com.conf1234567891011121314151617181920212223server { listen 80; server_name dashboard.od.com; rewrite ^(.*)$ https://${server_name}$1 permanent;}server { listen 443 ssl; server_name dashboard.od.com; ssl_certificate "certs/dashboard.od.com.crt"; ssl_certificate_key "certs/dashboard.od.com.key"; ssl_session_cache shared:SSL:1m; ssl_session_timeout 10m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { 
proxy_pass http://default_backend_traefik; proxy_set_header Host $http_host; proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for; }} 获取token 1234567891011121314[root@hdss7-21 ~]# kubectl describe secret kubernetes-dashboard-admin-token-rhr62 -n kube-systemName: kubernetes-dashboard-admin-token-rhr62Namespace: kube-systemLabels: <none>Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin kubernetes.io/service-account.uid: cdd3c552-856d-11e9-ae34-782bcb321c07Type: kubernetes.io/service-account-tokenData====ca.crt: 1354 bytesnamespace: 11 bytestoken: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi1yaHI2MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImNkZDNjNTUyLTg1NmQtMTFlOS1hZTM0LTc4MmJjYjMyMWMwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.72OcJZCm_3I-7QZcEJTRPyIJSxQwSwZfVsB6Bx_RAZRJLOv3-BXy88PclYgxRy2dDqeX6cpjvFPBrmNOGQoxT9oD8_H49pvBnqdCdNuoJbXK7aBIZdkZxATzXd-63zmhHhUBsM3Ybgwy5XxD3vj8VUYfux5c5Mr4TzU_rnGLCj1H5mq_JJ3hNabv0rwil-ZAV-3HLikOMiIRhEK7RdMs1bfXF2yvse4VOabe9xv47TvbEYns97S4OlZvsurmOk0B8dD85OSaREEtqa8n_ND9GrHeeL4CcALqWYJHLrr7vLfndXi1QHDVrUzFKvgkAeYpDVAzGwIWL7rgHwp3sQguGA 部署heapsterheapster官方github地址 准备heapster镜像运维主机HDSS7-200.host.com上 123456789101112131415[root@hdss7-200 ~]# docker pull quay.io/bitnami/heapster:1.5.41.5.4: Pulling from bitnami/heapster4018396ca1ba: Pull complete 0e4723f815c4: Pull complete d8569f30adeb: Pull complete Digest: sha256:6d891479611ca06a5502bc36e280802cbf9e0426ce4c008dd2919c2294ce0324Status: Downloaded newer image for quay.io/bitnami/heapster:1.5.4[root@hdss7-200 ~]# docker tag c359b95ad38b 
harbor.od.com/k8s/heapster:v1.5.4[root@hdss7-200 ~]# docker push !$docker push harbor.od.com/k8s/heapster:v1.5.4The push refers to a repository [harbor.od.com/k8s/heapster]20d37d828804: Pushed b9b192015e25: Pushed b76dba5a0109: Pushed v1.5.4: digest: sha256:1203b49f2b2b07e02e77263bce8bb30563a91e1d7ee7c6742e9d125abcb3abe6 size: 952 准备资源配置清单RBACDeploymentServicevi /data/k8s-yaml/dashboard/heapster/rbac.yaml 123456789101112131415161718apiVersion: v1kind: ServiceAccountmetadata: name: heapster namespace: kube-system\--\-kind: ClusterRoleBindingapiVersion: rbac.authorization.k8s.io/v1beta1metadata: name: heapsterroleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:heapstersubjects:- kind: ServiceAccount name: heapster namespace: kube-systemvi /data/k8s-yaml/dashboard/heapster/deployment.yaml 123456789101112131415161718192021apiVersion: extensions/v1beta1kind: Deploymentmetadata: name: heapster namespace: kube-systemspec: replicas: 1 template: metadata: labels: task: monitoring k8s-app: heapster spec: serviceAccountName: heapster containers: - name: heapster image: harbor.od.com/k8s/heapster:v1.5.4 imagePullPolicy: IfNotPresent command: - /opt/bitnami/heapster/bin/heapster - \--source=kubernetes:https://kubernetes.defaultvi /data/k8s-yaml/dashboard/heapster/svc.yaml 1234567891011121314151617apiVersion: v1kind: Servicemetadata: labels: task: monitoring # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons) # If you are NOT using this as an addon, you should comment out this line. 
kubernetes.io/cluster-service: 'true' kubernetes.io/name: Heapster name: heapster namespace: kube-systemspec: ports: - port: 80 targetPort: 8082 selector: k8s-app: heapster 应用资源配置清单任意运算节点上: 1234567[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/rbac.yaml serviceaccount/heapster createdclusterrolebinding.rbac.authorization.k8s.io/heapster created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/deployment.yaml deployment.extensions/heapster created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dashboard/heapster/svc.yaml service/heapster created 重启dashboard浏览器访问:http://dashboard.od.com 排错专用命令1for j in `kubectl get ns|sed '1d'|awk '{print $1}'`;do for i in `kubectl get pods -n $j|grep -iv running|sed '1d'|awk '{print $1}'`;do kubectl delete pods $i -n $j --force --grace-period=0;done;done]]></content>
<categories>
<category>Kubernetes容器云技术专题</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档2:实战交付一套dubbo微服务到kubernetes集群]]></title>
<url>%2F2019%2F01%2F18%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A32%EF%BC%9A%E5%AE%9E%E6%88%98%E4%BA%A4%E4%BB%98%E4%B8%80%E5%A5%97dubbo%E5%BE%AE%E6%9C%8D%E5%8A%A1%E5%88%B0kubernetes%E9%9B%86%E7%BE%A4%2F</url>
<content type="text"><![CDATA[基础架构 主机名 角色 ip HDSS7-11.host.com k8s代理节点1,zk1 10.4.7.11 HDSS7-12.host.com k8s代理节点2,zk2 10.4.7.12 HDSS7-21.host.com k8s运算节点1,zk3 10.4.7.21 HDSS7-22.host.com k8s运算节点2,jenkins 10.4.7.22 HDSS7-200.host.com k8s运维节点(docker仓库) 10.4.7.200 部署zookeeper安装jdk1.8(3台zk角色主机) jdk下载地址jdk1.6jdk1.7jdk1.8 /opt/src123456789[root@hdss7-11 src]# ls -l|grep jdk-rw-r--r-- 1 root root 153530841 Jan 17 17:49 jdk-8u201-linux-x64.tar.gz[root@hdss7-11 src]# mkdir /usr/java[root@hdss7-11 src]# tar xf jdk-8u201-linux-x64.tar.gz -C /usr/java[root@hdss7-11 src]# ln -s /usr/java/jdk1.8.0_201 /usr/java/jdk[root@hdss7-11 src]# vi /etc/profileexport JAVA_HOME=/usr/java/jdkexport PATH=$JAVA_HOME/bin:$PATHexport CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/lib/tools.jar 安装zookeeper(3台zk角色主机) zk下载地址zookeeper 解压、配置/opt/src123456789101112131415[root@hdss7-11 src]# ls -l|grep zoo-rw-r--r-- 1 root root 153530841 Jan 17 18:10 zookeeper-3.4.14.tar.gz[root@hdss7-11 src]# tar xf /opt/src/zookeeper-3.4.14.tar.gz -C /opt[root@hdss7-11 opt]# ln -s /opt/zookeeper-3.4.14/ /opt/zookeeper[root@hdss7-11 opt]# mkdir -pv /data/zookeeper/data /data/zookeeper/logs[root@hdss7-11 opt]# vi /opt/zookeeper/conf/zoo.cfgtickTime=2000initLimit=10syncLimit=5dataDir=/data/zookeeper/datadataLogDir=/data/zookeeper/logsclientPort=2181server.1=zk1.od.com:2888:3888server.2=zk2.od.com:2888:3888server.3=zk3.od.com:2888:3888 注意:各节点zk配置相同。 myidHDSS7-11.host.com上: /data/zookeeper/data/myid11 HDSS7-12.host.com上: /data/zookeeper/data/myid12 HDSS7-21.host.com上: /data/zookeeper/data/myid13 做dns解析HDSS7-11.host.com上 /var/named/od.com.zone123zk1 60 IN A 10.4.7.11zk2 60 IN A 10.4.7.12zk3 60 IN A 10.4.7.21 依次启动1234[root@hdss7-11 opt]# /opt/zookeeper/bin/zkServer.sh startZooKeeper JMX enabled by defaultUsing config: /opt/zookeeper/bin/../conf/zoo.cfgStarting zookeeper ... 
STARTED 部署jenkins准备镜像 jenkins官网jenkins镜像 在运维主机下载官网上的稳定版(这里下载2.164.1) 123456789101112131415161718192021222324252627[root@hdss7-200 ~]# docker pull jenkins/jenkins:2.164.12.164.1: Pulling from jenkins/jenkins22dbe790f715: Pull complete 0250231711a0: Pull complete 6fba9447437b: Pull complete c2b4d327b352: Pull complete cddb9bb0d37c: Pull complete b535486c968f: Pull complete f3e976e6210c: Pull complete b2c11b10291d: Pull complete f4c0181e1976: Pull complete 924c8e712392: Pull complete d13006b7c9dd: Pull complete fc80aeb92627: Pull complete 36a6e96ba1b5: Pull complete f50f33dc1d0a: Pull complete b10642432117: Pull complete 850c260511d8: Pull complete 47f95e65a629: Pull complete 3b33ce546dc6: Pull complete 051c7665e760: Pull complete fe379aecc538: Pull complete Digest: sha256:12fd14965de7274b5201653b2bffa62700c5f5f336ec75c945321e2cb70d7af0Status: Downloaded newer image for jenkins/jenkins:2.164.1[root@hdss7-200 ~]# docker tag 256cb12e72d6 harbor.od.com/public/jenkins:v2.164.1[root@hdss7-200 ~]# docker push harbor.od.com/public/jenkins:v2.164.1 自定义Dockerfile在运维主机HDSS7-200.host.com上编辑自定义dockerfile /data/dockerfile/jenkins/Dockerfile123456789FROM harbor.od.com/public/jenkins:v2.164.1USER rootRUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\ echo 'Asia/Shanghai' >/etc/timezoneADD id_rsa /root/.ssh/id_rsaADD config.json /root/.docker/config.jsonADD get-docker.sh /get-docker.shRUN echo " StrictHostKeyChecking no" >> /etc/ssh/ssh_config &&\ /get-docker.sh 这个Dockerfile里我们主要做了以下几件事 设置容器用户为root 设置容器内的时区 将ssh私钥加入(使用git拉代码时要用到,配对的公钥应配置在gitlab中) 加入了登录自建harbor仓库的config文件 修改了ssh客户端的 安装一个docker的客户端 生成ssh密钥对: 1[root@hdss7-200 ~]# ssh-keygen -t rsa -b 2048 -C "[email protected]" -N "" -f /root/.ssh/id_rsa config.jsonget-docker.sh1234567{ "auths": { "harbor.od.com": { "auth": "YWRtaW46SGFyYm9yMTIzNDU=" } 
}}123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100101102103104105106107108109110111112113114115116117118119120121122123124125126127128129130131132133134135136137138139140141142143144145146147148149150151152153154155156157158159160161162163164165166167168169170171172173174175176177178179180181182183184185186187188189190191192193194195196197198199200201202203204205206207208209210211212213214215216217218219220221222223224225226227228229230231232233234235236237238239240241242243244245246247248249250251252253254255256257258259260261262263264265266267268269270271272273274275276277278279280281282283284285286287288289290291292293294295296297298299300301302303304305306307308309310311312313314315316317318319320321322323324325326327328329330331332333334335336337338339340341342343344345346347348349350351352353354355356357358359360361362363364365366367368369370371372373374375376377378379380381382383384385386387388389390391392393394395396397398399400401402403404405406407408409410411412413414415416417418419420421422423424425426427428429430431432433434435436437438439440441442443444445446447448449450451452453454455456457458459460461462463464465466467468469470471472473474475476477478479480481482483484485486487488489490491492493494495496497498499500501502503504505506507508509510511512513514515516517518519520521522523524525#!/bin/shset -e# This script is meant for quick & easy install via:# $ curl -fsSL get.docker.com -o get-docker.sh# $ sh get-docker.sh## For test builds (ie. 
release candidates):# $ curl -fsSL test.docker.com -o test-docker.sh# $ sh test-docker.sh## NOTE: Make sure to verify the contents of the script# you downloaded matches the contents of install.sh# located at https://github.com/docker/docker-install# before executing.## Git commit from https://github.com/docker/docker-install when# the script was uploaded (Should only be modified by upload job):SCRIPT_COMMIT_SHA=36b78b2# This value will automatically get changed for:# * edge# * test# * experimentalDEFAULT_CHANNEL_VALUE="edge"if [ -z "$CHANNEL" ]; then CHANNEL=$DEFAULT_CHANNEL_VALUEfiDEFAULT_DOWNLOAD_URL="https://download.docker.com"if [ -z "$DOWNLOAD_URL" ]; then DOWNLOAD_URL=$DEFAULT_DOWNLOAD_URLfiDEFAULT_REPO_FILE="docker-ce.repo"if [ -z "$REPO_FILE" ]; then REPO_FILE="$DEFAULT_REPO_FILE"fiSUPPORT_MAP="x86_64-centos-7x86_64-fedora-26x86_64-fedora-27x86_64-fedora-28x86_64-debian-wheezyx86_64-debian-jessiex86_64-debian-stretchx86_64-debian-busterx86_64-ubuntu-trustyx86_64-ubuntu-xenialx86_64-ubuntu-bionicx86_64-ubuntu-artfuls390x-ubuntu-xenials390x-ubuntu-bionics390x-ubuntu-artfulppc64le-ubuntu-xenialppc64le-ubuntu-bionicppc64le-ubuntu-artfulaarch64-ubuntu-xenialaarch64-ubuntu-bionicaarch64-debian-jessieaarch64-debian-stretchaarch64-debian-busteraarch64-fedora-26aarch64-fedora-27aarch64-fedora-28aarch64-centos-7armv6l-raspbian-jessiearmv7l-raspbian-jessiearmv6l-raspbian-stretcharmv7l-raspbian-stretcharmv7l-debian-jessiearmv7l-debian-stretcharmv7l-debian-busterarmv7l-ubuntu-trustyarmv7l-ubuntu-xenialarmv7l-ubuntu-bionicarmv7l-ubuntu-artful"mirror=''DRY_RUN=${DRY_RUN:-}while [ $# -gt 0 ]; do case "$1" in --mirror) mirror="$2" shift ;; --dry-run) DRY_RUN=1 ;; --*) echo "Illegal option $1" ;; esac shift $(( $# > 0 ? 
1 : 0 ))donecase "$mirror" in Aliyun) DOWNLOAD_URL="https://mirrors.aliyun.com/docker-ce" ;; AzureChinaCloud) DOWNLOAD_URL="https://mirror.azure.cn/docker-ce" ;;esaccommand_exists() { command -v "$@" > /dev/null 2>&1}is_dry_run() { if [ -z "$DRY_RUN" ]; then return 1 else return 0 fi}deprecation_notice() { distro=$1 date=$2 echo echo "DEPRECATION WARNING:" echo " The distribution, $distro, will no longer be supported in this script as of $date." echo " If you feel this is a mistake please submit an issue at https://github.com/docker/docker-install/issues/new" echo sleep 10}get_distribution() { lsb_dist="" # Every system that we officially support has /etc/os-release if [ -r /etc/os-release ]; then lsb_dist="$(. /etc/os-release && echo "$ID")" fi # Returning an empty string here should be alright since the # case statements don't act unless you provide an actual value echo "$lsb_dist"}add_debian_backport_repo() { debian_version="$1" backports="deb http://ftp.debian.org/debian $debian_version-backports main" if ! grep -Fxq "$backports" /etc/apt/sources.list; then (set -x; $sh_c "echo \"$backports\" >> /etc/apt/sources.list") fi}echo_docker_as_nonroot() { if is_dry_run; then return fi if command_exists docker && [ -e /var/run/docker.sock ]; then ( set -x $sh_c 'docker version' ) || true fi your_user=your-user [ "$user" != 'root' ] && your_user="$user" # intentionally mixed spaces and tabs here -- tabs are stripped by "<<-EOF", spaces are kept in the output echo "If you would like to use Docker as a non-root user, you should now consider" echo "adding your user to the \"docker\" group with something like:" echo echo " sudo usermod -aG docker $your_user" echo echo "Remember that you will have to log out and back in for this to take effect!" echo echo "WARNING: Adding a user to the \"docker\" group will grant the ability to run" echo " containers which can be used to obtain root privileges on the" echo " docker host." 
echo " Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface" echo " for more information."}# Check if this is a forked Linux distrocheck_forked() { # Check for lsb_release command existence, it usually exists in forked distros if command_exists lsb_release; then # Check if the `-u` option is supported set +e lsb_release -a -u > /dev/null 2>&1 lsb_release_exit_code=$? set -e # Check if the command has exited successfully, it means we're in a forked distro if [ "$lsb_release_exit_code" = "0" ]; then # Print info about current distro cat <<-EOF You're using '$lsb_dist' version '$dist_version'. EOF # Get the upstream release info lsb_dist=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'id' | cut -d ':' -f 2 | tr -d '[:space:]') dist_version=$(lsb_release -a -u 2>&1 | tr '[:upper:]' '[:lower:]' | grep -E 'codename' | cut -d ':' -f 2 | tr -d '[:space:]') # Print info about upstream distro cat <<-EOF Upstream release is '$lsb_dist' version '$dist_version'. EOF else if [ -r /etc/debian_version ] && [ "$lsb_dist" != "ubuntu" ] && [ "$lsb_dist" != "raspbian" ]; then if [ "$lsb_dist" = "osmc" ]; then # OSMC runs Raspbian lsb_dist=raspbian else # We're Debian and don't even know it! 
lsb_dist=debian fi dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')" case "$dist_version" in 9) dist_version="stretch" ;; 8|'Kali Linux 2') dist_version="jessie" ;; 7) dist_version="wheezy" ;; esac fi fi fi}semverParse() { major="${1%%.*}" minor="${1#$major.}" minor="${minor%%.*}" patch="${1#$major.$minor.}" patch="${patch%%[-.]*}"}ee_notice() { echo echo echo " WARNING: $1 is now only supported by Docker EE" echo " Check https://store.docker.com for information on Docker EE" echo echo}do_install() { echo "# Executing docker install script, commit: $SCRIPT_COMMIT_SHA" if command_exists docker; then docker_version="$(docker -v | cut -d ' ' -f3 | cut -d ',' -f1)" MAJOR_W=1 MINOR_W=10 semverParse "$docker_version" shouldWarn=0 if [ "$major" -lt "$MAJOR_W" ]; then shouldWarn=1 fi if [ "$major" -le "$MAJOR_W" ] && [ "$minor" -lt "$MINOR_W" ]; then shouldWarn=1 fi cat >&2 <<-'EOF' Warning: the "docker" command appears to already exist on this system. If you already have Docker installed, this script can cause trouble, which is why we're displaying this warning and provide the opportunity to cancel the installation. If you installed the current Docker package using this script and are using it EOF if [ $shouldWarn -eq 1 ]; then cat >&2 <<-'EOF' again to update Docker, we urge you to migrate your image store before upgrading to v1.10+. You can find instructions for this here: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration EOF else cat >&2 <<-'EOF' again to update Docker, you can safely ignore this message. EOF fi cat >&2 <<-'EOF' You may press Ctrl+C now to abort this script. EOF ( set -x; sleep 20 ) fi user="$(id -un 2>/dev/null || true)" sh_c='sh -c' if [ "$user" != 'root' ]; then if command_exists sudo; then sh_c='sudo -E sh -c' elif command_exists su; then sh_c='su -c' else cat >&2 <<-'EOF' Error: this installer needs the ability to run commands as root. 
We are unable to find either "sudo" or "su" available to make this happen. EOF exit 1 fi fi if is_dry_run; then sh_c="echo" fi # perform some very rudimentary platform detection lsb_dist=$( get_distribution ) lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')" case "$lsb_dist" in ubuntu) if command_exists lsb_release; then dist_version="$(lsb_release --codename | cut -f2)" fi if [ -z "$dist_version" ] && [ -r /etc/lsb-release ]; then dist_version="$(. /etc/lsb-release && echo "$DISTRIB_CODENAME")" fi ;; debian|raspbian) dist_version="$(sed 's/\/.*//' /etc/debian_version | sed 's/\..*//')" case "$dist_version" in 9) dist_version="stretch" ;; 8) dist_version="jessie" ;; 7) dist_version="wheezy" ;; esac ;; centos) if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then dist_version="$(. /etc/os-release && echo "$VERSION_ID")" fi ;; rhel|ol|sles) ee_notice "$lsb_dist" exit 1 ;; *) if command_exists lsb_release; then dist_version="$(lsb_release --release | cut -f2)" fi if [ -z "$dist_version" ] && [ -r /etc/os-release ]; then dist_version="$(. /etc/os-release && echo "$VERSION_ID")" fi ;; esac # Check if this is a forked Linux distro check_forked # Check if we actually support this configuration if ! echo "$SUPPORT_MAP" | grep "$(uname -m)-$lsb_dist-$dist_version" >/dev/null; then cat >&2 <<-'EOF' Either your platform is not easily detectable or is not supported by this installer script. 
Please visit the following URL for more detailed installation instructions: https://docs.docker.com/engine/installation/ EOF exit 1 fi # Run setup for each distro accordingly case "$lsb_dist" in ubuntu|debian|raspbian) pre_reqs="apt-transport-https ca-certificates curl" if [ "$lsb_dist" = "debian" ]; then if [ "$dist_version" = "wheezy" ]; then add_debian_backport_repo "$dist_version" fi # libseccomp2 does not exist for debian jessie main repos for aarch64 if [ "$(uname -m)" = "aarch64" ] && [ "$dist_version" = "jessie" ]; then add_debian_backport_repo "$dist_version" fi fi # TODO: August 31, 2018 delete from here, if [ "$lsb_dist" = "ubuntu" ] && [ "$dist_version" = "artful" ]; then deprecation_notice "$lsb_dist $dist_version" "August 31, 2018" fi # TODO: August 31, 2018 delete to here, if ! command -v gpg > /dev/null; then pre_reqs="$pre_reqs gnupg" fi apt_repo="deb [arch=$(dpkg --print-architecture)] $DOWNLOAD_URL/linux/$lsb_dist $dist_version $CHANNEL" ( if ! is_dry_run; then set -x fi $sh_c 'apt-get update -qq >/dev/null' $sh_c "apt-get install -y -qq $pre_reqs >/dev/null" $sh_c "curl -fsSL \"$DOWNLOAD_URL/linux/$lsb_dist/gpg\" | apt-key add -qq - >/dev/null" $sh_c "echo \"$apt_repo\" > /etc/apt/sources.list.d/docker.list" if [ "$lsb_dist" = "debian" ] && [ "$dist_version" = "wheezy" ]; then $sh_c 'sed -i "/deb-src.*download\.docker/d" /etc/apt/sources.list.d/docker.list' fi $sh_c 'apt-get update -qq >/dev/null' ) pkg_version="" if [ ! 
-z "$VERSION" ]; then if is_dry_run; then echo "# WARNING: VERSION pinning is not supported in DRY_RUN" else # Will work for incomplete versions IE (17.12), but may not actually grab the "latest" if in the test channel pkg_pattern="$(echo "$VERSION" | sed "s/-ce-/~ce~.*/g" | sed "s/-/.*/g").*-0~$lsb_dist" search_command="apt-cache madison 'docker-ce' | grep '$pkg_pattern' | head -1 | cut -d' ' -f 4" pkg_version="$($sh_c "$search_command")" echo "INFO: Searching repository for VERSION '$VERSION'" echo "INFO: $search_command" if [ -z "$pkg_version" ]; then echo echo "ERROR: '$VERSION' not found amongst apt-cache madison results" echo exit 1 fi pkg_version="=$pkg_version" fi fi ( if ! is_dry_run; then set -x fi $sh_c "apt-get install -y -qq --no-install-recommends docker-ce$pkg_version >/dev/null" ) echo_docker_as_nonroot exit 0 ;; centos|fedora) yum_repo="$DOWNLOAD_URL/linux/$lsb_dist/$REPO_FILE" if ! curl -Ifs "$yum_repo" > /dev/null; then echo "Error: Unable to curl repository file $yum_repo, is it valid?" exit 1 fi if [ "$lsb_dist" = "fedora" ]; then if [ "$dist_version" -lt "26" ]; then echo "Error: Only Fedora >=26 are supported" exit 1 fi pkg_manager="dnf" config_manager="dnf config-manager" enable_channel_flag="--set-enabled" pre_reqs="dnf-plugins-core" pkg_suffix="fc$dist_version" else pkg_manager="yum" config_manager="yum-config-manager" enable_channel_flag="--enable" pre_reqs="yum-utils" pkg_suffix="el" fi ( if ! is_dry_run; then set -x fi $sh_c "$pkg_manager install -y -q $pre_reqs" $sh_c "$config_manager --add-repo $yum_repo" if [ "$CHANNEL" != "stable" ]; then $sh_c "$config_manager $enable_channel_flag docker-ce-$CHANNEL" fi $sh_c "$pkg_manager makecache" ) pkg_version="" if [ ! 
-z "$VERSION" ]; then if is_dry_run; then echo "# WARNING: VERSION pinning is not supported in DRY_RUN" else pkg_pattern="$(echo "$VERSION" | sed "s/-ce-/\\\\.ce.*/g" | sed "s/-/.*/g").*$pkg_suffix" search_command="$pkg_manager list --showduplicates 'docker-ce' | grep '$pkg_pattern' | tail -1 | awk '{print \$2}'" pkg_version="$($sh_c "$search_command")" echo "INFO: Searching repository for VERSION '$VERSION'" echo "INFO: $search_command" if [ -z "$pkg_version" ]; then echo echo "ERROR: '$VERSION' not found amongst $pkg_manager list results" echo exit 1 fi # Cut out the epoch and prefix with a '-' pkg_version="-$(echo "$pkg_version" | cut -d':' -f 2)" fi fi ( if ! is_dry_run; then set -x fi $sh_c "$pkg_manager install -y -q docker-ce$pkg_version" ) echo_docker_as_nonroot exit 0 ;; esac exit 1}# wrapped up in a function so that we have some protection against only getting# half the file during "curl | sh"do_install 制作自定义镜像/data/dockerfile/jenkins12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364656667686970[root@hdss7-200 jenkins]# ls -ltotal 24-rw------- 1 root root 98 Jan 17 15:58 config.json-rw-r--r-- 1 root root 158 Jan 17 15:59 Dockerfile-rwxr-xr-x 1 root root 13847 Jan 17 15:37 get-docker.sh-rw------- 1 root root 1679 Jan 17 15:39 id_rsa[root@hdss7-200 jenkins]# docker build . 
-t harbor.od.com/infra/jenkins:v2.164.1Sending build context to Docker daemon 19.46 kBStep 1 : FROM harbor.od.com/public/jenkins:v2.164.1 ---> 256cb12e72d6Step 2 : USER root ---> Running in d600e9db8305 ---> 03687cf21cb3Removing intermediate container d600e9db8305Step 3 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone ---> Running in 3d79b4025e97 ---> e4790b3bb6d9Removing intermediate container 3d79b4025e97Step 4 : ADD id_rsa /root/.ssh/id_rsa ---> 39d80713d43cRemoving intermediate container 7b4e66e726ddStep 5 : ADD config.json /root/.docker/config.json ---> a44402fd4bc1Removing intermediate container f1ae1871d035Step 6 : ADD get-docker.sh /get-docker.sh ---> 189ccca429e4Removing intermediate container a0ff59237fe5Step 7 : RUN /get-docker.sh ---> Running in 5a7d69c1af45# Executing docker install script, commit: cfba462+ sh -c apt-get update -qq >/dev/null+ sh -c apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/nulldebconf: delaying package configuration, since apt-utils is not installed+ sh -c curl -fsSL "https://download.docker.com/linux/debian/gpg" | apt-key add -qq - >/dev/nullWarning: apt-key output should not be parsed (stdout is not a terminal)+ sh -c echo "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable" > /etc/apt/sources.list.d/docker.list+ sh -c apt-get update -qq >/dev/null+ sh -c apt-get install -y -qq --no-install-recommends docker-ce >/dev/nulldebconf: delaying package configuration, since apt-utils is not installedIf you would like to use Docker as a non-root user, you should now consideradding your user to the "docker" group with something like: sudo usermod -aG docker your-userRemember that you will have to log out and back in for this to take effect!WARNING: Adding a user to the "docker" group will grant the ability to run containers which can be used to obtain root privileges on the docker host. 
Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface for more information.** DOCKER ENGINE - ENTERPRISE **If you’re ready for production workloads, Docker Engine - Enterprise also includes: * SLA-backed technical support * Extended lifecycle maintenance policy for patches and hotfixes * Access to certified ecosystem content** Learn more at https://dockr.ly/engine2 **ACTIVATE your own engine to Docker Engine - Enterprise using: sudo docker engine activate ---> 64c74242ee28Removing intermediate container 5a7d69c1af45Successfully built 64c74242ee28[root@hdss7-200 jenkins]# docker push harbor.od.com/infra/jenkins:v2.164.1 准备共享存储运维主机,以及所有运算节点上: 1# yum install nfs-utils -y 配置NFS服务 运维主机HDSS7-200.host.com上: /etc/exports1/data/nfs-volume 10.4.7.0/24(rw,no_root_squash) 启动NFS服务 运维主机HDSS7-200.host.com上: 123[root@hdss7-200 ~]# mkdir -p /data/nfs-volume[root@hdss7-200 ~]# systemctl start nfs[root@hdss7-200 ~]# systemctl enable nfs 准备资源配置清单运维主机HDSS7-200.host.com上: /data/k8s-yaml1[root@hdss7-200 k8s-yaml]# mkdir /data/k8s-yaml/jenkins && mkdir /data/nfs-volume/jenkins_home && cd /data/k8s-yaml/jenkins DeploymentServiceIngressvi deployment.yaml 1234567891011121314151617181920212223242526272829303132333435363738394041424344454647484950515253545556575859606162636465kind: DeploymentapiVersion: extensions/v1beta1metadata: name: jenkins namespace: infra labels: name: jenkinsspec: replicas: 1 selector: matchLabels: name: jenkins template: metadata: labels: app: jenkins name: jenkins spec: volumes: - name: data nfs: server: hdss7-200 path: /data/nfs-volume/jenkins_home - name: docker hostPath: path: /run/docker.sock type: '' containers: - name: jenkins image: harbor.od.com/infra/jenkins:v2.164.1 ports: - containerPort: 8080 protocol: TCP env: - name: JAVA_OPTS value: -Xmx512m -Xms512m resources: limits: cpu: 500m memory: 1Gi requests: cpu: 500m memory: 1Gi volumeMounts: - name: data mountPath: /var/jenkins_home - name: docker mountPath: /run/docker.sock 
terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600vi svc.yaml 1234567891011121314kind: ServiceapiVersion: v1metadata: name: jenkins namespace: infraspec: ports: - protocol: TCP port: 80 targetPort: 8080 selector: app: jenkins type: ClusterIP sessionAffinity: Nonevi ingress.yaml 1234567891011121314kind: IngressapiVersion: extensions/v1beta1metadata: name: jenkins namespace: infraspec: rules: - host: jenkins.od.com http: paths: - path: / backend: serviceName: jenkins servicePort: 80 应用资源配置清单任意一个k8s运算节点上 12345678910111213141516[root@hdss7-21 ~]# kubectl create namespace infra[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/jenkins/deployment.yaml[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/jenkins/svc.yaml[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/jenkins/ingress.yaml[root@hdss7-21 ~]# kubectl get pods -n infra|grep jenkinsNAME READY STATUS RESTARTS AGEjenkins-84455f9675-jpkr8 1/1 Running 0 0d[root@hdss7-21 ~]# kubectl get svc -n infra|grep jenkinsNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGEjenkins ClusterIP None <none> 8080/TCP 0d[root@hdss7-21 ~]# kubectl get ingress -n infra|grep jenkinsNAME HOSTS ADDRESS PORTS AGEjenkins jenkins.od.com 80 0d 解析域名HDSS7-11.host.com上 /var/named/od.com.zone1jenkins 60 IN A 10.4.7.10 浏览器访问http://jenkins.od.com 页面配置jenkins 初始化密码/data/nfs-volume/jenkins_home/secrets/initialAdminPassword12[root@hdss7-200 secrets]# cat initialAdminPassword 08d17edc125444a28ad6141ffdfd5c69 安装插件 设置用户 完成安装 使用admin登录 安装Blue Ocean插件 Manage Jenkins Manage Plugins Available Blue Ocean 调整安全选项 Manage Jenkins Configure Global Security Allow anonymous read access 配置New job create new jobs Enter an 
item name dubbo-demo Pipeline -> OK Discard old builds Days to keep builds : 3Max # of builds to keep : 30 This project is parameterized Add Parameter -> String Parameter Name : app_nameDefault Value :Description : project name. e.g: dubbo-demo-service Add Parameter -> String Parameter Name : image_nameDefault Value :Description : project docker image name. e.g: app/dubbo-demo-service Add Parameter -> String Parameter Name : git_repoDefault Value :Description : project git repository. e.g: https://gitee.com/stanleywang/dubbo-demo-service.git Add Parameter -> String Parameter Name : git_verDefault Value :Description : git commit id of the project. Add Parameter -> String Parameter Name : add_tagDefault Value :Description : project docker image tag, date_timestamp recommended. e.g: 190117_1920 Add Parameter -> String Parameter Name : mvn_dirDefault Value : ./Description : project maven directory. e.g: ./ Add Parameter -> String Parameter Name : target_dirDefault Value : ./targetDescription : the relative path of target file such as .jar or .war package. e.g: ./dubbo-server/target Add Parameter -> String Parameter Name : mvn_cmdDefault Value : mvn clean package -Dmaven.test.skip=trueDescription : maven command. e.g: mvn clean package -e -q -Dmaven.test.skip=true Add Parameter -> Choice Parameter Name : base_imageDefault Value : base/jre7:7u80 base/jre8:8u112Description : project base image list in harbor.od.com. Add Parameter -> Choice Parameter Name : mavenDefault Value : 3.6.0-8u181 3.2.5-6u025 2.2.1-6u025Description : different maven edition. 
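上面这组构建参数除了在页面上手工填写,也可以通过Jenkins的远程构建接口(buildWithParameters)从命令行触发。下面是一个最小化的示意脚本:其中任务名 dubbo-demo 与参数名均取自本文,JENKINS_USER/JENKINS_TOKEN 是占位符,需替换为真实账号及其 API Token。

```shell
#!/bin/sh
# 通过 Jenkins buildWithParameters 接口远程触发参数化构建的示意脚本。
# JENKINS_USER / JENKINS_TOKEN 为占位符,请替换为真实用户名与 API Token
# (Manage Jenkins -> 用户 -> Configure -> API Token)。
JENKINS_URL="http://jenkins.od.com"
JOB="dubbo-demo"

# 按本文定义的参数顺序拼出查询串。注意:实际提交时含空格的值需要 URL 编码,
# 生产上建议改用 curl 的 --data-urlencode 逐个传参。
build_query() {
    printf 'app_name=%s&image_name=%s&git_repo=%s&git_ver=%s&add_tag=%s&mvn_dir=%s&target_dir=%s&mvn_cmd=%s&base_image=%s&maven=%s' \
        "$1" "$2" "$3" "$4" "$5" "$6" "$7" "$8" "$9" "${10}"
}

QUERY=$(build_query dubbo-demo-service app/dubbo-demo-service \
    "https://gitee.com/stanleywang/dubbo-demo-service.git" master 190117_1920 \
    ./ ./dubbo-server/target "mvn clean package -Dmaven.test.skip=true" \
    base/jre8:8u112 3.6.0-8u181)

echo "POST $JENKINS_URL/job/$JOB/buildWithParameters?$QUERY"
# 实际触发(需网络可达且凭据有效):
# curl -s -X POST -u "$JENKINS_USER:$JENKINS_TOKEN" \
#     "$JENKINS_URL/job/$JOB/buildWithParameters" --data "$QUERY"
```

触发成功后,可在 Blue Ocean 页面里观察该次流水线的执行情况。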
Pipeline Script123456789101112131415161718192021222324252627pipeline { agent any stages { stage('pull') { //get project code from repo steps { sh "git clone ${params.git_repo} ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.app_name}/${env.BUILD_NUMBER} && git checkout ${params.git_ver}" } } stage('build') { //exec mvn cmd steps { sh "cd ${params.app_name}/${env.BUILD_NUMBER} && /var/jenkins_home/maven-${params.maven}/bin/${params.mvn_cmd}" } } stage('package') { //move jar file into project_dir steps { sh "cd ${params.app_name}/${env.BUILD_NUMBER} && cd ${params.target_dir} && mkdir project_dir && mv *.jar ./project_dir" } } stage('image') { //build image and push to registry steps { writeFile file: "${params.app_name}/${env.BUILD_NUMBER}/Dockerfile", text: """FROM harbor.od.com/${params.base_image}ADD ${params.target_dir}/project_dir /opt/project_dir""" sh "cd ${params.app_name}/${env.BUILD_NUMBER} && docker build -t harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag} . && docker push harbor.od.com/${params.image_name}:${params.git_ver}_${params.add_tag}" } } }} 最后的准备工作检查jenkins容器里的docker客户端进入jenkins的docker容器里,检查docker客户端是否可用。 12[root@hdss7-22 ~]# docker exec -ti 52e250789b78 bashroot@52e250789b78:/# docker ps -a 检查jenkins容器里的SSH key进入jenkins的docker容器里,检查ssh连接git仓库,确认是否能拉到代码。 12345[root@hdss7-22 ~]# docker exec -ti 52e250789b78 bashroot@52e250789b78:/# ssh -i /root/.ssh/id_rsa -T git@gitee.com Hi Anonymous! 
You've successfully authenticated, but GITEE.COM does not provide shell access.Note: Perhaps the current use is DeployKey.Note: DeployKey only supports pull/fetch operations 部署maven软件maven官方下载地址在运维主机HDSS7-200.host.com上二进制部署,这里部署maven-3.6.0版 /opt/src1234567[root@hdss7-200 src]# ls -ltotal 8852-rw-r--r-- 1 root root 9063587 Jan 17 19:57 apache-maven-3.6.0-bin.tar.gz[root@hdss7-200 src]# tar xf apache-maven-3.6.0-bin.tar.gz -C /data/nfs-volume/jenkins_home/[root@hdss7-200 src]# mv /data/nfs-volume/jenkins_home/apache-maven-3.6.0/ /data/nfs-volume/jenkins_home/maven-3.6.0-8u181[root@hdss7-200 src]# ls -ld /data/nfs-volume/jenkins_home/maven-3.6.0-8u181drwxr-xr-x 6 root root 99 Jan 17 19:58 /data/nfs-volume/jenkins_home/maven-3.6.0-8u181 设置国内镜像源 /data/nfs-volume/jenkins_home/maven-3.6.0-8u181/conf/settings.xml123456<mirror> <id>alimaven</id> <name>aliyun maven</name> <url>http://maven.aliyun.com/nexus/content/groups/public/</url> <mirrorOf>central</mirrorOf> </mirror> 其他版本略 制作dubbo微服务的底包镜像运维主机HDSS7-200.host.com上 自定义Dockerfile /data/dockerfile/jre8/Dockerfile12345678FROM stanleyws/jre8:8u112RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\ echo 'Asia/Shanghai' >/etc/timezoneADD config.yml /opt/prom/config.ymlADD jmx_javaagent-0.3.1.jar /opt/prom/WORKDIR /opt/project_dirADD entrypoint.sh /entrypoint.shCMD ["/entrypoint.sh"] config.ymljmx_javaagent-0.3.1.jarentrypoint.shvi config.yml 123---rules: - pattern: '.*'1wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.3.1/jmx_prometheus_javaagent-0.3.1.jar -O jmx_javaagent-0.3.1.jarvi entrypoint.sh (不要忘了给执行权限) 12345#!/bin/shM_OPTS="-Duser.timezone=Asia/Shanghai -javaagent:/opt/prom/jmx_javaagent-0.3.1.jar=$(hostname -i):${M_PORT:-"12346"}:/opt/prom/config.yml"C_OPTS=${C_OPTS}JAR_BALL=${JAR_BALL}exec java -jar ${M_OPTS} ${C_OPTS} ${JAR_BALL} 制作dubbo服务docker底包 
/data/dockerfile/jre8123456789101112131415161718192021222324252627282930313233343536373839404142434445[root@hdss7-200 jre8]# ls -ltotal 372-rw-r--r-- 1 root root 29 Jan 17 19:09 config.yml-rw-r--r-- 1 root root 287 Jan 17 19:06 Dockerfile-rwxr--r-- 1 root root 250 Jan 17 19:11 entrypoint.sh-rw-r--r-- 1 root root 367417 May 10 2018 jmx_javaagent-0.3.1.jar[root@hdss7-200 jre8]# docker build . -t harbor.od.com/base/jre8:8u112Sending build context to Docker daemon 372.2 kBStep 1 : FROM stanleyws/jre8:8u112 ---> fa3a085d6ef1Step 2 : RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone ---> Using cache ---> 5da5ab0b1a48Step 3 : ADD config.yml /opt/prom/config.yml ---> Using cache ---> 70d3ebfe88f5Step 4 : ADD jmx_javaagent-0.3.1.jar /opt/prom/ ---> Using cache ---> 08b38a0684a8Step 5 : WORKDIR /opt/project_dir ---> Using cache ---> f06adf17fb69Step 6 : ADD entrypoint.sh /entrypoint.sh ---> e34f185d5c52Removing intermediate container ee213576ca0eStep 7 : CMD /entrypoint.sh ---> Running in 655f594bcbe2 ---> 47852bc0ade9Removing intermediate container 655f594bcbe2Successfully built 47852bc0ade9[root@hdss7-200 jre8]# docker push harbor.od.com/base/jre8:8u112The push refers to a repository [harbor.od.com/base/jre8]0b2b753b122e: Pushed 67e1b844d09c: Pushed ad4fa4673d87: Pushed 0ef3a1b4ca9f: Pushed 052016a734be: Pushed 0690f10a63a5: Pushed c843b2cf4e12: Pushed fddd8887b725: Pushed 42052a19230c: Pushed 8d4d1ab5ff74: Pushed 8u112: digest: sha256:252e3e869039ee6242c39bdfee0809242e83c8c3a06830f1224435935aeded28 size: 2405 注意:jre7底包制作类似,这里略 交付dubbo微服务至kubernetes集群dubbo服务提供者(dubbo-demo-service)通过jenkins进行一次CI打开jenkins页面,使用admin登录,准备构建dubbo-demo项目 点Build with Parameters 依次填入/选择: app_name dubbo-demo-service image_name app/dubbo-demo-service git_repo https://gitee.com/stanleywang/dubbo-demo-service.git git_ver master add_tag 190117_1920 mvn_dir / target_dir ./dubbo-server/target mvn_cmd mvn clean package -Dmaven.test.skip=true base_image 
base/jre8:8u112 maven 3.6.0-8u181 点击Build进行构建,等待构建完成。 test $? -eq 0 && 成功,进行下一步 || 失败,排错直到成功 检查harbor仓库内镜像 准备k8s资源配置清单运维主机HDSS7-200.host.com上,准备资源配置清单: /data/k8s-yaml/dubbo-demo-service/deployment.yaml123456789101112131415161718192021222324252627282930313233343536373839404142kind: DeploymentapiVersion: extensions/v1beta1metadata: name: dubbo-demo-service namespace: app labels: name: dubbo-demo-servicespec: replicas: 1 selector: matchLabels: name: dubbo-demo-service template: metadata: labels: app: dubbo-demo-service name: dubbo-demo-service spec: containers: - name: dubbo-demo-service image: harbor.od.com/app/dubbo-demo-service:master_190117_1920 ports: - containerPort: 20880 protocol: TCP env: - name: JAR_BALL value: dubbo-server.jar imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600 应用资源配置清单在任意一台k8s运算节点执行: 12[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-service/deployment.yamldeployment.extensions/dubbo-demo-service created 检查docker运行情况及zk里的信息/opt/zookeeper/bin/zkCli.sh123[root@hdss7-11 ~]# /opt/zookeeper/bin/zkCli.sh -server localhost[zk: localhost(CONNECTED) 0] ls /dubbo[com.od.dubbotest.api.HelloService] dubbo-monitor工具dubbo-monitor源码包 准备docker镜像下载源码下载到运维主机HDSS7-200.host.com上 /opt/src12[root@hdss7-200 src]# ls -l|grep dubbo-monitordrwxr-xr-x 4 root root 81 Jan 17 13:58 dubbo-monitor 修改配置/opt/src/dubbo-monitor/dubbo-monitor-simple/conf/dubbo_origin.properties123456dubbo.registry.address=zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181dubbo.protocol.port=20880dubbo.jetty.port=8080dubbo.jetty.directory=/dubbo-monitor-simple/monitordubbo.statistics.directory=/dubbo-monitor-simple/statisticsdubbo.log4j.file=logs/dubbo-monitor.log 制作镜像 准备环境 1234[root@hdss7-200 src]# 
mkdir /data/dockerfile/dubbo-monitor[root@hdss7-200 src]# cp -a dubbo-monitor/* /data/dockerfile/dubbo-monitor/[root@hdss7-200 src]# cd /data/dockerfile/dubbo-monitor/[root@hdss7-200 dubbo-monitor]# sed -r -i -e '/^nohup/{p;:a;N;$!ba;d}' ./dubbo-monitor-simple/bin/start.sh && sed -r -i -e "s%^nohup(.*)%exec \1%" ./dubbo-monitor-simple/bin/start.sh 准备Dockerfile /data/dockerfile/dubbo-monitor/Dockerfile1234FROM jeromefromcn/docker-alpine-java-bashMAINTAINER Jerome JiangCOPY dubbo-monitor-simple/ /dubbo-monitor-simple/CMD /dubbo-monitor-simple/bin/start.sh build镜像 1234567891011121314151617181920212223242526272829303132333435[root@hdss7-200 dubbo-monitor]# docker build . -t harbor.od.com/infra/dubbo-monitor:latestSending build context to Docker daemon 26.21 MBStep 1 : FROM harbor.od.com/base/jre7:7u80 ---> dbba4641da57Step 2 : MAINTAINER Stanley Wang ---> Running in 8851a3c55d4b ---> 6266a6f15dc5Removing intermediate container 8851a3c55d4bStep 3 : COPY dubbo-monitor-simple/ /opt/dubbo-monitor/ ---> f4e0a9067c5cRemoving intermediate container f1038ecb1055Step 4 : WORKDIR /opt/dubbo-monitor ---> Running in 4056339d1b5a ---> e496e2d3079eRemoving intermediate container 4056339d1b5aStep 5 : CMD /opt/dubbo-monitor/bin/start.sh ---> Running in c33b8fb98326 ---> 97e40c179bbeRemoving intermediate container c33b8fb98326Successfully built 97e40c179bbe[root@hdss7-200 dubbo-monitor]# docker push harbor.od.com/infra/dubbo-monitor:latestThe push refers to a repository [harbor.od.com/infra/dubbo-monitor]750135a87545: Pushed 0b2b753b122e: Pushed 5b1f1b5295ff: Pushed d54f1d9d76d3: Pushed 8d51c20d6553: Pushed 106b765202e9: Pushed c6698ca565d0: Pushed 50ecb880731d: Pushed fddd8887b725: Pushed 42052a19230c: Pushed 8d4d1ab5ff74: Pushed 190107_1930: digest: sha256:73007a37a55ecd5fd72bc5b36d2ab0bb639c96b32b7879984d5cdbc759778790 size: 2617 解析域名在DNS主机HDSS7-11.host.com上: /var/named/od.com.zone1dubbo-monitor 60 IN A 10.4.7.10 准备k8s资源配置清单运维主机HDSS7-200.host.com上 DeploymentServiceIngressvi 
/data/k8s-yaml/dubbo-monitor/deployment.yaml 1234567891011121314151617181920212223242526272829303132333435363738394041kind: DeploymentapiVersion: extensions/v1beta1metadata: name: dubbo-monitor namespace: infra labels: name: dubbo-monitorspec: replicas: 1 selector: matchLabels: name: dubbo-monitor template: metadata: labels: app: dubbo-monitor name: dubbo-monitor spec: containers: - name: dubbo-monitor image: harbor.od.com/infra/dubbo-monitor:latest ports: - containerPort: 8080 protocol: TCP - containerPort: 20880 protocol: TCP imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600vi /data/k8s-yaml/dubbo-monitor/svc.yaml 123456789101112131415kind: ServiceapiVersion: v1metadata: name: dubbo-monitor namespace: infraspec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: dubbo-monitor clusterIP: None type: ClusterIP sessionAffinity: Nonevi /data/k8s-yaml/dubbo-monitor/ingress.yaml 1234567891011121314kind: IngressapiVersion: extensions/v1beta1metadata: name: dubbo-monitor namespace: infraspec: rules: - host: dubbo-monitor.od.com http: paths: - path: / backend: serviceName: dubbo-monitor servicePort: 8080 应用资源配置清单在任意一台k8s运算节点执行: 123456[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/deployment.yamldeployment.extensions/dubbo-monitor created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/svc.yamlservice/dubbo-monitor created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/ingress.yamlingress.extensions/dubbo-monitor created 浏览器访问http://dubbo-monitor.od.com dubbo服务消费者(dubbo-demo-consumer)通过jenkins进行一次CI打开jenkins页面,使用admin登录,准备构建dubbo-demo项目 点Build with Parameters 依次填入/选择: app_name dubbo-demo-consumer image_name app/dubbo-demo-consumer 
git_repo git@gitee.com:stanleywang/dubbo-demo-web.git git_ver master add_tag 190117_1950 mvn_dir / target_dir ./dubbo-client/target mvn_cmd mvn clean package -Dmaven.test.skip=true base_image base/jre8:8u112 maven 3.6.0-8u181 点击Build进行构建,等待构建完成。 test $? -eq 0 && 成功,进行下一步 || 失败,排错直到成功 检查harbor仓库内镜像 解析域名在DNS主机HDSS7-11.host.com上: /var/named/od.com.zone1demo 60 IN A 10.4.7.10 准备k8s资源配置清单运维主机HDSS7-200.host.com上,准备资源配置清单 DeploymentServiceIngressvi /data/k8s-yaml/dubbo-demo-consumer/deployment.yaml 1234567891011121314151617181920212223242526272829303132333435363738394041424344kind: DeploymentapiVersion: extensions/v1beta1metadata: name: dubbo-demo-consumer namespace: app labels: name: dubbo-demo-consumerspec: replicas: 1 selector: matchLabels: name: dubbo-demo-consumer template: metadata: labels: app: dubbo-demo-consumer name: dubbo-demo-consumer spec: containers: - name: dubbo-demo-consumer image: harbor.od.com/app/dubbo-demo-consumer:master_190119_2015 ports: - containerPort: 8080 protocol: TCP - containerPort: 20880 protocol: TCP env: - name: JAR_BALL value: dubbo-client.jar imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600vi /data/k8s-yaml/dubbo-demo-consumer/svc.yaml 123456789101112131415kind: ServiceapiVersion: v1metadata: name: dubbo-demo-consumer namespace: appspec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: dubbo-demo-consumer clusterIP: None type: ClusterIP sessionAffinity: Nonevi /data/k8s-yaml/dubbo-demo-consumer/ingress.yaml 1234567891011121314kind: IngressapiVersion: extensions/v1beta1metadata: name: dubbo-demo-consumer namespace: appspec: rules: - host: demo.od.com http: paths: - path: / backend: serviceName: dubbo-demo-consumer servicePort: 8080 应用资源配置清单在任意一台k8s运算节点执行: 
123456[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/deployment.yamldeployment.extensions/dubbo-demo-consumer created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/svc.yamlservice/dubbo-demo-consumer created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-consumer/ingress.yamlingress.extensions/dubbo-demo-consumer created 检查docker运行情况及dubbo-monitorhttp://dubbo-monitor.od.com 浏览器访问http://demo.od.com/hello?name=wangdao 实战维护dubbo微服务集群更新(rolling update) 修改代码提git(发版) 使用jenkins进行CI 修改并应用k8s资源配置清单 或者在k8s的dashboard上直接操作 扩容(scaling) k8s的dashboard上直接操作]]></content>
<categories>
<category>Kubernetes容器云技术专题</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档3:在kubernetes集群里集成Apollo配置中心]]></title>
<url>%2F2019%2F01%2F18%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A33%EF%BC%9A%E5%9C%A8kubernetes%E9%9B%86%E7%BE%A4%E9%87%8C%E9%9B%86%E6%88%90Apollo%E9%85%8D%E7%BD%AE%E4%B8%AD%E5%BF%83%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 使用ConfigMap管理应用配置拆分环境 主机名 角色 ip HDSS7-11.host.com zk1.od.com(Test环境) 10.4.7.11 HDSS7-12.host.com zk2.od.com(Prod环境) 10.4.7.12 重配zookeeperHDSS7-11.host.com上: /opt/zookeeper/conf/zoo.cfg123456tickTime=2000initLimit=10syncLimit=5dataDir=/data/zookeeper/datadataLogDir=/data/zookeeper/logsclientPort=2181 HDSS7-12.host.com上: /opt/zookeeper/conf/zoo.cfg123456tickTime=2000initLimit=10syncLimit=5dataDir=/data/zookeeper/datadataLogDir=/data/zookeeper/logsclientPort=2181 重启zk(删除数据文件) 123[root@hdss7-11 ~]# /opt/zookeeper/bin/zkServer.sh restart && /opt/zookeeper/bin/zkServer.sh status[root@hdss7-12 ~]# /opt/zookeeper/bin/zkServer.sh restart && /opt/zookeeper/bin/zkServer.sh status[root@hdss7-21 ~]# /opt/zookeeper/bin/zkServer.sh stop 准备资源配置清单(dubbo-monitor)在运维主机HDSS7-200.host.com上: ConfigMapDeploymentvi /data/k8s-yaml/dubbo-monitor/configmap.yaml 123456789101112131415161718apiVersion: v1kind: ConfigMapmetadata: name: dubbo-monitor-cm namespace: infradata: dubbo.properties: | dubbo.container=log4j,spring,registry,jetty dubbo.application.name=simple-monitor dubbo.application.owner= dubbo.registry.address=zookeeper://zk1.od.com:2181 dubbo.protocol.port=20880 dubbo.jetty.port=8080 dubbo.jetty.directory=/dubbo-monitor-simple/monitor dubbo.charts.directory=/dubbo-monitor-simple/charts dubbo.statistics.directory=/dubbo-monitor-simple/statistics dubbo.log4j.file=/dubbo-monitor-simple/logs/dubbo-monitor.log dubbo.log4j.level=WARNvi /data/k8s-yaml/dubbo-monitor/deployment.yaml 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748kind: DeploymentapiVersion: extensions/v1beta1metadata: name: dubbo-monitor namespace: infra labels: name: dubbo-monitorspec: replicas: 1 selector: matchLabels: name: dubbo-monitor template: metadata: labels: app: dubbo-monitor name: dubbo-monitor spec: containers: - name: dubbo-monitor image: harbor.od.com/infra/dubbo-monitor:latest ports: - containerPort: 8080 
protocol: TCP - containerPort: 20880 protocol: TCP imagePullPolicy: IfNotPresent volumeMounts: - name: configmap-volume mountPath: /dubbo-monitor-simple/conf volumes: - name: configmap-volume configMap: name: dubbo-monitor-cm imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600 应用资源配置清单在任意一台k8s运算节点执行: 1234[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/configmap.yamlconfigmap/dubbo-monitor-cm created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/deployment.yamldeployment.extensions/dubbo-monitor configured 重新发版,修改dubbo项目的配置文件修改项目源代码 duboo-demo-service dubbo-server/src/main/java/config.properties12dubbo.registry=zookeeper://zk1.od.com:2181dubbo.port=28080 dubbo-demo-web dubbo-client/src/main/java/config.properties1dubbo.registry=zookeeper://zk1.od.com:2181 使用Jenkins进行CI略 修改/应用资源配置清单k8s的dashboard上,修改deployment使用的容器版本,提交应用 验证configmap的配置在K8S的dashboard上,修改dubbo-monitor的configmap配置为不同的zk,重启POD,浏览器打开http://dubbo-monitor.od.com 观察效果 交付Apollo至Kubernetes集群Apollo简介Apollo(阿波罗)是携程框架部门研发的分布式配置中心,能够集中化管理应用不同环境、不同集群的配置,配置修改后能够实时推送到应用端,并且具备规范的权限、流程治理等特性,适用于微服务配置管理场景。 官方GitHub地址Apollo官方地址官方release包 基础架构 简化模型 交付apollo-configservice准备软件包在运维主机HDSS7-200.host.com上:下载官方release包 /opt/src123456789101112[root@hdss7-200 src]# ls -l|grep apollo-rw-r--r-- 1 root root 52713404 Feb 16 23:29 apollo-configservice-1.3.0-github.zip[root@hdss7-200 src]# mkdir /data/dockerfile/apollo-configservice && unzip -o apollo-configservice-1.3.0-github.zip -d /data/dockerfile/apollo-configserviceArchive: apollo-configservice-1.3.0-github.zip creating: /data/dockerfile/apollo-configservice/scripts/ inflating: /data/dockerfile/apollo-configservice/config/application-github.properties inflating: 
/data/dockerfile/apollo-configservice/scripts/shutdown.sh inflating: /data/dockerfile/apollo-configservice/apollo-configservice-1.3.0-sources.jar inflating: /data/dockerfile/apollo-configservice/scripts/startup.sh inflating: /data/dockerfile/apollo-configservice/config/app.properties inflating: /data/dockerfile/apollo-configservice/apollo-configservice-1.3.0.jar inflating: /data/dockerfile/apollo-configservice/apollo-configservice.conf 执行数据库脚本在数据库主机HDSS7-11.host.com上:注意:MySQL版本应为5.6或以上! 更新yum源 /etc/yum.repos.d/MariaDB.repo12345[mariadb]name = MariaDBbaseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.1/centos7-amd64/gpgkey=https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDBgpgcheck=1 导入GPG-KEY 1[root@hdss7-11 ~]# rpm --import https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB 更新数据库版本 1[root@hdss7-11 ~]# yum update MariaDB-server -y 配置my.cnf /etc/my.cnf123456[mysql]default-character-set = utf8mb4[mysqld]character_set_server = utf8mb4collation_server = utf8mb4_general_ciinit_connect = "SET NAMES 'utf8mb4'" 数据库脚本地址 123[root@hdss7-11 ~]# mysql -uroot -pmysql> create database ApolloConfigDB;mysql> source ./apolloconfig.sql 数据库用户授权1mysql> grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigDB.* to "apolloconfig"@"10.4.7.%" identified by "123456"; 修改初始数据12345678910111213141516mysql> update ApolloConfigDB.ServerConfig set ServerConfig.Value="http://config.od.com/eureka" where ServerConfig.Key="eureka.service.url";Query OK, 1 row affected (0.00 sec)Rows matched: 1 Changed: 1 Warnings: 0mysql> select * from ServerConfig\G*************************** 1. 
row *************************** Id: 1 Key: eureka.service.url Cluster: default Value: http://config.od.com/eureka Comment: Eureka服务Url,多个service以英文逗号分隔 IsDeleted: DataChange_CreatedBy: default DataChange_CreatedTime: 2019-04-10 15:07:34DataChange_LastModifiedBy: DataChange_LastTime: 2019-04-11 16:28:57 制作Docker镜像在运维主机HDSS7-200.host.com上: 配置数据库连接串 /data/dockerfile/apollo-configservice1[root@hdss7-200 apollo-configservice]# cat config/application-github.properties 更新startup.sh /data/dockerfile/apollo-configservice/scripts/startup.sh123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263#!/bin/bashSERVICE_NAME=apollo-configservice## Adjust log dir if necessaryLOG_DIR=/opt/logs/apollo-config-server## Adjust server port if necessarySERVER_PORT=8080APOLLO_CONFIG_SERVICE_NAME=$(hostname -i)SERVER_URL="http://${APOLLO_CONFIG_SERVICE_NAME}:${SERVER_PORT}"## Adjust memory settings if necessary#export JAVA_OPTS="-Xms6144m -Xmx6144m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=4096m -XX:MaxNewSize=4096m -XX:SurvivorRatio=8"## Only uncomment the following when you are using server jvm#export JAVA_OPTS="$JAVA_OPTS -server -XX:-ReduceInitialCardMarks"########### The following is the same for configservice, adminservice, portal ###########export JAVA_OPTS="$JAVA_OPTS -XX:ParallelGCThreads=4 -XX:MaxTenuringThreshold=9 -XX:+DisableExplicitGC -XX:+ScavengeBeforeFullGC -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+ExplicitGCInvokesConcurrent -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Duser.timezone=Asia/Shanghai -Dclient.encoding.override=UTF-8 -Dfile.encoding=UTF-8 -Djava.security.egd=file:/dev/./urandom"export JAVA_OPTS="$JAVA_OPTS -Dserver.port=$SERVER_PORT -Dlogging.file=$LOG_DIR/$SERVICE_NAME.log -XX:HeapDumpPath=$LOG_DIR/HeapDumpOnOutOfMemoryError/"# Find Javaif [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then javaexe="$JAVA_HOME/bin/java"elif type -p 
java > /dev/null 2>&1; then javaexe=$(type -p java)elif [[ -x "/usr/bin/java" ]]; then javaexe="/usr/bin/java"else echo "Unable to find Java" exit 1fiif [[ "$javaexe" ]]; then version=$("$javaexe" -version 2>&1 | awk -F '"' '/version/ {print $2}') version=$(echo "$version" | awk -F. '{printf("%03d%03d",$1,$2);}') # now version is of format 009003 (9.3.x) if [ $version -ge 011000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" elif [ $version -ge 010000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" elif [ $version -ge 009000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" else JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC" JAVA_OPTS="$JAVA_OPTS -Xloggc:$LOG_DIR/gc.log -XX:+PrintGCDetails" JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=9 -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDateStamps -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=5M" fifiprintf "$(date) ==== Starting ==== \n"cd `dirname $0`/..chmod 755 $SERVICE_NAME".jar"./$SERVICE_NAME".jar" startrc=$?;if [[ $rc != 0 ]];then echo "$(date) Failed to start $SERVICE_NAME.jar, return code: $rc" exit $rc;fitail -f /dev/null 写Dockerfile /data/dockerfile/apollo-configservice/Dockerfile123456789101112FROM stanleyws/jre8:8u112ENV VERSION 1.3.0RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\ echo "Asia/Shanghai" > /etc/timezoneADD apollo-configservice-${VERSION}.jar /apollo-configservice/apollo-configservice.jarADD config/ /apollo-configservice/configADD scripts/ /apollo-configservice/scriptsCMD ["/apollo-configservice/scripts/startup.sh"] 制作镜像并推送 
1234567891011121314151617181920212223242526272829303132333435363738394041[root@hdss7-200 apollo-configservice]# docker build . -t harbor.od.com/infra/apollo-configservice:v1.3.0Sending build context to Docker daemon 61.91 MBStep 1 : FROM stanleyws/jre8:8u112 ---> fa3a085d6ef1Step 2 : ENV VERSION 1.3.0 ---> [Warning] IPv4 forwarding is disabled. Networking will not work. ---> Running in 685d51b5adb4 ---> feb4c0289f04Removing intermediate container 685d51b5adb4Step 3 : RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone ---> [Warning] IPv4 forwarding is disabled. Networking will not work. ---> Running in eaa05073feeb ---> a3e3fd61ae35Removing intermediate container eaa05073feebStep 4 : ADD apollo-configservice-${VERSION}.jar /apollo-configservice/apollo-configservice.jar ---> be09a59b83a2Removing intermediate container ac6b8af3979bStep 5 : ADD config/ /apollo-configservice/config ---> fb64fc0f3194Removing intermediate container b73c5315ad20Step 6 : ADD scripts/ /apollo-configservice/scripts ---> 96ff3d9b9456Removing intermediate container 67ba203b3101Step 7 : CMD /apollo-configservice/scripts/startup.sh ---> [Warning] IPv4 forwarding is disabled. Networking will not work. 
---> Running in 80bd3f53fefc ---> 551ea2ba8de3Removing intermediate container 80bd3f53fefcSuccessfully built 551ea2ba8de3[root@hdss7-200 apollo-configservice]# docker push harbor.od.com/infra/apollo-configservice:v1.3.0The push refers to a repository [harbor.od.com/infra/apollo-configservice]25efb9a44683: Pushed b3572bb46247: Pushed e7994b936025: Pushed 0ff1d078cbc4: Pushed ebfb473df5c2: Pushed aae5c057d1b6: Pushed dee6aef5c2b6: Pushed a464c54f93a9: Pushed v1.3.0: digest: sha256:6a8e4fdda58de0dfba9985ebbf91c4d6f46f5274983d2efa8853b03f4e45fa06 size: 1992 解析域名DNS主机HDSS7-11.host.com上: /var/named/od.com.zone12mysql 60 IN A 10.4.7.11config 60 IN A 10.4.7.10 准备资源配置清单在运维主机HDSS7-200.host.com上 /data/k8s-yaml1[root@hdss7-200 k8s-yaml]# mkdir /data/k8s-yaml/apollo-configservice && cd /data/k8s-yaml/apollo-configservice DeploymentServiceIngressConfigMapvi deployment.yaml 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748kind: DeploymentapiVersion: extensions/v1beta1metadata: name: apollo-configservice namespace: infra labels: name: apollo-configservicespec: replicas: 1 selector: matchLabels: name: apollo-configservice template: metadata: labels: app: apollo-configservice name: apollo-configservice spec: volumes: - name: configmap-volume configMap: name: apollo-configservice-cm containers: - name: apollo-configservice image: harbor.od.com/infra/apollo-configservice:v1.3.0 ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: configmap-volume mountPath: /apollo-configservice/config terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600vi svc.yaml 123456789101112131415kind: ServiceapiVersion: v1metadata: 
name: apollo-configservice namespace: infraspec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: apollo-configservice clusterIP: None type: ClusterIP sessionAffinity: Nonevi ingress.yaml 1234567891011121314kind: IngressapiVersion: extensions/v1beta1metadata: name: apollo-configservice namespace: infraspec: rules: - host: config.od.com http: paths: - path: / backend: serviceName: apollo-configservice servicePort: 8080vi configmap.yaml 1234567891011121314apiVersion: v1kind: ConfigMapmetadata: name: apollo-configservice-cm namespace: infradata: application-github.properties: | # DataSource spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigDB?characterEncoding=utf8 spring.datasource.username = apolloconfig spring.datasource.password = 123456 eureka.service.url = http://config.od.com/eureka app.properties: | appId=100003171 应用资源配置清单在任意一台k8s运算节点执行: 12345678[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-configservice/configmap.yamlconfigmap/apollo-configservice-cm created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-configservice/deployment.yamldeployment.extensions/apollo-configservice created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-configservice/svc.yamlservice/apollo-configservice created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-configservice/ingress.yamlingress.extensions/apollo-configservice created 浏览器访问http://config.od.com 交付apollo-adminservice准备软件包在运维主机HDSS7-200.host.com上:下载官方release包 12345[root@hdss7-200 src]# ls -l|grep apollo-rw-r--r-- 1 root root 52713404 Feb 16 08:47 apollo-configservice-1.3.0-github.zip-rw-r--r-- 1 root root 49418246 Feb 16 09:54 apollo-adminservice-1.3.0-github.zip[root@hdss7-200 src]# mkdir /data/dockerfile/apollo-adminservice && unzip -o apollo-adminservice-1.3.0-github.zip -d /data/dockerfile/apollo-adminservice 制作Docker镜像在运维主机HDSS7-200.host.com上: 配置数据库连接串 /data/dockerfile/apollo-adminservice1[root@hdss7-200 
apollo-adminservice]# cat config/application-github.properties 更新startup.sh /data/dockerfile/apollo-adminservice/scripts/startup.sh12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364#!/bin/bashSERVICE_NAME=apollo-adminservice## Adjust log dir if necessaryLOG_DIR=/opt/logs/apollo-adminservice## Adjust server port if necessarySERVER_PORT=8080APOLLO_ADMIN_SERVICE_NAME=$(hostname -i)# SERVER_URL="http://localhost:${SERVER_PORT}"SERVER_URL="http://${APOLLO_ADMIN_SERVICE_NAME}:${SERVER_PORT}"## Adjust memory settings if necessary#export JAVA_OPTS="-Xms2560m -Xmx2560m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=1536m -XX:MaxNewSize=1536m -XX:SurvivorRatio=8"## Only uncomment the following when you are using server jvm#export JAVA_OPTS="$JAVA_OPTS -server -XX:-ReduceInitialCardMarks"########### The following is the same for configservice, adminservice, portal ###########export JAVA_OPTS="$JAVA_OPTS -XX:ParallelGCThreads=4 -XX:MaxTenuringThreshold=9 -XX:+DisableExplicitGC -XX:+ScavengeBeforeFullGC -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+ExplicitGCInvokesConcurrent -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Duser.timezone=Asia/Shanghai -Dclient.encoding.override=UTF-8 -Dfile.encoding=UTF-8 -Djava.security.egd=file:/dev/./urandom"export JAVA_OPTS="$JAVA_OPTS -Dserver.port=$SERVER_PORT -Dlogging.file=$LOG_DIR/$SERVICE_NAME.log -XX:HeapDumpPath=$LOG_DIR/HeapDumpOnOutOfMemoryError/"# Find Javaif [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then javaexe="$JAVA_HOME/bin/java"elif type -p java > /dev/null 2>&1; then javaexe=$(type -p java)elif [[ -x "/usr/bin/java" ]]; then javaexe="/usr/bin/java"else echo "Unable to find Java" exit 1fiif [[ "$javaexe" ]]; then version=$("$javaexe" -version 2>&1 | awk -F '"' '/version/ {print $2}') version=$(echo "$version" | awk -F. 
'{printf("%03d%03d",$1,$2);}') # now version is of format 009003 (9.3.x) if [ $version -ge 011000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" elif [ $version -ge 010000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" elif [ $version -ge 009000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" else JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC" JAVA_OPTS="$JAVA_OPTS -Xloggc:$LOG_DIR/gc.log -XX:+PrintGCDetails" JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=9 -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDateStamps -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=5M" fifiprintf "$(date) ==== Starting ==== \n"cd `dirname $0`/..chmod 755 $SERVICE_NAME".jar"./$SERVICE_NAME".jar" startrc=$?;if [[ $rc != 0 ]];then echo "$(date) Failed to start $SERVICE_NAME.jar, return code: $rc" exit $rc;fitail -f /dev/null 写Dockerfile /data/dockerfile/apollo-adminservice/Dockerfile123456789101112FROM stanleyws/jre8:8u112ENV VERSION 1.3.0RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\ echo "Asia/Shanghai" > /etc/timezoneADD apollo-adminservice-${VERSION}.jar /apollo-adminservice/apollo-adminservice.jarADD config/ /apollo-adminservice/configADD scripts/ /apollo-adminservice/scriptsCMD ["/apollo-adminservice/scripts/startup.sh"] 制作镜像并推送 12345678910111213141516171819202122232425262728293031323334353637[root@hdss7-200 apollo-adminservice]# docker build . 
-t harbor.od.com/infra/apollo-adminservice:v1.3.0Sending build context to Docker daemon 58.31 MBStep 1 : FROM stanleyws/jre8:8u112 ---> fa3a085d6ef1Step 2 : ENV VERSION 1.3.0 ---> Using cache ---> feb4c0289f04Step 3 : RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone ---> Using cache ---> a3e3fd61ae35Step 4 : ADD apollo-adminservice-${VERSION}.jar /apollo-adminservice/apollo-adminservice.jar ---> 6a1eb9565777Removing intermediate container 7196df9af6afStep 5 : ADD config/ /apollo-adminservice/config ---> 9f364b732d46Removing intermediate container 9b24669c6c78Step 6 : ADD scripts/ /apollo-adminservice/scripts ---> b7bc5517b0fcRemoving intermediate container f3e34e759148Step 7 : CMD /apollo-adminservice/scripts/startup.sh ---> [Warning] IPv4 forwarding is disabled. Networking will not work. ---> Running in 18c6597914b4 ---> 82145db3ee88Removing intermediate container 18c6597914b4Successfully built 82145db3ee88[root@hdss7-200 apollo-adminservice]# docker push harbor.od.com/infra/apollo-adminservice:v1.3.0docker push harbor.od.com/infra/apollo-adminservice:v1.3.0The push refers to a repository [harbor.od.com/infra/apollo-adminservice]19b1ca6c066d: Pushed 8fa6cde49908: Pushed 0b2c9b9226cc: Pushed ebfb473df5c2: Mounted from infra/apollo-configservice aae5c057d1b6: Mounted from infra/apollo-configservice dee6aef5c2b6: Mounted from infra/apollo-configservice a464c54f93a9: Mounted from infra/apollo-configservice v1.3.0: digest: sha256:75367caab9bad3d0d281eb3324451a0734e84b6aa3ee860e38ad758d7166a7d1 size: 1785 准备资源配置清单在运维主机HDSS7-200.host.com上 /data/k8s-yaml1[root@hdss7-200 k8s-yaml]# mkdir /data/k8s-yaml/apollo-adminservice && cd /data/k8s-yaml/apollo-adminservice DeploymentConfigMapvi deployment.yaml 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748kind: DeploymentapiVersion: extensions/v1beta1metadata: name: apollo-adminservice namespace: infra labels: name: apollo-adminservicespec: 
replicas: 1 selector: matchLabels: name: apollo-adminservice template: metadata: labels: app: apollo-adminservice name: apollo-adminservice spec: volumes: - name: configmap-volume configMap: name: apollo-adminservice-cm containers: - name: apollo-adminservice image: harbor.od.com/infra/apollo-adminservice:v1.3.0 ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: configmap-volume mountPath: /apollo-adminservice/config terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600vi configmap.yaml 1234567891011121314apiVersion: v1kind: ConfigMapmetadata: name: apollo-adminservice-cm namespace: infradata: application-github.properties: | # DataSource spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigDB?characterEncoding=utf8 spring.datasource.username = apolloconfig spring.datasource.password = 123456 eureka.service.url = http://config.od.com/eureka app.properties: | appId=100003172 应用资源配置清单在任意一台k8s运算节点执行: 1234[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-adminservice/configmap.yamlconfigmap/apollo-adminservice-cm created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-adminservice/deployment.yamldeployment.extensions/apollo-adminservice created 浏览器访问http://config.od.com 交付apollo-portal准备软件包在运维主机HDSS7-200.host.com上:下载官方release包 12345678910111213141516[root@hdss7-200 src]# ls -l|grep apollo-rw-r--r-- 1 root root 52713404 Feb 16 08:37 apollo-configservice-1.3.0-github.zip-rw-r--r-- 1 root root 49418246 Feb 16 09:54 apollo-adminservice-1.3.0-github.zip-rw-r--r-- 1 root root 36459359 Feb 16 10:00 apollo-portal-1.3.0-github.zip[root@hdss7-200 src]# mkdir /data/dockerfile/apollo-portal && 
unzip -o apollo-portal-1.3.0-github.zip -d /data/dockerfile/apollo-portalArchive: apollo-portal-1.3.0-github.zip inflating: /data/dockerfile/apollo-portal/scripts/shutdown.sh inflating: /data/dockerfile/apollo-portal/apollo-portal.conf inflating: /data/dockerfile/apollo-portal/apollo-portal-1.3.0-sources.jar creating: /data/dockerfile/apollo-portal/config/ inflating: /data/dockerfile/apollo-portal/config/application-github.properties inflating: /data/dockerfile/apollo-portal/scripts/startup.sh inflating: /data/dockerfile/apollo-portal/config/apollo-env.properties inflating: /data/dockerfile/apollo-portal/config/app.properties inflating: /data/dockerfile/apollo-portal/apollo-portal-1.3.0.jar 执行数据库脚本在数据库主机HDSS7-11.host.com上:数据库脚本地址 123[root@hdss7-11 ~]# mysql -uroot -pmysql> create database ApolloPortalDB;mysql> source ./apolloportal.sql 数据库用户授权1mysql> grant INSERT,DELETE,UPDATE,SELECT on ApolloPortalDB.* to "apolloportal"@"172.7.%" identified by "123456"; 制作Docker镜像在运维主机HDSS7-200.host.com上: 配置数据库连接串 /data/dockerfile/apollo-portal1[root@hdss7-200 apollo-portal]# cat config/application-github.properties 配置Portal的meta service /data/dockerfile/apollo-portal/config/apollo-env.properties1dev.meta=http://config.od.com 更新startup.sh /data/dockerfile/apollo-portal/scripts/startup.sh12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849505152535455565758596061626364#!/bin/bashSERVICE_NAME=apollo-portal## Adjust log dir if necessaryLOG_DIR=/opt/logs/apollo-portal-server## Adjust server port if necessarySERVER_PORT=8080APOLLO_PORTAL_SERVICE_NAME=$(hostname -i)# SERVER_URL="http://localhost:$SERVER_PORT"SERVER_URL="http://${APOLLO_PORTAL_SERVICE_NAME}:${SERVER_PORT}"## Adjust memory settings if necessary#export JAVA_OPTS="-Xms2560m -Xmx2560m -Xss256k -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=384m -XX:NewSize=1536m -XX:MaxNewSize=1536m -XX:SurvivorRatio=8"## Only uncomment the following when you are using server jvm#export JAVA_OPTS="$JAVA_OPTS 
-server -XX:-ReduceInitialCardMarks"########### The following is the same for configservice, adminservice, portal ###########export JAVA_OPTS="$JAVA_OPTS -XX:ParallelGCThreads=4 -XX:MaxTenuringThreshold=9 -XX:+DisableExplicitGC -XX:+ScavengeBeforeFullGC -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+ExplicitGCInvokesConcurrent -XX:+PrintGCDetails -XX:+HeapDumpOnOutOfMemoryError -XX:-OmitStackTraceInFastThrow -Duser.timezone=Asia/Shanghai -Dclient.encoding.override=UTF-8 -Dfile.encoding=UTF-8 -Djava.security.egd=file:/dev/./urandom"export JAVA_OPTS="$JAVA_OPTS -Dserver.port=$SERVER_PORT -Dlogging.file=$LOG_DIR/$SERVICE_NAME.log -XX:HeapDumpPath=$LOG_DIR/HeapDumpOnOutOfMemoryError/"# Find Javaif [[ -n "$JAVA_HOME" ]] && [[ -x "$JAVA_HOME/bin/java" ]]; then javaexe="$JAVA_HOME/bin/java"elif type -p java > /dev/null 2>&1; then javaexe=$(type -p java)elif [[ -x "/usr/bin/java" ]]; then javaexe="/usr/bin/java"else echo "Unable to find Java" exit 1fiif [[ "$javaexe" ]]; then version=$("$javaexe" -version 2>&1 | awk -F '"' '/version/ {print $2}') version=$(echo "$version" | awk -F. 
'{printf("%03d%03d",$1,$2);}') # now version is of format 009003 (9.3.x) if [ $version -ge 011000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" elif [ $version -ge 010000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" elif [ $version -ge 009000 ]; then JAVA_OPTS="$JAVA_OPTS -Xlog:gc*:$LOG_DIR/gc.log:time,level,tags -Xlog:safepoint -Xlog:gc+heap=trace" else JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC" JAVA_OPTS="$JAVA_OPTS -Xloggc:$LOG_DIR/gc.log -XX:+PrintGCDetails" JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC -XX:+UseCMSCompactAtFullCollection -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=60 -XX:+CMSClassUnloadingEnabled -XX:+CMSParallelRemarkEnabled -XX:CMSFullGCsBeforeCompaction=9 -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDateStamps -XX:+PrintGCApplicationConcurrentTime -XX:+PrintHeapAtGC -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=5M" fifiprintf "$(date) ==== Starting ==== \n"cd `dirname $0`/..chmod 755 $SERVICE_NAME".jar"./$SERVICE_NAME".jar" startrc=$?;if [[ $rc != 0 ]];then echo "$(date) Failed to start $SERVICE_NAME.jar, return code: $rc" exit $rc;fitail -f /dev/null 写Dockerfile /data/dockerfile/apollo-portal/Dockerfile123456789101112FROM stanleyws/jre8:8u112ENV VERSION 1.3.0RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\ echo "Asia/Shanghai" > /etc/timezoneADD apollo-portal-${VERSION}.jar /apollo-portal/apollo-portal.jarADD config/ /apollo-portal/configADD scripts/ /apollo-portal/scriptsCMD ["/apollo-portal/scripts/startup.sh"] 制作镜像并推送 123456789101112131415161718192021222324252627282930313233343536[root@hdss7-200 apollo-portal]# docker build . 
-t harbor.od.com/infra/apollo-portal:v1.3.0Sending build context to Docker daemon 43.35 MBStep 1 : FROM stanleyws/jre8:8u112 ---> fa3a085d6ef1Step 2 : ENV VERSION 1.3.0 ---> Using cache ---> feb4c0289f04Step 3 : RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo "Asia/Shanghai" > /etc/timezone ---> Using cache ---> a3e3fd61ae35Step 4 : ADD apollo-portal-${VERSION}.jar /apollo-portal/apollo-portal.jar ---> cfcf63e8eedcRemoving intermediate container 860b55bd3fc5Step 5 : ADD config/ /apollo-portal/config ---> 3ee780369431Removing intermediate container 6b67ee4224b5Step 6 : ADD scripts/ /apollo-portal/scripts ---> 42c9aea2e9e3Removing intermediate container 2dcf8d1bf4cfStep 7 : CMD /apollo-portal/scripts/startup.sh ---> [Warning] IPv4 forwarding is disabled. Networking will not work. ---> Running in 9162dab8b63a ---> 0c020b79c36fRemoving intermediate container 9162dab8b63aSuccessfully built 0c020b79c36f[root@hdss7-200 apollo-portal]# docker push harbor.od.com/infra/apollo-portal:v1.3.0docker push harbor.od.com/infra/apollo-portal:v1.3.0The push refers to a repository [harbor.od.com/infra/apollo-portal]e7c0e96ded4e: Pushed 0076c5344476: Pushed 3851a45d7440: Pushed ebfb473df5c2: Mounted from infra/apollo-adminservice aae5c057d1b6: Mounted from infra/apollo-adminservice dee6aef5c2b6: Mounted from infra/apollo-adminservice a464c54f93a9: Mounted from infra/apollo-adminservice v1.3.0: digest: sha256:1aa30aac8642cceb97c053b7d74632240af08f64c49b65d8729021fef65628a4 size: 1785 解析域名DNS主机HDSS7-11.host.com上: /var/named/od.com.zone1portal 60 IN A 10.4.7.10 准备资源配置清单在运维主机HDSS7-200.host.com上 /data/k8s-yaml1[root@hdss7-200 k8s-yaml]# mkdir /data/k8s-yaml/apollo-portal && cd /data/k8s-yaml/apollo-portal DeploymentServiceIngressConfigMapvi deployment.yaml 123456789101112131415161718192021222324252627282930313233343536373839404142434445464748kind: DeploymentapiVersion: extensions/v1beta1metadata: name: apollo-portal namespace: infra labels: name: apollo-portalspec: 
replicas: 1 selector: matchLabels: name: apollo-portal template: metadata: labels: app: apollo-portal name: apollo-portal spec: volumes: - name: configmap-volume configMap: name: apollo-portal-cm containers: - name: apollo-portal image: harbor.od.com/infra/apollo-portal:v1.3.0 ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: configmap-volume mountPath: /apollo-portal/config terminationMessagePath: /dev/termination-log terminationMessagePolicy: File imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600vi svc.yaml 123456789101112131415kind: ServiceapiVersion: v1metadata: name: apollo-portal namespace: infraspec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: apollo-portal clusterIP: None type: ClusterIP sessionAffinity: Nonevi ingress.yaml 1234567891011121314kind: IngressapiVersion: extensions/v1beta1metadata: name: apollo-portal namespace: infraspec: rules: - host: portal.od.com http: paths: - path: / backend: serviceName: apollo-portal servicePort: 8080vi configmap.yaml 123456789101112131415apiVersion: v1kind: ConfigMapmetadata: name: apollo-portal-cm namespace: infradata: application-github.properties: | # DataSource spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloPortalDB?characterEncoding=utf8 spring.datasource.username = apolloportal spring.datasource.password = 123456 app.properties: | appId=100003173 apollo-env.properties: | dev.meta=http://config.od.com 应用资源配置清单在任意一台k8s运算节点执行: 12345678[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-portal/configmap.yamlconfigmap/apollo-portal-cm created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-portal/deployment.yamldeployment.extensions/apollo-portal created[root@hdss7-21 ~]# kubectl 
apply -f http://k8s-yaml.od.com/apollo-portal/svc.yamlservice/apollo-portal created[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/apollo-portal/ingress.yamlingress.extensions/apollo-portal created 浏览器访问http://portal.od.com 用户名:apollo 密码: admin 实战dubbo微服务接入Apollo配置中心改造dubbo-demo-service项目使用IDE拉取项目(这里使用git bash作为范例)1$ git clone [email protected]/stanleywang/dubbo-demo-service.git 切到apollo分支1$ git checkout -b apollo 修改pom.xml 加入apollo客户端jar包的依赖 dubbo-server/pom.xml12345<dependency> <groupId>com.ctrip.framework.apollo</groupId> <artifactId>apollo-client</artifactId> <version>1.1.0</version></dependency> 修改resource段 dubbo-server/pom.xml1234567<resource> <directory>src/main/resources</directory> <includes> <include>**/*</include> </includes> <filtering>false</filtering></resource> 增加resources目录/d/workspace/dubbo-demo-service/dubbo-server/src/main123$ mkdir -pv resources/META-INFmkdir: created directory 'resources'mkdir: created directory 'resources/META-INF' 修改config.properties文件/d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/config.properties12dubbo.registry=${dubbo.registry}dubbo.port=${dubbo.port} 修改spring-config.xml文件 beans段新增属性 /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml1xmlns:apollo="http://www.ctrip.com/schema/apollo" xsi:schemaLocation段内新增属性 /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml1http://www.ctrip.com/schema/apollo http://www.ctrip.com/schema/apollo.xsd 新增配置项 /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml1<apollo:config/> 删除配置项(注释) /d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/spring-config.xml1<!-- <context:property-placeholder location="classpath:config.properties"/> --> 增加app.properties文件/d/workspace/dubbo-demo-service/dubbo-server/src/main/resources/META-INF/app.properties1app.id=dubbo-demo-service 提交git中心仓库(gitee)1$ git push origin apollo 配置apollo-portal创建项目 部门 样例部门1(TEST1) 应用id dubbo-demo-service 
应用名称 dubbo服务提供者 应用负责人 apollo|apollo 项目管理员 apollo|apollo 提交 进入配置页面新增配置项1 Key dubbo.registry Value zookeeper://zk1.od.com:2181 选择集群 DEV 提交 新增配置项2 Key dubbo.port Value 20880 选择集群 DEV 提交 发布配置点击发布,配置生效 使用jenkins进行CI略(注意记录镜像的tag) 上线新构建的项目准备资源配置清单运维主机HDSS7-200.host.com上: /data/k8s-yaml/dubbo-demo-service/deployment.yaml1234567891011121314151617181920212223242526272829303132333435363738394041424344kind: DeploymentapiVersion: extensions/v1beta1metadata: name: dubbo-demo-service namespace: app labels: name: dubbo-demo-servicespec: replicas: 1 selector: matchLabels: name: dubbo-demo-service template: metadata: labels: app: dubbo-demo-service name: dubbo-demo-service spec: containers: - name: dubbo-demo-service image: harbor.od.com/app/dubbo-demo-service:apollo_190119_1815 ports: - containerPort: 20880 protocol: TCP env: - name: C_OPTS value: -Denv=dev -Dapollo.meta=http://config.od.com - name: JAR_BALL value: dubbo-server.jar imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600 注意:增加了env段配置注意:docker镜像新版的tag 应用资源配置清单在任意一台k8s运算节点上执行: 12[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-service/deployment.yamldeployment.extensions/dubbo-demo-service configured 观察项目运行情况http://dubbo-monitor.od.com 改造dubbo-demo-web略 配置apollo-portal创建项目 部门 样例部门1(TEST1) 应用id dubbo-demo-web 应用名称 dubbo服务消费者 应用负责人 apollo|apollo 项目管理员 apollo|apollo 提交 进入配置页面新增配置项1 Key dubbo.registry Value zookeeper://zk1.od.com:2181 选择集群 DEV 提交 发布配置点击发布,配置生效 使用jenkins进行CI略(注意记录镜像的tag) 上线新构建的项目准备资源配置清单运维主机HDSS7-200.host.com上: /data/k8s-yaml/dubbo-demo-consumer/deployment.yaml12345678910111213141516171819202122232425262728293031323334353637383940414243444546kind: DeploymentapiVersion: extensions/v1beta1metadata: name: dubbo-demo-consumer namespace: app 
labels: name: dubbo-demo-consumerspec: replicas: 1 selector: matchLabels: name: dubbo-demo-consumer template: metadata: labels: app: dubbo-demo-consumer name: dubbo-demo-consumer spec: containers: - name: dubbo-demo-consumer image: harbor.od.com/app/dubbo-demo-consumer:apllo_190120_1815 ports: - containerPort: 20880 protocol: TCP - containerPort: 8080 protocol: TCP env: - name: C_OPTS value: -Denv=dev -Dapollo.meta=http://config.od.com - name: JAR_BALL value: dubbo-client.jar imagePullPolicy: IfNotPresent imagePullSecrets: - name: harbor restartPolicy: Always terminationGracePeriodSeconds: 30 securityContext: runAsUser: 0 schedulerName: default-scheduler strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 maxSurge: 1 revisionHistoryLimit: 7 progressDeadlineSeconds: 600 注意:增加了env段配置注意:docker镜像新版的tag 应用资源配置清单在任意一台k8s运算节点上执行: 12[root@hdss7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/dubbo-demo-web/deployment.yamldeployment.extensions/dubbo-demo-consumer configured 通过Apollo配置中心动态维护项目的配置以dubbo-demo-service项目为例,不用修改代码 在http://portal.od.com 里修改dubbo.port配置项 重启dubbo-demo-service项目 配置生效 实战维护多套dubbo微服务环境生产实践 迭代新需求/修复BUG(编码->提GIT) 测试环境发版,测试(应用通过编译打包发布至TEST命名空间) 测试通过,上线(应用镜像直接发布至PROD命名空间) 系统架构 物理架构 主机名 角色 ip HDSS7-11.host.com zk-test(测试环境Test) 10.4.7.11 HDSS7-12.host.com zk-prod(生产环境Prod) 10.4.7.12 HDSS7-21.host.com kubernetes运算节点 10.4.7.21 HDSS7-22.host.com kubernetes运算节点 10.4.7.22 HDSS7-200.host.com 运维主机,harbor仓库 10.4.7.200 K8S内系统架构 环境 命名空间 应用 测试环境(TEST) test apollo-config,apollo-admin 测试环境(TEST) test dubbo-demo-service,dubbo-demo-web 生产环境(PROD) prod apollo-config,apollo-admin 生产环境(PROD) prod dubbo-demo-service,dubbo-demo-web ops环境(infra) infra jenkins,dubbo-monitor,apollo-portal 修改/添加域名解析DNS主机HDSS7-11.host.com上: /var/named/od.com.zone123456zk-test 60 IN A 10.4.7.11zk-prod 60 IN A 10.4.7.12config-test 60 IN A 10.4.7.10config-prod 60 IN A 10.4.7.10demo-test 60 IN A 10.4.7.10demo-prod 60 IN A 10.4.7.10 Apollo的k8s应用配置 删除app命名空间内应用,创建test命名空间,创建prod命名空间 
删除infra命名空间内apollo-configservice,apollo-adminservice应用 数据库内删除ApolloConfigDB,创建ApolloConfigTestDB,创建ApolloConfigProdDB 12345678910111213mysql> drop database ApolloConfigDB;mysql> create database ApolloConfigTestDB;mysql> use ApolloConfigTestDB;mysql> source ./apolloconfig.sqlmysql> update ApolloConfigTestDB.ServerConfig set ServerConfig.Value="http://config-test.od.com/eureka" where ServerConfig.Key="eureka.service.url";mysql> grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigTestDB.* to "apolloconfig"@"10.4.7.%" identified by "123456";mysql> create database ApolloConfigProdDB;mysql> use ApolloConfigProdDB;mysql> source ./apolloconfig.sqlmysql> update ApolloConfigProdDB.ServerConfig set ServerConfig.Value="http://config-prod.od.com/eureka" where ServerConfig.Key="eureka.service.url";mysql> grant INSERT,DELETE,UPDATE,SELECT on ApolloConfigProdDB.* to "apolloconfig"@"10.4.7.%" identified by "123456"; 准备apollo-config,apollo-admin的资源配置清单(各2套) 注:apollo-config/apollo-admin的configmap配置要点 Test环境 123456application-github.properties: | # DataSource spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigTestDB?characterEncoding=utf8 spring.datasource.username = apolloconfig spring.datasource.password = 123456 eureka.service.url = http://config-test.od.com/eureka Prod环境 123456application-github.properties: | # DataSource spring.datasource.url = jdbc:mysql://mysql.od.com:3306/ApolloConfigProdDB?characterEncoding=utf8 spring.datasource.username = apolloconfig spring.datasource.password = 123456 eureka.service.url = http://config-prod.od.com/eureka 依次应用,分别发布在test和prod命名空间 修改apollo-portal的configmap并重启portal 123apollo-env.properties: | TEST.meta=http://config-test.od.com PROD.meta=http://config-prod.od.com Apollo的portal配置管理员工具删除应用、集群、AppNamespace,将已配置应用删除 系统参数 Key apollo.portal.envs Value TEST,PROD 查询 Value TEST,PROD 保存 新建dubbo-demo-service和dubbo-demo-web项目在TEST/PROD环境分别增加配置项并发布 发布dubbo微服务 准备dubbo-demo-service和dubbo-demo-web的资源配置清单(各2套) 依次应用,分别发布至app-test和app-prod命名空间 
使用dubbo-monitor查验 互联网公司技术部的日常 产品经理整理需求,需求评审,出产品原型 开发同学夜以继日的开发,提测 测试同学使用Jenkins持续集成,并发布至测试环境 验证功能,通过->待上线or打回->修改代码 提交发版申请,运维同学将测试后的包发往生产环境 无尽的BUG修复(笑cry)]]></content>
<categories>
<category>Kubernetes容器云技术专题</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档1:BIND9的安装部署]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A31%EF%BC%9ABIND9%E7%9A%84%E5%AE%89%E8%A3%85%E9%83%A8%E7%BD%B2%2F</url>
<content type="text"><![CDATA[安装部署BIND9操作系统版本和内核版本12345#cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)#uname -aLinux node 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux 使用yum安装BIND9123456#yum install bind============================================================================================================================================================= Package Arch Version Repository Size=============================================================================================================================================================Installing: bind x86_64 32:9.9.4-73.el7_6 updates 1.8 M 安装的版本为9.9.4 BIND9主配置文件/etc/named.conf 主配置文件的格式 12345678910options{ //全局选项}zone "zone name" { //定义区域}logging{ //日志文件}include:加载别的文件 主配置文件的配置注意事项 语法严格,分号,空格 文件的权限,属主:root,属组:named,640 主配置文件范例 1234567891011121314151617181920212223242526272829303132333435363738394041424344454647options { listen-on port 53 { 10.4.7.11; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; /* - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion. - If you are building a RECURSIVE (caching) DNS server, you need to enable recursion. - If your recursive DNS server has a public IP address, you MUST enable access control to limit queries to your legitimate users. Failing to do so will cause your server to become part of large scale DNS amplification attacks. 
Implementing BCP38 within your network would greatly reduce such attack surface */ recursion yes; dnssec-enable no; dnssec-validation no; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; pid-file "/run/named/named.pid"; session-keyfile "/run/named/session.key";};logging { channel default_debug { file "data/named.run"; severity dynamic; };};zone "." IN { type hint; file "named.ca";};include "/etc/named.rfc1912.zones";include "/etc/named.root.key"; BIND9服务的启动检查配置文件1# named-checkconf 没有报错就是正常的 启动BIND9服务1# systemctl start named 检查BIND9服务状态1# systemctl status named 这样就完成了一个最基本的转发DNS的部署,它可以为我们的内网客户端提供DNS递归查询,例如查询并返回www.baidu.com的解析结果。 验证解析配置DNS服务器指向在/etc/resolv.conf里配置DNS服务器的ip地址为我们部署的主机ip 123# cat /etc/resolv.conf # Generated by NetworkManagernameserver 10.4.7.11 验证解析12# ping baidu.comPING baidu.com (220.181.57.216) 56(84) bytes of data.]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档2:自定义正解域]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A32%EF%BC%9A%E8%87%AA%E5%AE%9A%E4%B9%89%E6%AD%A3%E8%A7%A3%E5%9F%9F%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 自定义区域配置文件自定义区域的配置范例如下: 12345zone "host.com" IN { type master; file "host.com.zone"; allow-update { 10.4.7.11;10.4.7.12; };}; 这里自定义了一个host.com的主机域,可以放在/etc/named.rfc1912.zones文件中,也可以放置在自定义的文件中,在/etc/named.conf里include进来 主机域 主机域和业务域无关,且建议分开 主机域其实是一个假域,也就是说,主机域其实是不能解析到互联网上的,它只对局域网(内网)提供服务 自定义区域数据库文件 一般而言是文本文件,且只包含资源记录、宏定义和注释 需在自定义区域配置文件中指定存放路径,可以绝对路径或相对路径(相对于/var/named目录) 注意文件的属性(属主、属组及权限) 配置范例12345678910111213$ORIGIN .$TTL 600 ; 10 minuteshost.com IN SOA ns1.host.com. dnsadmin.host.com. ( 2018121601 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.host.com.$ORIGIN host.com.$TTL 60 ; 1 minutens1 A 10.4.7.11 资源记录(Resource Record)资源记录格式1name [ttl(缓存时间)] IN 资源记录类型(RRtype) Value 常用资源记录类型(RR-type)SOA记录SOA: 起始授权,只能有一条 name:只能是区域名称,通常可以简写为@,例如:od.com. value:有n个数值,最主要的是主DNS服务器的FQDN,点不可省略 注意:SOA必须是区域数据库文件第一条记录 例子: 1234567@ 600 IN SOA dns.host.com. 管理员邮箱(dnsadmin.host.com.)( 序列号(serial number) ;注释内容,十进制数据,不能超过10位,通常使用日期时间戳,例如2018121601 刷新时间(refresh time) ;即每隔多久到主服务器检查一次 重试时间(retry time) ;应该小于refresh time 过期时间(expire time);当辅助DNS服务器无法联系上主DNS服务器时,辅助DNS服务器可以在多长时间内认为其缓存是有效的,并供用户查询。 negative answer ttl ;非权威应答的ttl,缓存DNS服务器可以缓存记录多长时间 ) NS记录NS:可以有多条,每一个NS记录,必须对应一个A记录 name:区域名称,通常可以简写为@ value:DNS服务器的FQDN(可以使用相对名称) 例子: 1@ 600 IN NS ns1 A记录A:只能定义在正向区域数据库文件中(FQDN->ipv4) name:FQDN(可以使用相对名称) value:IP 例子: 12www 600(单位s) IN A 10.4.7.11www 600(单位s) IN A 10.4.7.12 注 可以做轮询 MX记录MX:邮件交换记录,可以有多个(用的不多) name:区域名称,用于标识smtp服务器 value:包含优先级和FQDN 优先级:0-99,数字越小,级别越高, 例子: 12@ 600 IN MX 10 mail@ 600 IN MX 20 smtp CNAME记录CNAME:canonical name,别名(FQDN->FQDN) name :FQDN value :FQDN 例子: 1eshop IN CNAME www 宏定义 $ORIGIN . 
$TTL 60 注释区域数据库文件中使用;(分号)来进行注释 实战正解主机域配置在/etc/named.rfc1912.zones文件内最下,添加以下内容12345zone "host.com" IN { type master; file "host.com.zone"; allow-update { 10.4.7.11;10.4.7.12; };}; 在/var/named下创建host.com.zone文件,写入以下内容/var/named/host.com.zone12345678910111213$TTL 600 ; 10 minutes@ IN SOA dns.host.com. 87527941.qq.com. ( 2018121601 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS dns.host.com.$ORIGIN host.com.$TTL 60 ; 1 minuteHDSS7-11 A 10.4.7.11dns A 10.4.7.11 三种配置方式: 用宏定义$ORIGIN . 下面用host.com 不用宏定义,下面用@ 不用宏定义,下面用host.com. 检查配置并生效检查自定义区域配置123#named-checkzone host.com. /var/named/host.com.zonezone host.com/IN: loaded serial 2018121601OK 检查主配置文件1#named-checkconf 重启named服务1#systemctl restart named 检查该正解域是否生效配置主机名 12# hostnamectl set-hostname hdss7-11.host.com# logout 开启第二台虚机,配置好DNS后验证解析 维护正解域(增、删、改、查)增加一条资源记录/var/named/host.com.zone1234567891011121314$TTL 600 ; 10 minutes@ IN SOA dns.host.com. 87527941.qq.com. ( 2018121602 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS dns.host.com.$ORIGIN host.com.$TTL 60 ; 1 minuteHDSS7-11 A 10.4.7.11HDSS7-12 A 10.4.7.12dns A 10.4.7.11 增加一个HDSS7-12.host.com的A记录解析,并验证 修改一条资源记录给10.4.7.12这台主机增加一个辅助ip 1# ip addr add 10.4.7.13/24 dev eth0 修改DNS服务器上的区域数据库文件 /var/named/host.com.zone1234567891011121314$TTL 600 ; 10 minutes@ IN SOA dns.host.com. 87527941.qq.com. 
( 2018121603 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS dns.host.com.$ORIGIN host.com.$TTL 60 ; 1 minuteHDSS7-11 A 10.4.7.11HDSS7-12 A 10.4.7.13dns A 10.4.7.11 修改HDSS7-12.host.com的A记录解析,指向新增的辅助ip10.4.7.13检查: 123456789ping HDSS7-12.host.comPING hdss7-12.host.com (10.4.7.13) 56(84) bytes of data.64 bytes from 10.4.7.13 (10.4.7.13): icmp_seq=1 ttl=64 time=0.318 ms64 bytes from 10.4.7.13 (10.4.7.13): icmp_seq=2 ttl=64 time=0.206 ms64 bytes from 10.4.7.13 (10.4.7.13): icmp_seq=3 ttl=64 time=0.391 ms^C--- hdss7-12.host.com ping statistics ---3 packets transmitted, 3 received, 0% packet loss, time 2002msrtt min/avg/max/mdev = 0.206/0.305/0.391/0.076 ms 删除一条资源记录/var/named/host.com.zone12345678910111213$TTL 600 ; 10 minutes@ IN SOA ns1.host.com. dnsadmin.host.com. ( 2018121604 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.host.com.$ORIGIN host.com.$TTL 60 ; 1 minutens1 A 10.4.7.11HDSS7-11 A 10.4.7.11 删除HDSS7-12.host.com的A记录解析,并验证 查询记录略]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档3:自定义反解域]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A33%EF%BC%9A%E8%87%AA%E5%AE%9A%E4%B9%89%E5%8F%8D%E8%A7%A3%E5%9F%9F%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 添加反解域的自定义区域配置/etc/named.rfc1912.zones12345zone "7.4.10.in-addr.arpa" IN { type master; file "7.4.10.in-addr.arpa.zone"; allow-update { 10.4.7.11;10.4.7.12; };}; 添加反解域的区域数据库文件/var/named/7.4.10.in-addr.arpa.zone12345678910111213$TTL 600 ; 10 minutes@ IN SOA dns.host.com. dnsadmin.host.com. ( 2018121603 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.host.com.$ORIGIN 7.4.10.in-addr.arpa.$TTL 60 ; 1 minute11 PTR HDSS7-11.host.com.12 PTR HDSS7-12.host.com. 注意:一个IP只能对应唯一的FQDN反解PTR记录,且应该与正解A记录对应 检查反解域的配置123[root@hdss7-11 ~]# named-checkzone 7.4.10.in-addr.arpa /var/named/7.4.10.in-addr.arpa.zonezone 7.4.10.in-addr.arpa/IN: loaded serial 2018121603OK 重启BIND9服务1[root@hdss7-11 ~]# systemctl restart named.service 检查解析是否生效 方法1: 12[root@hdss7-11 ~]# dig -t PTR 11.7.4.10.in-addr.arpa. @10.4.7.11 +shortHDSS7-11.host.com. 方法2: 12[root@hdss7-11 ~]# dig -x 10.4.7.11 @10.4.7.11 +shortHDSS7-11.host.com. 增删改增加一条反解记录/var/named/7.4.10.in-addr.arpa.zone1234567891011121314$TTL 600 ; 10 minutes@ IN SOA dns.host.com. dnsadmin.host.com. ( 2018121604 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.host.com.$ORIGIN 7.4.10.in-addr.arpa.$TTL 60 ; 1 minute11 PTR HDSS7-11.host.com.12 PTR HDSS7-12.host.com.13 PTR HDSS7-13.host.com. 删除一条反解记录/var/named/7.4.10.in-addr.arpa.zone12345678910111213$TTL 600 ; 10 minutes@ IN SOA dns.host.com. dnsadmin.host.com. ( 2018121605 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.host.com.$ORIGIN 7.4.10.in-addr.arpa.$TTL 60 ; 1 minute11 PTR HDSS7-11.host.com.12 PTR HDSS7-12.host.com. 修改一条反解记录/var/named/7.4.10.in-addr.arpa.zone12345678910111213$TTL 600 ; 10 minutes@ IN SOA dns.host.com. dnsadmin.host.com. 
( 2018121606 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.host.com.$ORIGIN 7.4.10.in-addr.arpa.$TTL 60 ; 1 minute11 PTR HDSS7-11.host.com.12 PTR HDSS7-13.host.com.]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档4:DNS主辅同步]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A34%EF%BC%9ADNS%E4%B8%BB%E8%BE%85%E5%90%8C%E6%AD%A5%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== DNS主辅同步架构 IP 主机名 功能 10.4.7.11 HDSS7-11.host.com DNS主 10.4.7.12 HDSS7-12.host.com DNS辅 注意:所有资源记录的增、删、改的操作,均在主DNS上进行,辅助DNS仅提供查询功能 辅助DNS主机上安装部署BIND9安装BIND9软件1234567#yum install bind============================================================================================================================================================= Package Arch Version Repository Size=============================================================================================================================================================Installing: bind x86_64 32:9.9.4-73.el7_6 updates 1.8 M 注意:辅助DNS的BIND9软件版本应小于等于主DNS的BIND9软件版本 修改辅助DNS主配置文件修改主配置文件,并加入masterfile-format text; /etc/named.conf1234567891011121314151617181920212223242526272829303132333435363738394041424344454647options { listen-on port 53 { 10.4.7.12; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; masterfile-format text; /* - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion. - If you are building a RECURSIVE (caching) DNS server, you need to enable recursion. - If your recursive DNS server has a public IP address, you MUST enable access control to limit queries to your legitimate users. Failing to do so will cause your server to become part of large scale DNS amplification attacks. Implementing BCP38 within your network would greatly reduce such attack surface */ recursion yes; dnssec-enable no; dnssec-validation no; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; pid-file "/run/named/named.pid"; session-keyfile "/run/named/session.key";};logging { channel default_debug { file "data/named.run"; severity dynamic; };};zone "." 
IN { type hint; file "named.ca";};include "/etc/named.rfc1912.zones";include "/etc/named.root.key"; 修改主DNS主配置文件加入以下配置/etc/named.conf12allow-transfer { 10.4.7.12; };also-notify { 10.4.7.12; }; 检查配置并重启主DNS12# named-checkconf# systemctl restart named 检查完全区域数据传送1234567891011121314[root@hdss7-12 ~]# dig -t axfr host.com @10.4.7.11; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -t axfr host.com @10.4.7.11;; global options: +cmdhost.com. 600 IN SOA dns.host.com. dnsadmin.host.com. 2018121601 10800 900 604800 86400host.com. 600 IN NS ns1.host.com.HDSS7-11.host.com. 60 IN A 10.4.7.11HDSS7-12.host.com. 60 IN A 10.4.7.12ns1.host.com. 60 IN A 10.4.7.11host.com. 600 IN SOA dns.host.com. dnsadmin.host.com. 2018121601 10800 900 604800 86400;; Query time: 0 msec;; SERVER: 10.4.7.11#53(10.4.7.11);; WHEN: Sun Dec 16 14:16:01 CST 2018;; XFR size: 6 records (messages 1, bytes 220) 辅助DNS上创建自定义正解区域配置/etc/named.rfc1912.zones12345zone "host.com" IN { type slave; masters { 10.4.7.11; }; file "slaves/host.com.zone";}; 检查配置并启动辅助DNS12# named-checkconf# systemctl start named 检查同步过来的区域数据库文件/var/named/slaves/host.com.zone123456789101112131415161718[root@hdss7-12 slaves]# ll-rw-r--r-- 1 named named 392 Feb 10 21:08 host.com.zone[root@hdss7-12 slaves]# cat host.com.zone $ORIGIN .$TTL 600 ; 10 minuteshost.com IN SOA dns.host.com. dnsadmin.host.com. ( 2018121601 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.host.com.$ORIGIN host.com.$TTL 60 ; 1 minuteHDSS7-11 A 10.4.7.11HDSS7-12 A 10.4.7.12ns1 A 10.4.7.11 检查解析是否正确使用主DNS查询一个A记录12# dig -t A HDSS7-11.host.com @10.4.7.11 +short10.4.7.11 使用辅助DNS查询一个A记录12# dig -t A HDSS7-11.host.com @10.4.7.12 +short10.4.7.11 辅助DNS上创建自定义反解区域配置略 增加、删除、修改记录,并验证同步注意:一定要手动修改主DNS上SOA记录里的serial值! 
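上文强调,主DNS上每次资源记录增删改后都必须手动滚动SOA的serial。下面给出一个按 YYYYMMDDNN 日期时间戳约定自动计算下一个serial的示意脚本(假设性示例,next_serial 为自拟函数名,并非BIND自带工具):

```shell
#!/bin/bash
# 按 YYYYMMDDNN 约定计算下一个 SOA serial(示意脚本,next_serial 为自拟函数名)
next_serial() {
    local cur=$1 today=$2          # cur: 当前serial; today: 今天日期YYYYMMDD
    local cur_date=${cur:0:8}      # 前8位: 日期部分
    local cur_seq=${cur:8:2}       # 后2位: 当日修改序号
    if [ "$cur_date" = "$today" ]; then
        # 同一天内再次修改: 序号加1(10# 强制十进制,避免前导0被当作八进制)
        printf '%s%02d\n' "$today" $((10#$cur_seq + 1))
    else
        # 新的一天: 序号从01重新计
        printf '%s01\n' "$today"
    fi
}

next_serial 2018121601 20181216   # → 2018121602
next_serial 2018121601 20181217   # → 2018121701
```

算出新serial后,用 sed 替换区域数据库文件中的旧值再 reload 即可,避免因忘改序列号导致主辅不同步。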
增加记录删除记录修改记录再增加一个od.com的业务域,并验证主辅同步(复习)主DNS上增加自定义区域主DNS上增加自定义区域数据库文件主DNS上增加自定义区域资源记录检查配置并重启主DNS服务辅助DNS上增加自定义区域检查完全区域数据传送检查配置并重启辅助DNS服务验证主辅同步分别使用主DNS和辅助DNS查询新业务域的A记录在主DNS上新增一条A记录,并验证主辅同步在主DNS上修改一条A记录,并验证主辅同步在主DNS上删除一条A记录,并验证主辅同步客户端配置DNS解析高可用在客户端主机(以Linux主机为例,Windows和Mac操作系统略)配置主、辅DNS /etc/resolv.conf12345#cat /etc/resolv.conf # Generated by NetworkManagersearch host.com od.comnameserver 10.4.7.11nameserver 10.4.7.12 这样客户端高可用就配置好了,任意一个DNS服务器宕机也不会影响正常解析]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档5:DNS工具和rndc远程管理DNS实战]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A35%EF%BC%9ADNS%E5%B7%A5%E5%85%B7%E5%92%8Crndc%E8%BF%9C%E7%A8%8B%E7%AE%A1%E7%90%86DNS%E5%AE%9E%E6%88%98%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== DNS管理工具安装1# yum install bind-utils -y 工具一:nslookupWindows操作系统也有的一个常用工具 交互式123456789101112131415161718192021222324#nslookup > server localhostDefault server: localhostAddress: 127.0.0.1#53Default server: localhostAddress: 127.0.0.1#53> www.bkjf-inc.comServer: localhostAddress: 127.0.0.1#53Name: www.bkjf-inc.comAddress: 192.144.198.128-------以上是权威应答> server 8.8.8.8Default server: 8.8.8.8Address: 8.8.8.8#53> www.bkjf-inc.comServer: 8.8.8.8Address: 8.8.8.8#53Non-authoritative answer:Name: www.bkjf-inc.comAddress: 192.144.198.128-------以上是非权威应答 非交互式12345678910#nslookup www.baidu.comServer: 183.60.83.19Address: 183.60.83.19#53Non-authoritative answer:www.baidu.com canonical name = www.a.shifen.com.Name: www.a.shifen.comAddress: 220.181.112.244Name: www.a.shifen.comAddress: 220.181.111.37 工具二:host简单粗暴的小工具 1234#host -t A www.baidu.comwww.baidu.com is an alias for www.a.shifen.com.www.a.shifen.com has address 220.181.112.244www.a.shifen.com has address 220.181.111.37 工具三:dig功能强大的DNS工具,重点掌握 123456789101112131415161718192021222324252627282930313233343536#dig -t A www.baidu.com @localhost; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -t A www.baidu.com @localhost;; global options: +cmd;; Got answer:;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46476;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 5, ADDITIONAL: 6;; OPT PSEUDOSECTION:; EDNS: version: 0, flags:; udp: 4096;; QUESTION SECTION:;www.baidu.com. IN A;; ANSWER SECTION:www.baidu.com. 1200 IN CNAME www.a.shifen.com.www.a.shifen.com. 300 IN A 220.181.111.37www.a.shifen.com. 300 IN A 220.181.112.244;; AUTHORITY SECTION:a.shifen.com. 1200 IN NS ns3.a.shifen.com.a.shifen.com. 1200 IN NS ns2.a.shifen.com.a.shifen.com. 1200 IN NS ns4.a.shifen.com.a.shifen.com. 1200 IN NS ns5.a.shifen.com.a.shifen.com. 1200 IN NS ns1.a.shifen.com.;; ADDITIONAL SECTION:ns5.a.shifen.com. 1200 IN A 180.76.76.95ns1.a.shifen.com. 1200 IN A 61.135.165.224ns3.a.shifen.com. 
1200 IN A 112.80.255.253ns4.a.shifen.com. 1200 IN A 14.215.177.229ns2.a.shifen.com. 1200 IN A 220.181.57.142;; Query time: 561 msec;; SERVER: 127.0.0.1#53(127.0.0.1);; WHEN: Fri Mar 01 09:45:51 CST 2019;; MSG SIZE rcvd: 271 常用参数: +[no]addition +short 工具四:nsupdate不常用,需要在zone配置文件里声明allow-update { acl; }; 调整zone配置文件/etc/named.rfc1912.zones12345zone "bkjf-inc.com" IN { type master; file "bkjf-inc.com.zone"; allow-update { 10.4.7.11/32; };}; 重启named服务 1# systemctl restart named 新增一条记录12345#nsupdate > server 10.4.7.11> update add update.bkjf-inc.com 60 A 10.4.7.11> send> quit 检查: 123456#nslookup update.bkjf-inc.comServer: 10.4.7.11Address: 10.4.7.11#53Name: update.bkjf-inc.comAddress: 10.4.7.11 查看区域数据库文件/var/named/1234-rw-r--r-- 1 root root 335 Feb 28 10:57 bkjf-inc.com.zone-rw-r--r-- 1 named named 733 Mar 1 09:54 bkjf-inc.com.zone.jnl#file bkjf-inc.com.zone.jnl bkjf-inc.com.zone.jnl: data 产生了一个jnl的数据文件,不能使用文本编辑器打开 jnl文件(journal文件)是BIND9动态更新的时候记录更新内容所生成的日志文件。 删除一条记录12345#nsupdate > server 10.4.7.11> update delete update.bkjf-inc.com> send> quit 检查 12345#nslookup update.bkjf-inc.comServer: 10.4.7.11Address: 10.4.7.11#53** server can't find update.bkjf-inc.com: NXDOMAIN 更新一条记录不支持直接更新,需要先执行删除,再新增 nsupdate使用小结: 优点 命令简单,便于记忆 不用手动变更SOA的serial序列号,自动滚动 不需要重启/重载BIND9服务/配置,生效快 可以通过配置acl实现远程管理 缺点 jnl文件无法使用文本文件的方式打开 只能依赖完全区域传送查看所有区域的记录 更新操作复杂,先删再增 远程管理有安全隐患,需要加强审计 动态域在rndc管理上多一步 rndc远程管理DNS生成rndc-key12345678910111213141516171819202122232425#rndc-confgen -r /dev/urandom# Start of rndc.confkey "rndc-key" { algorithm hmac-md5; secret "MFM4AocpN0lcoL4fN2lA6Q==";};options { default-key "rndc-key"; default-server 127.0.0.1; default-port 953;};# End of rndc.conf# Use with the following in named.conf, adjusting the allow list as needed:# key "rndc-key" {# algorithm hmac-md5;# secret "MFM4AocpN0lcoL4fN2lA6Q==";# };# # controls {# inet 127.0.0.1 port 953# allow { 127.0.0.1; } keys { "rndc-key"; };# };# End of named.conf 把rndc-key和controls配置到bind的主配置文件的options段里/etc/named.conf123456789key 
"rndc-key" { algorithm hmac-md5; secret "MFM4AocpN0lcoL4fN2lA6Q==";};controls { inet 10.4.7.11 port 953 allow { 10.4.7.11;10.4.7.12; } keys { "rndc-key"; };}; 注意:这里要配置一下controls段的acl,限定好哪些主机可以使用rndc管理DNS服务 重启bind9服务1# systemctl restart named rndc的服务端监听在953端口,检查一下端口是否起来 12# netstat -luntp|grep 953tcp 0 0 10.4.7.11:953 0.0.0.0:* LISTEN 11136/named 在远程管理主机上安装bindrndc命令在bind包里,所以远程管理主机需要安装bind(不需要启动named) 在远程管理主机上做rndc.conf使用rndc进行远程管理的主机上,都需要配置rndc.conf,且rndc-key要和DNS服务器上的key一致 /etc/rndc.conf12345678910key "rndc-key" { algorithm hmac-md5; secret "MFM4AocpN0lcoL4fN2lA6Q==";};options { default-key "rndc-key"; default-server 10.4.7.11; default-port 953; }; 使用rndc命令远程管理DNS查询DNS服务状态(可以取值做监控)1234567891011121314#rndc status version: 9.9.4-RedHat-9.9.4-73.el7_6 <id:8f9657aa>CPUs found: 2worker threads: 2UDP listeners per interface: 2number of zones: 105debug level: 0xfers running: 0xfers deferred: 0soa queries in progress: 0query logging is OFFrecursive clients: 0/0/1000tcp clients: 0/100server is up and running 管理静态域(allow-update { none; };)静态域zone文件12345zone "od.com" IN { type master; file "od.com.zone"; allow-update { none; };}; 增、删、改一条记录后 12# rndc reload od.comzone reload up-to-date 管理动态域(allow-update { 10.4.7.11; };)动态域zone文件12345zone "host.com" IN { type master; file "host.com.zone"; allow-update { 10.4.7.11; };}; 增、删、改一条记录后 12#rndc reload host.comrndc: 'reload' failed: dynamic zone 直接reload会报错,需要先freeze再thaw才行 123#rndc freeze host.com#rndc thaw host.comThe zone reload and thaw was successful.]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档6:智能DNS实战]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A36%EF%BC%9A%E6%99%BA%E8%83%BDDNS%E5%AE%9E%E6%88%98%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== BIND9的acl访问控制列表4个内置acl any:任何主机 none:没有主机 localhost:本机 localnet:本地子网所有IP 自定义acl简单acl123acl "someips" { //定义一个名为someips的ACL 10.0.0.1; 192.168.23.1; 192.168.23.15; //包含3个单个IP }; 复杂acl1234567acl "complex" { //定义一个名为complex的ACL "someips"; //可以嵌套包含其他ACL 10.0.15.0/24; //包含10.0.15.0子网中的所有IP !10.0.16.1/24; //非10.0.16.1子网的IP {10.0.17.1;10.0.18.2;}; //包含了一个IP组 localhost; //本地网络接口IP(含实际接口IP和127.0.0.1) }; 使用acl123allow-update { "someips"; };allow-transfer { "complex"; };... BIND9的view视图功能view语句定义了视图功能。视图是BIND9提供的强大的新功能,允许DNS服务器根据客户端的不同,有区别地回答DNS查询,每个视图定义了一个被特定客户端子集见到的DNS名称空间。这个功能在一台主机上运行多个形式上独立的DNS服务器时特别有用。 view的语法范例12345678view view_name [class] { match-clients { address_match_list } ; match-destinations { address_match_list } ; match-recursive-only { yes_or_no } ; [ view_option; ...] [ zone-statistics yes_or_no ; ] [ zone_statement; ...]}; view配置范例1:按照不同业务环境解析注:以下是内网DNS的view使用范例 1234567891011121314151617181920212223242526272829303132acl "env-test" { 10.4.7.11;};acl "env-prd" { 10.4.7.12;};view "env-test" { match-clients { "env-test"; }; recursion yes; zone "od.com" { type master; file "env-test.od.com.zone"; };};view "env-prd" { match-clients { "env-prd"; }; recursion yes; zone "od.com" { type master; file "env-prd.od.com.zone"; };};view "default" { match-clients { any; }; recursion yes; zone "." IN { type hint; file "named.ca"; }; include "/etc/named.rfc1912.zones";}; view配置范例2:智能DNS注:以下特指公网智能DNS配置范例 12345678910111213141516171819202122232425//电信IP访问控制列表acl "telecomip"{ telecom_IP; ... };//联通IP访问控制列表acl "netcomip"{ netcom_IP; ... };view "telecom" { match-clients { "telecomip"; }; zone "ZONE_NAME" IN { type master; file "ZONE_NAME.telecom.zone"; };};view "netcom" { match-clients { "netcomip"; }; zone "ZONE_NAME" IN { type master; file "ZONE_NAME.netcom.zone"; };};view "default" { match-clients { any; }; zone "ZONE_NAME" IN { type master; file "ZONE_NAME.zone"; };};]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档7:bind-chroot和dnssec技术实战]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A37%EF%BC%9Abind-chroot%E5%92%8Cdnssec%E6%8A%80%E6%9C%AF%E5%AE%9E%E6%88%98%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 安装部署bind-chroot系统环境服务器:腾讯云主机,有公网IPOS:CentOS Linux release 7.4.1708 (Core)bind-chroot:bind-chroot-9.9.4-73.el7_6.x86_64 yum 安装12345678910111213141516171819202122232425262728293031323334# yum install bind-chroot -y============================================================================================================================================================= Package Arch Version Repository Size=============================================================================================================================================================Installing: bind-chroot x86_64 32:9.9.4-73.el7_6 updates 88 kInstalling for dependencies: bind x86_64 32:9.9.4-73.el7_6 updates 1.8 MUpdating for dependencies: bind-libs x86_64 32:9.9.4-73.el7_6 updates 1.0 M bind-libs-lite x86_64 32:9.9.4-73.el7_6 updates 741 k bind-license noarch 32:9.9.4-73.el7_6 updates 87 k bind-utils x86_64 32:9.9.4-73.el7_6 updates 206 kTransaction Summary=============================================================================================================================================================Install 1 Package (+1 Dependent package)Upgrade ( 4 Dependent packages)Installed: bind-chroot.x86_64 32:9.9.4-73.el7_6 Dependency Installed: bind.x86_64 32:9.9.4-73.el7_6 Dependency Updated: bind-libs.x86_64 32:9.9.4-73.el7_6 bind-libs-lite.x86_64 32:9.9.4-73.el7_6 bind-license.noarch 32:9.9.4-73.el7_6 bind-utils.x86_64 32:9.9.4-73.el7_6 Complete! 配置bind-chrootbind-chroot本质上是使用chroot方式给bind软件换了个“根”,这时bind软件的“根”在/var/named/chroot下,弄懂这一点,配置起来就跟BIND9没什么区别了把yum安装的bind-chroot在/etc下的产生的配置文件硬链接到/var/named/chroot/etc下 /var/named/chroot/etc/1234[root@VM_0_13_centos ~]# cd /var/named/chroot/etc/[root@VM_0_13_centos etc]# ls /etc/namednamed/ named.conf named.iscdlv.key named.rfc1912.zones named.root.key [root@VM_0_13_centos etc]# ln /etc/named.* . 
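上面把 /etc 下的配置文件硬链接(而非拷贝)到 chroot 目录,好处是两条路径指向同一个 inode,任一处修改,另一处立即可见。下面用临时目录模拟验证这一点(示意脚本,文件名为模拟值):

```shell
#!/bin/bash
# 用临时目录模拟: 硬链接两端 inode 相同,内容实时一致(示意,文件名为模拟值)
tmp=$(mktemp -d)
echo 'options { };' > "$tmp/named.conf"
ln "$tmp/named.conf" "$tmp/chroot-named.conf"   # 相当于 ln /etc/named.conf /var/named/chroot/etc/

ino1=$(stat -c %i "$tmp/named.conf")            # 原路径的 inode
ino2=$(stat -c %i "$tmp/chroot-named.conf")     # 链接路径的 inode
[ "$ino1" = "$ino2" ] && echo "same inode"      # → same inode

echo '// edited' >> "$tmp/named.conf"           # 修改一端
grep -c 'edited' "$tmp/chroot-named.conf"       # → 1,另一端同步可见
```

正因为是同一个 inode,后续无论编辑 /etc/named.conf 还是 chroot 下的副本,named-chroot 读到的都是同一份内容。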
/var/named/chroot/var/named123456789101112[root@VM_0_13_centos named]# ln /var/named/named.* .[root@VM_0_13_centos named]# mkdir data/ dynamic/ slaves/ dnssec-key/[root@VM_0_13_centos named]# chgrp -R named *[root@VM_0_13_centos named]# lldrwxrwx--- 2 root named 4096 Feb 27 18:30 datadrwxr-xr-x 3 root named 4096 Feb 28 14:31 dnssec-keydrwxrwx--- 2 root named 4096 Feb 28 14:33 dynamic-rw-r----- 2 root named 2281 May 22 2017 named.ca-rw-r----- 2 root named 152 Dec 15 2009 named.empty-rw-r----- 2 root named 152 Jun 21 2007 named.localhost-rw-r----- 2 root named 168 Dec 15 2009 named.loopbackdrwxrwx--- 2 root named 4096 Jan 30 01:23 slaves /etc/named.conf主配置文件编辑主配置文件,这里把53端口开放到公网 /etc/named.conf12345678910111213141516171819202122232425262728293031323334353637383940414243444546474849options { listen-on port 53 { any; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file "/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; recursing-file "/var/named/data/named.recursing"; secroots-file "/var/named/data/named.secroots"; allow-query { any; }; /* - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion. - If you are building a RECURSIVE (caching) DNS server, you need to enable recursion. - If your recursive DNS server has a public IP address, you MUST enable access control to limit queries to your legitimate users. Failing to do so will cause your server to become part of large scale DNS amplification attacks. Implementing BCP38 within your network would greatly reduce such attack surface */ recursion no; dnssec-enable yes; dnssec-validation yes; dnssec-lookaside auto; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; pid-file "/run/named/named.pid"; session-keyfile "/run/named/session.key";};logging { channel default_debug { file "data/named.run"; severity dynamic; };};zone "." 
IN { type hint; file "named.ca";};include "/etc/named.rfc1912.zones";include "/etc/named.root.key"; 使用dnssec技术维护一个业务域在公网上使用BIND9维护的业务域,最好使用dnssec技术对该域添加数字签名DNSSEC(DNS Security Extension)—-DNS安全扩展,主要是为了解决DNS欺骗和缓存污染问题而设计的一种安全机制。 DNSSEC技术参考文献1DNSSEC技术参考文献2 打开dnssec支持选项/etc/named.conf123dnssec-enable yes;dnssec-validation yes;dnssec-lookaside auto; 配置一个业务域bkjf-inc.com/etc/named.rfc1912.zones12345678zone "bkjf-inc.com" IN { type master; file "bkjf-inc.com.zone"; key-directory "dnssec-key/bkjf-inc.com"; inline-signing yes; auto-dnssec maintain; allow-update { none; };}; 创建数字签名证书/var/named/chroot/var/named/dnssec-key12345678910111213141516171819[root@VM_0_13_centos dnssec-key]# mkdir bkjf-inc.com[root@VM_0_13_centos dnssec-key]# chgrp named bkjf-inc.com[root@VM_0_13_centos dnssec-key]# cd bkjf-inc.com[root@VM_0_13_centos bkjf-inc.com]# dnssec-keygen -a RSASHA256 -b 1024 bkjf-inc.comGenerating key pair..................................++++++ .++++++ Kbkjf-inc.com.+008+53901[root@VM_0_13_centos bkjf-inc.com]# dnssec-keygen -a RSASHA256 -b 2048 -f KSK bkjf-inc.com KSK bkjf-inc.comGenerating key pair..........................................................................................+++ .................................................+++ Kbkjf-inc.com.+008+40759[root@VM_0_13_centos bkjf-inc.com]# chgrp named *[root@VM_0_13_centos bkjf-inc.com]# chmod g+r *.private[root@VM_0_13_centos bkjf-inc.com]# lltotal 16-rw-r--r-- 1 root named 607 Feb 28 14:10 Kbkjf-inc.com.+008+40759.key-rw-r----- 1 root named 1776 Feb 28 14:10 Kbkjf-inc.com.+008+40759.private-rw-r--r-- 1 root named 433 Feb 28 14:10 Kbkjf-inc.com.+008+53901.key-rw-r----- 1 root named 1012 Feb 28 14:10 Kbkjf-inc.com.+008+53901.private 这里如果生成密钥的速度很慢,需要yum安装一下haveged软件并开启 1# systemctl start haveged.service 创建区域数据库文件/var/named/chroot/var/named/bkjf-inc.com.zone1234567891011121314151617[root@VM_0_13_centos named]# cat bkjf-inc.com.zone$TTL 600 ; 10 minutes@ IN SOA ns1.bkjf-inc.com. 87527941.qq.com. 
( 2018121605 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) ) NS ns1.bkjf-inc.com. NS ns2.bkjf-inc.com.$ORIGIN bkjf-inc.com.$TTL 60 ; 1 minutens1 A 192.144.198.128ns2 A 192.144.198.128www A 192.144.198.128eshop CNAME www 启动bind-chroot服务1# systemctl start named-chroot 自动生成了签名zone如果启动成功且配置无误,应该自动生成了带签名的zone /var/named/chroot/var/named/1234567[root@VM_0_13_centos named]# lltotal 60-rw-r--r-- 1 root named 507 Feb 28 14:34 bkjf-inc.com.zone-rw-r--r-- 1 named named 512 Feb 28 14:26 bkjf-inc.com.zone.jbk-rw-r--r-- 1 named named 742 Feb 28 14:35 bkjf-inc.com.zone.jnl-rw-r--r-- 1 named named 4102 Feb 28 14:44 bkjf-inc.com.zone.signed-rw-r--r-- 1 named named 7481 Feb 28 14:35 bkjf-inc.com.zone.signed.jnl 检查签名区需要用到完全区域传送命令 123456789101112131415161718192021222324252627282930313233343536373839[root@VM_0_13_centos named]# dig -t AXFR bkjf-inc.com @localhost; <<>> DiG 9.9.4-RedHat-9.9.4-73.el7_6 <<>> -t AXFR bkjf-inc.com @localhost;; global options: +cmdbkjf-inc.com. 600 IN SOA ns1.bkjf-inc.com. 87527941.qq.com. 2018121608 10800 900 604800 86400bkjf-inc.com. 86400 IN RRSIG NSEC 8 2 86400 20190330063503 20190228053503 53901 bkjf-inc.com. 0fyLJXxaDOI+RWnYjK2tGpd6WgbWmgeIADtjpPQFQLrv1X9fuDLi2MFR q0+csg5P22eVUdasKi3q5tMmFW8GZtLEBBVtOtSba3/FvtoitvyBGcG6 KJ155dPbhEFe/eR0/JhWtFsIsyj/UHtgELB4eGYJYCeEI+WzUopT7voz 4UE=bkjf-inc.com. 86400 IN NSEC eshop.bkjf-inc.com. NS SOA RRSIG NSEC DNSKEY TYPE65534bkjf-inc.com. 600 IN RRSIG NS 8 2 600 20190330063017 20190228053309 53901 bkjf-inc.com. Y/T0m4p0yNrJwJiHc0mjDgit/9E4h7MXPb5F2WgBd+huXYgL0pS0vOb3 c2aRvHHW/zngPjShOfy3sYY5203SzPS15tN6E/RAs36/I33sZE7jZBFo 9q0KjEdKHNsoC9XISSdbLPCX879/B1rKZcmhpPNmhpAK6P351nWWgd9L jtU=bkjf-inc.com. 600 IN RRSIG SOA 8 2 600 20190330063503 20190228053503 53901 bkjf-inc.com. eE3nKlCmAZrjJ3DwdzPStYmrC38X6VCqCxIc6otLJDX65Uk2uSqGSPre WIu16zEsbuuxq7/38ABrupQNwkPAgaSaiLIRC/000PXzKsUPhll0xO4x u9tLg2LBRATQ+4dHpKtLsoBTX0nXVHlz09YeAAA82r5wyQye2/ebesxH +A4=bkjf-inc.com. 
0 IN RRSIG TYPE65534 8 2 0 20190330054441 20190228053309 53901 bkjf-inc.com. sEX7jpdTbUZ3hlIR2CRWHbgceAQFVOVKnVl6CXvyQhavIFjUyBMMhXTw hKYwXd2Hc0LGg9koWJqlt0oYS8YbXacKbeBUrLovmcbYP46Uhm05zaVo jswG7oYYsYDE3ekbl5ImnAEyjksSNOgk8if/WoUvXfF5QH6Rdl+6Q3qG cEI=bkjf-inc.com. 600 IN RRSIG DNSKEY 8 2 600 20190330063309 20190228053309 53901 bkjf-inc.com. rUGjMTxmbthB6UbmemoorQOfuen8u0xeOosl7lPRNLV2Hk7KsAZzUD2/ tRAJaY9NRZ1JhZHkmX/N5hncuVpPxZnrp8UB7qOoairqgjA73IFGoT0F 00KIU0FZaqsQAbBSzpzfbwr9KVbn1hTAq6/5Q/wrWZvQOASMYrF5Xhr9 lW4=bkjf-inc.com. 600 IN RRSIG DNSKEY 8 2 600 20190330063309 20190228053309 40759 bkjf-inc.com. lBXWXbTshdeH/oOkBGdwIspet0ABbhUZfzAXUjOP3ivCMW5sse3ZayEA qPe6mZncURqomWNA/xQKemoJJjtlAwc5F4CjmtrUierdy3EVVKS0NFnz 9L3PxiJcOxl1VVtSBX+XAOPa0xkS3cpEbFVOym4NaKsoLgcqKKBjjBu4 dhWoXoxXk7PE5fogo9/BM0heGI4XpnixUSTbucMw4bcnNYPY0qKUBs2o alt1CvrGz78oOO10//pXpw/ml89UwWo28/FDvxeuXS7soeImDRklTLlE xV/Q3//v7o73ZosAdSR+9xFdcZtVs43Jjo3Cy8WL1Zjz6BdRd59Fyu6h WghEKg==bkjf-inc.com. 0 IN TYPE65534 \# 5 08D28D0001bkjf-inc.com. 0 IN TYPE65534 \# 5 089F370001bkjf-inc.com. 600 IN DNSKEY 256 3 8 AwEAAflXAWLXAVJUEj29iidwVvZALuQr03hLn1bEl81XDtD63H7wwHS9 i9fNDYL0q0FkRDkuzXEQpb3UUleu/RYtSd9w6Ads0RWNUyB6X1E4Djmv sPwFwvo570svZSVky2rjEHnySgVI2ywqhcRYLMKjxE6pXuzXrqecQcF2 qrMq2xmJbkjf-inc.com. 600 IN DNSKEY 257 3 8 AwEAAbxFYlbq+R8y/hGg/xL8xDBasZGYtgPOqVd3bP68p98YHsFwHyG8 u3svatzRoq8STNjKKZEluDC2bcUIn9/mRHyorTYPtwyePxPEgVE4yhBy 9xqD4ES+ty7kuHOUz/WEHdNdYRhYyHe+SGf4dHnmU49pHIBCE8xFX6fs t270webjuXs4Pt6qRlyoFC3XmpRDiMNVwtM+doUxo/MRK4mw5zTeHyyf dFLVOvE3mW/ZKgBfnrsj0zE71bnD5nTxJIjDv1bUppbiRy5RK40jPhHu zaa3quxg1yS/BceYcjJpZJUc3LS55HGzatfuK799KvukuDKf7u71ylW+ 5ynT7Sxhbt0=bkjf-inc.com. 600 IN NS ns1.bkjf-inc.com.bkjf-inc.com. 600 IN NS ns2.bkjf-inc.com.eshop.bkjf-inc.com. 86400 IN RRSIG NSEC 8 3 86400 20190330063503 20190228053503 53901 bkjf-inc.com. 
dHM2PhYs7BVuhD//iGhcwPZGZmHDkBCfWKju6ZZlvSx3I+QmWWvVdKCj 8YCw2AkWhgARxFfRMzhxRwDjgEgHhxUr4UGPH9+kJpvGi+UpFBVoBvPw iL43qCn/4J2f6URuAY8Dcq0DFpR0QLVJgIXBZpyhUYu5hZNWI2tzfyhO GlM=eshop.bkjf-inc.com. 86400 IN NSEC ns1.bkjf-inc.com. CNAME RRSIG NSECeshop.bkjf-inc.com. 60 IN RRSIG CNAME 8 3 60 20190330063503 20190228053503 53901 bkjf-inc.com. 9ONt81AjpHFrM8YwDm7pQAg62oDBgaNzdtDIqtBHt5h/BPl83fOP/dOp P0Xi+y/OsFjDzHBSBDU4sy3fJwHBqm8uuMc6m33pIZfTq15fxFXF+2hU ift1bc0b0dk/L7ANZ5haEsDcl+hSVjwru2o2ISJtvp5zySZ61pdMvA6y ktg=eshop.bkjf-inc.com. 60 IN CNAME www.bkjf-inc.com.ns1.bkjf-inc.com. 60 IN RRSIG A 8 3 60 20190330063017 20190228053309 53901 bkjf-inc.com. 9MUZhsTxlmn5B6QXg/iCQoFyilRh8H4OJcTgpu1KgSyMTiBoEwJGdhIx k2XimlJZr9/MrSeRbuLwMZOnwFJ7w9fcIunrYHiE1T71y0BcLnQOKaJf SkJI5VKUam80+J6unkscCj0i/Y1kXTjXWLODKsZzw4+zLz5cGJk6hvsn XP4=ns1.bkjf-inc.com. 86400 IN RRSIG NSEC 8 3 86400 20190330063017 20190228053309 53901 bkjf-inc.com. EFeX2LsEd/flN2/5lCgKlSTtC93WH0LDw9GW1RAlLIfxFAptPsXkmy7y B0Blt7tOuaxA/cTNbnFZBnyo8G3YW90LnYagqeuNzl+90gjUxsbbhE4f pTkQkRXRsvcagYDKQjs9nkN1SAF13SagnupR8D2crHADICjy8RHjHtgA byM=ns1.bkjf-inc.com. 86400 IN NSEC ns2.bkjf-inc.com. A RRSIG NSECns1.bkjf-inc.com. 60 IN A 192.144.198.128ns2.bkjf-inc.com. 60 IN RRSIG A 8 3 60 20190330063017 20190228053309 53901 bkjf-inc.com. N2ssp0Eh6SyHBYHskedxUpfIp29DETt2g74sCuhrXwMuwLjOdVwuB02i /LqzDLyDbVZnMZncqoQ367AV2b/ttU/FJZcHiAlI2tLRTxVuNyj/E2YN BIDAtIqueNdJzsyE7n1yz9sPcsTrOidrIqqbM3qom5tMQvdo+2jrnhR3 UoY=ns2.bkjf-inc.com. 86400 IN RRSIG NSEC 8 3 86400 20190330063017 20190228053309 53901 bkjf-inc.com. sTTRnUQxPBbeAG0WrQpn4iK/U62D2s8umLwx8w8bx+bwxQdhR8Yyz8Ke tSelkffgctCtyUi5i7ibSTnvUJTcvOcvWWteMOQfQqXJmAngADx87cba /M+OJqRwp8tu3PEniPpTYN3msGSEFILyxLCO/2cyBzK+8jhFFKYyMOn/ ViQ=ns2.bkjf-inc.com. 86400 IN NSEC www.bkjf-inc.com. A RRSIG NSECns2.bkjf-inc.com. 60 IN A 192.144.198.128www.bkjf-inc.com. 60 IN RRSIG A 8 3 60 20190330063017 20190228053309 53901 bkjf-inc.com. 
aKI5N4y6eqN/xunC7+4vYa3cSHyXcW533iGA6/q34/ahvq0sTgYN36aF oBO0t8fRvwS3chZaPxwuqbk6hGSW+tRhJ8x/Nnwtbcn004W0ZxI1k046 JW/ePLhq1Cw2GPHXJTsfCjYmAOcwssX2yUv6q9/vocXx/mipuTMljrId yhE=www.bkjf-inc.com. 86400 IN RRSIG NSEC 8 3 86400 20190330063017 20190228053309 53901 bkjf-inc.com. 0q3C+xMKE1p586q+p8U4AHGiNjzzI899TcmL2P4x8x1B7rkc22rsakX9 AnNFAzkPOTVLr81GQtBraI1K6El2QDKcPkE9+0e+34tirpuUzVlzjYB2 f4WHGxTscdOMpCestqnmspQpmXm37+EBWS0alBBq3Db8T+F/3CSEGRS7 Ao0=www.bkjf-inc.com. 86400 IN NSEC bkjf-inc.com. A RRSIG NSECwww.bkjf-inc.com. 60 IN A 192.144.198.128bkjf-inc.com. 600 IN SOA ns1.bkjf-inc.com. 87527941.qq.com. 2018121608 10800 900 604800 86400;; Query time: 1 msec;; SERVER: 127.0.0.1#53(127.0.0.1);; WHEN: Thu Feb 28 15:22:46 CST 2019;; XFR size: 31 records (messages 1, bytes 3433) 这里看到了每个记录都附带了一个RRSIG记录,说明已经进行了数字签名 检查本地解析123[root@VM_0_13_centos named]# dig -t A www.bkjf-inc.com @localhost +dnssec +short192.144.198.128A 8 3 60 20190330063017 20190228053309 53901 bkjf-inc.com. aKI5N4y6eqN/xunC7+4vYa3cSHyXcW533iGA6/q34/ahvq0sTgYN36aF oBO0t8fRvwS3chZaPxwuqbk6hGSW+tRhJ8x/Nnwtbcn004W0ZxI1k046 JW/ePLhq1Cw2GPHXJTsfCjYmAOcwssX2yUv6q9/vocXx/mipuTMljrId yhE= DS记录在生成证书的目录对ZSK执行dnssec-dsfromkey命令,得到bkjf-inc.com的DS记录,这里我们使用比较长的那个 /var/named/chroot/var/named/dnssec-key/bkjf-inc.com123[root@VM_0_13_centos bkjf-inc.com]# dnssec-dsfromkey `grep -l zone-signing *key`bkjf-inc.com. IN DS 53901 8 1 5E13F6C0ECEE84248C2543693CE7D8617920983Bbkjf-inc.com. IN DS 53901 8 2 3006068B784AFBBC67133F123A0C389514959FCB6CAB0032DB200F08E6E5C384 其中: 53901:关键标签,用于标识域名的DNSSEC记录,一个小于65535的整数值 8:生成签名的加密算法,8对应RSA/SHA-256 2:构建摘要的加密算法,2对应SHA-256 最后一段:摘要值,就是DS记录值 参考万网(阿里云)上关于dnssec配置的文档:参考文档 DS记录需要通过运营商提交到上级DNS的信任锚中,这里是通过万网的配置页面,提交到.com域 注意:要在阿里云上将该域名的dns服务器指向自定义DNS服务器:参考文档 后续维护dnssec需要定期轮转,所以需要经常变更签名,其中 ZSK轮转 建议每年轮转 KSK轮转 建议更新ssl证书后尽快轮转? 
轮转方法: ZSK(zone-signing key) /var/named/chroot/var/named/dnssec-key/bkjf-inc.com12345$ cd /var/named/chroot/var/named/dnssec-key/bkjf-inc.com$ dnssec-settime -I yyyy0101 -D yyyy0201 Kbkjf-inc.com.+008+53901$ dnssec-keygen -S Kbkjf-inc.com.+008+53901$ chgrp bind *$ chmod g+r *.private KSK轮转(key-signing key) /var/named/chroot/var/named/dnssec-key/bkjf-inc.com12345$ cd /var/named/chroot/var/named/dnssec-key/bkjf-inc.com$ dnssec-settime -I yyyy0101 -D yyyy0201 Kbkjf-inc.com.+008+40759$ dnssec-keygen -S Kbkjf-inc.com.+008+40759$ chgrp bind *$ chmod g+r *.private 注意:KSK轮转需要同步在万网上更新DS记录 在任意客户端验证解析1234567#dig -t A www.bkjf-inc.com @8.8.8.8 +dnssec +short192.144.198.128A 8 3 60 20190330063017 20190228053309 53901 bkjf-inc.com. aKI5N4y6eqN/xunC7+4vYa3cSHyXcW533iGA6/q34/ahvq0sTgYN36aF oBO0t8fRvwS3chZaPxwuqbk6hGSW+tRhJ8x/Nnwtbcn004W0ZxI1k046 JW/ePLhq1Cw2GPHXJTsfCjYmAOcwssX2yUv6q9/vocXx/mipuTMljrId yhE=#dig CNAME eshop.bkjf-inc.com @8.8.8.8 +dnssec +shortwww.bkjf-inc.com.CNAME 8 3 60 20190330063503 20190228053503 53901 bkjf-inc.com. 9ONt81AjpHFrM8YwDm7pQAg62oDBgaNzdtDIqtBHt5h/BPl83fOP/dOp P0Xi+y/OsFjDzHBSBDU4sy3fJwHBqm8uuMc6m33pIZfTq15fxFXF+2hU ift1bc0b0dk/L7ANZ5haEsDcl+hSVjwru2o2ISJtvp5zySZ61pdMvA6y ktg= 在第三方网站验证https://en.internet.nl/site/www.bkjf-inc.com/473349/ 浏览器插件https://www.dnssec-validator.cz/ 参考文献]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[实验文档8:企业级Web DNS实战]]></title>
<url>%2F2018%2F12%2F16%2F%E5%AE%9E%E9%AA%8C%E6%96%87%E6%A1%A38%EF%BC%9A%E4%BC%81%E4%B8%9A%E7%BA%A7Web%20DNS%E5%AE%9E%E6%88%98%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 系统环境12345#cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core)#uname -aLinux node 3.10.0-862.el7.x86_64 #1 SMP Fri Apr 20 16:44:24 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux 安装部署namedmanager准备rpm包https://repos.jethrocarr.com/pub/jethrocarr/linux/centos/7/jethrocarr-custom/x86_64/ 下载最新版1234[root@hdss7-11 opt]# lltotal 62244-rw-r--r-- 1 root root 102136 Feb 1 18:17 namedmanager-bind-1.9.0-2.el7.centos.noarch.rpm-rw-r--r-- 1 root root 1084340 Feb 1 18:17 namedmanager-www-1.9.0-2.el7.centos.noarch.rpm 安装123456789101112131415161718192021[root@hdss7-11 opt]# yum localinstall namedmanager-* -y...Installed: namedmanager-bind.noarch 0:1.9.0-2.el7.centos namedmanager-www.noarch 0:1.9.0-2.el7.centos Dependency Installed: apr.x86_64 0:1.4.8-3.el7_4.1 apr-util.x86_64 0:1.5.2-6.el7 bind.x86_64 32:9.9.4-73.el7_6 httpd.x86_64 0:2.4.6-88.el7.centos httpd-tools.x86_64 0:2.4.6-88.el7.centos libzip.x86_64 0:0.10.1-8.el7 mailcap.noarch 0:2.1.41-2.el7 mariadb.x86_64 1:5.5.60-1.el7_5 mariadb-libs.x86_64 1:5.5.60-1.el7_5 mariadb-server.x86_64 1:5.5.60-1.el7_5 mod_ssl.x86_64 1:2.4.6-88.el7.centos perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7 perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7 perl-DBD-MySQL.x86_64 0:4.023-6.el7 perl-DBI.x86_64 0:1.627-4.el7 perl-IO-Compress.noarch 0:2.061-2.el7 perl-Net-Daemon.noarch 0:0.48-5.el7 perl-PlRPC.noarch 0:0.2020-14.el7 php.x86_64 0:5.4.16-46.el7 php-cli.x86_64 0:5.4.16-46.el7 php-common.x86_64 0:5.4.16-46.el7 php-intl.x86_64 0:5.4.16-46.el7 php-ldap.x86_64 0:5.4.16-46.el7 php-mysqlnd.x86_64 0:5.4.16-46.el7 php-pdo.x86_64 0:5.4.16-46.el7 php-process.x86_64 0:5.4.16-46.el7 php-soap.x86_64 0:5.4.16-46.el7 php-xml.x86_64 0:5.4.16-46.el7 Dependency Updated: bind-libs.x86_64 32:9.9.4-73.el7_6 bind-libs-lite.x86_64 32:9.9.4-73.el7_6 bind-license.noarch 32:9.9.4-73.el7_6 bind-utils.x86_64 32:9.9.4-73.el7_6 Complete! 
先配mysql启动mysql1[root@hdss7-11 mysql]# systemctl start mariadb.service 开机自启动12[root@hdss7-11 ~]# systemctl enable mariadbCreated symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service. 配mysql的root密码1[root@hdss7-11 mysql]# mysqladmin -uroot password 123456 导入namedmanager的数据库脚本/usr/share/namedmanager/resources/autoinstall.pl123456789101112131415161718192021222324[root@hdss7-11 ~]# cd /usr/share/namedmanager/resources/[root@hdss7-11 resources]# ./autoinstall.pl autoinstall.plThis script setups the NamedManager database components: * NamedManager MySQL user * NamedManager database * NamedManager configuration filesTHIS SCRIPT ONLY NEEDS TO BE RUN FOR THE VERY FIRST INSTALL OF NAMEDMANAGER.DO NOT RUN FOR ANY OTHER REASONPlease enter MySQL root password (if any): 123456输入123456Searching ../sql/ for latest install schema...../sql//version_20131222_install.sql is the latest file and will be used for the install.Importing file ../sql//version_20131222_install.sqlCreating user...Updating configuration file...DB installation complete!You can now login with the default username/password of setup/setup123 at http://localhost/namedmanager 配置namedmanagerconfig.php,增加一条配置/etc/namedmanager/config.php1$_SERVER['HTTPS'] = "TRUE"; config-bind.php,修改以下三条配置/etc/namedmanager/config-bind.php1234$config["api_url"] = "http://dns-manager.od.com/namedmanager"; // Application Install Location$config["api_server_name"] = "dns-manager.od.com"; // Name of the DNS server (important: part of the authentication process)$config["api_auth_key"] = "verycloud"; // API authentication key$config["log_file"] = "/var/log/namedmanager_bind_configwriter"; php.ini,修改一条配置/etc/php.ini12; How many GET/POST/COOKIE input variables may be acceptedmax_input_vars = 10000 绑host(临时)/etc/hosts110.4.7.11 dns-manager.od.com 配apache/etc/httpd/conf/httpd.conf1234567Listen 10.4.7.11:8080ServerName dns-manager.od.com<Directory /> AllowOverride none allow from all 
#Require all denied</Directory> 配nginx/etc/nginx/conf.d/dns-manager.od.com.conf123456789101112server { server_name dns-manager.od.com; location =/ { rewrite ^/(.*) http://dns-manager.od.com/namedmanager permanent; } location / { proxy_pass http://10.4.7.11:8080; proxy_set_header Host $http_host; proxy_set_header x-forwarded-for $proxy_add_x_forwarded_for; }} 启动apache和nginx 启动apache 123[root@hdss7-11 ~]# systemctl start httpd[root@hdss7-11 ~]# systemctl enable httpdCreated symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service. 启动nginx 1234[root@hdss7-11 ~]# nginx -tnginx: the configuration file /etc/nginx/nginx.conf syntax is oknginx: configuration file /etc/nginx/nginx.conf test is successful[root@hdss7-11 ~]# nginx 访问http://dns-manager.od.com,看看页面是否正常 继续改namedmanager的配置改namedmanager_bind_configwriter.php/usr/share/namedmanager/bind/namedmanager_bind_configwriter.php1234if (flock($fh_lock, LOCK_EX )){ log_write("debug", "script", "Obtained filelock");} 启动namedmanager_logpush.rcsysinit加执行权限/usr/share/namedmanager/resources/namedmanager_logpush.rcsysinit1[root@hdss7-11 resources]# chmod u+x namedmanager_logpush.rcsysinit 启动该脚本/usr/share/namedmanager/resources/namedmanager_logpush.rcsysinit123[root@hdss7-11 resources]# sh namedmanager_logpush.rcsysinit startStarting namedmanager_logpush service:[root@hdss7-11 resources]# nohup: redirecting stderr to stdout 检查是否启动12[root@hdss7-11 resources]# ps -ef|grep php|egrep -v greproot 10738 1 0 10:49 pts/1 00:00:00 php -q /usr/share/namedmanager/bind/namedmanager_logpush.php 用supervisor管理起来这个脚本非常重要,是整个namedmanager软件的核心,所以要保证它一直在后台启动,这里我们用supervisor这个软件把它管理起来 先安装supervisor软件1234567891011121314151617181920212223242526[root@hdss7-11 resources]# yum install supervisor -yependencies Resolved============================================================================================================================================================= Package Arch Version Repository 
Size=============================================================================================================================================================Installing: supervisor noarch 3.1.4-1.el7 epel 446 kInstalling for dependencies: python-meld3 x86_64 0.6.10-1.el7 epel 73 k python-setuptools noarch 0.9.8-7.el7 base 397 kTransaction Summary=============================================================================================================================================================Install 1 Package (+2 Dependent packages)Total download size: 916 kInstalled size: 4.4 M...Installed: supervisor.noarch 0:3.1.4-1.el7 Dependency Installed: python-meld3.x86_64 0:0.6.10-1.el7 python-setuptools.noarch 0:0.9.8-7.el7 Complete! 创建脚本启动的配置文件/etc/supervisord.d/namedmanager_logpush.ini1234567891011121314151617181920212223[program:namedmanager_logpush]command=php -q /usr/share/namedmanager/bind/namedmanager_logpush.php 2>&1 > /var/log/namedmanager_logpushnumprocs=1 directory=/usr/share/namedmanager/resources autostart=true autorestart=true startsecs=22 startretries=4 exitcodes=0,2 stopsignal=QUIT stopwaitsecs=10 user=root redirect_stderr=false stdout_logfile=/var/log/namedmanager_logpush.outstdout_logfile_maxbytes=64MB stdout_logfile_backups=4 stdout_capture_maxbytes=1MB stdout_events_enabled=false stderr_logfile=/var/log/namedmanager_logpush.errstderr_logfile_maxbytes=64MB stderr_logfile_backups=4 stderr_capture_maxbytes=1MB stderr_events_enabled=false 启动supservisord服务1[root@hdss7-11 resources]# systemctl start supervisord 开机自启12[root@hdss7-11 resources]# systemctl enable supervisordCreated symlink from /etc/systemd/system/multi-user.target.wants/supervisord.service to /usr/lib/systemd/system/supervisord.service. 查看脚本启动情况1234[root@hdss7-11 resources]# supervisorctl statusnamedmanager_logpush RUNNING pid 9194, uptime 0:01:44[root@hdss7-11 resources]# ps -ef|grep -v grep|grep phproot 9194 8979 0 11:14 ? 
00:00:00 php -q /usr/share/namedmanager/bind/namedmanager_logpush.php 2>&1 > /var/log/namedmanager_logpush 这样脚本就可以保证高可用性了 检查日志/var/log/namedmanager_logpush12[root@hdss7-11 resources]# tail -fn 200 /var/log/namedmanager_logpushError: Unable to authenticate with NamedManager API - check that auth API key and server name are valid 有报错,所以需要继续配置 改inc_soap_api.php/usr/share/namedmanager/bind/include/application/inc_soap_api.php1preg_match("/^http:\/\/(\S*?)[:0-9]*\//", $GLOBALS["config"]["api_url"], $matches); 重启namedmanager_logpush.rcsysinit如果已经用supervisor软件管理起来了,只需要kill掉脚本进程即可 12345[root@hdss7-11 resources]# ps -ef|grep -v grep|grep php|awk '{print $2}'|xargs kill -9[root@hdss7-11 resources]# ps -ef|grep -v grep|grep phproot 9295 8979 1 11:18 ? 00:00:00 php -q /usr/share/namedmanager/bind/namedmanager_logpush.php 2>&1 > /var/log/namedmanager_logpush [root@hdss7-11 resources]# supervisorctl namedmanager_logpush RUNNING pid 9295, uptime 0:00:23 否则需要手动重启脚本 /usr/share/namedmanager/resources/namedmanager_logpush.rcsysinit1234[root@hdss7-11 resources]# sh namedmanager_logpush.rcsysinit restartStopping namedmanager_logpush services:Starting namedmanager_logpush service:nohup: redirecting stderr to stdout 配置BIND9先配rndcrndc.key12345[root@hdss7-11 ~]# cat /etc/rndc.key key "rndc-key" { algorithm hmac-sha256; secret "CD/4vqb9l0WiMy5TXjfeu1cMhyRerQ9kL2jwdBFWwa4=";}; 如果没有,使用如下命令生成rndc.key 1[root@hdss7-11 ~]# rndc-confgen -r /dev/urandom 配rndc.conf/etc/rndc.conf12345678910key "rndc-key" { algorithm hmac-sha256; secret "CD/4vqb9l0WiMy5TXjfeu1cMhyRerQ9kL2jwdBFWwa4=";};options { default-key "rndc-key"; default-server 10.4.7.11; default-port 953; }; 删除rndc.key1[root@hdss7-11 ~]# rm -f /etc/rndc.key BIND9主配置文件/etc/named.conf123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960options { listen-on port 53 { 10.4.7.11; }; directory "/var/named"; dump-file "/var/named/data/cache_dump.db"; statistics-file 
"/var/named/data/named_stats.txt"; memstatistics-file "/var/named/data/named_mem_stats.txt"; allow-query { any; }; allow-transfer { 10.4.7.12; }; also-notify { 10.4.7.12; }; /* - If you are building an AUTHORITATIVE DNS server, do NOT enable recursion. - If you are building a RECURSIVE (caching) DNS server, you need to enable recursion. - If your recursive DNS server has a public IP address, you MUST enable access control to limit queries to your legitimate users. Failing to do so will cause your server to become part of large scale DNS amplification attacks. Implementing BCP38 within your network would greatly reduce such attack surface */ recursion yes; dnssec-enable no; dnssec-validation no; /* Path to ISC DLV key */ bindkeys-file "/etc/named.iscdlv.key"; managed-keys-directory "/var/named/dynamic"; pid-file "/run/named/named.pid"; session-keyfile "/run/named/session.key";};key "rndc-key" { algorithm hmac-sha256; secret "CD/4vqb9l0WiMy5TXjfeu1cMhyRerQ9kL2jwdBFWwa4=";};controls { inet 10.4.7.11 port 953 allow { 10.4.7.11; } keys { "rndc-key"; };};logging { channel default_debug { file "data/named.run"; severity dynamic; };};zone "." 
IN { type hint; file "named.ca";};include "/etc/named.rfc1912.zones";include "/etc/named.root.key";include "/etc/named.namedmanager.conf"; 改named.namedmanager.conf文件属性/etc/named.namedmanager.conf123[root@hdss7-11 named]# chown apache.apache /etc/named.namedmanager.conf[root@hdss7-11 named]# ls -l /etc/named.namedmanager.conf -rw-r--r-- 1 apache named 112 Dec 16 11:19 /etc/named.namedmanager.conf 检查配置并启动BIND9检查配置1[root@hdss7-11 ~]# named-checkconf 启动BIND91[root@hdss7-11 ~]# systemctl start named 开机自启动1[root@hdss7-11 ~]# systemctl enable named 检查启动情况1234[root@hdss7-11 ~]# netstat -luntp|grep 53tcp 0 0 10.4.7.11:53 0.0.0.0:* LISTEN 10922/named tcp 0 0 10.4.7.11:953 0.0.0.0:* LISTEN 10922/named udp 0 0 10.4.7.11:53 0.0.0.0:* 10922/named 配置NamedManager页面浏览器打开http://dns-manager.od.com(提前绑好host),用户名/密码:setup/setup123 配置Configuration选项卡Zone Configuration Defaults DEFAULT_HOSTMASTER [email protected] DEFAULT_TTL_SOA 86400 DEFAULT_TTL_NS 120 DEFAULT_TTL_MX 60 DEFAULT_TTL_OTHER 60 API Configuration ADMIN_API_KEY verycloud Date and Time Configuration DATEFORMAT yyyy-mm-dd TIMEZONE_DEFAULT Asia/Shanghai Save Changes配置New Servers选项卡Add NewServerServer Details Name Server FQDN * dns-manager.od.com注意:这里一定要填config-bind.php里对应$config["api_server_name"]项配置的值 Description dns server for od.com Server Type Server Type API (supports Bind) API Authentication Key * verycloud Server Domain Settings必须勾选以下三项 Nameserver Group * default – Default Nameserver Group Primary Nameserver * Make this server the primary one used for DNS SOA records. Use as NS Record * Adds this name server to all domains as a public NS record. 
Save Changes保存后View Name Servers选项卡下,Logging Status应变绿且成为status_synced,如一直不变绿,需要进行排错,不要继续往下做了。 配置Domain/Zones选项卡添加Domain/Zone两种方式 手动添加域 自动导入域 Add Domain(手动添加)Domain Details Domain Type * Standard DomainReverse Domain (IPv4)Reverse Domain (IPv6)根据实际情况选择,这里选择Standard Domain(正解域) Domain Name * od.com Description od.com domain Domain Server Groups注意:一定要勾选域服务器组 default – Default Nameserver Group Start of Authority Record Email Administrator Address * Email Administrator Address * Domain Serial * 2018121601 Refresh Timer * 21600 Refresh Retry Timeout * 3600 Expiry Timer * 604800 Default Record TTL * 60注意:这里配置SOA记录最后一个参数值没有按套路出牌,配置的并不是否定应答超时时间(NegativeAnswerTTL),而是默认资源记录的过期时间 Save ChangesImport Domain(自动导入) Import Source Bind 8/9 Compatible Zonefile Zone File 选择文件host.com.txt 导入一个正解域upload,选择文件附1:host.com.txt host.com.txt12345678910111213$ORIGIN .$TTL 600 ; 10 minuteshost.com IN SOA dns-manager.od.com. 87527941.qq.com. ( 2019013106 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) )$ORIGIN host.com.$TTL 60 ; 1 minuteHDSS7-11 A 10.4.7.11HDSS7-12 A 10.4.7.12 注意:这里可以不用给NS记录和对应的A记录了,会默认生成 Save Changes点保存进入下一个配置页面 Domain Details这里可以配置域的信息和描述,我们这里先配一个Standard Domain(正解域) Start of Authority Record这里注意SOA记录的最后一个选项Default Record TTL * Domain Records检查一下和导入文件里的记录是否一致 Save Changes先点一次保存 Domain Details检查一遍域信息和描述 Domain Server Groups注意:这里一定要勾选服务器组(上个页面没有,这里新出来的选项) Start of Authority Record检查一遍SOA记录 Save Changes最后点一下保存,导入成功 导入一个反解域upload,选择文件附2:7.4.10.in-addr.arpa.txt 7.4.10.in-addr.arpa.txt123456789101112$TTL 600 ; 10 minutes@ IN SOA dns-manager.od.com. 87527941.qq.com. ( 2018121603 ; serial 10800 ; refresh (3 hours) 900 ; retry (15 minutes) 604800 ; expire (1 week) 86400 ; minimum (1 day) )$ORIGIN 7.4.10.in-addr.arpa.$TTL 60 ; 1 minute11 PTR HDSS7-11.host.com.12 PTR HDSS7-12.host.com. 
注意:这里可以不用给NS记录和对应的A记录了,会默认生成 Save Changes点保存进入下一个配置页面 Domain Details注意: Domain Type *应为Reverse Domain (IPv4) IPv4 Network Address *应为10.4.7.0/24 Start of Authority Record配置SOA记录 Domain Records检查一下和导入文件里的记录是否一致 Save Changes先点一次保存 Domain Details检查一遍域信息和描述 Domain Server Groups注意:这里一定要勾选服务器组(上个页面没有,这里新出来的选项) Start of Authority Record检查一遍SOA记录 Save Changes最后点一下保存,导入成功 在对应的Zone里操作资源记录(增、删、改)View Domains选项卡details 按钮维护domain的基本配置,略 delete 按钮删除domain,略 domain record(od.com)配置页面 Domain Details Domain od.com selected for adjustment Nameserver Configuration 这里是配置NS记录的配置区,默认会生成一条 Type TTL Name/Origin Content - NS 120 od.com dns-manager.od.com - Mailserver Configuration 略,暂不配置MX记录 Host Records Configuration 这里是配置重点,A记录、CNAME记录、TXT记录等都在这个里配置这里增加两条A记录解析,增加一条CNAME解析 Type TTL Name Content ReversePTR - A 60 dns-manager 10.4.7.11 no delete A 60 www 10.4.7.11 no delete CNAME 60 eshop www.od.com no delete Save Changesdomain record(host.com)配置页面 Domain Details Domain host.com selected for adjustment Nameserver Configuration 这里是配置NS记录的配置区,默认会生成一条 Type TTL Name/Origin Content - NS 120 host.com dns-manager.od.com - Mailserver Configuration 略,暂不配置MX记录 Host Records Configuration 这里是配置重点,A记录、CNAME记录、TXT记录等都在这个里配置因为是从文件导入的域,默认会有记录 Type TTL Name Content ReversePTR - A 60 HDSS7-11 10.4.7.11 √ delete A 60 HDSS7-12 10.4.7.12 √ delete Save Changesdomain record(7.4.10.in-addr.arpa)配置页面 Domain Details Domain 7.4.10.in-addr.arpa selected for adjustment Nameserver Configuration 这里是配置NS记录的配置区,默认会生成一条 Type TTL Name/Origin Content - NS 120 7.4.10.in-addr.arpa dns-manager.od.com - Mailserver Configuration 略,暂不配置MX记录 Host Records Configuration 这里是配置重点,A记录、CNAME记录、TXT记录等都在这个里配置因为是从文件导入的域,默认会有记录 Type TTL Name Content - PTR 60 11 HDSS7-11.host.com delete PTR 60 12 HDSS7-12.host.com delete Save Changes返回Name Servers选项卡查看页面DNS服务器状态 Logging Status status_synced Zonefile Status status_synced 全部变绿且为status_synced即为正常 
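上面导入的反解域 7.4.10.in-addr.arpa 遵循 in-addr.arpa 的命名约定:把 IP 地址按字节反转后挂在 in-addr.arpa 下。用 Python 标准库 ipaddress 可以直观地验证这一规则(仅为演示):

```python
import ipaddress

# 10.4.7.11 的 PTR 查询名:地址字节反转后挂在 in-addr.arpa 下
addr = ipaddress.ip_address("10.4.7.11")
print(addr.reverse_pointer)   # → 11.7.4.10.in-addr.arpa

# 对应地,去掉主机字节后就是上面导入的反解域名
zone = ".".join(reversed("10.4.7".split("."))) + ".in-addr.arpa"
print(zone)                   # → 7.4.10.in-addr.arpa
```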
查看服务器上配置文件(都是由namedmanager服务自动生成的)named.namedmanager.conf/etc/named.namedmanager.conf1234567891011121314151617181920//// NamedManager Configuration//// This file is automatically generated any manual changes will be lost.//zone "od.com" IN { type master; file "od.com.zone"; allow-update { none; };};zone "host.com" IN { type master; file "host.com.zone"; allow-update { none; };};zone "7.4.10.in-addr.arpa" IN { type master; file "7.4.10.in-addr.arpa.zone"; allow-update { none; };}; 这里生成了三个zone,两个正解域,一个反解域,依次检查三个域的区域数据库文件: od.com.zone/var/named/od.com.zone12345678910111213141516171819202122232425262728$ORIGIN od.com.$TTL 60@ IN SOA dns-manager.od.com. 87527941.qq.com. ( 2018121610 ; serial 21600 ; refresh 3600 ; retry 604800 ; expiry 60 ; minimum ttl ); Nameserversod.com. 120 IN NS dns-manager.od.com.; Mailservers; Reverse DNS Records (PTR); CNAME; HOST RECORDSdns-manager 60 IN A 10.4.7.11www 60 IN A 10.4.7.11eshop 60 IN CNAME www.od.com. host.com.zone/var/named/host.com.zone123456789101112131415161718192021222324252627$ORIGIN host.com.$TTL 60@ IN SOA dns-manager.od.com. 87527941.qq.com. ( 2018121604 ; serial 10800 ; refresh 900 ; retry 604800 ; expiry 60 ; minimum ttl ); Nameservershost.com. 120 IN NS dns-manager.od.com.; Mailservers; Reverse DNS Records (PTR); CNAME; HOST RECORDSHDSS7-11 60 IN A 10.4.7.11HDSS7-12 60 IN A 10.4.7.12 7.4.10.in-addr.arpa.zone/var/named/7.4.10.in-addr.arpa.zone1234567891011121314151617181920212223242526$ORIGIN 7.4.10.in-addr.arpa.$TTL 60@ IN SOA dns-manager.od.com. 87527941.qq.com. ( 2018121603 ; serial 10800 ; refresh 900 ; retry 604800 ; expiry 60 ; minimum ttl ); Nameservers7.4.10.in-addr.arpa. 120 IN NS dns-manager.od.com.; Mailservers; Reverse DNS Records (PTR)11 60 IN PTR HDSS7-11.host.com.12 60 IN PTR HDSS7-12.host.com.; CNAME; HOST RECORDS 检查资源记录解析是否生效12345678# dig -t A www.od.com @10.4.7.11 +short10.4.7.11#dig -t A HDSS7-12.host.com @10.4.7.11 +short10.4.7.12#dig -x 10.4.7.11 @10.4.7.11 +shortHDSS7-11.host.com. 
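上面区域数据库文件里的 serial(如 2018121610)采用的是常见的 YYYYMMDDNN 日期约定:同一天内末两位递增,跨天则换成当天日期重新计数。下面用一小段 Python 示意这种滚动规则;next_serial 是为演示虚构的辅助函数,并非 NamedManager 的实际实现:

```python
from datetime import date

def next_serial(current, today=None):
    # 约定格式 YYYYMMDDNN:同一天内 NN 递增,跨天则从当天的 00 开始
    # (每天超过 100 次变更的溢出情况这里不处理)
    today = today or date.today()
    base = int(today.strftime("%Y%m%d")) * 100
    return current + 1 if current >= base else base

print(next_serial(2018121610, date(2018, 12, 16)))  # → 2018121611
print(next_serial(2018121610, date(2018, 12, 17)))  # → 2018121700
```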
验证页面增、删、改是否均生效注意: 增、删、改资源记录时,对应域的SOA记录的serial序列号会自动滚动,非常方便 这里在页面上操作资源记录,会先写mysql,再由php脚本定期刷到磁盘文件上,所以大概需要1分钟的时间生效 在维护主机域时,添加正解记录,并勾选后面的reverse选项,将同时生成一条反解记录,简化了操作 由于服务器上的区域数据库文件是由php进程定期更新的(根据mysql数据库里的数据),所以手动在服务器上修改资源记录是无法生效的,应该严格禁止 配置DNS主辅同步略 配置客户端的DNS服务器/etc/resolv.conf1234# Generated by NetworkManagersearch od.com host.comnameserver 10.4.7.11nameserver 10.4.7.12 把所有客户端绑定的临时hosts删除 /etc/hosts1#10.4.7.11 dns-manager.od.com 配置客户端DNS服务器的小技巧 用户系统及操作审计功能用户系统可以创建不同的管理员用户 User Management选项卡该页面下可以查看所有的系统用户,并可以进行用户管理 Create a new User Account 增加用户User Details Username * wangdao Real Name * StanleyWang Contact Email * [email protected] User Password password * 123456 password_confirm * 123456 Save ChangesUser Permissions 用户权限 disabled 勾上,用户不生效不勾,用户生效这里不勾 admin(超级管理员) 勾上,可以创建用户管理用户权限不勾,不可以创建用户管理用户权限这里不勾 namedadmins(管理员) 勾上,dns管理员,可以管理zone和资源记录不勾,不可以管理zone和资源记录这里勾选 Save Changesdelete删除用户,略 details这里可以配置用户的基本信息 User Password超级管理员可以帮助用户修改密码 User Options option_shrink_tableoptions Automatically hide the options table when using defaults默认勾选,高级查询框显示与否 option_debug Enable debug logging - this will impact performance a bit but will show a full trail of all functions and SQL queries made默认不勾,勾选上可以在页面显示debug日志,建议部署时使用,投产后关闭 option_concurrent_logins Permit this user to make multiple simultaneous logins默认不勾,允许该用户在多点同时登录,应该严格禁止(审计) 使用wangdao用户登录可以进行DNS服务管理,但无法管理用户 审计使用wangdao用户在页面增加一条资源记录操作过程略 Changelog选项卡可以看到所有用户的操作记录,实现审计功能,做到操作可溯 Tips 生产上强烈建议新生成一个超级管理员用户并将setup用户删除! 超级管理员用户应只有一个且不要轻易外泄,可以创建多个管理员账户。(一般根据业务而定,每个管理员负责一个子域) 管理员账户创建好后,应由各人自行登录修改密码。 超级管理员用户密码的复杂度要足够高,定期更换超级管理员用户密码。]]></content>
<categories>
<category>Web DNS技术</category>
</categories>
</entry>
<entry>
<title><![CDATA[Markdown语法范例]]></title>
<url>%2F2018%2F01%2F11%2FMarkdown%E8%AF%AD%E6%B3%95%E8%8C%83%E4%BE%8B%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== 分级标题效果代码一级标题二级标题三级标题四级标题五级标题六级标题123456# 一级标题## 二级标题### 三级标题#### 四级标题##### 五级标题###### 六级标题 字体样式效果代码斜体粗体加粗斜体删除线1234\*斜体\*\*\*粗体\*\*\*\*\*加粗斜体\*\*\*\~\~删除线\~\~ 颜色、字号效果代码颜色字号12<font color="FF7F50">颜色</font><font size="20">字号</font> 列表有序列表有序列表则使用数字接着一个英文句点。 效果代码 有序列表项 一 有序列表项 二 有序列表项 三 1231. 有序列表项 一2. 有序列表项 二3. 有序列表项 三 在特殊情况下,项目列表很可能会不小心产生,像是下面这样的写法: 11986. What a great season. 会显示成: What a great season. 前面的1986成了序号,换句话说,也就是在行首出现了数字-句点-空白,要避免这样的状况,你可以在句点前面加上反斜杠: 11986\. What a great season. 则会正确的显示为: 1986. What a great season. 无序列表使用 *,+,-表示无序列表。 效果代码 无序列表项 一 无序列表项 二 无序列表项 三 123- 无序列表项 一- 无序列表项 二- 无序列表项 三 定义型列表定义型列表由名词和解释组成。一行写上定义,紧跟一行写上解释。解释的写法“:”紧跟一个缩进(Tab) 效果代码Markdown: 轻量级文本标记语言,可以转换成html,pdf等格式(左侧有一个可见的冒号和四个不可见的空格)代码块 2这是代码块的定义(左侧有一个可见的冒号和四个不可见的空格)1234Markdown: 轻量级文本标记语言,可以转换成html,pdf等格式(左侧有一个可见的冒号和四个不可见的空格)代码块 2: 这是代码块的定义(左侧有一个可见的冒号和四个不可见的空格) 引用一般引用引用需要在被引用的文本前加上>符号。 效果代码 这是一个有两段文字的引用无意义的占行文字1.无意义的占行文字2. 无意义的占行文字3.无意义的占行文字4. 引用123456> 这是一个有两段文字的引用> 无意义的占行文字1.> 无意义的占行文字2.>> 无意义的占行文字3.> 无意义的占行文字4. 引用嵌套区块引用可以嵌套(例如:引用内的引用),只要根据层次加上不同数量的 > : 效果代码 请问 Markdwon 怎么用? - 小白 自己看教程! - 愤青 教程在哪? - 小白 123> 请问 Markdwon 怎么用? - 小白>> 自己看教程! - 愤青>>> 教程在哪? - 小白 引用其它要素引用的区块内也可以使用其他的 Markdown 语法,包括标题、列表、代码区块等: 效果代码 这是第一行列表项。 这是第二行列表项。 引用代码行:return shell_exec("echo $input | $markdown_script");引用代码段: 123for (list in $lists);do echo $list;done 1234567891011> 1. 这是第一行列表项。> 2. 
这是第二行列表项。>> 引用代码行:> `return shell_exec("echo $input | $markdown_script");`> 引用代码段:>{\% code %}for (list in $lists);do echo $list;done{\% endcode %} note标签引用defaultprimaryinfosuccesswarningdanger default with-icondefault 1.1default 1.2 defalt no-icondefault 1.1default 1.2 primary with-iconprimary 1.1primary 1.2 primary no-iconprimary 1.1primary 1.2 info with-iconinfo 1.1info 1.2 info no-iconinfo 1.1info 1.2 success with-iconsuccess 1.1success 1.2 success no-iconsuccess 1.1success 1.2 warning with-iconwarning 1.1warning 1.2 warning no-iconwarning 1.1warning 1.2 danger with-icondanger 1.1danger 1.2 danger no-icondanger 1.1danger 1.2 居中引用效果: Something 代码: 1{% cq %}Something{% endcq %} 表格第一行为表头,第二行分隔表头和主体部分,第三行开始每一行为一个表格行。列于列之间用管道符|隔开。原生方式的表格每一行的两边也要有管道符。第二行还可以为不同的列指定对齐方向。默认为左对齐,在“-”右边加上“:”就右对齐,在“-”两边都加上“:”就居中对齐。 效果:左对齐代码:左对齐效果:右对齐代码:右对齐效果:居中代码:居中 学号 姓名 分数 小明 男 75 小红 女 79 小陆 男 92 成绩表12345学号|姓名|分数-|-|-小明|男|75小红|女|79小陆|男|92 学号 姓名 分数 小明 男 75 小红 女 79 小陆 男 92 成绩表12345学号|姓名|分数-:|-:|-:小明|男|75小红|女|79小陆|男|92 学号 姓名 分数 小明 男 75 小红 女 79 小陆 男 92 成绩表12345学号|姓名|分数:-:|:-:|:-:小明|男|75小红|女|79小陆|男|92 分割线你可以在一行中用三个以上的星号、减号、底线来建立一个分隔线,行内不能有其他东西。你也可以在星号或是减号中间插入空格。下面每种写法都可以建立分隔线: 效果代码 12345\* \* \*\***\*****- - -\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\- 代码行内代码效果代码C语言里的函数 scanf() 怎么使用?C语言里的函数 scanf() 怎么使用?C语言里的函数 scanf() 怎么使用?C语言里的函数 scanf() 怎么使用?C语言里的函数 scanf() 怎么使用?C语言里的函数 scanf() 怎么使用?C语言里的函数 scanf() 怎么使用?1234567C语言里的函数 \`scanf()\` 怎么使用?C语言里的函数 {\% label default@scanf() %} 怎么使用?C语言里的函数 {\% label primary@scanf() %} 怎么使用?C语言里的函数 {\% label info@scanf() %} 怎么使用?C语言里的函数 {\% label success@scanf() %} 怎么使用?C语言里的函数 {\% label warning@scanf() %} 怎么使用?C语言里的函数 {\% label danger@scanf() %} 怎么使用? 
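上面的斜体、粗体与行内代码语法,可以用一个极简的转换器示意其到 HTML 的映射。这只是演示用的简化写法,真实的 Markdown 解析器还要处理转义、嵌套等更多情况:

```python
import re

def render_inline(text):
    # 处理顺序很重要:先行内代码,再粗体,最后斜体,避免 ** 被 * 规则抢先匹配
    text = re.sub(r"`([^`]+)`", r"<code>\1</code>", text)
    text = re.sub(r"\*\*([^*]+)\*\*", r"<strong>\1</strong>", text)
    text = re.sub(r"\*([^*]+)\*", r"<em>\1</em>", text)
    return text

print(render_inline("*斜体* 和 **粗体** 和 `scanf()`"))
# → <em>斜体</em> 和 <strong>粗体</strong> 和 <code>scanf()</code>
```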
多行代码效果代码123for (list in $lists);do echo $listdone12345\`\`\`for (list in $lists);do echo $listdone\`\`\` 超链接行内式超链接效果代码不带title:欢迎来到我的博客带title:欢迎来到我的博客1234不带title:欢迎来到\[我的博客\]\(https://blog.stanley.wang\)带title:欢迎来到\[我的博客\]\(https://blog.stanley.wang "Stanley's Blog"\) 自动超链接效果代码https://blog.stanley.wangstanley.wang.m@qq.com12\<https://blog.stanley.wang\>\<[email protected]\> 锚点超链接暂不支持锚点超链接 公式示例效果代码When $a \ne 0$, there are two solutions to $ax^2$ + bx + c = 0 and they are: $$x= {-b \pm \sqrt{b^2-4ac} \over 2a}$$1When $a \ne 0$, there are two solutions to $ax^2$ + bx + c = 0 and they are: $$x= {-b \pm \sqrt{b^2-4ac} \over 2a}$$ 语法规范呈现位置 行内公式: 使用$…$定义,此时公式在一行内显示效果代码$\displaystyle\sum_{i=0}^N\int_{a}^{b}g(t,i)\text{d}t$1$\displaystyle\sum_{i=0}^N\int_{a}^{b}g(t,i)\text{d}t$ 文内公式: 使用$$…$$定义,此时公式居中放大显示效果代码$$\sum_{i=0}^N\int_{a}^{b}g(t,i)\text{d}t$$1$$\sum_{i=0}^N\int_{a}^{b}g(t,i)\text{d}t$$ 字母、运算符与杂项希腊字母 显示 命令 显示 命令 $\alpha$ \alpha $\beta$ \beta $\gamma$ \gamma $\delta$ \delta $\epsilon$ \epsilon $\zeta$ \zeta $\eta$ \eta $\theta$ \theta $\iota$ \iota $\kappa$ \kappa $\lambda$ \lambda $\mu$ \mu $\nu$ \nu $\xi$ \xi $\pi$ \pi $\rho$ \rho $\sigma$ \sigma $\tau$ \tau $\upsilon$ \upsilon $\phi$ \phi $\chi$ \chi $\psi$ \psi $\omega$ \omega — — - 如果要大写希腊字母,则首字母大写即可,如$\Gamma$显示为$\Gamma$ - 如果要使希腊字母显示为斜体,则前面添加var即可,如$\varGamma$显示为$\varGamma$ 字母修饰上下标 上标:^ 下标:_ 举例:$C_n^2$显示为:$C_n^2$ 矢量 单字母向量: $\vec a$显示为$\vec a$$\overrightarrow a$显示为$\overrightarrow a$ 多字母向量: $\vec {abcde}$显示为$\vec {abcde}$$\overrightarrow {abcde}$显示为$\overrightarrow {abcde}$ 特殊修饰: 上尖号:$\hat {abcde}$显示为$\hat {abcde}$宽上尖号: $\widehat {abcde}$显示为$\widehat {abcde}$上划线:$\overline {abc}de$显示为$\overline {abc}de$下划线:$\underline ab{cde}$显示为$\underline ab{cde}$ 字体 TypeWriter:$\mathtt {A}$显示为:$\mathtt {ABCDEFGHIJKLMNOPQRSTUVWXYZ}$ Blackboard blod:$\mathbb {A}$显示为:$\mathbb {ABCDEFGHIJKLMNOPQRSTUVWXYZ}$ Sans Serif:$\mathsf {A}$显示为:$\mathsf {ABCDEFGHIJKLMNOPQRSTUVWXYZ}$ 空格 语法本身忽略空格,$ab$和$a b$都显示为$ab$ $a b$ 小空格:$a\ b$显示为$a\ b$ 
4格空格:$a\quad b$显示为$a\quad b$ 分组 使用{}将同一级的括在一起,成组处理 $x_i^2$显示为$x_i^2$$x_{i^2}$显示为$x_{i^2}$ 括号 小括号:$(...)$显示为$(…)$ 中括号:$[...]$显示为$[…]$ 大括号:$\\{...\\}$显示为$\{…\}$ 尖括号:$\langle ... \rangle$显示为$\langle … \rangle$ 绝对值:$\vert ... \vert$显示为$\vert … \vert$ 双竖线:$\Vert ... \Vert$显示为$\Vert … \Vert$ 使用$\left$和$\right$使符号大小与邻近的公式相适应,该语句适用于所有括号类型 $\\{\frac{(x+y)}{[\alpha+\beta]}\\}$显示为$\{\frac{(x+y)}{[\alpha+\beta]}\}$$\left\\{\frac{(x+y)}{[\alpha+\beta]}\right\\}$显示为$\left\{\frac{(x+y)}{[\alpha+\beta]}\right\}$ 常用数学运算符基础符号 运算符 说明 应用举例 命令 + 加 $x+y$ $x+y$ - 减 $x−y$ $x-y$ \times 叉乘 $x\times y$ $x\timesy$ \cdot 点乘 $x\cdot y$ $x\cdot y$ \ast(*) 星乘 $x\ast(y)$ $x\ast(y)$ \div 除 $x\div y$ $x\div y$ \pm 加减 $x\pm y$ $x\pm y$ \mp 减加 $x\mp y$ $x\mp y$ \approx 约等于 $x\approx y$ $x\approx y$ \equiv 恒等于 $x\equiv y$ $x\equiv y$ \cong 全等于 $\triangle ABC\cong \triangle BCD$ $\triangle ABC\cong \triangle BCD$ \sim 相似于 $x\sim y$ $x\sim$ y \bigodot 定义运算符 $x\bigodot y$ $x\bigodot y$ \bigotimes 定义运算符 $x\bigotimes y$ $x\bigotimes y$ 比较运算符 运算符 说明 应用举例 命令 = 等于 $x=y$ $x=y$ \lt 小于 $x\lt y$ $x\lt y$ \gt 大于 $x\gt y$ $x\gt y$ \le 小于等于 $x\le y$ $x\le y$ \ge 大于等于 $x\ge y$ $x\ge y$ \ne 不等于 $x\ne y$ $x\ne y$ 逻辑运算符 运算符 说明 应用举例 命令 \land 与 $x\land y$ $x\land y$ \lor 或 $x\lor y$ $x\lor y$ \lnot 非 $\lnot x$ $\lnot x$ \oplus 异或 $x\oplus y=(\lnot x\land y)\lor(x\land \lnot y)$ $x\oplus y=(\lnot x\land y)\lor(x\land \lnot y)$ \forall 针对所有 $\forall x \in N$ $\forall x \in N$ \exists 存在 $\exists \xi$ $\exists \xi$ 集合符号 运算符 说明 应用举例 命令 \in 属于 $x\in y$ $x\in y$ \subseteq 子集 $x\subseteq y$ $x\subseteq y$ \subset 真子集 $x\subset y$ $x\subset y$ \supset 超集 $x\supset y$ $x\supset y$ \supseteq 超集 $x\supseteq y$ $x\supseteq y$ \varnothing 空集 $\varnothing$ $\varnothing$ \cup 并 $x\cup y$ $x\cup y$ \cap 交 $x\cap y$ $x\cap y$ 特殊符号 符号 命令 符号 命令 $\infty$ $\infty$ $\partial$ $\partial$ $\nabla$ $\nabla$ $\triangle$ $\triangle$ $\top$ $\top$ $\bot$ $\bot$ $\vdash$ $\vdash$ $\vDash$ $\vDash$ $\star$ $\star$ $\ast$ $\ast$ $\circ$ \circ $\bullet$ 
$\bullet$ Note: to negate a relation, prefix it with \not, which overlays a slash; e.g. $x\not=y$ renders as $x\not=y$, and $x\not\in y$ renders as $x\not\in y$.

Other operators (command: meaning, example):
\overbrace: over-brace, e.g. $\overbrace{a+\underbrace{b+c}_{1.0}+d}^{2.0}$
\underbrace: under-brace, e.g. $\underbrace{a+d}_3$
\partial: partial derivative, e.g. $\frac{\partial z}{\partial x}$
\ldots: ellipsis on the baseline, e.g. $1,2,\ldots,n$
\cdots: centered ellipsis, e.g. $1,2,\cdots,n$
\uparrow / \Uparrow: up arrow / double up arrow, $\uparrow$ $\Uparrow$
\downarrow / \Downarrow: down arrow / double down arrow, $\downarrow$ $\Downarrow$
\leftarrow / \Leftarrow: left arrow / double left arrow, $\leftarrow$ $\Leftarrow$
\rightarrow / \Rightarrow: right arrow / double right arrow, $\rightarrow$ $\Rightarrow$

Sums, limits and integrals

Sums and products
The summation sign \sum renders as $\sum$. Inline, add \displaystyle to get the display form, or wrap it in $$...$$ (centered on its own line).
Without \displaystyle: $\sum_{i=0}^n$
With \displaystyle: $\displaystyle\sum_{i=0}^n$
As a display formula: $$\sum_{i=0}^n$$
The product sign \prod renders as $\prod$, e.g. $\displaystyle\prod_{i=0}^n$

Sets
Big intersection \bigcap renders as $\bigcap$, e.g. $\displaystyle\bigcap_{i=0}^n$
Big union \bigcup renders as $\bigcup$, e.g. $\displaystyle\bigcup_{i=0}^n$

Limits
The limit sign \lim renders as $\lim$, e.g. $\displaystyle\lim_{x\to\infty}$

Integrals
Integral signs: $\int$, $\iint$, $\iiint$, $\oint$
Example: $\int_0^\infty{f(x)dx}$

Fractions and roots

Fractions
$\frac{formula1}{formula2}$ renders formula1 over formula2, e.g. $\frac{b_i^2}{a_i^2}$
Continued fractions use $\cfrac{formula1}{formula2}$, whose layout differs slightly from \frac:
$$x=a_0 + \cfrac {1^2}{a_1 + \cfrac {2^2}{a_2 + \cfrac {3^2}{a_3 + \cfrac {4^2}{a_4 + \cdots}}}}$$
Compare the same expression written with \frac:
$$x=a_0 + \frac {1^2}{a_1 + \frac {2^2}{a_2 + \frac {3^2}{a_3 + \frac {4^2}{a_4 + \cdots}}}}$$

Roots
$\sqrt[x]{y}$ renders the x-th root of y.

Special functions
Syntax: $\functionname$, e.g. $\sin x$, $\ln x$, $\log_n^2$, $\max(A,B,C)$

Matrices

Basic syntax
Start with $\begin{matrix} and end with \end{matrix}$; end each row with \\ and separate row elements with &:
$\begin{matrix}1&0&0\\0&1&0\\0&0&1\end{matrix}$

Matrix delimiters
Replace matrix in the begin/end markers with one of the following:
pmatrix: parentheses, e.g. $\begin{pmatrix}1&0&0\\0&1&0\\0&0&1\end{pmatrix}$
bmatrix: square brackets, e.g. $\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}$
Bmatrix: curly braces, e.g. $\begin{Bmatrix}1&0&0\\0&1&0\\0&0&1\end{Bmatrix}$
vmatrix: single vertical bars, e.g. $\begin{vmatrix}1&0&0\\0&1&0\\0&0&1\end{vmatrix}$
Vmatrix: double vertical bars, e.g. $\begin{Vmatrix}1&0&0\\0&1&0\\0&0&1\end{Vmatrix}$

Omitted elements
Horizontal ellipsis: $\cdots$; vertical ellipsis: $\vdots$; diagonal ellipsis: $\ddots$
$\begin{bmatrix}{a_{11}}&{a_{12}}&{\cdots}&{a_{1n}}\\{a_{21}}&{a_{22}}&{\cdots}&{a_{2n}}\\{\vdots}&{\vdots}&{\ddots}&{\vdots}\\{a_{m1}}&{a_{m2}}&{\cdots}&{a_{mn}}\end{bmatrix}$

Arrays
The array environment is declared with $\begin{array} ... \end{array}$. Column alignment is declared in {} right after {array}, one letter per column (l left, c center, r right); insert | in the alignment spec for a vertical rule; \hline draws a horizontal rule; rows end with \\ and elements are separated by &:
$\begin{array}{c|lll}{\downarrow}&{a}&{b}&{c}\\\hline{R_1}&{c}&{b}&{a}\\{R_2}&{b}&{c}&{c}\end{array}$

Systems of equations
The cases environment is declared with $\begin{cases} ... \end{cases}$; rows end with \\ and elements are separated by &:
$\begin{cases}a_1x+b_1y+c_1z=d_1\\a_2x+b_2y+c_2z=d_2\\a_3x+b_3y+c_3z=d_3\end{cases}$

Piecewise functions
Defined the same way as a system of equations:
$f(n)=\begin{cases}\cfrac n2, &if\ n\ is\ even\\3n + 1, &if\ n\ is\ odd\end{cases}$

Multi-line expressions
$\begin{equation}\begin{split}a&=b+c-d\\&\quad +e-f\\&=g+h\\&=i\end{split}\end{equation}$

Inserting images
Syntax: ![alt text](image URL "image title")
Example: ![哆啦A梦](/images/duola.jpg "哆啦A梦")]]></content>
<categories>
<category>Linux基础</category>
</categories>
</entry>
<entry>
<title><![CDATA[JVM基础知识和调优基础原理]]></title>
<url>%2F2017%2F03%2F14%2FJVM%E5%9F%BA%E7%A1%80%E7%9F%A5%E8%AF%86%E5%92%8C%E8%B0%83%E4%BC%98%E5%9F%BA%E7%A1%80%E5%8E%9F%E7%90%86%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== JVM原理什么是jvmjava虚拟机,就是个应用程序,工作在用户态 详解JVM是按照运行时数据的存储结构来划分内存结构的,JVM在运行java程序时,将它们划分成几种不同格式的数据,分别存储在不同的区域,这些数据统一称为运行时数据。运行时数据包括java程序本身的数据信息和JVM运行java需要的额外数据信息。 JVM运行时数据区 程序计数器–线程私有 行号,指示程序执行到哪个位置 Java虚拟机栈–线程私有 本地方法栈–线程私有 操作系统底层的方法 Java堆–线程公用 JVM内存分配栈内存分配 -xss 默认1M保存参数、局部变量、中间计算过程和其他数据。退出方法的时候,修改栈顶指针就可以把栈帧中的内容销毁。 栈的优点:存取速度比堆块,仅次于寄存器,栈数据可以共享。 栈的缺点:存在栈中的数据大小、生存期是在编译时就确定的,导致其缺乏灵活性。stack out of memory一般情况下不会溢出,方法不会写那么大堆内存分配:保存对象 堆的优点:动态分配内存大小,生存期不必事先告诉编译器,它是在运行期动态分配的,垃圾回收器会自动收走不再使用的空间区域。 堆的缺点:运行时动态分配内存,在分配和销毁时都要占用时间,因此堆的效率较低。堆结构: Young:E区,S0,S1 Old: Permanent: JVM堆配置参数:概述 -Xms 初始堆大小 默认物理内存的1/64(<1GB) -Xmx最大堆大小 默认物理内存的1/4(<1GB),实际中建议不大于4GB 一般建议设置 -Xms=-Xmx 好处是避免每次在gc后,调整堆的大小,减少系统内存分配开销 整个堆大小=年轻代大小+年老代大小+持久代大小 新生代: 新生代=1个eden区+2个survivor区 -Xmn 年轻代大小(1.4 or later) -XX:NewSize,-XX:MaxNewSize(设置年轻代大小,1.4之前) -XX:NewRatio 年轻代(包括E区和两个S区)与年老代的比值(除去持久代)一般情况下设置了Xms=Xmx并且设置了Xmn的情况下,该参数不需要设置。 -XX:ServivorRatio 1个S区与E区大小的比值,默认设置为8,则1个S区占整个年轻代的1/10 新生代用来存放JVM刚分配的Java对象 老年代: 老年代=整个堆-年轻代大小-持久代大小 年轻代中经过垃圾回收没有回收掉的对象被复制到年老代 老年代存储对象比年轻代年龄大的多,而且不乏大对象(缓存) 新建的对象也有可能直接进入老年代 大对象,可通过启动参数设置-XX:PretnureSizeThreshold=1024(单位为字节,默认为0)来代表超过多大时就不在新生代分配,而是直接在老年代分配。 大的数组对象,切数组中无引用外部对象 老年代大小无配置参数 持久代: 持久代=整个堆-年轻代大小-老年代大小 -XX:PermSize -XX:MaxPermSize 设置持久代的大小,一般情况推荐把-XX:PermSize设置成-XX:MaxPermSize的值为相同的值,因为持久代大小的调整也会导致堆内存需要触发fgc。 存放Class、Method元信息,其大小与项目的规模、类、方法的数量有关。一般设置为128M就足够,设置原则是预留30%的空间。 持久代的回收方式 常量池中的常量,无用的类信息,常量的回收很简单,没有引用了就可以被回收 对于无用的类进行回收,必须保证3点: 类的所有实例都已经被回收 加载类的ClassLoader已经被回收 类对象Class对象没有被引用(即没有通过反射引用该类的地方) JVM内存垃圾回收:垃圾收集算法: 引用计数算法(濒临被抛弃 根搜索算法: 从GC Roots开始向下搜索,搜索所走过的路径称为引用链。当一个对象到GC Roots没有任何引用链相连时,则证明对象是不可用的。即不可达对象。在Java语言中,GC Roots包括: 虚拟机栈中引用的对象。(大部分被回收的) 方法区中静态属性实体引用的对象。 方法区中常量引用的对象。 本地方法栈中JNI引用的对象。 垃圾回收算法:复制算法(Copying)当空间存活的对象比较少时,极为高效,此算法用于新生代内存回收,从E区回收到S0或S1 标记清除算法(Mark-Sweep)产生碎片,适合老年代垃圾回收。 标记整理压缩算法(Mark-Compac)稍慢,适合老年代垃圾回收,解决碎片问题,对象连续,成本更高 名词解释: 串行回收:gc单线程内存回收、会暂停所有用户线程,用于client端 并行回收:收集是指多个GC线程并行工作,但此时用户线程是暂停的 
并发回收:是指用户线程与GC线程同时执行(不一定是并行,可能交替,但总体上是同时执行的),不需要停顿用户线程(其实CMS中用户线程还是需要停顿的,只是非常短,GC线程在另一个CPU上执行) JVM常见的垃圾回收器:Serial回收器(串行回收器)是一个单线程的收集器,只能使用一个CPU或者一条线程去完成垃圾收集,在进行垃圾收集时,必须暂停所有其他工作线程,直到收集完成 -XX:+UseSerialGC来开启(新生代和老年代都开启) 使用复制算法(新生代)标记-压缩算法(老年代) 串行的、独占式的垃圾回收器 缺点:Stop-The-World ParNew回收器(并行回收器)也是独占式回收器,在收集过程中,应用程序全部暂停。如果是单CPU上或者并发能力较弱的系统上,还不如串行回收器性能好。 -XX:+UseParNewGC开启 -XX:ParallelGCThreads指定线程数,默认最好与CPU数量相当 新生代Parallel Scavenge回收器吞吐量优先回收器 关注CPU吞吐量,即运行用户代码的时间/总时间,适合运行后台运算 -XX:+UserParallelGC开启,这也是在Server模式下的默认值 -XX:GCTimeRatio -XX:MaxGCPauseMillis 老年代ParallelOld回收器 -XX:+UseParallelOldGC开启 CMS(并发标记清除)回收器用的最广泛,标记和重新标记两个阶段仍然需要停止用户线程,但时间很快 初始标记并发标记重新标记并发清除 标记-清除算法:同时它又是一个使用多线程并发回收的垃圾收集器 -XX:ParallelCMSThreads:手工设定CMS线程数量,CMS默认启动的线程数是(ParallelGCThreads+3)/4 -XX:+UseConcMarkSweepGC开启 -XX:CMSInitialtingOccupancyFraction设置CMS收集器在老年代空间被使用多少后触发垃圾收集,默认值为68%,仅在CMS收集器时有效,-XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSCompactAtFullCollection由于CMS收集器会产生碎片,此参数设置在垃圾收集器后是否需要一次内存碎片整理过程,仅在CMS收集器时有效。 -XX:+CMSFullGCBeforeCompaction设置CMS收集器在进行若干次垃圾收集后再进行一次内存碎片整理过程,通常与UseCMSCompactAtFullCollection参数一起使用 -XX:CMSInitiatingPermOccupancyFraction设置持久代 GC性能指标吞吐量应用花在非GC上的时间百分比 GC负荷花在GC时间百分比 暂停时间(看GClog)应用划在GC stop-the-world的时间 GC频率反应速度从一个对象变成垃圾到这个对象被回收的时间 小结 一个交互式的应用要求暂停时间越少越好,然而,一个非交互式的应用,希望GC负荷越低越好 一个实时系统对暂停时间和GC负荷要求,都是越低越好 内存容量配置原则年轻代大小选择 响应时间优先的应用 尽可能设大,直到接近系统的最低响应时间限制(根据实际情况选择),在此情况下,年轻代收集发生的频率也是最小的,同时减少到达老年代的对象 吞吐量优先的应用 尽可能设置大,可能到达Gbit的程度,因为对响应时间没有要求,垃圾收集可以并行进行,一般适合8CPU以上的应用避免设置过小,当新生代设置过小时会导致 YGC次数更加频繁 可能导致YGC对象直接进入老年代,如果此时老年代满了,会触发FGC 老年代大小选择 响应时间优先的应用 使用并发垃圾收集器(CMS)设置小了会造成内存碎片,高回收频率以及应用暂停而使用传统的标记清除方式,如果堆大了,需要较长的收集时间,最优化的方案,一般参考以下数据获得:并发垃圾收集信息、持久代并发收集次数、传统GC信息、花在年轻代和年老代回收上的时间比例 吞吐量优先的应用 一般吞吐量优先的应用都有一个很大的年轻代和一个较小的年老代。原因是,这样可以尽可能回收掉大部分短期对象,减少中期的对象,而年老代尽量存放长期存活对象。 java排障使用jps获取java进程的pid1# jps -lvm 导出CPU占用高进程的线程栈1jstack `$pid` >> java.txt 查看对应进程的哪个线程占用CPU过高1# top -H -p 22056 将线程的pid转换为16进制1# echo "obase=16;`$pid`"|bc 
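The hex conversion in the step above, and the search that follows it, can be sketched together in a few lines of shell. The thread id 22056 is the example value from the `top -H` step; `java.txt` is the `jstack` dump from step 2:

```shell
# Thread id as shown by `top -H -p <pid>` (decimal).
tid=22056
# The article's method, via bc:
echo "obase=16;$tid" | bc        # prints 5628
# Equivalent with printf, lowercase as jstack prints it:
hex=$(printf '%x' "$tid")
echo "$hex"                      # prints 5628
# HotSpot's jstack records native thread ids as nid=0x<hex>,
# so the hot thread's stack can then be located with:
# grep -i "nid=0x$hex" java.txt
```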
In the java.txt dump exported in step 2, search for the thread pid converted to hex; the matching stack trace shows which business operation the overloaded thread is executing. Analyze that stack, then optimize the program and resolve the problem.]]></content>
<categories>
<category>Linux基础</category>
</categories>
</entry>
<entry>
<title><![CDATA[RPM包制作]]></title>
<url>%2F2017%2F01%2F15%2FRPM%E5%8C%85%E5%88%B6%E4%BD%9C%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<== RPM包能制作什么 一个应用程序 库文件 配置文件 文档包 制作步骤创建制作目录 BUILD 源代码解压以后,放在这个目录,不用用户参与,只需提供这个目录,真正的制作车间。 RPMS 制作完成的RPM包放在这个目录。有子目录,跟硬件架构相关,特定平台子目录,i386,ARM等等。交叉编译。 SOURCES 所有的原材料。 SPECS spec文件存放目录,制作RPM包的纲领性文件。软件包名.spec。 SRPMS SRC格式的RPM包存放目录。没有平台相关的概念。 注意:一般制作RPM包,建议不要用root用户,所以,以上制作目录结构,建议使用普通用户创建,不要用系统默认的。 宏定义macrofiles:~/.rpmmacros,以最后这个为准rmpbuild –showrc|grep _topdir所以切换普通用户1%_topdir /home/xxx/rpmbuild 命令: 12# mkdir -pv rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}# rpmbuild --showrc|grep _topdir 把源文件放进适当目录制作spec文件(至关重要)信息说明段(introduction section)Name,Version,Release,Group必须,其他可选。rpm -qpi 可以查看一个rpm包的相关信息定义各种tag Name:软件包的名字 Relocations:是否可换安装地址。not relocatable Version:版本号,至关重要,只能用X.XX.XXX不能使用- Release:发行版本号 1%{?dist} License:声明版权,例如:GPLv2 Group:属于那个组,不要自己定义,在以下组里找,只要存在的组就可以 less /usr/share/doc/rpm-4.4.2.3/GROUPS URL Packager:制作者<邮箱> Vendor:提供商 Summary:概要 %description:描述 Source:源文件,链接,Source0:解压缩主原材料,Source1:脚本等等 BuildRoot:编译好的程序,临时安装根目录,配合file section,收集哪些文件,打包到RPM包,最后在clean section中删除。可以规定任意目录:/%{_tmppath}/%{name}-%{version}-%{release}-root BuildRequires:定义依赖关系,编译依赖和安装依赖。 准备段prep section解压缩源码包到制作目录,cd进去,设定工作环境、宏,等等。单独的宏来定义: %prep %setup -q 静默模式 制作段build section123%build./configure balabalabala.......................%{__make} %{?_smp_mflags} 多CPU上,这个标识可以加快编译速度 安装段install section12345%install%{__rm} -rf %{buildroot}%{__make} install DESTDIR="%{buildroot}"%find_long %{name}%{__install} -p -D 0755 %{SOURCE1} %{buildroot}/etc/init.d/nginx 安装自定义的资源文件 -p保留原材料时间戳 补充:Linux系统install命令:类似于cpinstall /etc/fstab /tmpinstall -d /tmp/test 创建目录install -D /etc/fstab /tmp/noexistsdir/fstab可以直接指定安装目标处不存在的目录,但是要把安装的源文件名也指定 脚本段script section1234567891011%pre安装前 $1=0,1,2 卸载,安装,升级$1 == 1加个用户%post安装后$1 == 1chkconfig --add %preun卸载前$1 == 0service xxx stopchkconfig --del%postun卸载后 清理段clean cection12%clean%{__rm} -rf %{buildroot} 文件段files section除了debug信息,都要做进RPM包 123456%files -f %{name}.lang%defattr (-,root,root,0755) 定义文件默认权限%doc API CHANGES 
COPYING CREDITS README axelrc.example (documentation files)
%config(noreplace) %{_sysconfdir}/axelrc (configuration file; noreplace keeps the existing copy instead of replacing it)
/usr/local/bin/axel (every packaged file must be listed; a directory may be given directly)
%attr (0755,root,root) /etc/rc.d/init.d/nginx (sets the attributes of a custom resource; unspecified fields inherit from %defattr)

Changelog section (change log section)
%changelog
* date, packager, version
- Initial Version, release number

Building the RPM: the rpmbuild command
-bp run only through the prep section
-bc run only through the build section
-bi run only through the install section
-bb build a binary RPM
-bs build a source RPM
-ba build both the binary and the source RPM
-bl check that the files section and the buildroot contents match one to one; report an error if they differ

Unpacking a source RPM
rpm2cpio xxxx-src.rpm | cpio -id
Where to find source packages: rpmfind.net, rpm.pbone.net

Keeping the RPMs that yum downloads
# sed -i 's#keepcache=0#keepcache=1#g' /etc/yum.conf
Default cache path for the packages:
/var/cache/yum/base/packages

Building RPMs with fpm
# yum install ruby
# gem source -a http://mirrors.aliyun.com/rubygems/
# gem source -r http://rubygems.org/
# gem install fpm

References]]></content>
<categories>
<category>Linux基础</category>
</categories>
</entry>
<entry>
<title><![CDATA[CentOS操作系统基础优化]]></title>
<url>%2F2017%2F01%2F14%2FCentOS%E6%93%8D%E4%BD%9C%E7%B3%BB%E7%BB%9F%E5%9F%BA%E7%A1%80%E4%BC%98%E5%8C%96%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<==

Kernel tuning
ECHOSTR='net.ipv4.tcp_fin_timeout = 2
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.ip_local_port_range = 4000 65000
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.route.gc_timeout = 100
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_synack_retries = 1
net.core.somaxconn = 16384
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_max_orphans = 16384
net.nf_conntrack_max = 25000000
net.netfilter.nf_conntrack_max = 25000000
net.netfilter.nf_conntrack_tcp_timeout_established = 180
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 60
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120'
echo "$ECHOSTR" >> /etc/sysctl.conf &&\
modprobe ip_conntrack && modprobe bridge
echo "modprobe ip_conntrack" >> /etc/rc.local
echo "modprobe bridge" >> /etc/rc.local
/sbin/sysctl -p

File descriptors (/etc/security/limits.conf)
* hard nofile 65535
* soft nofile 65535
* hard noproc 65535
* soft noproc 65535

Update the yum repo and install the EPEL repo
vi /etc/yum.repos.d/CentOS-Base.repo (contents omitted)
# yum install epel-release -y

System clock synchronization
# yum install chrony -y
# systemctl start chronyd
# systemctl enable chronyd

Disable SELinux and the firewall
if [ -f /etc/selinux/config ]; then
    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
    setenforce 0
fi
# systemctl stop firewalld
# systemctl disable firewalld

Set the system character set
# echo 'export LC_ALL=C' >> /etc/profile
# echo 'export LANG=en_US.UTF-8' >> /etc/profile
# source /etc/profile

Install basic tools
# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix -y]]></content>
<categories>
<category>Linux基础</category>
</categories>
</entry>
<entry>
<title><![CDATA[清华大学yum源--CentOS7]]></title>
<url>%2F2017%2F01%2F14%2F%E6%B8%85%E5%8D%8E%E5%A4%A7%E5%AD%A6yum%E6%BA%90--CentOS7%2F</url>
<content type="text"><![CDATA[欢迎加入王导的VIP学习qq群:==>932194668<==

CentOS-Base.repo
# The mirror system uses the connecting IP address of the client and the
# update status of each mirror to pick mirrors that are updated to and
# geographically close to the client. You should use this for CentOS updates
# unless you are manually picking other mirrors.
#
# If the mirrorlist= does not work for you, as a fall back you can try the
# remarked out baseurl= line instead.

[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7]]></content>
<categories>
<category>Linux基础</category>
</categories>
</entry>
</search>