fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 8160280
kernel.shmmax = 33424509440
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.tcp_wmem = 262144 262144 262144
net.ipv4.tcp_rmem = 4194304 4194304 4194304
Apply the kernel parameters immediately:
[root@rac1 ~]# sysctl -p
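The kernel.shmall value above follows from kernel.shmmax: shmall is expressed in pages, so it is shmmax divided by the page size (4096 bytes on x86_64). A quick sketch of the arithmetic:

```shell
# Derive kernel.shmall (in pages) from kernel.shmmax (in bytes),
# assuming the common 4 KB page size.
page_size=4096
shmmax=33424509440
shmall=$((shmmax / page_size))
echo "kernel.shmall = $shmall"   # matches the 8160280 used above
```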
=================================================================
3. Stop the NTP service (a new prerequisite check in 11gR2) and use the Cluster Time Synchronization Service instead
[root@rac1 ~]# service ntpd status
[root@rac1 ~]# service ntpd stop
[root@rac1 ~]# chkconfig --level 2345 ntpd off
[root@rac1 ~]# rm -rf /etc/ntp.conf
Run the same commands on nodes two and three to remove NTP. After the cluster install, confirm that ctssd is in active mode:
[root@rac1 ~]# crsctl check ctss
Problem 1:
[root@rac1 ~]# iscsiadm -m discovery -t st -p 192.168.0.10   ## probe the new iSCSI shared volumes exported by openfiler
-bash: iscsiadm: command not found
[root@rac1 ~]# mount /dev/cdrom /media   -- mount the install DVD to install the iscsi-initiator RPM
Fix:
[root@rac1 Packages]# pwd
/mnt/cdrom/Packages
[root@rac1 Packages]# yum install iscsi*   # iscsi-initiator-utils-6.2.0.868-0.18.el5.i386.rpm
[root@rac1 ~]# rpm -ivh scsi-target-utils*.rpm
// On RedHat 6 the package is in the Packages directory of the DVD
// On RedHat 5 it is in the ClusterStorage directory
By default the iSCSI initiator and target communicate over port 3260. Assuming the iSCSI target IP is 192.168.1.1, run:
# chkconfig iscsi on
# chkconfig iscsi --list        (check the iSCSI runlevel configuration)
# chkconfig --list | grep iscsi   ## check the status of all related iSCSI services
# service iscsi status
# service iscsi start           ## start the iSCSI service
# iscsiadm -m discovery -t sendtargets -p 10.20.4.215:3260
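Each line of the discovery output has the form `IP:port,TPGT IQN`; the target names can be pulled out with awk. The sample output below is illustrative, not from a real discovery run:

```shell
# Extract target IQNs from iscsiadm discovery output ("IP:port,TPGT IQN" per line).
# The two sample lines are made up for illustration.
discovery_output='10.20.4.215:3260,1 iqn.2006-01.com.openfiler:tsn.data1
10.20.4.215:3260,1 iqn.2006-01.com.openfiler:tsn.data2'
iqns=$(echo "$discovery_output" | awk '{print $2}')
echo "$iqns"
```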
2. Mount the iSCSI disks
A. On node one (node1):
[root@rac1 ~]# rpm -ivh iscsi-initiator-utils*.rpm
[root@rac1 ~]# service iscsid restart              // restart the iSCSI daemon
[root@rac1 ~]# chkconfig --level 2345 iscsid on    // enable at boot
[root@rac1 ~]# chkconfig --list iscsid             // verify the boot setting
[root@rac1 ~]# iscsiadm -m node -p 172.16.1.20 -l  // log in to the iSCSI storage
B. On node two (node2):
[root@rac2 ~]# rpm -ivh iscsi-initiator-utils*.rpm
[root@rac2 ~]# service iscsid restart              // restart the iSCSI daemon
[root@rac2 ~]# chkconfig --level 2345 iscsid on    // enable at boot
[root@rac2 ~]# chkconfig --list iscsid             // verify the boot setting
[root@rac2 ~]# iscsiadm -m node -p 172.16.1.20 -l  // log in to the iSCSI storage
8. Configure UDEV:
The following steps must be performed on both nodes
------------------------------------
Issue: using scsi_id to query disk UUIDs inside VMware
Workaround:
1. Shut the virtual machine down and go to its directory.
2. Edit the .vmx file with a text editor and add the following line anywhere (usually at the end):
disk.EnableUUID = "TRUE"
3. Restart the virtual machine; the SCSI IDs can now be read correctly.
-----------------------------------------------------------------------
It is easiest to script the UUID collection, write all the disk UUIDs to x.log, and then use column editing to finish all the ASM disk rules:
cd /bai
vi test.sh    (create this script to collect the UUIDs)
#!/bin/sh
# Emit one udev rule per shared disk (sdc..sdk); redirect the output to x.log.
# Note: \$name must reach udev literally, while $(...) expands to each disk's UUID.
for i in c d e f g h i j k
do
  echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"$(/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i)\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done
# chmod 755 test.sh
# ./test.sh > x.log
# /sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/sdb
or
# /sbin/scsi_id -g -u /dev/sda    // query a disk UUID directly
# Add the options to /etc/scsi_id.config
Edit /etc/scsi_id.config (create it if it does not exist) and add the following line:
[root@rac1 dev]# echo "options=--whitelisted --replace-whitespace" >> /etc/scsi_id.config
Create the rules file:
# cd /etc/udev/rules.d
# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
Copy the contents of x.log into 99-oracle-asmdevices.rules.
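Each line in 99-oracle-asmdevices.rules must be one complete udev rule; a quick way to catch copy/paste damage from the column editing is to check that the quotes are balanced. The rule below is a sample with a made-up RESULT value:

```shell
# Sanity-check one generated rule; the UUID in RESULT is illustrative only.
rule='KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name", RESULT=="14f504e46494c4552", NAME="asm-diskc", OWNER="grid", GROUP="asmadmin", MODE="0660"'
# Count double quotes in the rule; an odd count means a quote got lost.
quotes=$(($(printf '%s' "$rule" | tr -cd '"' | wc -c)))
[ $((quotes % 2)) -eq 0 ] && echo "quotes balanced" || echo "unbalanced quotes"
```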
Reload UDEV:
sudo /etc/init.d/udev-post reload
Start UDEV on rac1:
# start_udev
or
# /sbin/start_udev
Starting udev: [ OK ]
[root@rac1 rules.d]# ls -l /dev/asm*
brw-rw---- 1 grid asmadmin 8,  48 Apr 30 14:12 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8,  64 Apr 30 14:12 /dev/asm-diskd
brw-rw---- 1 grid asmadmin 8,  80 Apr 30 14:12 /dev/asm-diske
brw-rw---- 1 grid asmadmin 8,  96 Apr 30 14:12 /dev/asm-diskf
brw-rw---- 1 grid asmadmin 8, 112 Apr 30 14:12 /dev/asm-diskg
brw-rw---- 1 grid asmadmin 8, 128 Apr 30 14:12 /dev/asm-diskh
brw-rw---- 1 grid asmadmin 8, 144 Apr 30 14:12 /dev/asm-diski
brw-rw---- 1 grid asmadmin 8, 160 Apr 30 14:12 /dev/asm-diskj
brw-rw---- 1 grid asmadmin 8, 176 Apr 30 14:12 /dev/asm-diskk
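Every mapped device should be a block device owned by grid:asmadmin with mode 0660. Given an `ls -l /dev/asm*` listing like the one above (two sample lines embedded here as text, so the sketch runs anywhere), the ownership can be checked mechanically:

```shell
# Flag any device in an `ls -l /dev/asm*` listing that is not a block device
# owned by grid:asmadmin; the listing is embedded as sample text.
listing='brw-rw---- 1 grid asmadmin 8, 48 Apr 30 14:12 /dev/asm-diskc
brw-rw---- 1 grid asmadmin 8, 64 Apr 30 14:12 /dev/asm-diskd'
bad=$(echo "$listing" | awk '$1 != "brw-rw----" || $3 != "grid" || $4 != "asmadmin" {print $NF}')
[ -z "$bad" ] && echo "ownership OK" || echo "check these devices: $bad"
```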
Copy the rules file to rac2:
[root@rac1 rules.d]# scp /etc/udev/rules.d/99-oracle-asmdevices.rules 10.20.4.216:/etc/udev/rules.d/
root@10.20.4.216's password:
99-oracle-asmdevices.rules      100% 1945   1.9KB/s   00:00
Start UDEV on rac2:
# /sbin/start_udev
[root@rac2 rules.d]# ls -l /dev/asm*
=======================================================================================================================
2. Install Grid
Connect to node 1 with Xmanager and set DISPLAY.
Preparing node 2
The basic preparation has already been done on node1; repeat the steps from sections 2.2 through 2.10 on node2 to finish preparing it. Note: the SCAN IP configuration in section 2.3 is already complete on node 2 and can be skipped, and the environment variables in section 2.4 must be changed to match node 2.
Issue: <<<<< configuring SSH user equivalence for the oracle and grid users >>>>>
Configure equivalence for the oracle user first.
node1:
[root@rac1 ~]# su - oracle
rac1-> mkdir ~/.ssh
rac1-> chmod 700 ~/.ssh
rac1-> ls -al
rac1-> ssh-keygen -t rsa    (press Enter three times)
rac1-> ssh-keygen -t dsa    (press Enter three times)
node2:
[root@rac2 ~]# su - oracle
rac2-> mkdir ~/.ssh
rac2-> chmod 700 ~/.ssh
rac2-> ls -al
rac2-> ssh-keygen -t rsa    (press Enter three times)
rac2-> ssh-keygen -t dsa    (press Enter three times)
Back on node 1:
rac1-> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
rac1-> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
rac1-> ssh rac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password:
rac1-> ssh rac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
oracle@rac2's password:
rac1-> scp ~/.ssh/authorized_keys rac2:~/.ssh/authorized_keys
oracle@rac2's password:
Verify oracle SSH equivalence:
Run the following commands on both node1 and node2; the first run of each will prompt for a password:
ssh rac1 date
ssh rac2 date
ssh rac1-priv date
ssh rac2-priv date
ssh rac1-vip date
ssh rac2-vip date
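The six checks can be wrapped in a loop; BatchMode makes ssh fail instead of prompting, so any alias that still requires a password shows up as FAILED. The host names follow this document's /etc/hosts naming:

```shell
# Test passwordless SSH to every cluster alias. BatchMode=yes forbids password
# prompts, so a FAILED entry means equivalence is not yet set up for that alias.
results=""
for host in rac1 rac2 rac1-priv rac2-priv rac1-vip rac2-vip; do
  if ssh -o BatchMode=yes -o ConnectTimeout=3 "$host" date >/dev/null 2>&1; then
    results="$results $host:OK"
  else
    results="$results $host:FAILED"
  fi
done
echo "checked:$results"
```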
Back on node 1:
rac1-> ssh rac2-vip date
Tue Apr 9 19:37:06 CST 2019
rac1-> ssh rac1-vip date
Tue Apr 9 19:37:07 CST 2019
rac1-> ssh rac1-priv date
Tue Apr 9 19:37:14 CST 2019
rac1-> ssh rac2-priv date
Tue Apr 9 19:37:18 CST 2019
rac1-> ssh rac2 date
Tue Apr 9 19:37:22 CST 2019
rac1-> ssh rac1-vip date
Tue Apr 9 19:37:25 CST 2019
Back on node 2: run the same tests there as well.
[root@rac1 network-scripts]# systemctl restart sshd
At this point, SSH equivalence for the oracle user is complete. Repeat the same steps as the grid user to configure its equivalence:
<<<<< configuring SSH user equivalence for the oracle and grid users >>>>>
[root@rac1 ~]# su - grid
On node1 and node2: repeat the steps above.
Partition the shared disks
-------------------------------------------
As root, run fdisk on both nodes to view the existing disk partition information:
node1:
[root@rac1 ~]# fdisk -l