RAID6 is an improvement on RAID5: it adds a second parity block per stripe, so the number of disks that may fail without data loss rises from one (RAID5) to two. Because the probability of two disks in the same array failing at the same time is very low, RAID6 trades one extra disk for noticeably better data safety than RAID5.
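With two parity blocks per stripe, the usable capacity of a RAID6 array is (N - 2) x (size of the smallest member). For the four active 20 GB members used in this lab, that works out to (4 - 2) x 20 GB = 40 GB, which matches the roughly 40 GB array size reported by mdadm and df below.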
Lab: create a RAID6 array, format it, mount and use it, simulate a failure, and re-add hot spares.
1. Add six 20 GB disks, create one partition on each, and set the partition type ID to fd (Linux raid autodetect).
[root@localhost ~]# fdisk -l | grep raid
/dev/sdb1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdc1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdd1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sde1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdf1            2048    41943039    20970496   fd  Linux raid autodetect
/dev/sdg1            2048    41943039    20970496   fd  Linux raid autodetect
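The listing above shows only the finished partitions. As a minimal sketch (not part of the original session, and assuming the six disks start out blank), the same layout could be scripted with parted like this:

for d in b c d e f g; do
    parted -s /dev/sd$d mklabel msdos              # new MBR partition table
    parted -s /dev/sd$d mkpart primary 1MiB 100%   # one partition spanning the disk
    parted -s /dev/sd$d set 1 raid on              # mark it as a RAID member (type fd)
done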
2. Create the RAID6 array and add two hot spares. In the command below, -C creates a new array, -l6 sets the RAID level, -n4 the number of active members, and -x2 the number of hot spares. Despite the "Fail create md6 when using /sys/module/md_mod/parameters/new_array" warning, the last line shows the array started successfully.
[root@localhost ~]# mdadm -C -v /dev/md6 -l6 -n4 /dev/sd[b-e]1 -x2 /dev/sd[f-g]1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 20953088K
mdadm: Fail create md6 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md6 started.
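One step not shown in the original session, but commonly done right after creation, is to record the array in /etc/mdadm.conf so it is reassembled under the same name after a reboot:

mdadm -D --scan >> /etc/mdadm.conf   # append an ARRAY line containing the UUID
cat /etc/mdadm.conf                  # verify the entry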
3. Check the array status in /proc/mdstat. The first run below catches the initial resync in progress; the second, taken after it completes, shows all four members in sync ([4/4] [UUUU]).
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
      [===>.................]  resync = 18.9% (3962940/20953088) finish=1.3min speed=208575K/sec
unused devices: <none>
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md6 : active raid6 sdg1[5](S) sdf1[4](S) sde1[3] sdd1[2] sdc1[1] sdb1[0]
      41906176 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
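The initial resync takes a few minutes; instead of re-running cat by hand, you can optionally watch it refresh continuously:

watch -n 1 cat /proc/mdstat   # updates every second; press Ctrl+C to exit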
4. View detailed information about the RAID6 array.
[root@localhost ~]# mdadm -D /dev/md6
/dev/md6:
           Version : 1.2
     Creation Time : Sun Aug 25 16:34:36 2019
        Raid Level : raid6
        Array Size : 41906176 (39.96 GiB 42.91 GB)
     Used Dev Size : 20953088 (19.98 GiB 21.46 GB)
      Raid Devices : 4
     Total Devices : 6
       Persistence : Superblock is persistent

       Update Time : Sun Aug 25 16:34:43 2019
             State : clean, resyncing
    Active Devices : 4
   Working Devices : 6
    Failed Devices : 0
     Spare Devices : 2

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

     Resync Status : 10% complete

              Name : localhost:6  (local to host localhost)
              UUID : 7c3d15a2:4066f2c6:742f3e4c:82aae1bb
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
       3       8       65        3      active sync   /dev/sde1

       4       8       81        -      spare   /dev/sdf1
       5       8       97        -      spare   /dev/sdg1
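If you only care about the fields that change while the array resyncs, the same report can be filtered, for example:

mdadm -D /dev/md6 | grep -E 'State|Status|Devices'   # state, resync progress, device counts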
5. Format the array.
[root@localhost ~]# mkfs.xfs /dev/md6
meta-data=/dev/md6               isize=512    agcount=16, agsize=654720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=10475520, imaxpct=25
         =                       sunit=128    swidth=256 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=5120, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
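Before mounting, you can optionally confirm that the XFS signature was written to the array (this check is not part of the original steps):

blkid /dev/md6   # should report TYPE="xfs" together with the filesystem UUID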
6. Mount and use it.
[root@localhost ~]# mkdir /mnt/md6
[root@localhost ~]# mount /dev/md6 /mnt/md6/
[root@localhost ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs        17G 1014M   16G   6% /
devtmpfs                devtmpfs  901M     0  901M   0% /dev
tmpfs                   tmpfs     912M     0  912M   0% /dev/shm
tmpfs                   tmpfs     912M  8.7M  903M   1% /run
tmpfs                   tmpfs     912M     0  912M   0% /sys/fs/cgroup
/dev/sda1               xfs      1014M  143M  872M  15% /boot
tmpfs                   tmpfs     183M     0  183M   0% /run/user/0
/dev/md6                xfs        40G   33M   40G   1% /mnt/md6
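The mount above does not survive a reboot. If you want the filesystem mounted persistently, a common optional addition is an /etc/fstab entry keyed on the filesystem UUID:

echo "UUID=$(blkid -s UUID -o value /dev/md6) /mnt/md6 xfs defaults 0 0" >> /etc/fstab
mount -a   # re-read fstab and confirm the new entry mounts without errors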
7. Create test files.
[root@localhost ~]# touch /mnt/md6/test{1..9}.txt
[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
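Listing file names later only proves the directory is still readable. As an optional, slightly stronger check, you can record checksums now and verify them again after the simulated failure:

md5sum /mnt/md6/test*.txt > /root/md6.md5   # record checksums before the failure
md5sum -c /root/md6.md5                     # run again after step 8; every file should report OK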
8. Simulate failures by marking two member disks as faulty.
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md6
[root@localhost ~]# mdadm -f /dev/md6 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md6
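With two members marked faulty (the maximum RAID6 tolerates), the two hot spares should take over automatically and the array starts rebuilding onto them. You can optionally confirm this before checking the files:

cat /proc/mdstat      # sdf1 and sdg1 should now be active members, with a recovery progress bar
mdadm -D /dev/md6     # expect Failed Devices : 2 and the former spares shown as rebuilding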
9. Check the test files.
[root@localhost ~]# ls /mnt/md6/
test1.txt  test2.txt  test3.txt  test4.txt  test5.txt  test6.txt  test7.txt  test8.txt  test9.txt
All nine test files are still present and readable even though two member disks have failed, which is exactly the fault tolerance RAID6 promises.
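The lab outline also calls for re-adding hot spares. As a sketch of how that final step might look (assuming the failed disks /dev/sdb and /dev/sdc are replaced or can be reused):

mdadm /dev/md6 -r /dev/sdb1 /dev/sdc1   # remove the failed members from the array
mdadm /dev/md6 -a /dev/sdb1 /dev/sdc1   # add them back; they become hot spares once the rebuild finishes
mdadm -D /dev/md6                       # verify: four active devices and two spares again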