Linux Software RAID in Practice (5)

                [root@server ~]# mdadm /dev/md2 --fail /dev/sdh1
                mdadm: set /dev/sdh1 faulty in /dev/md2

                [root@server ~]# cat /proc/mdstat
                Personalities : [raid6] [raid5] [raid4] [raid1] [raid0]
                md2 : active raid5 sdj1[3] sdi1[2] sdh1[4](F) sdg1[0]
                      16771584 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
                      [>....................]  recovery =  0.8% (74172/8385792) finish=7.4min speed=18543K/sec
         When we first checked the status, sdj1 was the hot spare. After sdh1 is marked as failed, the system automatically rebuilds its data onto the spare sdj1; while the rebuild runs, the status field reads [U_U].
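The rebuild progress can be scraped from /proc/mdstat in a script; a minimal sketch, assuming the recovery-line format shown above:

```shell
# Pull the completion percentage out of an mdstat recovery line.
# (Piping `cat /proc/mdstat` into this works the same way.)
progress() {
    grep -o 'recovery = *[0-9.]*%' | awk '{print $NF}'
}

echo '  [>..........]  recovery =  0.8% (74172/8385792) finish=7.4min' | progress   # → 0.8%
```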
         The failed disk can then be removed with --remove (mdadm also accepts --fail and --remove for the same device in a single command).
                  [root@server ~]# mdadm /dev/md2 --remove /dev/sdh1
                  mdadm: hot removed /dev/sdh1
         The status now shows only three member disks, with no hot spare left.
                  [root@server ~]# cat /proc/mdstat
                  Personalities : [raid6] [raid5] [raid4] [raid1] [raid0]
                  md2 : active raid5 sdj1[1] sdi1[2] sdg1[0]
                        16771584 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
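The [UUU]/[U_U] field also lends itself to scripted health checks: every '_' is a failed or missing member. A minimal sketch, assuming the mdstat format shown above:

```shell
# Exit 0 if a status field like [UUU] has no failed slots,
# non-zero if any member is down ('_').
array_healthy() {
    case "$1" in
        *_*) return 1 ;;   # degraded: at least one slot is down
        *)   return 0 ;;
    esac
}

array_healthy '[UUU]' && echo "healthy"     # → healthy
array_healthy '[U_U]' || echo "degraded"    # → degraded
```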
         Once the failed disk has been repaired or replaced, it can be added back to the array as a hot spare.
         Use --add to add a hot spare.
                  [root@server ~]# mdadm /dev/md2 --add /dev/sdh1
                  mdadm: added /dev/sdh1
                  [root@server ~]# cat /proc/mdstat
                  Personalities : [raid6] [raid5] [raid4] [raid1] [raid0]
                  md2 : active raid5 sdh1[3](S) sdj1[1] sdi1[2] sdg1[0]
                        16771584 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
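In the output above, the (S) suffix marks sdh1 as a spare. A script can pick spares out of a device line by matching that flag; a minimal sketch against the mdstat format shown:

```shell
# Print the names of devices flagged as spares ((S)) in an mdstat line.
spares() {
    tr ' ' '\n' | grep '(S)' | sed 's/\[[0-9]*\](S)$//'
}

echo 'md2 : active raid5 sdh1[3](S) sdj1[1] sdi1[2] sdg1[0]' | spares   # → sdh1
```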

         In addition, --grow can be used to increase the number of active disks in the array.

                  [root@server ~]# mdadm --grow /dev/md2 --raid-disks=4
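RAID5 usable capacity is (n − 1) × member size, so adding a fourth active disk buys one more member's worth of space. A quick check against the sizes in the mdstat output above (note that --grow only resizes the array itself; an ext3 filesystem on top would still need to be enlarged afterwards, e.g. with resize2fs):

```shell
# RAID5 usable size = (active disks - 1) * per-member size.
member=8385792                                   # per-member 1K blocks, from mdstat
echo "3 disks: $(( (3 - 1) * member )) blocks"   # → 16771584, matching above
echo "4 disks: $(( (4 - 1) * member )) blocks"   # → 25157376
```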

 

        4. Using the arrays
        Create three mount points:
                [root@server ]# mkdir /mnt/MD0
                [root@server ]# mkdir /mnt/MD1
                [root@server ]# mkdir /mnt/MD2
        Create a filesystem on each RAID device:
                [root@server ]# mkfs.ext3 /dev/md0
                [root@server ]# mkfs.ext3 /dev/md1
                [root@server ]# mkfs.ext3 /dev/md2
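For the RAID5 array it can pay to align ext3 to the 64k chunk size: with 4k filesystem blocks that is a stride of 16 blocks, and with two data disks a stripe width of 32. A sketch of the arithmetic (the -E stride/stripe-width options belong to mke2fs; stripe-width requires a reasonably recent e2fsprogs):

```shell
# ext3 alignment for md2: 64k chunk, 4k blocks, 3-disk RAID5 (2 data disks).
chunk_kb=64; block_kb=4; data_disks=2
stride=$(( chunk_kb / block_kb ))         # 16 filesystem blocks per chunk
stripe_width=$(( stride * data_disks ))   # 32 blocks per full data stripe
echo "mkfs.ext3 -E stride=$stride,stripe-width=$stripe_width /dev/md2"
```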
        Mount them:
                [root@server ]# mount /dev/md0 /mnt/MD0
                [root@server ]# mount /dev/md1 /mnt/MD1
                [root@server ]# mount /dev/md2 /mnt/MD2
        Check the result:
                [root@server ]# df -h
                ......
                /dev/md0               16G  173M   15G   2% /mnt/MD0
                /dev/md1              7.9G  147M  7.4G   2% /mnt/MD1
                /dev/md2               16G  173M   15G   2% /mnt/MD2
        The RAID devices can now be used normally.
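To have the filesystems mounted automatically at boot, entries can be added to /etc/fstab; a sketch, with illustrative default options:

```
# /etc/fstab entries for the three arrays
/dev/md0    /mnt/MD0    ext3    defaults    0 0
/dev/md1    /mnt/MD1    ext3    defaults    0 0
/dev/md2    /mnt/MD2    ext3    defaults    0 0
```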
        To stop a RAID device, first unmount it:
                [root@server ~]# umount /mnt/MD0
        Then stop the device with:
                [root@server ~]# mdadm --stop /dev/md0
                mdadm: stopped /dev/md0
        Checking again shows that md0 is gone:
                [root@server ~]# cat /proc/mdstat | grep md0
        To use it again, the array must be reassembled. Since it was created earlier, mdadm's assemble mode can read the metadata on the underlying devices and bring them back up as an active array:
                [root@server ~]# mdadm --assemble /dev/md0 /dev/sd[bc]1
                mdadm: /dev/md0 has been started with 2 drives.
                [root@server ~]# cat /proc/mdstat  | grep md0
                md0 : active raid0 sdb1[0] sdc1[1]
        The array can now be mounted and used again.
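Rather than naming the member partitions each time, the arrays can also be recorded in /etc/mdadm.conf, after which `mdadm --assemble --scan` brings them up automatically. A sketch of such an entry (the UUID is a placeholder for what `mdadm --detail --scan` actually prints on your system):

```
# /etc/mdadm.conf -- entries can be generated with: mdadm --detail --scan
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=<your-array-uuid>
```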

 
