Linux System Administration: RAID Configuration Files and Related Commands

To configure RAID, copy the appropriate sample configuration file from the /usr/share/doc/raidtools-1.00.3/ directory into /etc, then edit the copied file and rename it to /etc/raidtab.
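At a glance, the whole procedure follows the pattern below (a sketch only; the RAID level, device names, and mount point are illustrative and must match your own raidtab):

# pick the sample that matches the RAID level you want, e.g. RAID 1
cp /usr/share/doc/raidtools-1.00.3/raid1.conf.sample /etc/raidtab
vi /etc/raidtab          # adjust devices, disk counts, chunk size
mkraid /dev/md0          # build the array described in /etc/raidtab
mkfs.ext3 /dev/md0       # put a filesystem on the new md device
mount /dev/md0 /opt      # mount it wherever you need it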

[root@localhost root]# cp /usr/share/doc/raidtools-1.00.3/raid*.conf.* /etc
[root@localhost root]# ls -l /etc/ | grep raid
-rw-r--r--    1 root     root          542 Mar 13 21:21 raid0.conf.sample
-rw-r--r--    1 root     root          179 Mar 13 21:21 raid1.conf.sample
-rw-r--r--    1 root     root          250 Mar 13 21:21 raid4.conf.sample
-rw-r--r--    1 root     root          462 Mar 13 21:21 raid5.conf.sample

– Configuring RAID 0 –

[root@localhost root]# vi /etc/raid0.conf.sample    # view the RAID 0 sample configuration
# Sample raid-0 configuration

raiddev         /dev/md0    # device name of the RAID array to create

raid-level      0    # the RAID level; it's not obvious but this *must* be
                     # right after raiddev

persistent-superblock 0    # set this to 1 if you want autostart,
                           # BUT SETTING TO 1 WILL DESTROY PREVIOUS
                           # CONTENTS if this is a RAID0 array created
                           # by older raidtools (0.40-0.51) or mdtools!

chunk-size      16    # chunk size (in KB)

nr-raid-disks   2    # number of disks in the array (nr = number)
nr-spare-disks  0    # number of spare disks

device          /dev/hda1   # change to match your actual disks
raid-disk       0           # index of this disk within the array

device          /dev/hdb1   # change to match your actual disks
raid-disk       1           # index of this disk within the array

The RAID 0 configuration concatenates (stripes) the member disks; it provides no redundancy.

– Configuring RAID 1 –

[root@localhost root]# vi /etc/raid1.conf.sample    # view the RAID 1 sample configuration
# Sample raid-1 configuration
raiddev         /dev/md0    # device name of the RAID array to create
raid-level      1           # the RAID level to create
nr-raid-disks   2           # number of disks in the array (nr = number)
nr-spare-disks  0           # number of spare disks
chunk-size      4           # chunk size (in KB)

device          /dev/hda1   # change to match your actual disks
raid-disk       0           # index of this disk within the array

device          /dev/hdb1   # change to match your actual disks
raid-disk       1           # index of this disk within the array

RAID 1 mirrors its members for redundancy; it requires an even number of disks, with a minimum of two.
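Stripped of its comments and pointed at the SCSI partitions that actually appear in the mkraid transcript below, the finished /etc/raidtab for this RAID 1 array would read (a sketch; substitute your own partitions):

raiddev         /dev/md0
raid-level      1
nr-raid-disks   2
nr-spare-disks  0
chunk-size      4
device          /dev/sdb1
raid-disk       0
device          /dev/sdc1
raid-disk       1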

[root@localhost root]# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 4192933kB, raid superblock at 4192832kB
disk 1: /dev/sdc1, 4192933kB, raid superblock at 4192832kB

[root@localhost root]# mkfs.ext3 /dev/md0
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
524288 inodes, 1048208 blocks
52410 blocks (5.00%) reserved for the super user
First data block=0
32 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost root]# lsraid -A -a /dev/md0    # check the status of md0
[dev   9,   0] /dev/md0         8A28FBC4.EA10ACB9.6BB5ABF9.A735D161 online
[dev   8,  17] /dev/sdb1        8A28FBC4.EA10ACB9.6BB5ABF9.A735D161 good
[dev   8,  33] /dev/sdc1        8A28FBC4.EA10ACB9.6BB5ABF9.A735D161 good

When the array is no longer used, simply delete the /etc/raidtab file:
# rm /etc/raidtab
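In practice you would unmount and stop the array before deleting the file; a sketch using raidstop, which ships in the same raidtools package:

umount /dev/md0        # unmount first, if the array is mounted anywhere
raidstop /dev/md0      # stop the md device
rm /etc/raidtab        # then remove the configuration file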

– Configuring RAID 5 –

RAID 5 requires at least three disks, so another disk is added for this experiment. (Usable capacity is one disk less than the total: three ~4 GB members yield the ~7.9 GB seen in the df output below.)

[root@localhost root]# fdisk -l    # list the current disks and partitions

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1       652   5237158+  83  Linux

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1       522   4192933+  83  Linux

Disk /dev/sdc: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdc1             1       522   4192933+  83  Linux

Disk /dev/sdd: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
[root@localhost root]# fdisk /dev/sdd    # partition the newly added disk /dev/sdd

Command (m for help): n    # create a new partition
Command action
   e   extended
   p   primary partition (1-4)
p    # choose a primary partition
Partition number (1-4): 1    # enter the partition number
First cylinder (1-522, default 1):    # press Enter to accept the default
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-522, default 522):    # press Enter to accept the default
Using default value 522

Command (m for help): w    # write the partition table and exit
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost root]# fdisk -l    # list the partitions again

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1       652   5237158+  83  Linux

Disk /dev/sdb: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdb1             1       522   4192933+  83  Linux

Disk /dev/sdc: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdc1             1       522   4192933+  83  Linux

Disk /dev/sdd: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sdd1             1       522   4192933+  83  Linux
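Incidentally, fdisk reads its keystrokes from standard input, so the interactive session above can be scripted; a sketch feeding it the same answers (n, p, 1, two accepted defaults, w) via a here-document:

fdisk /dev/sdd <<EOF
n
p
1


w
EOF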

[root@localhost root]# cp /usr/share/doc/raidtools-1.00.3/raid5.conf.sample /etc/raidtab    # copy the sample config into place
[root@localhost root]# vi /etc/raidtab    # edit the configuration
# Sample raid-5 configuration
raiddev         /dev/md0
raid-level      5
nr-raid-disks   3
chunk-size      4

# Parity placement algorithm

#parity-algorithm left-asymmetric

## the best one for maximum performance:
#parity-algorithm       left-symmetric

#parity-algorithm       right-asymmetric
#parity-algorithm       right-symmetric

# Spare disks for hot reconstruction
#nr-spare-disks 0

device          /dev/sdb1    # change to match your partitions
raid-disk       0

device          /dev/sdc1    # change to match your partitions
raid-disk       1

device          /dev/sdd1    # change to match your partitions
raid-disk       2

[root@localhost root]# mkraid /dev/md0    # create the RAID device /dev/md0
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 4192933kB, raid superblock at 4192832kB
/dev/sdb1 appears to be already part of a raid array -- use -f to    # mkraid suggests the -f option
force the destruction of the old superblock
mkraid: aborted.
(In addition to the above messages, see the syslog and /proc/mdstat as well
for potential clues.)
[root@localhost root]# mkraid -f /dev/md0    # try again with the -f option

WARNING!    # the following warning is printed

NOTE: if you are recovering a double-disk error or some other failure mode
that made your array unrunnable but data is still intact then it's strongly
recommended to use the lsraid utility and to read the lsraid HOWTO.

If your RAID array holds useful and not yet backed up data then --force
and the hot-add/hot-remove functionality should be used with extreme care!
If your /etc/raidtab file is not in sync with the real array configuration,
then --force might DESTROY ALL YOUR DATA. It's especially dangerous to use
-f if the array is in degraded mode.

If your /etc/raidtab file matches the real layout of on-disk data then
recreating the array will not hurt your data, but be aware of the risks
of doing this anyway: freshly created RAID1 and RAID5 arrays do a full
resync of their mirror/parity blocks, which, if the raidtab is incorrect,
the resync will wipe out data irrecoverably. Also, if your array is in
degraded mode then the raidtab must match the degraded config exactly,
otherwise you'll get the same kind of data destruction during resync.
(see the failed-disk raidtab option.) You have been warned!

[ If your array holds no data, or you have it all backed up, or if you
know precisely what you are doing and you still want to proceed then use
the --really-force (or -R) flag. ]    # mkraid suggests the -R flag for a forced, destructive rebuild
[root@localhost root]# mkraid -R /dev/md0    # force-recreate the RAID device
DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/sdb1, 4192933kB, raid superblock at 4192832kB
disk 1: /dev/sdc1, 4192933kB, raid superblock at 4192832kB
disk 2: /dev/sdd1, 4192933kB, raid superblock at 4192832kB
[root@localhost root]# more /proc/mdstat    # check the kernel's RAID status
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  resync = 12.1% (510136/4192832) finish=5.9min speed=10324K/sec
unused devices: <none>
[root@localhost root]# lsraid -A -a /dev/md0    # verify the array is healthy
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  17] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[root@localhost root]# mkfs.ext3 /dev/md0    # create an ext3 filesystem on the array
mke2fs 1.32 (09-Nov-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
1048576 inodes, 2096416 blocks
104820 blocks (5.00%) reserved for the super user
First data block=0
64 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@localhost root]# mount /dev/md0 /opt    # mount the RAID device on /opt
[root@localhost root]# df -lh    # check the mounted filesystems
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             5.0G  1.1G  3.6G  24% /
none                   78M     0   78M   0% /dev/shm
/dev/md0              7.9G   33M  7.5G   1% /opt
[root@localhost root]# mount    # list the current mounts
/dev/sda1 on / type ext3 (rw)
none on /proc type proc (rw)
usbdevfs on /proc/bus/usb type usbdevfs (rw)
none on /dev/pts type devpts (rw,gid=5,mode=620)
none on /dev/shm type tmpfs (rw)
/dev/md0 on /opt type ext3 (rw)
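As the mke2fs output notes, the periodic checks can be tuned with tune2fs; for example, to disable both the mount-count and the time-based check on the new filesystem:

tune2fs -c 0 -i 0 /dev/md0    # a count/interval of 0 disables the respective check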

How do you recover a damaged RAID device?

[root@localhost root]# lsraid -A -a /dev/md0    # confirm the array is healthy before the test
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  17] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good


[root@localhost root]# raidsetfaulty --help    # raidsetfaulty simulates a disk failure in software
usage: raidsetfaulty [--all] [--configfile] [--help] [--version] [-achv] </dev/md?>*
[root@localhost root]# raidsetfaulty /dev/md0 /dev/sdb1    # mark one member partition of the array as failed
[root@localhost root]# lsraid -A -a /dev/md0    # check the member partitions again
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  17] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 failed
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good

[root@localhost root]# fdisk /dev/sde    # partition the newly added disk /dev/sde, which will replace the failed /dev/sdb1

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-522, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-522, default 522):
Using default value 522

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@localhost root]# raidhot    # type "raidhot" and press Tab twice; the shell completes the matching commands
raidhotadd  raidhotremove
[root@localhost root]# raidhotadd /dev/md0 /dev/sde1    # hot-add the replacement partition /dev/sde1
[root@localhost root]# more /proc/mdstat    # the kernel is rebuilding the array automatically
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/2] [_UU]
      [>....................]  recovery =  4.2% (176948/4192832) finish=6.4min speed=10408K/sec
unused devices: <none>
[root@localhost root]# more /proc/mdstat    # recovery is progressing
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/2] [_UU]
      [============>........]  recovery = 62.3% (2615160/4192832) finish=2.5min speed=10315K/sec
unused devices: <none>
[root@localhost root]# lsraid -A -a /dev/md0    # query the member partitions during the rebuild
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  65] /dev/sdb1        83824C00.34A9A7ED.D8D5B7A8.4B582652 failed
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  17] /dev/sde1        83824C00.34A9A7ED.D8D5B7A8.4B582652 spare
[root@localhost root]# more /proc/mdstat    # still rebuilding
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/2] [_UU]
      [===============>.....]  recovery = 75.0% (3149228/4192832) finish=1.6min speed=10303K/sec
unused devices: <none>
[root@localhost root]# more /proc/mdstat    # the rebuild has finished
Personalities : [raid5]
read_ahead 1024 sectors
md0 : active raid5 sde1[3] sdd1[2] sdc1[1] sdb1[0](F)
      8385664 blocks level 5, 4k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
[root@localhost root]# lsraid -A -a /dev/md0    # query the member partitions again
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  65] /dev/sde1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good

[root@localhost root]# raidhotremove /dev/md0 /dev/sdb1    # remove the failed partition /dev/sdb1
[root@localhost root]# lsraid -A -a /dev/md0    # verify the array is healthy again
[dev   9,   0] /dev/md0         83824C00.34A9A7ED.D8D5B7A8.4B582652 online
[dev   8,  65] /dev/sde1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  33] /dev/sdc1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
[dev   8,  49] /dev/sdd1        83824C00.34A9A7ED.D8D5B7A8.4B582652 good
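To recap, the whole replacement procedure demonstrated above boils down to four steps:

raidsetfaulty /dev/md0 /dev/sdb1    # (or a real failure) the member is marked 'failed'
fdisk /dev/sde                      # partition a replacement disk of the same size
raidhotadd /dev/md0 /dev/sde1       # hot-add it; the kernel resyncs automatically
raidhotremove /dev/md0 /dev/sdb1    # finally drop the failed member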

To have a spare disk rebuild the array automatically, add two lines to /etc/raidtab (a device line and a spare-disk line) and update the number after nr-spare-disks:

[root@localhost root]# cat /etc/raidtab
# Sample raid-5 configuration
raiddev         /dev/md0
raid-level      5
nr-raid-disks   3
nr-spare-disks  1    # update the spare-disk count here
chunk-size      4

# Parity placement algorithm

#parity-algorithm left-asymmetric

## the best one for maximum performance:
#parity-algorithm       left-symmetric

#parity-algorithm       right-asymmetric
#parity-algorithm       right-symmetric

# Spare disks for hot reconstruction

device          /dev/sdb1
raid-disk       0

device          /dev/sdc1
raid-disk       1

device          /dev/sdd1
raid-disk       2

device          /dev/sde1    # the spare device added here
spare-disk      0            # spare disks are numbered starting from 0

Note: the spare-disk entry must not be placed before the raid-disk entries. The following arrangement causes problems:

[root@localhost root]# cat /etc/raidtab
# Sample raid-5 configuration
raiddev         /dev/md0
raid-level      5
nr-raid-disks   3
nr-spare-disks  1    # the spare-disk count was updated here
chunk-size      4

# Parity placement algorithm

#parity-algorithm left-asymmetric

## the best one for maximum performance:
#parity-algorithm       left-symmetric

#parity-algorithm       right-asymmetric
#parity-algorithm       right-symmetric

# Spare disks for hot reconstruction

device          /dev/sde1    # the spare device, wrongly listed first
spare-disk      0            # spare disks are numbered starting from 0

device          /dev/sdb1
raid-disk       0

device          /dev/sdc1
raid-disk       1

device          /dev/sdd1
raid-disk       2

With that ordering mkraid fails and /dev/md0 cannot be created, so always list the spare disk after the raid disks.
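If you are ever unsure whether /etc/raidtab still matches the running array, lsraid can reportedly regenerate a raidtab-style description from the live superblocks (the -R output mode described in the lsraid HOWTO mentioned earlier; check your version's man page before relying on it):

lsraid -R -a /dev/md0    # print the array's layout in raidtab format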

Finally, a look at RAID 0+1, often loosely called RAID 10. When building this mode it is best to create the RAID 0 array first and then the RAID 1 array on top of it.

In the configuration file the RAID 1 array simply lists the RAID 0 device /dev/md0 as one of its members; see the raidtab below:

[root@localhost root]# vi /etc/raidtab
# Sample raid-0 configuration

raiddev /dev/md0

raid-level      0    # it's not obvious but this *must* be
                     # right after raiddev

persistent-superblock 0    # set this to 1 if you want autostart,
                           # BUT SETTING TO 1 WILL DESTROY PREVIOUS
                           # CONTENTS if this is a RAID0 array created
                           # by older raidtools (0.40-0.51) or mdtools!

chunk-size 16

nr-raid-disks   2
nr-spare-disks  0

device          /dev/sdb1
raid-disk       0

device          /dev/sdc1
raid-disk       1

raiddev /dev/md1

raid-level 1

nr-raid-disks 2

chunk-size 4

device          /dev/sdd1
raid-disk       0

device          /dev/md0
raid-disk       1

[root@localhost root]# mkraid /dev/md0
handling MD device /dev/md0
analyzing super-block
[root@localhost root]# mkraid /dev/md1
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/sdd1, 4192933kB, raid superblock at 4192832kB
/dev/sdd1 appears to be already part of a raid array -- use -f to
force the destruction of the old superblock
mkraid: aborted.
(In addition to the above messages, see the syslog and /proc/mdstat as well
for potential clues.)
[root@localhost root]# mkraid -R /dev/md1
DESTROYING the contents of /dev/md1 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md1
analyzing super-block
disk 0: /dev/sdd1, 4192933kB, raid superblock at 4192832kB
disk 1: /dev/md0, 8385856kB, raid superblock at 8385792kB
[root@localhost root]# lsraid -A -a /dev/md1
[dev   9,   1] /dev/md1         0D874FBE.5DCF83BF.44319094.24463119 online
[dev   8,  49] /dev/sdd1        0D874FBE.5DCF83BF.44319094.24463119 good
[dev   9,   0] /dev/md0         0D874FBE.5DCF83BF.44319094.24463119 good

[root@localhost root]# lsraid -A -a /dev/md0
[dev   9,   0] /dev/md0         A5689BB7.0C86653E.5E760E64.CCC163AB online
[dev   8,  17] /dev/sdb1        A5689BB7.0C86653E.5E760E64.CCC163AB good
[dev   8,  33] /dev/sdc1        A5689BB7.0C86653E.5E760E64.CCC163AB good

[dev   9,   1] /dev/md1         0D874FBE.5DCF83BF.44319094.24463119 online
[dev   8,  49] /dev/sdd1        0D874FBE.5DCF83BF.44319094.24463119 good
[dev   9,   0] /dev/md0         0D874FBE.5DCF83BF.44319094.24463119 good
[root@localhost root]# more /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md1 : active raid1 md0[1] sdd1[0]
      4192832 blocks [2/2] [UU]
      [>....................]  resync =  0.6% (26908/4192832) finish=261.3min speed=263K/sec
md0 : active raid0 sdc1[1] sdb1[0]
      8385856 blocks 16k chunks

unused devices: <none>
[root@localhost root]# mkfs.ext3 /dev/md1
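Rather than re-running more by hand, you can follow the mirror's long resync continuously with the standard watch utility:

watch -n 10 cat /proc/mdstat    # refresh the status every 10 seconds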

