Software RAID

Configure the RAID for the dom0 during the installation of the physical host.

The dom0 server itself

On a dom0, a software RAID consists of multiple md devices. Create the first, small device (8 GB - 20 GB) during the installation using these steps:

  • In the following menu, scroll to your first disk and hit enter: the partitioner asks whether you want to create an empty partition table. Say "yes". (Hint: this will erase your existing data, if any.)
  • The partitioner returns to the disk overview; scroll one line down to the line "FREE SPACE" and hit enter.
  • Create a partition with the size you need, but remember the size and the logical type.
  • In the "Partition settings" menu, go to "Use as" and hit enter.
  • Change the type to "physical volume for RAID".
  • Finish this partition with "Done setting up the partition".
  • Create other partitions on the same disk, if you like.
  • Now repeat all the steps from the first disk for the second disk.

After this, you should have at least two disks with the same partition scheme, and all partitions (besides swap) should be marked for RAID use.

  • Now look at the first menu entry in the partitioner menu, there is a new line: "Configure software RAID". Go into this menu.
  • Answer the question whether you want to write the changes with "Yes".
  • Now pick "Create MD device".
  • Use RAID1 and give the number of active and spare devices (2 and 0 in our case).
  • In the following menu, select the same device number on the first and second disk and Continue.

Repeat this step for every two devices until you are done. Then use "Finish" from the Multidisk configuration options.

You are back in the partitioner menu and now you see one or more new partitions named "Software RAID Device". You can use these partitions like any normal partition and continue installing your system.
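After the first boot, the state of the new arrays can be checked from the command line; a minimal sketch (the md device name depends on how many arrays you created):

cat /proc/mdstat          # overview of all md devices and their sync state
mdadm --detail /dev/md0   # detailed status of the first array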

More info at: http://www.howtoforge.com/software-raid1-grub-boot-debian-etch

Managing RAID devices (RAID-1 and up!)

Setting a disk faulty/failed:

  mdadm --fail /dev/md0 /dev/hdc1

[Caution] Do NOT ever run this on a raid0 or linear device, or your data is toast!

Removing a faulty disk from an array:

  mdadm --remove /dev/md0 /dev/hdc1

Clearing any previous RAID info on a disk (e.g. reusing a disk from a decommissioned RAID array):

  mdadm --zero-superblock /dev/hdc1

Adding a disk to an array:

  mdadm --add /dev/md0 /dev/hdc1
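Putting these commands together, replacing a failed disk typically looks like the sketch below (the device names /dev/md0 and /dev/hdc1 are taken from the examples above; adjust them to your own setup):

mdadm --fail /dev/md0 /dev/hdc1      # mark the disk as failed (never on raid0 or linear!)
mdadm --remove /dev/md0 /dev/hdc1    # take it out of the array
mdadm --zero-superblock /dev/hdc1    # wipe old raid metadata from the replacement disk
mdadm --add /dev/md0 /dev/hdc1       # add the replacement; the array starts rebuilding
cat /proc/mdstat                     # follow the rebuild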

Adding drives

Add two extra drives to an existing raid5 configuration of two drives. This is done in a few steps:

  • Copy the partition table of the second drive to drives three and four
  • Add the relevant partitions of both drives to the arrays as spares
  • Grow the number of devices in each array
  • Grow the physical volume on each array

Copying partitions to the next drives

The best way to create identical partitions on RAID devices is to copy the partition table onto the next disk:

sfdisk -d /dev/sda | sfdisk /dev/sdb

This does not always go smoothly, especially if the array contains drives of different brands. In that case it is necessary to create the partition table by hand.
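One way to do that is to dump the partition table to a file, adjust it where needed, and feed it back to sfdisk; a sketch, assuming MBR partition tables (the file name is just an example):

sfdisk -d /dev/sda > sda-layout.txt   # dump the layout of the source disk
# edit sda-layout.txt where sizes or offsets need to change
sfdisk /dev/sdd < sda-layout.txt      # apply the adjusted layout to the new disk
sfdisk -l /dev/sdd                    # verify the result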

Growing the array

Growing the array is done by first adding the new partitions as spares and then growing the array onto them:

mdadm -a /dev/md0 /dev/sdb3 /dev/sdd3
mdadm -a /dev/md1 /dev/sdb4 /dev/sdd4
mdadm --grow --raid-devices=4 /dev/md0
mdadm --grow --raid-devices=4 /dev/md1
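Depending on the mdadm version, the reshape may refuse to start without a backup file for the critical section; in that case a command along these lines can be used (the backup path is only an example and must not be on the array that is being grown):

mdadm --grow --raid-devices=4 --backup-file=/root/md0-grow.backup /dev/md0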

Follow the process with:

watch cat /proc/mdstat 

At the end of the process the array shows its new size with:

mdadm --detail /dev/md0
mdadm --detail /dev/md1

The new extra space is not yet available to the physical volume on the array:

  pvresize /dev/md0
  pvresize /dev/md1
  pvscan

The full session on dom0-149 looked like this:

root@dom0-149:~# mdadm --detail /dev/md0
/dev/md0:

       Version : 1.2
 Creation Time : Sun Nov  4 14:26:21 2012
    Raid Level : raid5
    Array Size : 37722624 (35.98 GiB 38.63 GB)
 Used Dev Size : 12574208 (11.99 GiB 12.88 GB)
  Raid Devices : 4
 Total Devices : 4
   Persistence : Superblock is persistent
   Update Time : Wed Nov 21 06:25:12 2012
         State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
        Layout : left-symmetric
    Chunk Size : 512K
          Name : dom0-149:0  (local to host dom0-149)
          UUID : 6e734f54:d0d226ff:36feb94a:8308e813
        Events : 83
   Number   Major   Minor   RaidDevice State
      0       8        3        0      active sync   /dev/sda3
      2       8       35        1      active sync   /dev/sdc3
      3       8       19        2      active sync   /dev/sdb3
      4       8       51        3      active sync   /dev/sdd3

root@dom0-149:~# pvscan

 PV /dev/sdc5   VG kvm-swap   lvm2 [8.00 GiB / 7.12 GiB free]
 PV /dev/md1    VG kvm-data   lvm2 [212.63 GiB / 198.63 GiB free]
 PV /dev/md0    VG kvm-root   lvm2 [11.99 GiB / 3.99 GiB free]
 Total: 3 [232.62 GiB] / in use: 3 [232.62 GiB] / in no VG: 0 [0   ]

root@dom0-149:~# watch cat /proc/mdstat
root@dom0-149:~# pvresize /dev/md0

 Physical volume "/dev/md0" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized

root@dom0-149:~# pvscan

 PV /dev/sdc5   VG kvm-swap   lvm2 [8.00 GiB / 7.12 GiB free]
 PV /dev/md1    VG kvm-data   lvm2 [212.63 GiB / 198.63 GiB free]
 PV /dev/md0    VG kvm-root   lvm2 [35.97 GiB / 27.97 GiB free]
 Total: 3 [256.60 GiB] / in use: 3 [256.60 GiB] / in no VG: 0 [0   ]

root@dom0-149:~#

root@dom0-149:~# mdadm --detail /dev/md1
/dev/md1:

       Version : 1.2
 Creation Time : Sun Nov  4 14:27:02 2012
    Raid Level : raid5
    Array Size : 668891136 (637.90 GiB 684.94 GB)
 Used Dev Size : 222963712 (212.63 GiB 228.31 GB)
  Raid Devices : 4
 Total Devices : 4
   Persistence : Superblock is persistent
   Update Time : Wed Nov 21 08:39:42 2012
         State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
        Layout : left-symmetric
    Chunk Size : 512K
          Name : dom0-149:1  (local to host dom0-149)
          UUID : 6e065d11:4fbae631:5eb4952e:8f46d8c8
        Events : 1049
   Number   Major   Minor   RaidDevice State
      0       8        4        0      active sync   /dev/sda4
      2       8       36        1      active sync   /dev/sdc4
      4       8       52        2      active sync   /dev/sdd4
      3       8       20        3      active sync   /dev/sdb4

root@dom0-149:~# pvscan

 PV /dev/sdc5   VG kvm-swap   lvm2 [8.00 GiB / 7.12 GiB free]
 PV /dev/md1    VG kvm-data   lvm2 [212.63 GiB / 198.63 GiB free]
 PV /dev/md0    VG kvm-root   lvm2 [35.97 GiB / 27.97 GiB free]
 Total: 3 [256.60 GiB] / in use: 3 [256.60 GiB] / in no VG: 0 [0   ]

root@dom0-149:~# pvresize /dev/md1

 Physical volume "/dev/md1" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized

root@dom0-149:~# pvscan

 PV /dev/sdc5   VG kvm-swap   lvm2 [8.00 GiB / 7.12 GiB free]
 PV /dev/md1    VG kvm-data   lvm2 [637.90 GiB / 623.90 GiB free]
 PV /dev/md0    VG kvm-root   lvm2 [35.97 GiB / 27.97 GiB free]
 Total: 3 [681.87 GiB] / in use: 3 [681.87 GiB] / in no VG: 0 [0   ]

root@dom0-149:~# history
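pvresize only makes the extra space available to the volume group; to actually use it, a logical volume and the filesystem on it still have to be grown. A minimal sketch, assuming an ext4 filesystem on a hypothetical logical volume named root in the kvm-root volume group:

lvextend -L +4G /dev/kvm-root/root   # grow the logical volume by 4 GiB
resize2fs /dev/kvm-root/root         # grow the ext4 filesystem to the new size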

LVM on a software RAID

After installing some useful software:

root@host:~# apt-get install mdadm raidutils array-info

Configuration of a new md device:

root@host:~# cat /proc/mdstat
root@host:~# mdadm -v --create /dev/md0 --level=raid5 --raid-devices=2 /dev/sda5 /dev/sdb5

Follow the process with:

root@host:~# watch cat /proc/mdstat
root@host:~# pvcreate /dev/md0
root@host:~# pvscan
root@host:~# array-info -v -d /dev/md0
root@host:~# pvdisplay
root@host:~# vgcreate kvm-root /dev/md0
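On Debian-like systems it can also be useful to record the new array in mdadm.conf, so that it is assembled at boot; a short sketch (the config path may differ per distribution):

root@host:~# mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # append an ARRAY line for the new device
root@host:~# update-initramfs -u                               # let the initramfs know about the array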

After this, follow the normal procedure for building a domU.
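As a starting point for that procedure, a minimal sketch of carving out a disk for a guest from the new volume group (the logical volume name and size are hypothetical):

root@host:~# lvcreate -L 10G -n domu-disk kvm-root    # create a 10 GiB logical volume in kvm-root
root@host:~# lvdisplay /dev/kvm-root/domu-disk        # check the new volume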