This post assumes KVM as the hypervisor, with OpenStack Grizzly running on top of RHEL 6.4.
OpenStack VM live migration falls into three categories:
-Block live migration without shared storage
-Shared storage based live migration
-Volume-backed VM live migration
Block live migration
Block live migration does not require shared storage among nova-compute nodes; it copies the VM disk from the source host to the destination host over the network (TCP), so it takes longer to complete than shared storage based live migration. During the migration, host performance is also degraded in terms of network bandwidth and CPU.
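Once a migration is actually in flight (the steps below show how to start one), the data still to be copied can be watched from the source host via libvirt. A minimal check, assuming the instance's libvirt domain name (e.g. instance-0000000a here, discoverable with virsh list) rather than its nova name:
virsh domjobinfo instance-0000000a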
To enable block live migration, we need to change the libvirtd and nova.conf configuration on every nova-compute host:
->Edit /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
->Edit /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"
->Restart libvirtd
service libvirtd restart
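To confirm libvirtd is really listening on TCP after the restart (16509 is libvirt's stock tcp_port; adjust the grep if you changed it):
netstat -lntp | grep 16509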
->Edit /etc/nova/nova.conf and add the following line:
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE
This line tells nova-compute to use libvirt's true live migration functionality; without it, nova-compute falls back to plain (non-live) migration, which suspends the guest before moving it, so it may experience several minutes of downtime.
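For context, this TCP setup pairs with nova's live_migration_uri option; the value below is my understanding of the Grizzly default (you normally don't need to set it explicitly), and it is why libvirtd had to be opened up on TCP with no auth:
live_migration_uri=qemu+tcp://%s/system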
->Restart nova-compute service
service openstack-nova-compute restart
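As a quick sanity check that the compute services came back up (a smiling :-) in the State column means alive):
nova-manage service list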
->Check running VMs
nova list
+--------------------------------------+-------------------+--------+---------------------+
| ID                                   | Name              | Status | Networks            |
+--------------------------------------+-------------------+--------+---------------------+
| 5374577e-7417-4add-b23f-06de3b42c410 | vm-live-migration | ACTIVE | ncep-net=10.20.20.3 |
+--------------------------------------+-------------------+--------+---------------------+
->Check which host the VM is running on
nova show vm-live-migration
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| status                              | ACTIVE                               |
| updated                             | 2013-08-06T08:47:13Z                 |
| OS-EXT-STS:task_state               | None                                 |
| OS-EXT-SRV-ATTR:host                | compute-1                            |
+-------------------------------------+--------------------------------------+
->Check all available hosts
nova host-list
+-----------+---------+------+
| host_name | service | zone |
+-----------+---------+------+
| compute-1 | compute | nova |
| compute-2 | compute | nova |
+-----------+---------+------+
->Let's migrate the VM to host compute-2
nova live-migration --block-migrate vm-live-migration compute-2
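While the copy is running, the instance stays ACTIVE but its task_state should read migrating; one simple way to poll it:
nova show vm-live-migration | grep task_state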
->Later, when we check the VM status again, we can see that the host has changed to compute-2
nova show vm-live-migration
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| status                              | ACTIVE                               |
| updated                             | 2013-08-06T09:45:33Z                 |
| OS-EXT-STS:task_state               | None                                 |
| OS-EXT-SRV-ATTR:host                | compute-2                            |
+-------------------------------------+--------------------------------------+
->We can also run a migration without specifying a destination host; nova-scheduler will choose a free host for us
nova live-migration --block-migrate vm-live-migration
Block live migration is a good option when we don't have (or don't want) shared storage but still need to do host-level maintenance without bringing downtime to running VMs.
Shared storage based live migration
In this example we use GlusterFS as the shared storage for the nova-compute nodes. Since the VM disk lives on shared storage, only memory and device state have to cross the network, so live migration completes much faster than block live migration.
1. Set up a GlusterFS cluster for the nova-compute nodes
->Prepare 4 nodes; each node has an extra disk besides the OS disk, dedicated to the GlusterFS cluster
Node 1: 10.68.125.18 /dev/sdb 1TB
Node 2: 10.68.125.19 /dev/sdb 1TB
Node 3: 10.68.125.20 /dev/sdb 1TB
Node 4: 10.68.125.21 /dev/sdb 1TB
->Configure Nodes 1-4
yum install -y xfsprogs.x86_64
mkfs.xfs -f -i size=512 /dev/sdb
mkdir /brick1
echo "/dev/sdb /brick1 xfs defaults 1 2" >> /etc/fstab
cat /etc/fstab
mount -a && mount
mkdir /brick1/sdb
yum install glusterfs-fuse glusterfs-server
/etc/init.d/glusterd start
chkconfig glusterd on
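A quick way to verify the brick filesystem mounted correctly on each node:
df -h /brick1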
->On Node-1, add peers
gluster peer probe 10.68.125.19
gluster peer probe 10.68.125.20
gluster peer probe 10.68.125.21
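Before creating the volume, it's worth confirming that all three peers joined the cluster; each entry should report State: Peer in Cluster (Connected):
gluster peer status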
->On Node-1, create a volume for nova-compute
gluster volume create nova-gluster-vol replica 2 10.68.125.18:/brick1/sdb 10.68.125.19:/brick1/sdb 10.68.125.20:/brick1/sdb 10.68.125.21:/brick1/sdb
gluster volume start nova-gluster-vol
->We can check the volume information
gluster volume info nova-gluster-vol
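The output should look roughly like the sketch below (exact layout varies by GlusterFS version):
Volume Name: nova-gluster-vol
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.68.125.18:/brick1/sdb
Brick2: 10.68.125.19:/brick1/sdb
Brick3: 10.68.125.20:/brick1/sdb
Brick4: 10.68.125.21:/brick1/sdb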