Nova is the Compute engine of the OpenStack project.
This is a deep dive into what happens (and where in the code) during a resize up (e.g., flavor 2 to flavor 4) with a Nova configuration that is backed by XenServer/XenAPI, based on the currently available code (commit 114109dbf4094ae6b6333d41c84bebf6f85c4e48, 2012-09-13).
Hopefully the methods used in Nova's code will not change over time, and this guide will remain a good starting point.
Some abstractions such as go-between RPC calls and basic XenAPI calls have been deliberately ignored.
Disclaimer: I am _not_ a developer, and this is just my best guess through an overly-caffeinated code dive. Corrections are welcome.
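For context, here is roughly how a resize like this gets kicked off from the client side using python-novaclient. The credentials, endpoint, server name, and flavor ID below are placeholders, not anything taken from the code dive itself:

```python
# Rough sketch: kicking off a resize up from the client side with
# python-novaclient's v1_1 bindings. Credentials/endpoint/IDs are placeholders.
from novaclient.v1_1 import client

nova = client.Client("username", "api_key", "project_id",
                     "http://keystone.example.com:5000/v2.0/")

server = nova.servers.find(name="my-instance")

# Ask Nova to resize the instance to a bigger flavor (e.g. 2 -> 4).
nova.servers.resize(server, flavor=4)

# Once the resize lands in VERIFY_RESIZE, the tenant confirms (or reverts) it.
nova.servers.confirm_resize(server)
```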
1) The API receives a resize request.
2) Request validations are performed.
3) Quota verifications are performed (sketched below).
- ./nova/compute/api.py
- def resize
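To make the quota verification in step 3 concrete: resizing up means reserving the difference between the old and new flavors before anything is handed to the scheduler. The snippet below is a standalone simplification with made-up flavor specs, not Nova's actual quota code:

```python
# Standalone simplification of the quota check performed for a resize up.
# The flavor specs are illustrative; Nova's real code reserves these deltas
# via its quota engine before casting the request to the scheduler.
FLAVORS = {
    2: {"memory_mb": 2048, "vcpus": 1, "root_gb": 20},
    4: {"memory_mb": 8192, "vcpus": 4, "root_gb": 80},
}

def resize_deltas(old_flavor_id, new_flavor_id):
    """Return the additional RAM/vCPUs the new flavor needs over the old one."""
    old, new = FLAVORS[old_flavor_id], FLAVORS[new_flavor_id]
    return {"ram": new["memory_mb"] - old["memory_mb"],
            "cores": new["vcpus"] - old["vcpus"]}

if __name__ == "__main__":
    # A flavor 2 -> 4 resize needs 6144 MB more RAM and 3 more vCPUs reserved.
    print(resize_deltas(2, 4))
```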
4) The scheduler is notified to prepare the resize request. A target host is selected for the resize and notified (simplified sketch below).
- ./nova/scheduler/filter_scheduler.py
- def schedule_prep_resize
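A drastically simplified picture of what the filter scheduler does for a resize: drop the hosts that cannot fit the new flavor, weigh the survivors, and send prep_resize to the winner. This toy version uses plain dicts and no RPC, so it only shows the shape of that decision:

```python
# Toy illustration of host selection for a resize: filter hosts that can fit
# the new flavor, then pick the one with the most free RAM. Nova's real filter
# scheduler does this with pluggable filters/weighers and then casts
# prep_resize over RPC to the chosen compute host.
HOSTS = [
    {"name": "compute1", "free_ram_mb": 4096,  "free_disk_gb": 100},
    {"name": "compute2", "free_ram_mb": 16384, "free_disk_gb": 500},
    {"name": "compute3", "free_ram_mb": 8192,  "free_disk_gb": 50},
]

def select_host(hosts, new_flavor):
    fits = [h for h in hosts
            if h["free_ram_mb"] >= new_flavor["memory_mb"]
            and h["free_disk_gb"] >= new_flavor["root_gb"]]
    if not fits:
        raise RuntimeError("No valid host found for resize")
    return max(fits, key=lambda h: h["free_ram_mb"])

if __name__ == "__main__":
    flavor4 = {"memory_mb": 8192, "vcpus": 4, "root_gb": 80}
    print(select_host(HOSTS, flavor4)["name"])   # compute2
```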
5) Usage notifications are sent as the resize is being prepared. A migration entry is created in the nova database with the status pre-migrating, and resize_instance (6) is fired.
- ./nova/compute/manager.py
- def prep_resize
6) The migration record is updated to migrating, and the instance's task_state is updated from resize preparation to resize migrating. Usages receive notification that the resize has started. The instance is renamed on the source to have a suffix of -orig. migrate_disk_and_power_off (6a) is invoked. The migration record is then updated to post-migrating, and the instance's task_state to resize migrated. Finally, finish_resize (6b) is called (the ordering is sketched below).
- ./nova/compute/manager.py
- def resize_instance
- def migrate_disk_and_power_off
- ./nova/virt/xenapi/vmops.py
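Paraphrasing step 6 as code, the ordering of record updates and driver calls looks roughly like this. The db, notifier, driver, and compute_rpcapi objects are stand-ins, so this only shows the sequence, not Nova's real implementation:

```python
# Paraphrased ordering of step 6 (resize_instance); stubs stand in for the
# database, notifier, XenAPI driver, and compute RPC API, so this is only
# meant to illustrate the sequence of events.
def resize_instance(instance, migration, db, notifier, driver, compute_rpcapi):
    db.migration_update(migration, status="migrating")
    db.instance_update(instance, task_state="resize_migrating")
    notifier.notify("compute.instance.resize.start", instance)

    # On the source, the instance keeps its data under a "-orig" suffixed name
    # while the disks are shipped to the destination (6a).
    disk_info = driver.migrate_disk_and_power_off(instance, migration["dest"])

    db.migration_update(migration, status="post-migrating")
    db.instance_update(instance, task_state="resize_migrated")

    # Hand off to the destination host to finish the resize (6b).
    compute_rpcapi.finish_resize(instance, migration, disk_info,
                                 host=migration["dest"])
```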
6a) migrate_disk_and_power_off, where the work begins… Progress is zeroed out, and a resize up or down is detected (sketched below). We take the resize-up code path in 6a1.
- ./nova/virt/xenapi/driver.py -> ./nova/virt/xenapi/vmops.py
- def migrate_disk_and_power_off
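The up-versus-down detection in 6a essentially compares the instance's current root disk size with what the new flavor allows. A standalone simplification:

```python
# Simplified version of the resize up/down detection in 6a: compare the
# instance's current root disk size to the new flavor's root_gb.
def choose_resize_path(current_root_gb, new_root_gb):
    if new_root_gb == 0:
        raise ValueError("new flavor has no root disk defined")
    if new_root_gb < current_root_gb:
        return "resize_down"    # a different code path shrinks the disk first
    return "resize_up"          # snapshot + transfer, handled in 6a1

if __name__ == "__main__":
    print(choose_resize_path(current_root_gb=40, new_root_gb=80))  # resize_up
    print(choose_resize_path(current_root_gb=80, new_root_gb=40))  # resize_down
```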
6a1) Snapshot the instance. Via _migrate_vhd, transfer the immutable VHDs; these are the base copy, or parent VHDs, belonging to the instance. The instance's resize progress is updated. Power down the instance. Again via _migrate_vhd (steps 6a1i-6a1v), transfer the COW VHD, i.e. the changes which have occurred since the snapshot was taken (the overall ordering is sketched below).
- ./nova/virt/xenapi/vmops.py
- def _migrate_disk_resizing_up
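Expressed as pseudo-Python, 6a1 is roughly the following sequence. The helper objects (vm, snapshot, vdi_chain, transfer, progress) are stand-ins for the real vmops/vm_utils calls:

```python
# Paraphrased flow of _migrate_disk_resizing_up (6a1). The helper objects are
# stand-ins; the point is the ordering: snapshot, ship the immutable parent
# VHDs while the VM is still running, power off, then ship the small COW VHD.
def migrate_disk_resizing_up(dest, vm, vdi_chain, transfer, progress):
    snapshot = vm.snapshot()                       # freeze a consistent base
    try:
        # 6a1i-6a1v: move the immutable (parent/base-copy) VHDs first.
        for seq, vhd in enumerate(snapshot.parent_vhds(), start=1):
            transfer(vhd, dest, seq_num=seq)
            progress.update()

        vm.clean_shutdown()                        # power down the instance

        # Ship the COW VHD: only the writes made since the snapshot.
        transfer(vdi_chain.cow_vhd(), dest, seq_num=0)
        progress.update()
    finally:
        snapshot.destroy()                         # don't leak the snapshot
```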
6a1i) Call the XenAPI plugin on the hypervisor to transfer_vhd (sketched below).
- ./nova/virt/xenapi/vmops.py
- def _migrate_vhd
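In raw XenAPI terms, "call the plugin on the hypervisor" looks like the snippet below. Nova does this through its session wrapper, and the plugin argument names shown here are illustrative rather than the migration plugin's exact API:

```python
# Rough sketch of calling a dom0 plugin with the raw XenAPI bindings. Nova
# wraps this in its session helper; the argument names below are illustrative.
import XenAPI

session = XenAPI.Session("https://xenserver.example.com")
session.xenapi.login_with_password("root", "password")
try:
    host_ref = session.xenapi.host.get_all()[0]   # the local XenServer host
    # host.call_plugin runs /etc/xapi.d/plugins/<plugin> in dom0; the args
    # dict is handed to the named function and a string comes back.
    result = session.xenapi.host.call_plugin(
        host_ref, "migration", "transfer_vhd",
        {"instance_uuid": "INSTANCE_UUID",
         "host": "DEST_HOST_IP",
         "vdi_uuid": "VDI_UUID",
         "seq_num": "1"})
    print(result)
finally:
    session.xenapi.session.logout()
```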
6a1ii) Make a staging area on the source server to prepare the VHD transfer.
- ./plugins/xenserver/xenapi/etc/xapi.d/plugins/migration
- def transfer_vhd
6a1iii) Hard link the VHD(s) being transferred into the staging area (illustrated below).
- ./plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
- def prepare_staging_area
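The hard-linking trick in 6a1iii is plain os.link: the VHD files stay where the SR put them, and the staging directory just gets cheap aliases for rsync to read from. A standalone illustration with made-up paths and UUIDs:

```python
# Standalone illustration of the staging-area hard link in 6a1iii. The SR's
# VHD files are not copied; the staging directory just gets hard links to
# them (same inode, no extra disk space), which rsync then reads from.
import os
import tempfile

def prepare_staging_area(sr_path, staging_path, vhd_uuids):
    for seq, vhd_uuid in enumerate(vhd_uuids):
        source = os.path.join(sr_path, "%s.vhd" % vhd_uuid)
        # Inside the staging area the files are named by their position in
        # the chain, not by their SR UUIDs.
        os.link(source, os.path.join(staging_path, "%d.vhd" % seq))

if __name__ == "__main__":
    sr = tempfile.mkdtemp()
    staging = tempfile.mkdtemp()
    for fake_uuid in ("aaaa", "bbbb"):
        open(os.path.join(sr, fake_uuid + ".vhd"), "w").close()
    prepare_staging_area(sr, staging, ["aaaa", "bbbb"])
    print(sorted(os.listdir(staging)))   # ['0.vhd', '1.vhd']
```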
6a1iv) rsync the VHDs to the destination. The destination path is /images/instance-<instance_uuid> (a rough equivalent of the rsync call is shown below).
- ./plugins/xenserver/xenapi/etc/xapi.d/plugins/migration
- def _rsync_vhds
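The transfer itself is ordinary rsync over ssh from the staging area to the destination's /images/instance-<instance_uuid> directory. A rough standalone equivalent; the exact flags the plugin uses may differ:

```python
# Rough standalone equivalent of the rsync step in 6a1iv. The real plugin
# builds a similar command in dom0; exact flags and paths may differ slightly.
import subprocess

def rsync_vhds(staging_path, dest_host, instance_uuid):
    dest_path = "/images/instance-%s/" % instance_uuid
    if not staging_path.endswith("/"):
        staging_path += "/"
    cmd = ["rsync", "-av", "--progress",
           "-e", "ssh -o StrictHostKeyChecking=no",
           staging_path,
           "root@%s:%s" % (dest_host, dest_path)]
    subprocess.check_call(cmd)

# Example (placeholder host and UUID):
# rsync_vhds("/var/run/sr-mount/SR_UUID/staging-xyz", "10.0.0.2",
#            "8a9e1c4e-0000-0000-0000-000000000000")
```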
6a1v) Clean up the staging area which was created in 6a1ii.
- ./plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
- def cleanup_staging_area
6b) Set up the newly transferred disk and power on the instance on the destination host (6b1). Make the tenant's quota usage reflect the newly resized instance.
- ./nova/compute/manager.py
- def finish_resize
6b1) The instance record is updated with the new instance_type (RAM, CPU, disk, etc.). Networking is set up on the destination (a topic for another day; best guess: nova-network, quantum, and the configured quantum plugin(s) are notified of the networking changes). The instance's task_state is set to resize finish. Usages are notified of the beginning of the end of the resize process. finish_migration (6b1i) is invoked. The instance record is updated to resized, the migration record is set to finished, and usages are notified that the resize has completed (the sequence is sketched below).
- ./nova/compute/manager.py
- def _finish_resize
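Again paraphrasing as code, the destination-side bookkeeping in 6b1 orders roughly like this; the db, notifier, network, and driver objects are stubs, and only the sequence is the point:

```python
# Paraphrased ordering of _finish_resize (6b1) on the destination host; the
# db/notifier/network/driver objects are stubs, shown only for the sequence.
def finish_resize(instance, migration, disk_info, db, notifier, network, driver):
    new_type = migration["new_instance_type"]
    db.instance_update(instance, memory_mb=new_type["memory_mb"],
                       vcpus=new_type["vcpus"], root_gb=new_type["root_gb"],
                       task_state="resize_finish")

    network.setup_networks_on_host(instance, migration["dest"])
    notifier.notify("compute.instance.finish_resize.start", instance)

    # 6b1i: rebuild the VM on the destination from the transferred VHDs.
    driver.finish_migration(instance, disk_info, resize_instance=True)

    db.instance_update(instance, vm_state="resized")
    db.migration_update(migration, status="finished")
    notifier.notify("compute.instance.finish_resize.end", instance)
```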
6b1i) move_disks (6b1ii) is called, then _resize_instance (6b2). The destination VM is created and started via XenAPI, and the resize's progress is updated.
- ./nova/virt/xenapi/driver.py -> ./nova/virt/xenapi/vmops.py
- def finish_migration
6b1ii) The XenAPI plugin is called to move_vhds_into_sr (6b1iii). The SR is scanned. The root VDI's name-label and name-description are set to reflect the instance's details (sketched below).
- ./nova/virt/xenapi/vm_utils.py
- def move_disks
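Stripped of Nova's wrappers, the XenAPI side of 6b1ii looks roughly like this. The plugin arguments and its return value are assumptions made for illustration; the SR.scan and VDI name-label/description calls are standard XenAPI:

```python
# Sketch of move_disks (6b1ii) in raw XenAPI terms: ask the dom0 plugin to
# pull the rsync'd VHDs into the SR, rescan the SR so xapi notices them, then
# label the root VDI after the instance. Plugin args/return are illustrative.
def move_disks(session, sr_ref, instance_uuid, instance_name):
    host_ref = session.xenapi.host.get_all()[0]
    imported = session.xenapi.host.call_plugin(
        host_ref, "migration", "move_vhds_into_sr",
        {"instance_uuid": instance_uuid})

    # Make xapi pick up the newly placed VHD files as VDIs.
    session.xenapi.SR.scan(sr_ref)

    root_vdi_uuid = imported.strip()            # assume the plugin reports it
    vdi_ref = session.xenapi.VDI.get_by_uuid(root_vdi_uuid)
    session.xenapi.VDI.set_name_label(vdi_ref, "%s root" % instance_name)
    session.xenapi.VDI.set_name_description(vdi_ref,
                                            "root disk for %s" % instance_name)
    return root_vdi_uuid
```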
6b1iii) Remember the VHD destination from step 6a1iv? I thought so! =) import_vhds (6b1iv) is called on the VHDs in that destination. This staging area is then cleaned up, just like in 6a1v.
- ./plugins/xenserver/xenapi/etc/xapi.d/plugins/migration
- def move_vhds_into_sr
6b1iv) Move the VHDs from the staging area into the SR. Staging areas are used because if the SR is unaware of the VHDs contained within it, it could delete our data. This function assembles the VHD chain's order and assigns UUIDs to the VHDs, which are then renamed from #.vhd to <UUID>.vhd. The VHDs are then linked to each other, starting from the parent and moving down the tree; this is done via calls to vhd-util modify. The VDI chain is validated. Swap files are given a similar treatment and appended to the list of files to move into the SR. All of these files are then renamed (Python's os.rename) from the staging area into the host's SR (a simplified version of this is sketched below).
- ./plugins/xenserver/xenapi/etc/xapi.d/plugins/utils.py
- def import_vhds
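The core of 6b1iv is renaming the numbered files to fresh UUIDs and re-linking the chain with vhd-util before anything is moved into the SR. Below is a simplified standalone version of that loop; the real import_vhds also validates the chain and handles swap files:

```python
# Simplified version of the chain handling in import_vhds (6b1iv): give each
# numbered VHD a fresh UUID-based name, point each child at its new parent
# with "vhd-util modify", then os.rename everything into the SR.
import os
import uuid
import subprocess

def import_vhd_chain(staging_path, sr_path, chain_length):
    # 0.vhd .. N.vhd, in the order they were staged.
    renamed = []
    for seq in range(chain_length):
        new_uuid = str(uuid.uuid4())
        old = os.path.join(staging_path, "%d.vhd" % seq)
        new = os.path.join(staging_path, "%s.vhd" % new_uuid)
        os.rename(old, new)
        renamed.append((new_uuid, new))

    # Re-link the chain: each VHD's parent pointer must name the parent's
    # new UUID-based file, or the SR would see a broken chain.
    for (child_uuid, child_path), (parent_uuid, parent_path) in zip(
            renamed, renamed[1:]):
        subprocess.check_call(
            ["vhd-util", "modify", "-n", child_path, "-p", parent_path])

    # Finally move the files into the SR, where a scan will pick them up.
    for vhd_uuid, path in renamed:
        os.rename(path, os.path.join(sr_path, "%s.vhd" % vhd_uuid))
    return [u for u, _ in renamed]
```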
6b2) The instance's root disk is resized. The current size of the root disk is retrieved from XenAPI (virtual_size), and VDI.resize_online (XenServer > 5) is called to resize the disk to its new size as defined by the instance_type.
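In raw XenAPI terms, that final grow of the root disk looks roughly like this:

```python
# Sketch of the disk grow in 6b2 using raw XenAPI: read the VDI's current
# virtual_size, and if the new flavor allows a bigger root disk, grow it in
# place with VDI.resize_online (available on XenServer > 5).
def resize_root_disk(session, root_vdi_uuid, new_root_gb):
    vdi_ref = session.xenapi.VDI.get_by_uuid(root_vdi_uuid)
    current_bytes = int(session.xenapi.VDI.get_virtual_size(vdi_ref))
    new_bytes = new_root_gb * 1024 * 1024 * 1024

    if new_bytes > current_bytes:
        # XenAPI int64 values travel as strings over the wire.
        session.xenapi.VDI.resize_online(vdi_ref, str(new_bytes))
```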
UPDATE (2012-10-28): Added to step 6 the renaming of the instance to instance-<uuid>-orig on the source.