Based on currently available code (commit a77c0c50166aac04f0707af25946557fbd43ad44, 2012-11-02)
This is a deep dive into what happens (and where in the code) during a rescue/unrescue scenario with a Nova configuration backed by XenServer/XenAPI. Hopefully the methods used in Nova’s code will not change much over time, and this guide will remain a good starting point.
Rescue
1) nova-api receives a rescue request. A new admin password is generated via utils.generate_password, meeting the FLAGS.password_length length requirement. The API then calls rescue on the compute API.
- nova/api/openstack/compute/contrib/rescue.py
- def _rescue
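A minimal, self-contained sketch of this step's password generation (the real utils.generate_password enforces character-class rules that are omitted here, and the length comes from the FLAGS.password_length config option rather than a constant):

```python
import random
import string

# Stand-in for FLAGS.password_length; in Nova this is a config option.
PASSWORD_LENGTH = 12

def generate_password(length=PASSWORD_LENGTH):
    """Generate a random admin password of the configured length.

    Illustrative only: the real utils.generate_password also guarantees
    a mix of character classes, which this sketch skips.
    """
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(length))
```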
2) The compute API updates the vm_state to RESCUING and calls the compute rpcapi’s rescue_instance with the same details.
- nova/compute/api.py
- def rescue
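The hand-off in this step can be sketched with stand-in classes (FakeRpcAPI, the dict-based instance, and the string state below are illustrative, not Nova's actual objects):

```python
class FakeRpcAPI:
    """Records the cast instead of sending it over the message bus."""
    def __init__(self):
        self.cast_log = []

    def rescue_instance(self, context, instance, rescue_password):
        # A cast is fire-and-forget: no return value is awaited.
        self.cast_log.append(('rescue_instance', instance['uuid'],
                              rescue_password))

class ComputeAPI:
    def __init__(self, rpcapi):
        self.rpcapi = rpcapi

    def rescue(self, context, instance, rescue_password=None):
        # Record the state transition, then hand off to the RPC layer.
        instance['vm_state'] = 'rescuing'  # vm_states.RESCUING in Nova
        self.rpcapi.rescue_instance(context, instance, rescue_password)
```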
3) The RPC API casts a rescue_instance message to the compute node’s message queue.
- nova/compute/rpcapi.py
- def rescue_instance
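What a "cast" means here can be illustrated with a toy queue (the real transport is AMQP, and the serialization shown is a simplification):

```python
import json
import queue

# Toy topic queue standing in for the compute host's AMQP queue.
compute_queue = queue.Queue()

def cast(topic_queue, method, **kwargs):
    # A cast is one-way: serialize the call and return immediately,
    # without waiting for a reply (unlike an rpc "call").
    topic_queue.put(json.dumps({'method': method, 'args': kwargs}))

def rescue_instance(instance_uuid, rescue_password):
    cast(compute_queue, 'rescue_instance',
         instance_uuid=instance_uuid, rescue_password=rescue_password)
```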
4) nova-compute consumes the message containing the rescue request from the queue. The admin password is retrieved; if one was not passed this far, one is generated via utils.generate_password with the same flags as step 1. The manager then records details about the source instance, such as networking and image details, and calls the compute driver’s rescue function. After that (4a-4c) completes, the instance’s vm_state is updated to RESCUED.
- nova/compute/manager.py
- def rescue_instance
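A rough model of the manager-side flow, with a fake driver in place of the XenAPI one (the lookups, state strings, and generate_password stand-in are placeholders, not Nova's real calls):

```python
import random
import string

def generate_password(length=12):
    # Simplified stand-in for utils.generate_password.
    return ''.join(random.choice(string.ascii_letters + string.digits)
                   for _ in range(length))

class FakeDriver:
    """Records the rescue call instead of talking to a hypervisor."""
    def __init__(self):
        self.calls = []

    def rescue(self, context, instance, network_info, image_meta, password):
        self.calls.append((instance['uuid'], password))

class ComputeManager:
    def __init__(self, driver):
        self.driver = driver

    def rescue_instance(self, context, instance, rescue_password=None):
        # If no password made it this far, generate one with the same
        # length flag as step 1.
        admin_password = rescue_password or generate_password()
        # Placeholders for the source-instance details the manager records.
        network_info = {'source': 'network API lookup'}
        image_meta = {'source': 'image service lookup'}
        self.driver.rescue(context, instance, network_info, image_meta,
                           admin_password)
        instance['vm_state'] = 'rescued'  # vm_states.RESCUED
        return admin_password
```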
4a) This abstraction was skipped over in the last two deep dives, but for the sake of completeness: Driver.rescue is called. This simply delegates to _vmops.rescue, where the real work happens.
- nova/virt/xenapi/driver.py
- def rescue
4b) Checks are performed to ensure the instance isn’t already in rescue mode. The original instance is shut down via XenAPI and then bootlocked. A new instance is spawned with -rescue appended to the name-label.
- nova/virt/xenapi/vmops.py
- def rescue
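The ordering of these operations can be sketched as a toy model (the dict-based VM state and method names are illustrative, not the real vmops code):

```python
class RescueError(Exception):
    pass

class VMOps:
    """Toy model of the XenAPI rescue steps, keyed by name-label."""
    def __init__(self):
        self.vms = {}  # name-label -> state dict

    def rescue(self, name):
        if name + '-rescue' in self.vms:
            # Refuse to rescue an instance already in rescue mode.
            raise RescueError('instance is already in rescue mode')
        original = self.vms[name]
        original['power_state'] = 'halted'  # clean shutdown via XenAPI
        original['bootlock'] = True         # prevent the original from booting
        # Spawn the rescue VM under "<name>-rescue".
        self.vms[name + '-rescue'] = {'power_state': 'running'}
```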
4c) A new VM is created just like any other VM, using the source VM’s metadata. The root volume from the instance being rescued is attached as a secondary disk. The instance’s networking is unchanged, but the new hostname is RESCUE-hostname.
- nova/virt/xenapi/vmops.py
- def spawn -> attach_disks_step rescue condition
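The rescue branch of spawn can be summarized as building a spec like the following (build_rescue_spec and its field names are hypothetical; XenAPI's actual VBD records look different):

```python
def build_rescue_spec(original):
    """Sketch of the rescue condition in spawn: the new VM boots from a
    fresh image, and the original root VDI rides along as a secondary disk.
    """
    return {
        'disks': [
            {'device': 0, 'vdi': 'new-rescue-root'},       # fresh boot disk
            {'device': 1, 'vdi': original['root_vdi']},    # original root, now secondary
        ],
        'hostname': 'RESCUE-' + original['hostname'],
        'network_info': original['network_info'],          # networking unchanged
    }
```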
Unrescue
1) nova-api receives an unrescue request.
- nova/api/openstack/compute/contrib/rescue.py
- def _unrescue
2) The compute API updates the vm_state to UNRESCUING and calls the compute rpcapi’s unrescue_instance with the same details.
- nova/compute/api.py
- def unrescue
3) The RPC API casts an unrescue_instance message to the compute node’s message queue.
- nova/compute/rpcapi.py
- def unrescue_instance
4) The compute manager receives the unrescue_instance message and calls the driver’s unrescue method.
- nova/compute/manager.py
- def unrescue_instance
4a) Driver.unrescue is called. This simply delegates to _vmops.unrescue, where the real work happens.
- nova/virt/xenapi/driver.py
- def unrescue
4b) The rescue VM is found, and checks are done to ensure it is actually in rescue mode. The original VM is found. _destroy_rescue_instance is performed on the rescue instance (4b1). After that completes, the source VM’s bootlock is released and the VM is started.
- nova/virt/xenapi/vmops.py
- def unrescue
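The unrescue ordering can be sketched over a plain dict of VM states (names and states here are illustrative, not the real vmops code):

```python
class UnrescueError(Exception):
    pass

def unrescue(vms, name):
    """Toy model of the unrescue flow; `vms` maps name-label -> state dict."""
    rescue_name = name + '-rescue'
    if rescue_name not in vms:
        # The instance is not in rescue mode; nothing to unrescue.
        raise UnrescueError('instance is not in rescue mode')
    original = vms[name]
    # Tear down the rescue VM (see 4b1 for the detailed steps).
    del vms[rescue_name]
    # Release the bootlock and start the original VM.
    original['bootlock'] = False
    original['power_state'] = 'running'
```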
4b1) A hard shutdown is issued on the rescue instance. Via XenAPI, the root disk of the original instance is located. All VDIs attached to the rescue instance are destroyed, except the root disk of the original instance. The rescue VM is then destroyed.
- nova/virt/xenapi/vmops.py
- def _destroy_rescue_instance
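The teardown order can be sketched against a fake session object (the session method names below are hypothetical stand-ins, not XenAPI's actual API):

```python
class FakeSession:
    """Stand-in for a XenAPI session; method names here are illustrative."""
    def __init__(self):
        self.vdis = {'rescue-root', 'rescue-swap', 'orig-root'}
        self.vm_exists = True

    def hard_shutdown(self, vm):
        self.power_state = 'halted'

    def get_root_vdi(self, vm):
        return 'orig-root'

    def get_attached_vdis(self, vm):
        return list(self.vdis)

    def destroy_vdi(self, vdi):
        self.vdis.discard(vdi)

    def destroy_vm(self, vm):
        self.vm_exists = False

def destroy_rescue_instance(session, rescue_vm, original_vm):
    # 1) Hard shutdown of the rescue instance.
    session.hard_shutdown(rescue_vm)
    # 2) Locate the original instance's root VDI so it survives.
    original_root = session.get_root_vdi(original_vm)
    # 3) Destroy every VDI attached to the rescue VM except that root.
    for vdi in session.get_attached_vdis(rescue_vm):
        if vdi != original_root:
            session.destroy_vdi(vdi)
    # 4) Destroy the rescue VM itself.
    session.destroy_vm(rescue_vm)
```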