VmAllocationPolicyMigrationAbstract creates and destroys VMs just to assess new placement #94
manoelcampos changed the title from "PowerVmAllocationPolicyMigrationAbstract creates and destroys VMs just to find new placement for VMs" to "PowerVmAllocationPolicyMigrationAbstract creates and destroys VMs just to find new placement" on May 27, 2017
manoelcampos changed the title from "PowerVmAllocationPolicyMigrationAbstract creates and destroys VMs just to find new placement" to "PowerVmAllocationPolicyMigrationAbstract creates and destroys VMs just to assess new placement" on May 30, 2017
manoelcampos added a commit that referenced this issue on Apr 6, 2020:
Creates a clone of the VM which needs to be temporarily placed into a Host, to check whether the Host won't be overloaded after the placement. The issue arose because calling host.destroyTemporaryVm was supposed to destroy only the temporary VM. But since an actual VM was passed to host.createTemporaryVm, that actual VM was marked as not created after host.destroyTemporaryVm. Creating a clone of the actual VM, so that only the clone is temporarily created, fixed the issue. This is the kind of issue that happens due to #94. Signed-off-by: Manoel Campos <manoelcampos@gmail.com>
manoelcampos changed the title from "PowerVmAllocationPolicyMigrationAbstract creates and destroys VMs just to assess new placement" to "VmAllocationPolicyMigrationAbstract creates and destroys VMs just to assess new placement" on Apr 9, 2020
manoelcampos added a commit that referenced this issue on Aug 28, 2020 (Signed-off-by: Manoel Campos <manoelcampos@gmail.com>):
An easier solution would be cloning the VM and trying to temporarily place the clone into the candidate host. This way, we avoid messing with the VM's state before actually placing it into the host.
Issue
The VmAllocationPolicyMigrationAbstract.optimizeAllocation() method destroys and creates VMs on their currently placed Hosts only to compute a new placement for the VMs of under- and overloaded Hosts. VMs from those Hosts need to be rearranged to fix the under/overload situation: new Hosts need to be found for all VMs on underloaded Hosts, so that such Hosts can be shut down, while overloaded Hosts need to have some VMs migrated away to reduce their load. In the latter case, a VmSelectionPolicy needs to select VMs until the Host is no longer overloaded.
After a Host is found for the VMs of under- and overloaded Hosts, VM_MIGRATION requests are sent to actually start the migration process, which takes some time to complete (depending on the available bandwidth and the VM's RAM size).
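As a back-of-envelope illustration of why a migration takes time: a simplified model (an assumption for illustration, not necessarily the exact formula the framework uses) where the transfer time is dominated by copying the VM's RAM over the available bandwidth.

```java
// Back-of-envelope migration-time estimate (simplified model: transfer time
// dominated by RAM size over available bandwidth; ignores dirtying, overheads).
public class MigrationTime {
    // Returns the estimated migration time in seconds.
    static double migrationTimeSec(double vmRamMB, double bandwidthMbps) {
        double ramMegabits = vmRamMB * 8;   // megabytes -> megabits
        return ramMegabits / bandwidthMbps; // time = data / rate
    }

    public static void main(String[] args) {
        // 1024 MB of VM RAM over a 100 Mb/s link:
        System.out.println(migrationTimeSec(1024, 100)); // 81.92 seconds
    }
}
```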
Expected behavior
Since a VM migration only starts after a VM_MIGRATION message is sent, the current mapping of VMs to PMs must not be changed beforehand. Only after the migration of a given VM finishes (which is when the Datacenter receives the VM_MIGRATION message) and the target Host is able to create the VM (provided it has enough available resources) should the placement of that VM be changed, destroying it at the source Host and creating it at the target Host.
The process of computing a new mapping of VMs to PMs doesn't have to change the current mapping.
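The side-effect-free computation described above can be sketched as a pure function (simplified stand-in types, not the actual CloudSim Plus classes): given a Host's VM list and capacity, it returns the VMs to migrate without touching the current allocation.

```java
import java.util.*;

// Sketch: computing a migration list as a pure function, never mutating
// the current VM-to-Host allocation (hypothetical Vm stand-in type).
public class PureMigrationMap {
    record Vm(String id, double mips) {}

    // Select VMs to migrate away from an overloaded host until the remaining
    // load fits under the threshold, WITHOUT removing them from the host.
    static List<Vm> selectVmsToMigrate(List<Vm> hostVms, double capacity, double threshold) {
        List<Vm> sorted = new ArrayList<>(hostVms);
        sorted.sort(Comparator.comparingDouble(Vm::mips).reversed()); // biggest first
        double load = sorted.stream().mapToDouble(Vm::mips).sum();
        List<Vm> toMigrate = new ArrayList<>();
        for (Vm vm : sorted) {
            if (load / capacity <= threshold) break;
            toMigrate.add(vm); // only recorded in the result list...
            load -= vm.mips(); // ...the host's own VM list is never touched
        }
        return toMigrate;
    }

    public static void main(String[] args) {
        List<Vm> vms = List.of(new Vm("vm1", 500), new Vm("vm2", 300), new Vm("vm3", 200));
        // Host capacity 1000 MIPS, overload threshold 80%: current load is 100%.
        System.out.println(selectVmsToMigrate(vms, 1000, 0.8)); // only vm1 needs to go
    }
}
```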
Actual behavior
The VmAllocationPolicyMigrationAbstract.optimizeAllocation() method changes the mapping of VMs to PMs just to get a map of VMs to be migrated. It starts by saving the current allocation, then changes it to find the migration map, and at the end restores the previous allocation to return the system to the state it was in before computing the Map of VMs that can be migrated.

The mapping is changed because it is an easier way to compute the new mapping. For instance, if you want to select VMs to migrate from an overloaded Host until that Host is no longer overloaded, it is easier to select a VM and then remove it from the Host, so that the resources it was using are deallocated. You can then repeat this process until the Host is no longer overloaded, ending up with a List of VMs that can be migrated from the Host so that it won't be overloaded anymore. But this process is supposed to only produce a List of possible migrations, not to change the current allocation map.
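The save/mutate/restore pattern described above boils down to the following simplified illustration (hypothetical map-based allocation and IDs, not the real code):

```java
import java.util.*;

// Simplified illustration of the save/mutate/restore pattern:
// the live allocation is mutated just to compute a migration plan,
// then put back the way it was.
public class SaveMutateRestore {
    static Map<String, String> allocation = new HashMap<>(); // vmId -> hostId

    static Map<String, String> getMigrationMap() {
        Map<String, String> saved = new HashMap<>(allocation); // 1. save current allocation
        allocation.put("vm1", "host2");                        // 2. mutate it to compute the plan
        Map<String, String> migrationMap = Map.of("vm1", "host2");
        allocation = saved;                                    // 3. restore the original state
        return migrationMap;
    }

    public static void main(String[] args) {
        allocation.put("vm1", "host1");
        Map<String, String> plan = getMigrationMap();
        System.out.println(plan);       // the computed migration plan: {vm1=host2}
        System.out.println(allocation); // live allocation restored: {vm1=host1}
    }
}
```

Any observer (a listener, a debugger, a parallel thread) that looks at `allocation` during step 2 sees a placement that was never meant to be real.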
Side Effects
Performance
This issue hurts framework performance, since it drastically increases the number of calls to the Host.vmCreate() and Host.destroyVm() methods, used just to check whether a Host is suitable for some VMs. It also makes debugging confusing, since it's difficult to know whether the framework is in fact placing a VM inside a Host or just checking whether the Host has enough resources for such a VM.
False notifications about VM creation and destruction
It also causes subscribers of EventListeners to get wrong notifications about when a VM is created on or destroyed from a Host. The VmAllocationPolicyMigrationAbstract temporarily changes the mapping of VMs to PMs by creating and destroying VMs, then restores the original placement at the end, before sending the actual request for the Datacenter to migrate a VM to a chosen Host. This way, subscribers of such events get multiple notifications about the creation and destruction of a specific VM. Such unexpected additional notifications may cause issues in researchers' simulation code and make it very confusing to figure out what is actually happening.
Mutable state
Arbitrarily changing the state of VMs and Hosts is error-prone, since the current behavior is not what one would expect. It may confuse developers trying to extend CloudSim Plus and prevents a more functional programming style. This mutable state also prevents using Java 8 Streams internally to execute tasks in parallel. For instance, finding a new VM placement could be done in parallel for each existing Datacenter.
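Once the placement computation has no side effects, evaluating it per Datacenter with Java parallel streams becomes safe. A minimal sketch (hypothetical Datacenter stand-in and placement function, not the CloudSim Plus API):

```java
import java.util.*;
import java.util.stream.*;

// Sketch: a pure, side-effect-free placement computation can run in
// parallel across datacenters without corrupting shared state.
public class ParallelPlacement {
    record Datacenter(String name) {}

    // A pure placement function: same input always yields the same result
    // and no shared state is mutated, so parallel execution is safe.
    static String computePlacement(Datacenter dc) {
        return dc.name() + "-plan";
    }

    public static void main(String[] args) {
        List<Datacenter> datacenters = List.of(
            new Datacenter("dc1"), new Datacenter("dc2"), new Datacenter("dc3"));

        Map<String, String> plans = datacenters.parallelStream()
            .collect(Collectors.toConcurrentMap(
                Datacenter::name, ParallelPlacement::computePlacement));

        System.out.println(plans.get("dc1")); // dc1-plan
    }
}
```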
Implementation Directions
It's easy to introduce bugs by mutating the state of objects in such a way. The same process described above could be achieved, for instance, by creating a method such as isOverloaded(List<Vm> ignoredVms) in the Host, which checks whether the Host is overloaded considering its current running VMs but excluding the given list of VMs to ignore. Such a list can be the List built by the VmAllocationPolicyMigrationAbstract each time it selects a Vm to migrate away from the Host.

However, this approach may not be enough on its own, since other methods would need to change as well, such as Host.isSuitable(). This method checks whether a Host is suitable to place a given VM, but when assessing whether multiple VMs can be placed into a Host at the same time, checking one VM at a time doesn't give the required result: it only tells whether a single VM can be placed into the Host, not all of them.
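A minimal sketch of the proposed isOverloaded(List<Vm> ignoredVms) method (hypothetical signature and simplified stand-in types; a real Host would consider several resources, not just MIPS):

```java
import java.util.*;

// Sketch of the proposed Host.isOverloaded(List<Vm> ignoredVms):
// overload is checked by arithmetic over the VM list, so no VM
// is ever created or destroyed just to assess a placement.
public class IsOverloadedSketch {
    record Vm(String id, double mips) {}

    static class Host {
        final double totalMips;
        final List<Vm> vmList = new ArrayList<>();
        Host(double totalMips) { this.totalMips = totalMips; }

        // Considers current running VMs, excluding the ones already
        // selected for migration.
        boolean isOverloaded(double threshold, List<Vm> ignoredVms) {
            double used = vmList.stream()
                .filter(vm -> !ignoredVms.contains(vm))
                .mapToDouble(Vm::mips).sum();
            return used / totalMips > threshold;
        }
    }

    public static void main(String[] args) {
        Host host = new Host(1000);
        Vm vm1 = new Vm("vm1", 600);
        Vm vm2 = new Vm("vm2", 300);
        host.vmList.addAll(List.of(vm1, vm2));

        System.out.println(host.isOverloaded(0.8, List.of()));    // true: 90% load
        System.out.println(host.isOverloaded(0.8, List.of(vm1))); // false: 30% after ignoring vm1
    }
}
```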
A possible approach would be creating a VmGroup or fake VM which represents the combined capacity of all the VMs to be checked against a specific Host. For instance, suppose we have 3 VMs to check whether all of them can be placed together into a Host, and each VM requires 10MB of RAM, 100MB of Storage, 2 PEs of 1000 MIPS and 100Mb/s of BW. The fake Vm will be the sum of the 3 VMs, that is: 30MB of RAM, 300MB of Storage, 6 PEs of 1000 MIPS and 300Mb/s of BW. Thus, we can call host.isSuitable(fakeVm) to check whether all of them can be placed into the Host together. This fakeVm can be created using host.createTemporaryVm().
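The fake-VM idea can be sketched as follows (hypothetical stand-in types; the per-PE MIPS check is omitted for brevity):

```java
import java.util.*;

// Sketch of the "fake VM" idea: sum the requirements of all candidate
// VMs into one aggregate VM, then run a single suitability check.
public class VmGroupSketch {
    record Vm(double ramMB, double storageMB, int pes, double bwMbps) {}

    // Aggregate the requirements of a group of VMs into a single fake VM.
    static Vm aggregate(List<Vm> vms) {
        return new Vm(
            vms.stream().mapToDouble(Vm::ramMB).sum(),
            vms.stream().mapToDouble(Vm::storageMB).sum(),
            vms.stream().mapToInt(Vm::pes).sum(),
            vms.stream().mapToDouble(Vm::bwMbps).sum());
    }

    // Simplified suitability check: every aggregated requirement must
    // fit within the host's capacity.
    static boolean isSuitable(Vm hostCapacity, Vm requested) {
        return requested.ramMB() <= hostCapacity.ramMB()
            && requested.storageMB() <= hostCapacity.storageMB()
            && requested.pes() <= hostCapacity.pes()
            && requested.bwMbps() <= hostCapacity.bwMbps();
    }

    public static void main(String[] args) {
        // The 3 identical VMs from the example: 10MB RAM, 100MB storage, 2 PEs, 100Mb/s BW.
        Vm vm = new Vm(10, 100, 2, 100);
        Vm fakeVm = aggregate(List.of(vm, vm, vm)); // 30MB RAM, 300MB storage, 6 PEs, 300Mb/s

        Vm hostCapacity = new Vm(32, 500, 8, 1000);
        System.out.println(isSuitable(hostCapacity, fakeVm)); // true: all three fit together
    }
}
```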
Related Issues