[pve-devel] Default cache mode for VM hard drives

Dietmar Maurer dietmar at proxmox.com
Thu Nov 20 06:39:35 CET 2014


> Again, the migration code flushes all changes to disk, so there are no "out of sync"
> blocks after migration. What am I missing?
> 
> I'll try to explain in more detail.
> When the write cache is enabled, the KVM process just writes, writes and writes. It
> doesn't care what really happens to this data after it goes to the buffer.

Sorry, but we use "cache=none", so caching is not enabled (which cache are you talking about exactly?).
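For reference, "cache=none" means the disk image is opened with O_DIRECT on the
host, so guest writes bypass the host page cache entirely. A minimal sketch of
that behaviour (assumptions: Linux, a raw image file and a made-up path; this is
an illustration, not Proxmox/QEMU code):

    /* Sketch of a cache=none style write on the host (hypothetical path). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        /* O_DIRECT: bypass the host page cache completely */
        int fd = open("/var/lib/vz/images/100/vm-100-disk-1.raw",
                      O_RDWR | O_DIRECT);
        if (fd < 0)
            return 1;

        /* O_DIRECT requires sector-aligned buffers and lengths */
        void *buf;
        if (posix_memalign(&buf, 4096, 4096))
            return 1;
        memset(buf, 0, 4096);

        /* The write goes straight to the block layer; durability on
         * stable storage still needs an explicit flush. */
        if (pwrite(fd, buf, 4096, 0) != 4096)
            return 1;

        free(buf);
        close(fd);
        return 0;
    }

O_DIRECT only skips the host page cache; durability still depends on an explicit
flush, which is exactly what the guest barriers trigger.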

> What DRBD does is write data from the buffer to both nodes, but DRBD can't do
> this simultaneously. So it writes the data to the local node first and then to the
> other node. Between these 'writes' the data in the buffer can change and nobody
> knows it was changed. So every time data is written, DRBD can't be sure that the
> data written locally and the data written remotely are identical.
> Why do barriers usually help? Because the OS inside the VM doesn't write anything
> until the 'data is committed' message is received from DRBD. 'Data is committed'
> comes from DRBD only when the data is really committed to both nodes, local and remote.

Right. Both the VM kernel and the KVM migration code issue a flush when required,
so you never end up in an inconsistent state.
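To make the ordering explicit: when the guest kernel needs a barrier it sends a
FLUSH request, and the host side does not complete that request until the data is
on stable storage, i.e. until DRBD has committed it on both nodes. A rough
host-side sketch (hypothetical helper names, not QEMU or DRBD source):

    #define _GNU_SOURCE
    #include <unistd.h>

    /* A guest write may still sit in volatile caches or be in flight in DRBD. */
    int handle_guest_write(int fd, const void *buf, size_t len, off_t off)
    {
        return pwrite(fd, buf, len, off) == (ssize_t)len ? 0 : -1;
    }

    /* The guest's FLUSH request only completes when this returns, i.e. when
     * the data is committed on both DRBD nodes. Only then does the guest
     * kernel issue writes that depend on the flushed data. */
    int handle_guest_flush(int fd)
    {
        return fdatasync(fd);
    }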

Migration would never work if the above theory were correct. But it works perfectly
with iSCSI, NFS, glusterfs, ceph, ...
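
For completeness, the ordering the migration code relies on looks roughly like
this (simplified, with hypothetical stub functions; not actual QEMU code):

    #include <stdio.h>

    static void pause_guest_cpus(void)         { puts("guest CPUs stopped"); }
    static void flush_all_block_devices(void)  { puts("pending writes flushed"); }
    static void send_dirty_ram_and_state(void) { puts("remaining RAM/state sent"); }

    int main(void)
    {
        pause_guest_cpus();          /* no new guest writes can be issued   */
        flush_all_block_devices();   /* everything in flight hits the disk  */
        send_dirty_ram_and_state();  /* only now does the target take over  */
        return 0;
    }

Because the flush happens after the guest is stopped and before the target
resumes, the target always sees a consistent disk, whether the storage is iSCSI,
NFS, glusterfs, ceph or DRBD.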

