[pve-devel] Default cache mode for VM hard drives

Cesar Peschiera brain at click.com.py
Mon Apr 13 20:56:15 CEST 2015


Hi Stanislav

I am not trying to convince everybody that the problem does not exist; I am only sharing my personal experience and trying to help those in need.

But if you think my comments are not necessary, please excuse me, and I will not comment on this topic again.
Moreover, I do not know why I do not have the same problem that you have.
I just wanted to cooperate with this community and try to find the root of this strange problem!

Best regards
Cesar
  ----- Original Message ----- 
  From: Stanislav German-Evtushenko 
  To: Cesar Peschiera 
  Cc: Alexandre Derumier ; pve-devel at pve.proxmox.com 
  Sent: Monday, April 13, 2015 2:11 PM
  Subject: Re: [pve-devel] Default cache mode for VM hard drives


  Hello Cesar,

  If you don't have this problem, I do not really understand why you keep trying to convince everybody that the problem does not exist.

  Regards,
  Stanislav

  On Apr 13, 2015 8:46 PM, "Cesar Peschiera" <brain at click.com.py> wrote:
  >
  > Hi Stanislav
  >
  > Excuse me please, but your link does not tell me anything about the root cause of the "oos" (out-of-sync) problem in DRBD (assuming that the directive "data-integrity-alg" is disabled).
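  >
  > For reference, this is roughly the shape of it in a DRBD resource file (only a sketch; the resource name, disks and addresses are placeholders, not my real configuration):
  >
  >   # /etc/drbd.d/r0.res (example layout, DRBD 8.4 syntax)
  >   resource r0 {
  >     net {
  >       protocol C;
  >       # data-integrity-alg md5;   # left disabled: it is a debugging aid and costs performance
  >     }
  >     on nodeA {
  >       device    /dev/drbd0;
  >       disk      /dev/sdb1;
  >       address   10.0.0.1:7788;
  >       meta-disk internal;
  >     }
  >     on nodeB {
  >       device    /dev/drbd0;
  >       disk      /dev/sdb1;
  >       address   10.0.0.2:7788;
  >       meta-disk internal;
  >     }
  >   }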
  >
  > Also, I have configured "write_cache_state = 0" in the lvm.conf file (maybe this can help you; it is another recommendation from Linbit).
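  >
  > The relevant part of /etc/lvm/lvm.conf would look roughly like this (only a sketch, assuming defaults everywhere else):
  >
  >   # /etc/lvm/lvm.conf
  >   devices {
  >       # do not persist the device cache to disk (Linbit recommendation for DRBD setups)
  >       write_cache_state = 0
  >   }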
  >
  > I think that the tuning of DRBD is the key to success. I did it on workstations and on real servers and never had "oos" problems, always with all hardware firmware updated and with Intel NICs of 1 Gb/s or 10 Gb/s in a balance-rr bond dedicated exclusively to DRBD replication (NIC-to-NIC; I do not know if it will work well with Broadcom or other brands, as I never tested them).
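  >
  > To illustrate what I mean by the dedicated bond (a sketch in Debian /etc/network/interfaces style; the interface names and the address are placeholders, not taken from my nodes):
  >
  >   # back-to-back bond used only for DRBD replication traffic
  >   auto bond1
  >   iface bond1 inet static
  >       address 10.10.10.1
  >       netmask 255.255.255.0
  >       bond-slaves eth2 eth3
  >       bond-mode balance-rr
  >       bond-miimon 100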
  >
  > On real servers, with the I/OAT engine enabled in the BIOS and with Intel NICs, you get better performance (and in my case, without getting an "oos").
  >
  > Best regards
  > Cesar
  >
  > ----- Original Message ----- From: Stanislav German-Evtushenko
  > To: Cesar Peschiera
  > Cc: Alexandre DERUMIER ; pve-devel
  > Sent: Monday, April 13, 2015 12:12 PM
  >
  > Subject: Re: [pve-devel] Default cache mode for VM hard drives
  >
  >
  > Hi Cesar,
  >
  >
  > Out of sync with cache=directsync happens in very specific cases. Here is the description of one of them: http://forum.proxmox.com/threads/18259-KVM-on-top-of-DRBD-and-out-of-sync-long-term-investigation-results?p=108099#post108099
  >
  >
  > Best regards,
  >
  > Stanislav
  >
  >
  >
  >
  > On Mon, Apr 13, 2015 at 7:02 PM, Cesar Peschiera <brain at click.com.py> wrote:
  >
  > Hi to all
  >
  > I have been using directsync in my VMs with DRBD 8.4.5 on four nodes (LVM on top of DRBD) for some months now and have never had problems (every Sunday an automated system verifies all the DRBD storages).
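  >
  > Roughly like this (a sketch only; the storage name, VM id and resource name are examples, not copied from my nodes):
  >
  >   # /etc/pve/qemu-server/101.conf: VM disk on LVM-over-DRBD with directsync
  >   virtio0: drbd-lvm:vm-101-disk-1,cache=directsync
  >
  >   # /etc/cron.d/drbd-verify: online verify of the DRBD resource every Sunday night
  >   0 3 * * 0   root   /sbin/drbdadm verify r0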
  >
  > These are the package versions of my PVE nodes:
  >
  > On one pair of nodes:
  > Shell# pveversion -v
  > proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
  > pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
  > pve-kernel-2.6.32-27-pve: 2.6.32-121
  > pve-kernel-3.10.0-5-pve: 3.10.0-19
  > pve-kernel-2.6.32-28-pve: 2.6.32-124
  > pve-kernel-2.6.32-29-pve: 2.6.32-126
  > pve-kernel-2.6.32-34-pve: 2.6.32-139
  > lvm2: 2.02.98-pve4
  > clvm: 2.02.98-pve4
  > corosync-pve: 1.4.7-1
  > openais-pve: 1.1.4-3
  > libqb0: 0.11.1-2
  > redhat-cluster-pve: 3.2.0-2
  > resource-agents-pve: 3.9.2-4
  > fence-agents-pve: 4.0.10-1
  > pve-cluster: 3.0-15
  > qemu-server: 3.3-3
  > pve-firmware: 1.1-3
  > libpve-common-perl: 3.0-19
  > libpve-access-control: 3.0-15
  > libpve-storage-perl: 3.0-25
  > pve-libspice-server1: 0.12.4-3
  > vncterm: 1.1-8
  > vzctl: 4.0-1pve6
  > vzprocps: 2.0.11-2
  > vzquota: 3.1-2
  > pve-qemu-kvm: 2.1-10
  > ksm-control-daemon: 1.1-1
  > glusterfs-client: 3.5.2-1
  >
  > On the other pair of nodes:
  > Shell# pveversion -v
  > proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)
  > pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)
  > pve-kernel-3.10.0-5-pve: 3.10.0-19
  > pve-kernel-2.6.32-34-pve: 2.6.32-139
  > lvm2: 2.02.98-pve4
  > clvm: 2.02.98-pve4
  > corosync-pve: 1.4.7-1
  > openais-pve: 1.1.4-3
  > libqb0: 0.11.1-2
  > redhat-cluster-pve: 3.2.0-2
  > resource-agents-pve: 3.9.2-4
  > fence-agents-pve: 4.0.10-1
  > pve-cluster: 3.0-15
  > qemu-server: 3.3-5 <- custom build by Alexandre
  > pve-firmware: 1.1-3
  > libpve-common-perl: 3.0-19
  > libpve-access-control: 3.0-15
  > libpve-storage-perl: 3.0-25
  > pve-libspice-server1: 0.12.4-3
  > vncterm: 1.1-8
  > vzctl: 4.0-1pve6
  > vzprocps: 2.0.11-2
  > vzquota: 3.1-2
  > pve-qemu-kvm: 2.2-2 <- custom build by Alexandre
  > ksm-control-daemon: 1.1-1
  > glusterfs-client: 3.5.2-1 


