<p dir="ltr">Hello Cesar,</p>
<p dir="ltr">If you don't have this problem I do not really understand why cut in any try to convince everybody that the problem does not exist.</p>
<p dir="ltr">Regards,<br>
Stanislav</p>
<p dir="ltr">On Apr 13, 2015 8:46 PM, "Cesar Peschiera" <<a href="mailto:brain@click.com.py">brain@click.com.py</a>> wrote:<br>
><br>
> Hi Stanislav<br>
><br>
> Excuse me please, but your link doesn't tell me anything about the root cause of the "oos" problem in DRBD (assuming that the "data-integrity-alg" directive is disabled).<br>
><br>
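> For reference, this is roughly where those directives live in a DRBD 8.4 resource file; the resource name and the algorithm are only placeholders, not my actual config:<br>
><br>
> resource r0 {<br>
>   net {<br>
>     # data-integrity-alg md5;   # left disabled, as assumed above<br>
>     verify-alg md5;             # used only by the periodic "drbdadm verify" runs<br>
>   }<br>
> }<br>
><br>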
> Also, I have configured "write_cache_state = 0" in the lvm.conf file (maybe this can help you; it is another recommendation from Linbit).<br>
><br>
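> For clarity, that setting goes in the "devices" section of /etc/lvm/lvm.conf; a minimal sketch, not my full file:<br>
><br>
> devices {<br>
>     write_cache_state = 0<br>
> }<br>
><br>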
> I think that tuning DRBD is the key to success. I did it on workstations and real servers and never had "oos" problems, always with all hardware firmware up to date and with Intel NICs of 1 Gb/s or 10 Gb/s in a balance-rr bond dedicated exclusively to DRBD replication (NIC-to-NIC; I don't know whether it works as well with Broadcom or other brands, I never tested them).<br>
><br>
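> The replication bond looks roughly like this in /etc/network/interfaces (interface names and the address are only examples, adjust them to your hardware):<br>
><br>
> auto bond0<br>
> iface bond0 inet static<br>
>         address 10.10.10.1<br>
>         netmask 255.255.255.0<br>
>         slaves eth2 eth3<br>
>         bond_miimon 100<br>
>         bond_mode balance-rr<br>
><br>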
> On real servers, with the I/OAT engine enabled in the BIOS and with Intel NICs, you get better performance (and in my case, without getting an "oos").<br>
><br>
> Best regards<br>
> Cesar<br>
><br>
> ----- Original Message ----- From: Stanislav German-Evtushenko<br>
> To: Cesar Peschiera<br>
> Cc: Alexandre DERUMIER ; pve-devel<br>
> Sent: Monday, April 13, 2015 12:12 PM<br>
><br>
> Subject: Re: [pve-devel] Default cache mode for VM hard drives<br>
><br>
><br>
> Hi Cesar,<br>
><br>
><br>
> Out of sync with cache=directsync happens in very specific cases. Here is the description of one of them: <a href="http://forum.proxmox.com/threads/18259-KVM-on-top-of-DRBD-and-out-of-sync-long-term-investigation-results?p=108099#post108099">http://forum.proxmox.com/threads/18259-KVM-on-top-of-DRBD-and-out-of-sync-long-term-investigation-results?p=108099#post108099</a><br>
><br>
><br>
> Best regards,<br>
><br>
> Stanislav<br>
><br>
><br>
><br>
><br>
> On Mon, Apr 13, 2015 at 7:02 PM, Cesar Peschiera <<a href="mailto:brain@click.com.py">brain@click.com.py</a>> wrote:<br>
><br>
> Hi to all<br>
><br>
> I have been using directsync in my VMs with DRBD 8.4.5 on four nodes (LVM on top of DRBD) for some months now and have never had problems (every Sunday, an automated system verifies all DRBD storages).<br>
><br>
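> In case it helps, this is roughly how I would set the cache mode on a disk and schedule the Sunday verify (the VM id, storage name, volume and schedule are only examples):<br>
><br>
> qm set 100 -virtio0 drbd-lvm:vm-100-disk-1,cache=directsync<br>
><br>
> # /etc/cron.d/drbd-verify (every Sunday at 03:00)<br>
> 0 3 * * 0  root  /sbin/drbdadm verify all<br>
><br>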
> These are the package versions on my PVE nodes:<br>
><br>
> On one pair of nodes:<br>
> Shell# pveversion -v<br>
> proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)<br>
> pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)<br>
> pve-kernel-2.6.32-27-pve: 2.6.32-121<br>
> pve-kernel-3.10.0-5-pve: 3.10.0-19<br>
> pve-kernel-2.6.32-28-pve: 2.6.32-124<br>
> pve-kernel-2.6.32-29-pve: 2.6.32-126<br>
> pve-kernel-2.6.32-34-pve: 2.6.32-139<br>
> lvm2: 2.02.98-pve4<br>
> clvm: 2.02.98-pve4<br>
> corosync-pve: 1.4.7-1<br>
> openais-pve: 1.1.4-3<br>
> libqb0: 0.11.1-2<br>
> redhat-cluster-pve: 3.2.0-2<br>
> resource-agents-pve: 3.9.2-4<br>
> fence-agents-pve: 4.0.10-1<br>
> pve-cluster: 3.0-15<br>
> qemu-server: 3.3-3<br>
> pve-firmware: 1.1-3<br>
> libpve-common-perl: 3.0-19<br>
> libpve-access-control: 3.0-15<br>
> libpve-storage-perl: 3.0-25<br>
> pve-libspice-server1: 0.12.4-3<br>
> vncterm: 1.1-8<br>
> vzctl: 4.0-1pve6<br>
> vzprocps: 2.0.11-2<br>
> vzquota: 3.1-2<br>
> pve-qemu-kvm: 2.1-10<br>
> ksm-control-daemon: 1.1-1<br>
> glusterfs-client: 3.5.2-1<br>
><br>
> On the other pair of nodes:<br>
> Shell# pveversion -v<br>
> proxmox-ve-2.6.32: 3.3-139 (running kernel: 3.10.0-5-pve)<br>
> pve-manager: 3.3-5 (running version: 3.3-5/bfebec03)<br>
> pve-kernel-3.10.0-5-pve: 3.10.0-19<br>
> pve-kernel-2.6.32-34-pve: 2.6.32-139<br>
> lvm2: 2.02.98-pve4<br>
> clvm: 2.02.98-pve4<br>
> corosync-pve: 1.4.7-1<br>
> openais-pve: 1.1.4-3<br>
> libqb0: 0.11.1-2<br>
> redhat-cluster-pve: 3.2.0-2<br>
> resource-agents-pve: 3.9.2-4<br>
> fence-agents-pve: 4.0.10-1<br>
> pve-cluster: 3.0-15<br>
> qemu-server: 3.3-5 <- custom build by Alexandre<br>
> pve-firmware: 1.1-3<br>
> libpve-common-perl: 3.0-19<br>
> libpve-access-control: 3.0-15<br>
> libpve-storage-perl: 3.0-25<br>
> pve-libspice-server1: 0.12.4-3<br>
> vncterm: 1.1-8<br>
> vzctl: 4.0-1pve6<br>
> vzprocps: 2.0.11-2<br>
> vzquota: 3.1-2<br>
> pve-qemu-kvm: 2.2-2 <- custom build by Alexandre<br>
> ksm-control-daemon: 1.1-1<br>
> glusterfs-client: 3.5.2-1 <br>
</p>