<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Thu, Nov 20, 2014 at 10:49 AM, Cesar Peschiera <span dir="ltr"><<a href="mailto:brain@click.com.py" target="_blank">brain@click.com.py</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class=""><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Cache=none means no host cache, but the backend cache is still in use. In the case<br>
of DRBD this is a buffer in DRBD, so O_DIRECT returns OK when data<br>
reaches this buffer, not the RAID cache.<br>
</blockquote>
<br></span>
Excuse me please if I intervene in this conversation, but as I understand it, if the data is in a DRBD buffer, then DRBD must know that there is data to replicate, so the problem is obviously not in the upper layers (KVM, any buffer in RAM controlled by some software, etc.), and the DRBD buffer should therefore be tuned as appropriate.<br>
<br>
Moreover, DRBD has several web pages that explain in great detail how to tune many things, including the configuration of its buffers to avoid data loss, with examples both with and without a RAID controller in the middle. So the software in the upper layers can do nothing about it, since DRBD takes control of the data as well as of its own buffer.<br>
<br>
</blockquote></div><br>If we enable integrity checking in DRBD (DRBD will compare checksums for all blocks prior to committing to the backend) while using cache=none, then we get this kind of message from time to time:<br>block drbd0: Digest mismatch, buffer modified by upper layers during write: 25715616s +4096<br><br></div><div class="gmail_extra">Stanislav</div><div class="gmail_extra"><br></div></div>
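<div class="gmail_extra">For reference, the integrity checking described above is enabled through the <code>data-integrity-alg</code> option in the <code>net</code> section of a DRBD resource. A minimal sketch follows; the resource name <code>r0</code> and the digest algorithm choice are illustrative assumptions, not taken from this thread:<br>
<pre>
# Hypothetical fragment of /etc/drbd.d/r0.res (resource name is an example).
resource r0 {
  net {
    # DRBD checksums every block before and after transfer; a mismatch
    # produces the "Digest mismatch, buffer modified by upper layers
    # during write" kernel message quoted above.
    data-integrity-alg sha1;
  }
}
</pre>
Note that the DRBD documentation describes this option as a debugging aid with a noticeable performance cost, so it is typically not left enabled in production.</div>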