[pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs turning off

Cesar Peschiera brain at click.com.py
Tue Jan 6 05:41:49 CET 2015


Many thanks, Alexandre! That is the rule I had been searching for for a long
time; I will add it to the rc.local file.

Moreover, if you can, since I need to permit multicast for some Windows
server VMs, for workstations on the local network, and for the PVE nodes,
can you show me the configuration of your managed switch in terms of IGMP
snooping and querier? (Dell managed switches have configurations very
similar to Cisco.) I have no hands-on experience with this exercise and
need a model as a starting point.
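
For reference, on a Cisco-like CLI the kind of settings in question would
look roughly like this (a hypothetical sketch, not taken from a real
configuration; exact commands vary per switch model, so verify against the
Dell manual):

! hypothetical sketch: enable IGMP snooping and the IGMP querier globally
ip igmp snooping
ip igmp snooping querier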

I guess that with my Dell manuals and your configuration as a reference, I
will be able to do it well.

Best regards
Cesar

----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com>
To: "Cesar Peschiera" <brain at click.com.py>
Cc: "dietmar" <dietmar at proxmox.com>; "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Tuesday, January 06, 2015 12:37 AM
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs
turning off


>>And about your suggestion:
>>-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type
>>MULTICAST -m udp --dport 5404:5405 -j RETURN

Note that this is the rule for HOST-IN; if you want to adapt it, you can
do:

-A FORWARD -s yournetwork/24 -p udp -m addrtype --dst-type MULTICAST -m
udp --dport 5404:5405 -j DROP

>>1) Will such a rule keep the cluster communication away from the VMs?
>>2) Will it leave the normal use of the IGMP protocol by the Windows
>>systems in the VMs unaffected?

This will block multicast traffic on UDP ports 5404:5405 (the corosync
default ports) coming from your source network.
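
For example, to watch that traffic on a node before and after adding the
rule (a minimal check, assuming tcpdump is installed and vmbr0 is your
bridge):

# show corosync multicast packets crossing the bridge
tcpdump -n -i vmbr0 udp portrange 5404-5405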

>>3) If both answers are yes, where should I put the rule that you
>>suggest?

Currently it's not possible to do this with the Proxmox firewall,
but you can add it in rc.local, for example:

iptables -A FORWARD -s yournetwork/24 -p udp -m addrtype --dst-type
MULTICAST -m udp --dport 5404:5405 -j DROP

The Proxmox firewall doesn't override custom rules.
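
For example, /etc/rc.local could look roughly like this (a sketch, assuming
a Debian-style rc.local that runs at boot; replace yournetwork/24 with your
real subnet):

#!/bin/sh -e
# keep corosync multicast (udp 5404:5405) from being forwarded to the VMs
iptables -A FORWARD -s yournetwork/24 -p udp -m addrtype --dst-type \
  MULTICAST -m udp --dport 5404:5405 -j DROP
exit 0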


----- Original Message -----
From: "Cesar Peschiera" <brain at click.com.py>
To: "aderumier" <aderumier at odiso.com>, "dietmar" <dietmar at proxmox.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Tuesday, January 6, 2015 00:09:17
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs
turning off

Hi all,

I recently verified at the company (with tcpdump) that IGMP is necessary
for the VMs. The company runs Windows Servers as VMs and several Windows
workstations on the local network, so I can tell you that I need the IGMP
protocol enabled for some VMs so that the company's Windows systems work
perfectly.

And about your suggestion:
-A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type
MULTICAST -m udp --dport 5404:5405 -j RETURN

I would like to ask some questions:
1) Will such a rule keep the cluster communication away from the VMs?
2) Will it leave the normal use of the IGMP protocol by the Windows systems
in the VMs unaffected?
3) If both answers are yes, where should I put the rule that you suggest?


----- Original Message ----- 
From: "Alexandre DERUMIER" <aderumier at odiso.com>
To: "dietmar" <dietmar at proxmox.com>
Cc: "pve-devel" <pve-devel at pve.proxmox.com>
Sent: Monday, January 05, 2015 6:18 AM
Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs
turning off


>>>The following rule on your PVE nodes should prevent IGMP packets from
>>>flooding your bridge:
>>>iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP
>>>
>>>If something goes wrong, you can remove the rule this way:
>>>iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP
>
> Just be careful: it'll block all IGMP, so if you need multicast inside
> your VMs, it'll block that too.
>
> Currently, we have a default IN|OUT rule for host communication:
>
> -A PVEFW-HOST-IN -s yournetwork/24 -p udp -m addrtype --dst-type
> MULTICAST -m udp --dport 5404:5405 -j RETURN
> to open multicast between nodes.
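>
> For reference, one way to check that this default rule is present on a
> node (assuming the standard iptables-save tool) would be:
>
> # list the multicast-related rules in the filter table
> iptables-save -t filter | grep MULTICAST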
>
> But indeed, currently, in the Proxmox firewall, we can't define a global
> rule in FORWARD.
>
>
>
>
> @Dietmar: maybe we can add a default drop rule in -A PVEFW-FORWARD, to
> drop multicast traffic from the host?
>
> Or maybe better, allow creating rules at the datacenter level, and put
> them in -A PVEFW-FORWARD?
>
>
>
> ----- Original Message -----
> From: "datanom.net" <mir at datanom.net>
> To: "pve-devel" <pve-devel at pve.proxmox.com>
> Sent: Sunday, January 4, 2015 03:34:57
> Subject: Re: [pve-devel] Quorum problems with Intel 10 Gb/s NICs and VMs
> turning off
>
> On Sat, 3 Jan 2015 21:32:54 -0300
> "Cesar Peschiera" <brain at click.com.py> wrote:
>
>>
>> Now I have IGMP snooping disabled on the switch, but I want to avoid
>> flooding the entire VLAN and the VMs.
>>
> The following rule on your PVE nodes should prevent IGMP packets from
> flooding your bridge:
> iptables -t filter -A FORWARD -i vmbr0 -p igmp -j DROP
>
> If something goes wrong, you can remove the rule this way:
> iptables -t filter -D FORWARD -i vmbr0 -p igmp -j DROP
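>
> To confirm whether the rule is currently loaded, a standard iptables
> check would be:
>
> # print the FORWARD rules in rule-spec form and filter for igmp
> iptables -t filter -S FORWARD | grep igmp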
>
> PS. Your SPF for click.com.py is configured incorrectly:
> Received-SPF: softfail (click.com.py ... _spf.copaco.com.py: Sender is
> not authorized by default to use 'brain at click.com.py' in 'mfrom'
> identity, however domain is not currently prepared for false failures
> (mechanism '~all' matched)) receiver=mail1.copaco.com.py;
> identity=mailfrom; envelope-from="brain at click.com.py"; helo=gerencia;
> client-ip=190.23.61.163
> -- 
> Hilsen/Regards
> Michael Rasmussen
>
> Get my public GnuPG keys:
> michael <at> rasmussen <dot> cc
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
> mir <at> datanom <dot> net
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
> mir <at> miras <dot> org
> http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
> -------------------------------------------------------------- 
> /usr/games/fortune -es says:
> Why does a hearse horse snicker, hauling a lawyer away?
> -- Carl Sandburg
>
> _______________________________________________
> pve-devel mailing list
> pve-devel at pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



