
Bug #16148

ICMPv6 leaks detected by test suite

Added by bertagaz 6 months ago. Updated 7 days ago.

Status:
Confirmed
Priority:
Elevated
Assignee:
-
Category:
Test suite
Target version:
-
Start date:
11/23/2018
Due date:
% Done:

0%

QA Check:
Feature Branch:
Type of work:
Code
Blueprint:
Starter:
Affected tool:

Description

While working on #14596, FirewallAssertionFailedErrors were sometimes thrown by the test suite.

It seems some ICMPv6 multicast "neighbor solicitation" and "router solicitation" packets are caught by the test suite's firewall leak detector. Whether these exceptions are legitimate or not is unclear. It happens only in the "Recovering in offline mode after..." scenario of the "Additional Software" feature, which is the longest one and boots Tails three times.

I'm attaching the logs from Jenkins run 75 which exposes this bug.


Related issues

Related to Tails - Bug #11521: The check_tor_leaks hook is fragile Confirmed 06/10/2016
Blocks Tails - Feature #16209: Core work: Foundations Team Confirmed 03/22/2019

History

#1 Updated by intrigeri 6 months ago

#2 Updated by nodens 6 months ago

Hi,

If there is no IPv6 default gateway, it's expected for the host to send ICMPv6 multicast packets for router/neighbor discovery. This is needed in order to reach the Tor network over IPv6, unless DHCPv6 is used - but that's rarely the case outside of corporate networks.

If there is also IPv4 connectivity then of course it's not mandatory, but we should treat those the same as DHCP requests: as long as it's sent to a multicast address, it won't be routed to the outside, only replied to by gateways. It might be used as a fingerprinting means (you can tell it's from a Linux OS), but only locally.

Legit and unavoidable ICMPv6 types, outgoing from the host, would be:
  • type 133 (Router Solicitation), to ff02::2 (the all-routers multicast address on link-local) and to the default gateway on the "local" network if there already is one.
  • type 135 (Neighbor Solicitation), to ff02::1:ff00:0/104 (the solicited-node multicast address), fe80::/10 (link-local) and the local subnet.
  • type 136 (Neighbor Advertisement), in response to a Neighbor Solicitation message. This is usually handled nicely by a stateful ruleset (i.e. no explicit rule needed, just allowing ESTABLISHED,RELATED; it's never in the NEW state).

There might be others; those are the most used ones and thus the ones I can remember easily. If needed, I can research a bit more to provide a comprehensive list of needed outgoing ICMPv6 packets. Most rulesets I've seen allow outgoing ICMPv6 traffic without restriction on the link-local (fe80::/10) prefix.
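
To make that concrete, here is a minimal sketch of ip6tables rules allowing just the types listed above, restricted to multicast / link-local destinations. This is only an illustration, not Tails' actual ruleset:

# Router Solicitation (type 133) to the all-routers multicast address
ip6tables -A OUTPUT -p icmpv6 --icmpv6-type 133 -d ff02::2 -j ACCEPT
# Neighbor Solicitation (type 135) to solicited-node multicast and link-local
ip6tables -A OUTPUT -p icmpv6 --icmpv6-type 135 -d ff02::1:ff00:0/104 -j ACCEPT
ip6tables -A OUTPUT -p icmpv6 --icmpv6-type 135 -d fe80::/10 -j ACCEPT
# Neighbor Advertisement (type 136) replies are covered by a stateful rule
ip6tables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT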

Although it's not a big issue, as it seems really unlikely to encounter an IPv6-only network nowadays, you never know... I can indeed imagine a situation where IPv4 would be unusable because Tor is filtered, while it would work over IPv6. So we should make sure it's actually working. :)

Cheers,

#3 Updated by geb 6 months ago

Hi,

nodens wrote:

[....]

There might be others; those are the most used ones and thus the ones I can remember easily. If needed, I can research a bit more to provide a comprehensive list of needed outgoing ICMPv6 packets. Most rulesets I've seen allow outgoing ICMPv6 traffic without restriction on the link-local (fe80::/10) prefix.

https://tools.ietf.org/html/rfc4890#section-4.4 :-)

However, it may not be the only topic of this bug. As far as I understand it (please correct me if I am wrong), there are several different questions:
- Is this traffic legit? Yes it is; as you said (thanks!), those packets are just the equivalent of DHCP and ARP requests/responses, so they are fully legit for a given host.
- Is Tails supposed to emit this kind of traffic? I don't think so, as IPv6 should be filtered by the firewall: https://git-tails.immerda.ch/tails/plain/config/chroot_local-includes/etc/ferm/ferm.conf
- Why does Tails emit this traffic? It is normal behaviour for an IPv6-enabled host to emit this kind of traffic when an interface goes up (or up/down/up), which this test seems to be designed to trigger.
- Why is this traffic not filtered? It may depend on how the firewall rules are applied: for example, if they are applied per interface (which seems to be the case) and the update is triggered by NetworkManager (https://git-tails.immerda.ch/tails/plain/config/chroot_local-includes/etc/NetworkManager/dispatcher.d/00-firewall.sh), it is not fully unexpected that the kernel sends those packets before the firewall rules are updated.
- How to filter this traffic? Maybe /etc/network/if-pre-up.d is triggered before the interface gets up. Maybe using sysctl (net.ipv6.conf.*.disable_ipv6), or (in my opinion, the better option) avoiding per-interface iptables rules and configuring ferm accordingly if possible, or maybe disabling IPv6 directly from the kernel boot options (see the sketch after this list).
- What would be the practical impact of this bug? If somebody adds a network adapter (after the firewall is first configured), it may emit ICMPv6 Router Solicitation / Neighbor Solicitation packets. I don't think there is any practical impact, as long as no other packets that are supposed to be filtered are emitted in this (really short) timeframe.
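
To illustrate the sysctl and kernel boot option mentioned in the "How to filter" point above (this is only a sketch of the mechanisms, not a recommendation on which one Tails should pick):

# disable IPv6 on all interfaces at runtime via sysctl
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
# or disable IPv6 entirely via a kernel boot option added to the kernel command line:
#   ipv6.disable=1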

If needed I will be happy to investigate a bit more, maybe with some clarification about what the test precisely does.

#4 Updated by geb 6 months ago

Hi,

geb wrote:

- Why is this traffic not filtered? It may depend on how the firewall rules are applied: for example, if they are applied per interface (which seems to be the case) and the update is triggered by NetworkManager (https://git-tails.immerda.ch/tails/plain/config/chroot_local-includes/etc/NetworkManager/dispatcher.d/00-firewall.sh), it is not fully unexpected that the kernel sends those packets before the firewall rules are updated.
- How to filter this traffic? Maybe /etc/network/if-pre-up.d is triggered before the interface gets up. Maybe using sysctl (net.ipv6.conf.*.disable_ipv6), or (in my opinion, the better option) avoiding per-interface iptables rules and configuring ferm accordingly if possible, or maybe disabling IPv6 directly from the kernel boot options.

I was wrong: there is a -A OUTPUT -j REJECT --reject-with icmp6-port-unreachable line, which should reject this traffic on every interface.
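
(For reference, one quick way to double-check on a running system that this rule ends up in the live ruleset, run as root:)

# list the OUTPUT chain in iptables-save format; the final REJECT rule should show up at the end
ip6tables -S OUTPUT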

Then, my best guess for why the packets aren't filtered would be the moment ferm is restarted when an interface goes up (by https://git-tails.immerda.ch/tails/plain/config/chroot_local-includes/etc/NetworkManager/dispatcher.d/00-firewall.sh).

I gave a quick look at ferm; it seems to use ip(6)tables-restore / ip(6)tables-save by default. I'm not sure whether that is atomic or whether it could leave the interface unfiltered for a really short while.
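
For illustration, this is roughly what an ip(6)tables-restore based reload looks like: everything between *filter and COMMIT is submitted as one table replacement. A minimal sketch, not Tails' actual ruleset:

# rules.v6 -- a complete filter table, applied as a single replacement by ip6tables-restore
cat > rules.v6 <<'EOF'
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT DROP [0:0]
-A OUTPUT -o lo -j ACCEPT
-A OUTPUT -j REJECT --reject-with icmp6-port-unreachable
COMMIT
EOF
ip6tables-restore < rules.v6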

And maybe ferm is not the problem...

#5 Updated by intrigeri 6 months ago

  • Target version changed from Tails_3.11 to Tails_3.12

#6 Updated by intrigeri 5 months ago

#7 Updated by intrigeri 5 months ago

#8 Updated by hefee 5 months ago

  • Assignee set to hefee

#9 Updated by hefee 4 months ago

  • Assignee changed from hefee to bertagaz
  • QA Check set to Info Needed

So far I can see there are only two packages that get transmitted by Tails. The others are successfully stopped by ferm.

In the journal you can see entries like this one, showing that such packets get dropped as we expect:

Nov 21 14:37:49 amnesia kernel: Dropped outbound packet: IN= OUT=eth0 SRC=fe80:0000:0000:0000:5254:00ff:fe42:37d2 DST=ff02:0000:0000:0000:0000:0000:0000:0002 LEN=56 TC=0 HOPLIMIT=255 FLOWLBL=0 PROTO=ICMPv6 TYPE=133 CODE=0
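
(On a running system, such entries can be pulled from the kernel log with, for example:)

# show kernel log messages about dropped outbound packets
journalctl -k | grep 'Dropped outbound packet'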

Unfortunately the pcap file only has time deltas as timestamps, so I can't link the pcap with the ferm start. Can you give me the timestamp when the pcap started? Then we should see when the packages got transmitted and were not caught.

@geb, @nodens:
I think we should be able to control the output on all interfaces and not just say "yeah well, these packages that get sent are not that bad". If Tails starts to support IPv6, we will need to enable these packages. But currently Tails is not ready for IPv6, so we should disable everything and also try to catch those two packages...

#10 Updated by intrigeri 4 months ago

  • Assignee changed from bertagaz to hefee

Can you give me the timestamp when the pcap started? Then we should see when the packages got transmitted and were not caught.

AFAICT all the artifacts we have are attached to this ticket. I think this info is in the debug log. If it's not explicit, let me know, I'll check our test suite code to tell you which debug log line indicates the start of the capture.

(Also, you mean packets, not packages, right? :)

#11 Updated by hefee 4 months ago

  • Assignee changed from hefee to intrigeri

intrigeri wrote:

AFAICT all the artifacts we have are attached to this ticket. I think this info is in the debug log. If it's not explicit, let me know, I'll check our test suite code to tell you which debug log line indicates the start of the capture.

Sorry, my mistake: the pcap has absolute timestamps, but normally only seconds from the start are displayed. Then I can match the journal and the pcap.
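
(For the record, absolute timestamps can be printed straight from the pcap, e.g. with tcpdump; the file name here is just a placeholder:)

# print absolute date/time per packet instead of seconds since the start of the capture
tcpdump -r capture.pcap -tttt -n icmp6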

The interesting packets:

15:33:57 - Router Solicitation
15:33:57 - Neighbor Solicitation

I can match the pcap with the journal by the dhcp-client packets:

15:37:44 - DHCP Request (pcap)
Nov 21 14:37:46 amnesia dhclient[1137]: DHCPREQUEST of 10.2.1.45 on eth0 to 255.255.255.255 port 67

-> We have around a 1h 2s difference between the journal and the pcap. But the saved journal starts at 14:34:28 UTC, and we are interested in what was happening at 14:33:55 UTC. So maybe it is a shutdown issue?

Unfortunately the debug log uses completely different timestamps (seconds from start) and tells me that the test fails at 2:56:48. (It would help if these timestamps were UTC too.) So if I speculate that this is the end time of the journal, I see this scenario triggering the packets:

02:52:09.157565101: execution complete
    And I can open the Additional Software configuration window from the notification 
# features/step_definitions/additional_software_packages.rb:136
02:52:09.158840925: spawning as root: poweroff
    And I shutdown Tails and wait for the computer to power off
# features/step_definitions/common_steps.rb:562
02:52:19.485028828: [log] CLICK on L(1023,384)@S(0)[0,0 1024x768]                                  
02:52:22.893751238: [log]  TYPE " autotest_never_use_this_option blacklist=psmouse #ENTER." 
02:53:46.493521850: calling as root: echo 'hello?'

That would match my thought that the packets are triggered by the previous Tails run and are a shutdown issue.

Can it be that the tcpdump starts along with the start of the scenario, and that reboots do not restart the tcpdump, so only the last journal is saved?

#12 Updated by geb 4 months ago

Hi,

geb wrote:

I gave a quick look at ferm; it seems to use ip(6)tables-restore / ip(6)tables-save by default. I'm not sure whether that is atomic or whether it could leave the interface unfiltered for a really short while.

And maybe ferm is not the problem...

I wanted to confirm this intuition. It was wrong. I tried flooding the loopback interface (ping6 -f ::1) while continuously restarting ferm (while : ; do ferm /etc/ferm.conf; done;) and was not able to see any packet flowing (I activated inbound logging to log replies, but no request was received).
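
(Another way to watch for anything slipping out during such a restart loop would be to capture on the interface at the same time; the interface name here is just an example:)

# capture outgoing ICMPv6 on the NIC while ferm is restarted in a loop (interrupt with Ctrl-C when done)
tcpdump -i eth0 -n icmp6 &
while : ; do ferm /etc/ferm.conf; done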

So, sorry for the wrong (and thus off-topic) intuition.

#13 Updated by intrigeri 4 months ago

  • Assignee changed from intrigeri to anonym

I won't have time to look into this in time for 3.12, but anonym might have time and could be excited about it. If that does not happen, let's organize the next steps at the FT meeting.

#14 Updated by anonym 4 months ago

  • Target version changed from Tails_3.12 to Tails_3.13

#15 Updated by intrigeri 4 months ago

  • Assignee deleted (anonym)

intrigeri wrote:

I won't have time to look into this in time for 3.12, but anonym might have time and could be excited about it. If that does not happen, let's organize the next steps at the FT meeting.

Did not happen, let's discuss this at the FT meeting.

#17 Updated by intrigeri 3 months ago

  • Related to Bug #11521: The check_tor_leaks hook is fragile added

#18 Updated by intrigeri 3 months ago

Anyone who works on this and wants to investigate the "leftover from previous scenario" hypothesis: see #11521, where I made the very same hypothesis a few years ago.

#19 Updated by intrigeri 3 months ago

  • Category set to Test suite
  • QA Check deleted (Info Needed)

#20 Updated by intrigeri 2 months ago

  • Target version deleted (Tails_3.13)

#21 Updated by intrigeri 2 months ago

#22 Updated by intrigeri 2 months ago

#23 Updated by intrigeri 7 days ago

  • Priority changed from Normal to Elevated

This has caused 3 out of the last 10 test suite runs on the stable branch to fail, which makes me spend an amount of time I can't justify on analyzing such failures (and I would assume that some other folks on the RMs list do the same) => bumping priority. @anonym, looking for your next FT semi-procrastination task, for when you won't be busy with Tor Browser 9 matters?
