
Bug #9654

Bug #10288: Fix newly identified issues to make our test suite more robust and faster

"IPv4 TCP non-Tor Internet hosts were contacted" during the test suite

Added by kytv almost 4 years ago. Updated about 3 years ago.

Status: Resolved
Priority: Elevated
Assignee: -
Category: Test suite
Target version:
Start date: 06/29/2015
Due date:
% Done: 100%
Feature Branch: test/9521-chutney
Type of work: Wait
Blueprint:
Starter:
Affected tool:

Description

Working on #9518 I ran into the following:

Full network capture available at: /home/kytv/git/tails/tmp/torified_gnupg_sniffer.pcap-2015-06-21T03:54:00+00:00
      The following IPv4 TCP non-Tor Internet hosts were contacted:
      93.104.209.61 (RuntimeError)
      /home/kytv/git/tails/features/support/helpers/firewall_helper.rb:115:in `assert_no_leaks'
      /home/kytv/git/tails/features/support/hooks.rb:152:in `After'
Scenario failed at time 01:25:32

Host is listed here


Related issues

Related to Tails - Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak Resolved 02/26/2015
Related to Tails - Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite Rejected 07/29/2015
Blocked by Tails - Feature #9521: Use the chutney Tor network simulator in our test suite Resolved 04/15/2016

History

#1 Updated by kytv almost 4 years ago

  • Target version set to Tails_1.5

#2 Updated by intrigeri almost 4 years ago

How can Tor possibly connect to a relay that's not in the consensus we use for this test? Perhaps this test isn't looking at (all?) the files that Tor actually uses?

#3 Updated by intrigeri almost 4 years ago

This ticket needs an assignee.

#4 Updated by anonym almost 4 years ago

intrigeri wrote:

How can Tor possibly connect to a relay that's not in the consensus we use for this test? Perhaps this test isn't looking at (all?) the files that Tor actually uses?

Seems like an instance of #8961. Hmm. Perhaps something that changed semi-recently in Tor makes this relevant after all.

#5 Updated by anonym almost 4 years ago

Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some python script based on stem? I'm not sure if that would make things more robust, or less so.

#6 Updated by intrigeri almost 4 years ago

Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some python script based on stem? I'm not sure if that would make things more robust, or less so.

Doesn't stem need a running tor with an up-to-date consensus? If so, which tor do you want to query, if not the one on the system under test? (By the way, querying the system under test's running tor with stem could be an improvement over how things currently work -- we already trust that tor's knowledge of the Tor network anyway.)
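
For illustration, a minimal sketch of what querying a running tor over its control port with stem could look like. The control port address (127.0.0.1:9051), the authentication method, and the idea of comparing a flagged IP against the router statuses are all assumptions made here for the example, not how the test suite currently works:

  # Sketch only: assumes a tor ControlPort reachable on 127.0.0.1:9051
  # with cookie (or no) authentication.
  from stem.control import Controller

  def known_relay_ips():
      """Return the set of relay IP addresses the running tor knows about."""
      with Controller.from_port(address='127.0.0.1', port=9051) as controller:
          controller.authenticate()  # password auth would need an argument
          return {status.address for status in controller.get_network_statuses()}

  if __name__ == '__main__':
      suspect = '93.104.209.61'  # hypothetical IP flagged by the leak check
      print('%s is a known relay: %s' % (suspect, suspect in known_relay_ips()))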

#7 Updated by intrigeri almost 4 years ago

  • Related to Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite added

#8 Updated by intrigeri almost 4 years ago

  • Status changed from New to Confirmed
  • Assignee set to kytv
  • Target version changed from Tails_1.5 to Tails_1.6

anonym, kytv: please find an assignee and set a suitable milestone.

#9 Updated by kytv almost 4 years ago

  • Assignee changed from kytv to anonym
  • QA Check set to Info Needed

Do you have any ideas on how to tackle this? (I have 0 familiarity with stem).

#10 Updated by intrigeri almost 4 years ago

(I have 0 familiarity with stem).

FYI I don't think that's related to stem.

#11 Updated by anonym almost 4 years ago

intrigeri wrote:

Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some python script based on stem? I'm not sure if that would make things more robust, or less so.

Doesn't stem need a running tor with an up-to-date consensus?

For directory fetches, I think not: https://stem.torproject.org/api/descriptor/remote.html
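
As an example, a minimal sketch using that remote descriptor API with a recent stem: no locally running tor is needed, though it does contact the directory authorities/mirrors over the network. Collecting the relay addresses into a set is just an assumption about how we would then use the consensus:

  # Sketch only: fetches a network status consensus directly from the
  # directory authorities/mirrors, without a locally running tor.
  import stem.descriptor.remote

  def consensus_relay_ips():
      """Return the set of relay IP addresses in a freshly fetched consensus."""
      return {router.address for router in stem.descriptor.remote.get_consensus()}

  if __name__ == '__main__':
      print('%d relays in the fetched consensus' % len(consensus_relay_ips()))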

#12 Updated by anonym almost 4 years ago

anonym wrote:

Perhaps we should change strategy: instead of getting the consensus from the system under test, perhaps we should fetch it using some python script based on stem? I'm not sure if that would make things more robust, or less so.

I guess we could get the most stable situation by using stem to check if the results we get are false positives.

#13 Updated by anonym almost 4 years ago

  • Assignee changed from anonym to kytv
  • Target version changed from Tails_1.6 to Tails_1.7

Would you like to give this a try for Tails 1.7? IIRC you've said that you know Python well, so making a stem-based helper script (that we can call from inside Ruby) shouldn't be very hard. stem has excellent docs!
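
A rough sketch of what such a helper could look like, purely as an assumption about the interface: it takes the suspect IP addresses as command-line arguments, fetches a consensus with stem, and prints only those that are not known relays, so the Ruby side could treat any output as a confirmed leak. Neither the script name nor its interface exists in the test suite; this is hypothetical:

  #!/usr/bin/env python
  # Hypothetical helper sketch: check_non_relays.py IP [IP ...]
  # Prints every argument that is NOT listed as a relay in a fresh consensus.
  import sys

  import stem.descriptor.remote

  def main():
      relay_ips = {router.address for router in stem.descriptor.remote.get_consensus()}
      for ip in sys.argv[1:]:
          if ip not in relay_ips:
              print(ip)

  if __name__ == '__main__':
      main()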

#14 Updated by kytv over 3 years ago

  • QA Check changed from Info Needed to Dev Needed

anonym wrote:

Would you like to give this a try for Tails 1.7? IIRC you've said that you know Python well, so making a stem-based helper script (that we can call from inside Ruby) shouldn't be very hard. stem has excellent docs!

No, I know only "a bit" of Python, but maybe I can make this happen. At the very least I can try.

#15 Updated by kytv over 3 years ago

  • Target version changed from Tails_1.7 to Tails_1.8

#16 Updated by anonym over 3 years ago

  • Related to Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak added

#17 Updated by anonym over 3 years ago

  • Related to deleted (Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite)

#18 Updated by anonym over 3 years ago

  • Related to deleted (Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak)

#19 Updated by anonym over 3 years ago

  • Parent task set to #10288

#20 Updated by anonym over 3 years ago

  • Related to Bug #8961: The automated test suite doesn't fetch Tor relays from unverified-microdesc-consensus.bak added

#21 Updated by anonym over 3 years ago

  • Related to Bug #9812: "IPv4 non-TCP Internet hosts were contacted" during the test suite added

#23 Updated by anonym over 3 years ago

  • Assignee changed from kytv to anonym

#24 Updated by anonym over 3 years ago

  • Target version changed from Tails_1.8 to 246

#25 Updated by kytv over 3 years ago

I'm seeing this very, very frequently. :(

      The following IPv4 TCP non-Tor Internet hosts were contacted:
      82.71.246.79 (RuntimeError)

Or maybe that's a consequence of re-using a several-day-old snapshot. Hmm…

#26 Updated by kytv over 3 years ago

kytv wrote:

I'm seeing this very, very frequently. :(

[...]

Or maybe that's a consequence of re-using a several-day-old snapshot. Hmm…

I trashed all of my *.memstate files to test this theory. (It may be evident that I have no idea how the consensus works)

#27 Updated by sajolida over 3 years ago

  • Target version changed from 246 to Tails_2.0

#28 Updated by intrigeri over 3 years ago

Just saw this with 109.230.231.166, which is currently https://globe.torproject.org/#/relay/F03A37ADE9366BC5A5899DD7BB0B06AF2CB0B952 (which Globe reports as having been down for 2 hours and 42 minutes).

#29 Updated by anonym over 3 years ago

  • Target version changed from Tails_2.0 to Tails_2.2

#30 Updated by anonym over 3 years ago

  • Category set to Test suite

Setting this category, since I don't think it matters otherwise. I mean, Tor apparently happily uses these routers. It could be a Tor bug, but more likely Tor just doesn't keep the state files up-to-date. Perhaps shutting Tor down (or HUP-ing it?) before looking at the consensus would flush it?
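
If we wanted to try the HUP idea, a minimal sketch of sending that signal over the control port with stem (the control port on 9051 is an assumption; whether HUP actually flushes the consensus-related state files to disk is exactly the open question above):

  # Sketch only: sends tor the RELOAD signal (equivalent to SIGHUP) over an
  # assumed control port on 9051; whether this flushes the consensus-related
  # state files to disk is unverified.
  from stem import Signal
  from stem.control import Controller

  with Controller.from_port(port=9051) as controller:
      controller.authenticate()
      controller.signal(Signal.RELOAD)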

#31 Updated by anonym over 3 years ago

  • Priority changed from Normal to Elevated
  • Target version changed from Tails_2.2 to Tails_2.3

Will look at this at the same time as #8961 and #10238.

#32 Updated by anonym over 3 years ago

https://lists.torproject.org/pipermail/tor-dev/2016-March/010588.html

- moria1 (source 128.31.0.39 vs. consensus 128.31.0.34)
- longclaw (source 199.254.238.52 vs. consensus 199.254.238.53)

I.e. these two authorities should be a problem for us, since they're listed with the IP addresses from the sources and not the actual ones. Hm. I find it worrying that we don't see these two cause regular test suite failures.

#33 Updated by anonym about 3 years ago

Once we use Chutney (#9521), I think this will be solved.

But #9654#note-32 is still worrying me a bit.

#34 Updated by anonym about 3 years ago

  • Target version changed from Tails_2.3 to Tails_2.4

#35 Updated by anonym about 3 years ago

  • Status changed from Confirmed to In Progress
  • % Done changed from 0 to 50
  • Feature Branch set to test/9521-chutney
  • Type of work changed from Code to Wait

anonym wrote:

Once we use Chutney (#9521), I think this will be solved.

I'm quite convinced about this, so I'll go with it.

#36 Updated by anonym about 3 years ago

  • Blocked by Feature #9521: Use the chutney Tor network simulator in our test suite added

#37 Updated by intrigeri about 3 years ago

I guess the next step is to create a branch that unmarks this test as fragile, and see how it fares on Jenkins over a few days.

#38 Updated by anonym about 3 years ago

  • Status changed from In Progress to Fix committed
  • Assignee deleted (anonym)
  • % Done changed from 50 to 100
  • QA Check changed from Dev Needed to Pass

intrigeri wrote:

I guess the next step is to create a branch that unmarks this test as fragile, and see how it fares on Jenkins over a few days.

I'm truly convinced that Chutney (#9521) solved this, so I'm confident that we can just close this. I'll reopen it if I see it reappear.

#39 Updated by anonym about 3 years ago

  • Status changed from Fix committed to Resolved
