Bug #9645

Feature #5288: Run the test suite automatically on autobuilt ISOs

Create at least a second VM for testing ISO images

Added by bertagaz almost 4 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
High
Assignee:
-
Category:
Continuous Integration
Target version:
Start date:
06/25/2015
Due date:
07/15/2015
% Done:

100%

QA Check:
Feature Branch:
Type of work:
Sysadmin
Blueprint:
Starter:
No
Affected tool:

Description

To test our automated test design, we'll need something close to what our final setup will look like. For that, having several isotesters running will help.


Related issues

Blocks Tails - Feature #9486: Support running multiple instances of the test suite in parallel Resolved 06/25/2015

History

#1 Updated by bertagaz almost 4 years ago

  • Parent task set to #5288

#3 Updated by bertagaz almost 4 years ago

  • Blocked by Feature #9399: Extend lizard's storage capacity added

#4 Updated by intrigeri almost 4 years ago

  • Subject changed from Create at least a second isotester to Create at least a second VM for testing ISO images

#5 Updated by bertagaz almost 4 years ago

  • Target version changed from Tails_1.4.1 to Tails_1.5

#6 Updated by intrigeri almost 4 years ago

  • Blocked by deleted (Feature #9399: Extend lizard's storage capacity)

#7 Updated by bertagaz almost 4 years ago

  • Status changed from Confirmed to In Progress
  • % Done changed from 0 to 50

Installed and started to configure the 3 new isotesters. Using virt-clone, it wasn't much more work to set up 3 than 1.

They are waiting for their first successful Puppet agent run; it seems we'll need to upgrade our Jenkins master and slaves to a newer version first, and it's only in Sid at the moment.
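
The cloning step can be sketched roughly as follows. This is a dry run (the `run` wrapper only prints each command instead of executing it), and the guest names are assumptions based on the discussion, not the exact commands used:

```shell
# Dry-run sketch: print the virt-clone invocations instead of executing them.
run() { echo "+ $*"; }

# Clone a freshly installed tester into additional ones (hypothetical names);
# --auto-clone picks new disk image paths and a new MAC address.
for n in 3 4; do
  run virt-clone --original isotester2 --name "isotester$n" --auto-clone
done
```

Note that virt-clone only makes the virtual hardware unique; identity baked into the guest filesystem (host keys, machine-id, random seed) needs separate handling.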

#8 Updated by intrigeri almost 4 years ago

Installed and started to configure the 3 new isotesters.

Yay :)

Using virt-clone, it wasn't much more work to set up 3 than 1.

Does virt-clone make e.g. machine-id, random seeds, and sshd / Puppet TLS key pairs unique?

#9 Updated by bertagaz almost 4 years ago

intrigeri wrote:

Does virt-clone make e.g. machine-id, random seeds, and sshd / Puppet TLS key pairs unique?

It doesn't for the sshd / Puppet TLS part, but it at least saves time on the installation of the base system. I changed these last bits myself. I didn't think about checking the random seed and machine-id, though. Will do.

#10 Updated by intrigeri almost 4 years ago

It doesn't for the sshd / Puppet TLS part, but it at least saves time on the installation of the base system. I changed these last bits myself. I didn't think about checking the random seed and machine-id, though. Will do.

OK, cool. Needless to say, please make sure that this alternate way of setting up a VM is documented :)
My only remaining concern is that it removes one of the rare opportunities to test deploying our manifests from scratch, but oh well, we should find better ways to do so anyway.

#11 Updated by bertagaz almost 4 years ago

intrigeri wrote:

It doesn't for the sshd / Puppet TLS part, but it at least saves time on the installation of the base system. I changed these last bits myself. I didn't think about checking the random seed and machine-id, though. Will do.

OK, cool. Needless to say, please make sure that this alternate way of setting up a VM is documented :)
My only remaining concern is that it removes one of the rare opportunities to test deploying our manifests from scratch, but oh well, we should find better ways to do so anyway.

So that's good: I cloned the freshly installed isotester2, not isotester1, so they will be tested from scratch. :)

#12 Updated by intrigeri over 3 years ago

  • Blocks Feature #9486: Support running multiple instances of the test suite in parallel added

#13 Updated by intrigeri over 3 years ago

  • Due date set to 07/15/2015

#14 Updated by intrigeri over 3 years ago

  • Priority changed from Normal to High

#16 Updated by bertagaz over 3 years ago

  • Target version changed from Tails_1.5 to Tails_1.6

Postponing, even though it will be finished very soon.

#17 Updated by bertagaz over 3 years ago

  • Blocked by Bug #10066: Python-otr removed from Debian testing added

#18 Updated by bertagaz over 3 years ago

  • Blocked by deleted (Bug #10066: Python-otr removed from Debian testing)

#19 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • % Done changed from 50 to 80
  • QA Check set to Ready for QA

OK, I think I've finished this: the 4 isotesters are up and configured, and I'm currently running the test suite on all of them.

I changed their respective machine-ids, installed haveged, wrote some scratch data to the disk to feed their randomness, restarted them so that their random seeds differ, and then created new SSH and Puppet keys (Tor wasn't installed yet). I had to adapt our manifests a bit, and in the end the install went fine.

So if it looks good enough to you, please close this ticket.
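
The per-clone uniqueness steps described above can be sketched like this. It is a dry run (the `run` wrapper only prints the commands), and the exact commands and paths are assumptions, not a record of what was actually executed:

```shell
# Dry-run sketch of making a cloned VM unique; nothing is executed.
run() { echo "+ $*"; }

run rm -f /etc/machine-id
run systemd-machine-id-setup              # regenerate the machine-id
run apt-get install --yes haveged         # help feed the entropy pool
run rm -f /etc/ssh/ssh_host_*             # drop the cloned SSH host keys...
run dpkg-reconfigure openssh-server       # ...and generate fresh ones
run rm -rf /var/lib/puppet/ssl            # the agent will request a new cert
run reboot                                # pick up a fresh random seed
```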

#20 Updated by bertagaz over 3 years ago

Forgot to add the relevant YAML config files, so I've created and configured some XMPP user accounts and created the needed files (local.yml, pidgin.yml, tor.yml, ssh.yml and sftp.yml) on each isotester.

#21 Updated by bertagaz over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Dev Needed

Seems that the isotesters need a slightly bigger rootfs. Will grow them.

#22 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

bertagaz wrote:

Seems that the isotesters need a slightly bigger rootfs. Will grow them.

Done.
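
Growing an LVM-backed guest filesystem typically boils down to something like the following dry-run sketch (the `run` wrapper only prints the commands; the volume group, logical volume, and device names are hypothetical):

```shell
# Dry-run sketch: print the commands instead of executing them.
run() { echo "+ $*"; }

# On the host: grow the logical volume backing the guest's root disk
# (hypothetical VG/LV names).
run lvextend --size +2G /dev/lizard/isotester2-root
# In the guest, once it sees the larger disk: grow the filesystem
# (assuming the filesystem sits directly on the virtual disk).
run resize2fs /dev/vda
```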

#23 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Dev Needed

I've looked at the corresponding Puppet changes. Sounds good! Two questions:

Regarding the firewall changes: any reason not to restrict this to a specific set of ports?

Regarding the *.yml local config files: I see nothing about them in our Puppet stuff. Were they deployed by hand? IMO that stuff belongs to our tails_secrets_jenkins module, or similar.

#25 Updated by bertagaz over 3 years ago

intrigeri wrote:

I've looked at the corresponding Puppet changes. Sounds good! Two questions:

Regarding the firewall changes: any reason not to restrict this to a specific set of ports?

That's because after the initial connection to the master's port 8080, the slave connects back to it on a randomly chosen port. Excerpt from a slave log:

INFO: Locating server among [https://jenkins.tails.boum.org/, http://jenkins.lizard:8080/]
[...]
INFO: Connecting to jenkins.lizard:33967
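
Since the callback port is randomly chosen, one way to restrict the firewall without pinning ports is to filter on the source host instead. A purely hypothetical dry-run sketch (addresses invented for illustration; this is not the actual 46cdf04 change, and the `run` wrapper only prints the commands):

```shell
# Dry-run sketch: print the iptables invocations instead of executing them.
run() { echo "+ $*"; }

# Accept any TCP connection from a known isotester, whatever the port
# (hypothetical address; one rule per tester).
run iptables --append INPUT --protocol tcp --source 192.168.122.11 --jump ACCEPT
```

Alternatively, Jenkins can be configured to use a fixed TCP port for JNLP agents, which would allow an ordinary port-based rule.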

Regarding the *.yml local config files: I see nothing about them in our Puppet stuff. Were they deployed by hand? IMO that stuff belongs to our tails_secrets_jenkins module, or similar.

Right, good idea. I hadn't done it because that Git repository wasn't deployed by our Puppet module. Will do that and put the files in /etc/TailsToaster/
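
Deploying those files from a secrets module might look like the following hedged Puppet sketch; the module layout and file attributes are assumptions based on the discussion, not the actual manifests:

```puppet
# Hypothetical sketch: ship the test suite config files from the
# tails_secrets_jenkins module into /etc/TailsToaster/.
file { '/etc/TailsToaster':
  ensure => directory,
}

file { '/etc/TailsToaster/local.yml':
  ensure  => file,
  owner   => 'jenkins',
  mode    => '0640',
  source  => 'puppet:///modules/tails_secrets_jenkins/TailsToaster/local.yml',
  require => File['/etc/TailsToaster'],
}
```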

#26 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

bertagaz wrote:

intrigeri wrote:

Regarding the *.yml local config files: I see nothing about them in our Puppet stuff. Were they deployed by hand? IMO that stuff belongs to our tails_secrets_jenkins module, or similar.

Right, good idea. I haven't done it because the git wasn't deployed by our puppet module. Will do that and put the files in /etc/TailsToaster/

Done, please review.

#27 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Dev Needed

Done, please review.

Yay!

  • MAX_NEW_TOR_CIRCUIT_RETRIES: 50 was meant as a temporary fix. I'd rather not encode it in our config forever => please drop it.
  • Nitpicking: I find the TailsToaster_config and /etc/TailsToaster names misleading: at the moment these configuration files are not about the TailsToaster VM (the system under test), but instead they configure how the test suite (which controls that VM) works. This might change a bit in the future (e.g. when the test suite's config supports specifying how many cores shall be given to the system under test), but not essentially. Granted, the default name of the temporary directory is misleading too, so well, let's say that's not a blocker.
  • Firewall rules: OK, makes sense; I pushed 46cdf04 on top.

Almost there :)

#28 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Ready for QA

intrigeri wrote:

  • MAX_NEW_TOR_CIRCUIT_RETRIES: 50 was meant as a temporary fix. I'd rather not encode it in our config forever => please drop it.

Done and deployed. Didn't know it was temporary.

  • Nitpicking: I find the TailsToaster_config and /etc/TailsToaster names misleading: at the moment these configuration files are not about the TailsToaster VM (the system under test), but instead they configure how the test suite (which controls that VM) works. This might change a bit in the future (e.g. when the test suite's config supports specifying how many cores shall be given to the system under test), but not essentially. Granted, the default name of the temporary directory is misleading too, so well, let's say that's not a blocker.

Hmmm yes, I get your point. In my mind, TailsToaster is not only the VM's name, but also the codename of the software I had in mind when POC'ing it, hence this situation. :)

  • Firewall rules: OK, makes sense; I pushed 46cdf04 on top.

Good catch, thanks!

#29 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Ready for QA to Info Needed
  • MAX_NEW_TOR_CIRCUIT_RETRIES: 50 was meant as a temporary fix. I'd rather not encode it in our config forever => please drop it.

Done and deployed. Didn't know it was temporary.

Double-checked, looks good!

In my mind, TailsToaster is not only the VM's name, but also the codename of the software I had in mind when POC'ing it, hence this situation. :)

OK, thanks.

One last question: why do all isotesterN have 10GB allocated to /srv? (except cargo-culting what isotester1 had, which is probably no good reason in itself since that one had very specific requirements)

#30 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri

intrigeri wrote:

One last question: why do all isotesterN have 10GB allocated to /srv? (except cargo-culting what isotester1 had, which is probably no good reason in itself since that one had very specific requirements)

Hmmm, because I just copied isotester1's conf? Shall I shrink that to something like 4GB or 5GB?

#31 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz

Hmmm, because I just copied isotester1's conf? Shall I shrink that to something like 4GB or 5GB?

What do you plan to use that filesystem for? Jenkins workspace? If yes, then 1. it should probably be mounted somewhere more appropriate; 2. it needs to be large enough to contain a Git checkout, two ISO images (--iso and --old-iso), and that's all, no?

#32 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri

intrigeri wrote:

Hmmm, because I just copied isotester1's conf? Shall I shrink that to something like 4GB or 5GB?

What do you plan to use that filesystem for? Jenkins workspace?

Yes.

If yes, then
1. it should probably be mounted somewhere more appropriate;

We could mount them on /var/lib/jenkins/, but we could also configure the slaves to use a subdirectory of /srv/ as the workspace.

I guess you'd prefer the former, as it won't require manually configuring the slaves.

2. it needs to be large enough to contain a Git checkout, two ISO images (--iso and --old-iso), and that's all, no?

Yes, hence my proposal to reduce them to something like 4GB or 5GB. That would leave a little room just in case.

#33 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

We could mount them on /var/lib/jenkins/, but we could also configure the slaves to use a subdirectory of /srv/ as the workspace.

I guess you'd prefer the former, as it won't require manually configuring the slaves.

Indeed, unless we have good reasons to use a non-default directory, let's not :)

2. it needs to be large enough to contain a Git checkout, two ISO images (--iso and --old-iso), and that's all, no?

Yes, hence my proposal to reduce them to something like 4GB or 5GB. That would leave a little room just in case.

OK, let's go for 4GB then!
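
As a rough back-of-the-envelope check of that 4GB figure (all sizes below are assumptions in MB, not measured values):

```shell
# Rough, assumed sizes in MB: a Git checkout, two ISO images
# (--iso and --old-iso), plus some slack.
git_checkout=500
iso=1100
headroom=800
total=$(( git_checkout + 2 * iso + headroom ))
echo "${total} MB needed"   # comfortably under a 4 GB filesystem
```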

#34 Updated by bertagaz over 3 years ago

  • Status changed from In Progress to Resolved
  • Assignee deleted (bertagaz)
  • % Done changed from 80 to 100
  • QA Check deleted (Dev Needed)

Closing this ticket, after having moved the /srv/ filesystem to /var/lib/jenkins for all isotesters but isotester1. I left the latter untouched so that the test suite team can keep playing with it. That was the last bit to do here.

I've created #10158 to track this isotester1 specific case and apply the same change later.
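
Moving the mount point amounts to something like this dry-run sketch (the `run` wrapper only prints the commands; fstab editing is elided, and a real migration would also need any pre-existing /var/lib/jenkins content preserved):

```shell
# Dry-run sketch: print the commands instead of executing them.
run() { echo "+ $*"; }

run service jenkins stop        # nothing must hold the workspace open
run umount /srv
# (update the /etc/fstab entry to mount the filesystem on
#  /var/lib/jenkins instead of /srv)
run mount /var/lib/jenkins
run service jenkins start
```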
