Feature #10117

Feature #5288: Run the test suite automatically on autobuilt ISOs

Design how to run our test suite in Jenkins

Added by bertagaz over 3 years ago. Updated over 3 years ago.

Status:
Resolved
Priority:
Elevated
Assignee:
-
Category:
Continuous Integration
Target version:
Start date:
08/28/2015
Due date:
% Done:

100%

QA Check:
Feature Branch:
Type of work:
Research

Description

There are several Jenkins plugins for chaining jobs; we need to survey and compare them to choose the one that best fits the needs described in the blueprint.

Then we'll probably need to test them a bit.

Deliverables:

  • Research and test available solutions
  • Write a blueprint that compares these options
  • Propose a design
  • Lead the discussion to a decision
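For illustration of the job-chaining question these deliverables revolve around, here is a purely hypothetical sketch in Jenkins Job Builder syntax (job names and parameters are invented, and whether to use the ParameterizedTrigger plugin at all is precisely what this ticket is meant to decide): a build job that, on success, triggers a test job and passes its own build number along.

```yaml
# Hypothetical JJB fragment, not a settled design:
- job:
    name: build_Tails_ISO_devel
    publishers:
      - trigger-parameterized-builds:
          - project: test_Tails_ISO_devel
            condition: SUCCESS
            predefined-parameters: |
              UPSTREAMJOB_BUILD_NUMBER=$BUILD_NUMBER
```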

Related issues

Related to Tails - Bug #10068: Upgrade to Jenkins 2.x, using upstream packages In Progress 01/08/2018
Related to Tails - Feature #9486: Support running multiple instances of the test suite in parallel Resolved 06/25/2015
Blocks Tails - Feature #10118: Write library code that maps Jenkins jobs from building to testing Resolved 08/28/2015

History

#1 Updated by bertagaz over 3 years ago

  • Parent task set to #5288

#2 Updated by bertagaz over 3 years ago

  • Blocks Feature #10118: Write library code that maps Jenkins jobs from building to testing added

#3 Updated by bertagaz over 3 years ago

  • Status changed from Confirmed to In Progress
  • % Done changed from 0 to 10
  • Blueprint set to https://tails.boum.org/blueprint/automated_builds_and_tests/jenkins/#index8h2

Wrote down my research on the job chaining and parameter passing through jobs.

#4 Updated by intrigeri over 3 years ago

  • Blueprint changed from https://tails.boum.org/blueprint/automated_builds_and_tests/jenkins/#index8h2 to https://tails.boum.org/blueprint/automated_builds_and_tests/jenkins/#chain

#5 Updated by intrigeri over 3 years ago

  • Starter changed from Yes to No

#6 Updated by intrigeri over 3 years ago

Added some info to the blueprint. I say let's try the native way and if it's too limited for our needs, go for ParameterizedTrigger (keeping in mind that it may make #10068 more of a blocker).

#7 Updated by intrigeri over 3 years ago

  • Related to Bug #10068: Upgrade to Jenkins 2.x, using upstream packages added

#8 Updated by bertagaz over 3 years ago

  • QA Check changed from Info Needed to Dev Needed
  • Type of work changed from Research to Test

intrigeri wrote:

Added some info to the blueprint. I say let's try the native way and if it's too limited for our needs, go for ParameterizedTrigger (keeping in mind that it may make #10068 more of a blocker).

Agree. That's what I was thinking about. I added a possible solution for this job chaining issue and a way to pass parameters between them.

#9 Updated by intrigeri over 3 years ago

I added a possible solution for this job chaining issue and a way to pass parameters between them.

I like it! Not sure if we can export all needed info "At the beginning of the build job", though. E.g. artifact names are not known yet at this stage. We'll see :)

#10 Updated by intrigeri over 3 years ago

  • Description updated (diff)

#12 Updated by bertagaz over 3 years ago

Added a short section at the bottom of the same blueprint about how we can retrieve the ISOs we need for the tests.

#13 Updated by bertagaz over 3 years ago

intrigeri wrote:

I added a possible solution for this job chaining issue and a way to pass parameters between them.

I like it! Not sure if we can export all needed info "At the beginning of the build job", though. E.g. artifact names are not known yet at this stage. We'll see :)

Yeah, but we have the build number at that moment, and we should be able to get the artifact with that.

Scanning through the needed info in the automated test design doc, I don't see any that would block us with this technical design.
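As a sketch of the point above (the Jenkins base URL, job name, and artifact file name are all assumptions, not part of the actual setup): the build number passed by the upstream job is enough to reconstruct the artifact's URL, since Jenkins archives artifacts under a stable per-build path.

```shell
#!/bin/sh
# Sketch: derive the artifact URL of an upstream build from its job name
# and build number (all concrete names here are hypothetical).
artifact_url() {
    jenkins_url="$1"   # e.g. https://jenkins.example.org
    job_name="$2"      # e.g. build_Tails_ISO_devel
    build_number="$3"  # e.g. passed down as UPSTREAMJOB_BUILD_NUMBER
    echo "${jenkins_url}/job/${job_name}/${build_number}/artifact/tails.iso"
}

artifact_url "https://jenkins.example.org" build_Tails_ISO_devel 42
```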

#14 Updated by bertagaz over 3 years ago

So my next move will be to try to test this design. I've already written the needed scripts and job definitions; I still have to push them to our manifest and JJB repos. I think I'll take isotester3 and set up a test job chained to the build_Tails_ISO_devel job to see how it behaves. For this I'll adapt our build job generation script. This should be < 10 lines of code.

#15 Updated by intrigeri over 3 years ago

So my next move will be to try to test this design. I've already written the needed scripts and job definitions; I still have to push them to our manifest and JJB repos. I think I'll take isotester3 and set up a test job chained to the build_Tails_ISO_devel job to see how it behaves. For this I'll adapt our build job generation script. This should be < 10 lines of code.

Sounds good!

Regarding retrieving the last release's ISO, I agree that using http://iso-history.tails.boum.org is the way to go. Simply add a login/password shared between isotesters and deploy via tails::jenkins_secrets? No need for a dedicated vhost IMO. At some point I considered dropping this service (it eats lots of disk space, has accomplished its mission, and is now unused), but git-annex would be a bit overkill here I guess, so yay for iso-history.
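As a sketch of the retrieval side (only the iso-history host name comes from this thread; the directory/file naming scheme is an assumption), an isotester could compose the download URL for the last release and fetch it with the shared credentials deployed via tails::jenkins_secrets:

```shell
#!/bin/sh
# Sketch: build the URL of the last release's ISO on the historical
# archive. The path scheme below is hypothetical.
last_release_iso_url() {
    version="$1"
    echo "http://iso-history.tails.boum.org/tails-i386-${version}/tails-i386-${version}.iso"
}

# The actual download would then be something like:
#   wget --user="$ISO_HISTORY_USER" --password="$ISO_HISTORY_PASSWORD" \
#       "$(last_release_iso_url 1.6)"
last_release_iso_url 1.6
```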

#16 Updated by bertagaz over 3 years ago

intrigeri wrote:

Regarding retrieving the last release's ISO, I agree that using http://iso-history.tails.boum.org is the way to go. Simply add a login/password shared between isotesters and deploy via tails::jenkins_secrets? No need for a dedicated vhost IMO.

Ack!

At some point I considered dropping this service (it eats lots of disk space, has accomplished its mission, and is now unused)

Yeah, but you see, this was a useful idea in the end. :)

Now we'll have to feed it at every release.

but well, git-annex would be a bit overkill here I guess, so yay for iso-history.

Happy not to have to write a git-annex script :)

#17 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

Oh, there's one more thing that came up while thinking about this: for most ISOs, testing against the last released one is fine, but maybe not in the case of the feature/jessie branch. What shall we do for this one? Simply avoid testing it, testing it against the last release to test the upgrade path from one to another, or doing so against the last successful automatically built ISO of this branch?

#18 Updated by intrigeri over 3 years ago

Now we'll have to feed it at every release.

... which is not formally part of the release process yet, and in practice I've been doing it myself every 2-4 releases => please add it to the release process. I think the details are documented already in our internal Git repo so it's a simple matter of adding a line that points there (and explains why it's useful).

Here also, beware of race conditions: we don't want all test jobs to start failing because a tag was pushed or something, but the ISO is still not in git-annex.

#19 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

Oh, there's one more thing that came up while thinking about this: for most ISOs, testing against the last released one is fine, but maybe not in the case of the feature/jessie branch.

Right.

More generally, we're in this situation whenever changes e.g. in Tails Installer, or in the Persistent Volume Assistant, or in the Greeter, etc., affect the "set up an old Tails" processes our test suite goes through. So it's not just about feature/jessie (we already have at least one other such WIP branch) and I'm afraid we'll need a generic solution.

What shall we do for this one?

Simply avoid testing it,

No way! :)

testing it against the last release to test the upgrade path from one to another,

Our automated tests don't support that. They basically assume that the old ISO is similar enough to the new one to be compatible with a subset of our steps.

or doing so against the last successful automatically built ISO of this branch?

Yes.

Basically, it looks like we need a way for branches to specify details of how their ISO needs to be tested, e.g. in this case, to express something like "old_iso = same". I guess a flag file in Git should be enough to start with. Not sure where exactly it should go (features/config/ feels wrong). But we need to find out when exactly that flag file needs to be removed -- I guess it should be removed from a main branch post-release, but it's probably more complex than that; see similar issues we've been dealing with for the APT overlays thing.

Alternatively, these branches could be listed in some central place, to which committers would have read-write access. Not sure I like it.

#20 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

intrigeri wrote:

More generally, we're in this situation whenever changes e.g. in Tails Installer, or in the Persistent Volume Assistant, or in the Greeter, etc., affect the "set up an old Tails" processes our test suite goes through. So it's not just about feature/jessie (we already have at least one other such WIP branch) and I'm afraid we'll need a generic solution.

Right, happy to raise that now and not later in the process...

Simply avoid testing it,

No way! :)

:D

or doing so against the last successful automatically built ISO of this branch?

Basically, it looks like we need a way for branches to specify details of how their ISO needs to be tested, e.g. in this case, to express something like "old_iso = same". I guess a flag file in Git should be enough to start with. Not sure where exactly it should go (features/config/ feels wrong). But we need to find out when exactly that flag file needs to be removed -- I guess it should be removed from a main branch post-release, but it's probably more complex than that; see similar issues we've been dealing with for the APT overlays thing.

Maybe we could draw on the experience we (you) had with the APT overlay work, and use a file like config/old_iso (or something similar), which by default would contain something like "previous" or "release" or "last-release", and when needed have its content changed to "same"? Then the merge process won't be much different than with the APT overlay one.

Alternatively, these branches could be listed in some central place, to which committers would have read-write access. Not sure I like it.

Hmm, yeah, that would encode this somewhere other than where the devs already have access; not fond of this either.

#21 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

Maybe we could draw on the experience we (you) had with the APT overlay work, and use a file like config/old_iso (or something similar),

I think we should drop it in a subdir of config, e.g. config/ci.d/.

which by default would contain something like "previous" or "release" or "last-release", and when needed have its content changed to "same"? Then the merge process won't be much different than with the APT overlay one.

Either this, or a boolean flag file (present = use same ISO as old ISO; absent = use last release as old ISO). I think the boolean flag file is easier to handle (no need to look at its content, ever) but it won't scale if we ever have to convey something that doesn't fit in a boolean (I can't think of any reason we would need that, though).
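A minimal sketch of the boolean flag-file variant discussed here (the file name and its exact location under config/ci.d/ are placeholders from this discussion, not a settled choice):

```shell
#!/bin/sh
# Sketch: decide which "old ISO" to test against based on the presence
# of a flag file in the source checkout (file name hypothetical).
old_iso_kind() {
    checkout_dir="$1"
    if [ -e "${checkout_dir}/config/ci.d/old_iso_is_same" ]; then
        echo same           # flag present: use the freshly built ISO
    else
        echo last-release   # flag absent: use the last released ISO
    fi
}
```

This keeps the "no need to look at its content, ever" property: only the file's existence matters.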

Note that at testing time, we'll have to merge the base branch before we look at that config setting (because for some reason the base branch might itself require old ISO = same).

The only worries I have are about the handling of merge conflicts on that config file, and how it can impact the base branch merge done by automatic builds. E.g. it's worth checking that we won't suddenly have tons of failures in topic branches based on devel, after putting out a point-release (that implies changes to that config file in stable, then merged into devel by the RM, then merged into the topic branch by the autobuilder). It's probably worth thinking a bit about it, trying to find other problematic situations, and checking that the design will support them just fine.

#23 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check deleted (Dev Needed)
  • Type of work changed from Test to Discuss

intrigeri wrote:

I think we should drop it in a subdir of config, e.g. config/ci.d/.

Fine with me.

which by default would contain something like "previous" or "release" or "last-release", and when needed have its content changed to "same"? Then the merge process won't be much different than with the APT overlay one.

Either this, or a boolean flag file (present = use same ISO as old ISO; absent = use last release as old ISO). I think the boolean flag file is easier to handle (no need to look at its content, ever) but it won't scale if we ever have to convey something that doesn't fit in a boolean (I can't think of any reason we would need that, though).

Hmmm, me neither. Maybe we should switch this conversation on the mailing list, to have good brains like our beloved RM one having a look at that? Or maybe assign to him this ticket to have an input?

I wonder if we could end up needing a specific ISO to test against as the old ISO, but offhand I don't see why we would.

Note that at testing time, we'll have to merge the base branch before we look at that config setting (because for some reason the base branch might itself require old ISO = same).

You mean, I guess: look at that file in the base branch, then look at it in the feature branch, and only then do the merge?

The only worries I have are about the handling of merge conflicts on that config file, and how it can impact the base branch merge done by automatic builds. E.g. it's worth checking that we won't suddenly have tons of failures in topic branches based on devel, after putting out a point-release (that implies changes to that config file in stable, then merged into devel by the RM, then merged into the topic branch by the autobuilder). It's probably worth thinking a bit about it, trying to find other problematic situations, and checking that the design will support them just fine.

Yes, the merge conflict is problematic. OTOH, if we just need a boolean file (present or absent), then this shouldn't happen, I guess. So maybe we should resolve this question first.

#24 Updated by anonym over 3 years ago

intrigeri wrote:

Basically, it looks like we need a way for branches to specify details of how their ISO needs to be tested, e.g. in this case, to express something like "old_iso = same". I guess a flag file in Git should be enough to start with. Not sure where exactly it should go (features/config/ feels wrong). But we need to find out when exactly that flag file needs to be removed -- I guess it should be removed from a main branch post-release, but it's probably more complex than that; see similar issues we've been dealing with for the APT overlays thing.

I'm not sure if this is an idea you already dismissed, but we have something close to this already. In features/config/defaults.yml one could set OLD_TAILS_ISO. So, we could update our release process so that when we have released version X, we set OLD_TAILS_ISO = tails-i386-X.iso. Maybe we need to change the logic so that if OLD_TAILS_ISO doesn't exist, we don't error out, but set it to TAILS_ISO and print a warning saying that we couldn't find the default OLD_TAILS_ISO.

Still, I think we may want to at least start off with the KISS approach of simply letting OLD_TAILS_ISO == TAILS_ISO in Jenkins. I believe we'll get most of what we want really cheaply, without the hassle of setting OLD_TAILS_ISO each release. Also, sometimes things will change so that OLD_TAILS_ISO cannot be used anyway (e.g. if the look of the installer changes, or certain things in the boot sequence up to that point, like the Greeter). Of course, when testing a release, someone will be responsible for running the automated test suite with an appropriate OLD_TAILS_ISO.
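anonym's fallback idea could be sketched as follows (OLD_TAILS_ISO and TAILS_ISO are named in this comment, but the wrapper logic here is an assumption about how the test suite might be invoked):

```shell
#!/bin/sh
# Sketch: if OLD_TAILS_ISO is unset, warn and fall back to TAILS_ISO
# instead of erroring out.
resolve_old_iso() {
    if [ -z "${OLD_TAILS_ISO:-}" ]; then
        echo "warning: OLD_TAILS_ISO not set, falling back to TAILS_ISO" >&2
        OLD_TAILS_ISO="$TAILS_ISO"
    fi
    echo "$OLD_TAILS_ISO"
}
```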

#25 Updated by intrigeri over 3 years ago

(Missed that one too due to QA Check being empty + my backlog.)

intrigeri wrote:

Note that at testing time, we'll have to merge the base branch before we look at that config setting (because for some reason the base branch might itself require old ISO = same).

You mean, I guess: look at that file in the base branch, then look at it in the feature branch, and only then do the merge?

No. What I mean is that when testing a topic branch, the correct value for this new setting should be the result of a (correct) merge between the topic branch and its base branch.
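In other words (a sketch under the assumption that a flag file such as config/ci.d/old_iso_is_same is the mechanism; branch and file names are placeholders), the setting should be read from the merged tree, since either branch may carry the flag:

```shell
#!/bin/sh
# Sketch: merge the base branch into the topic branch first, then read
# the old-ISO setting from the resulting tree.
merged_old_iso_kind() {
    topic="$1"
    base="$2"
    git checkout --quiet "$topic"
    git merge --quiet --no-edit "$base"
    if [ -e config/ci.d/old_iso_is_same ]; then
        echo same
    else
        echo last-release
    fi
}
```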

Yes, the merge conflict is problematic. OTOH, if we just need a boolean file (present or absent), then this shouldn't happen I guess.

I don't think this is correct. Conflicts can very well happen for file creation/deletion. Think of a file's existence and metadata as just one entry in its parent directory, and think of the parent directory as a text file.

#26 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • Type of work changed from Discuss to Research

Still, I think we may want to at least start off with the KISS approach of simply letting OLD_TAILS_ISO == TAILS_ISO in Jenkins.

This makes sense to me as a KISS default, but...

Of course, when testing a release, someone will be responsible for running the automated test suite with an appropriate OLD_TAILS_ISO.

... I think that release time is way too late to realize that upgrading from the previous stable release is broken somehow. Perhaps the tests for ISOs built from the stable and devel branches should default to "OLD_TAILS_ISO = last stable Tails", or something.

bertagaz, wrt. the whole "old ISO" topic, I think it's time to sum up your current idea of the best design on the blueprint, and then validate it against common use cases / "user" stories. If you need input on specific questions, please file dedicated subtasks about them (this ticket is already way too long and hard to follow).

#27 Updated by intrigeri over 3 years ago

  • Target version changed from Tails_1.6 to Tails_1.7

#28 Updated by bertagaz over 3 years ago

intrigeri wrote:

... which is not formally part of the release process yet, and in practice I've been doing it myself every 2-4 releases => please add it to the release process. I think the details are documented already in our internal Git repo so it's a simple matter of adding a line that points there (and explains why it's useful).

Pushed in Tails.git:6737c27.

#29 Updated by bertagaz over 3 years ago

intrigeri wrote:

bertagaz, wrt. the whole "old ISO" topic, I think it's time to sum up your current idea of the best design on the blueprint, and then validate it against common use cases / "user" stories. If you need input on specific questions, please file dedicated subtasks about them (this ticket is already way too long and hard to follow).

Pushed a sum-up in Tails.git:11be60f. Didn't compare with the scenarios yet, but I don't think it will be a problem.

#30 Updated by intrigeri over 3 years ago

intrigeri wrote:

... which is not formally part of the release process yet, and in practice I've been doing it myself every 2-4 releases => please add it to the release process. I think the details are documented already in our internal Git repo so it's a simple matter of adding a line that points there (and explains why it's useful).

Pushed in Tails.git:6737c27.

I think it should be done before the new version is released, otherwise we're silently relying on the fact that nothing (e.g. Jenkins jobs) expects all released ISOs to be in that historical ISO archive. I feel we already have too much tight coupling between our subsystems, which makes the whole thing quite brittle; let's not add more :)

#31 Updated by intrigeri over 3 years ago

  • Blueprint changed from https://tails.boum.org/blueprint/automated_builds_and_tests/jenkins/#chain to https://tails.boum.org/blueprint/automated_builds_and_tests/jenkins/

#32 Updated by intrigeri over 3 years ago

intrigeri wrote:

bertagaz, wrt. the whole "old ISO" topic, I think it's time to sum up [...]

Pushed a sum-up in Tails.git:11be60f.

Thanks! (FYI that markup syntax doesn't create HTML links.)

I've pushed 3aa178c and ca5bf58 on top to make it clear what we decided to do (the KISS approach proposed by anonym), and that the branch-dependent config/default.yml idea was not discussed nor agreed upon (for the record, I don't like it much, but I didn't bother discussing it since anonym proposed something else that's easier and that I like more).

#33 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check set to Info Needed

intrigeri wrote:

intrigeri wrote:

bertagaz, wrt. the whole "old ISO" topic, I think it's time to sum up [...]

Pushed a sum-up in Tails.git:11be60f.

Thanks! (FYI that markup syntax doesn't create HTML links.)

I've pushed 3aa178c and ca5bf58 on top to make it clear what we decided to do (the KISS approach proposed by anonym), and that the branch-dependent config/default.yml idea was not discussed nor agreed upon (for the record, I don't like it much, but I didn't bother discussing it since anonym proposed something else that's easier and that I like more).

Thank you, your position wasn't so clear. It's always a bit difficult to capture someone else's mind when she has a strong opinion and you don't. :)

So I guess this ticket can be closed? That was the last remaining piece of the design process AFAIK. If we stumble upon new challenges in the design (which I doubt), we can still reopen it or create a new ticket.

#34 Updated by sajolida over 3 years ago

  • Priority changed from Normal to Elevated

Note that this is due on October 15 which is actually a bit before Tails 1.7. Raising priority accordingly.

#35 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

So I guess this ticket can be closed? That was the last remaining piece of the design process AFAIK.

Please update the blueprint to make it clear, for each problem that was identified, which solution we're going with (so far, some sub-sections look like a collection of potential solutions). I suggest moving the sub-sections that reached a conclusion to a dedicated section: currently everything is in a big "Resources" section, which sounds wrong.

The section about chaining jobs is particularly important (#10117#note-11).

#36 Updated by intrigeri over 3 years ago

  • % Done changed from 10 to 50

As mentioned on #9486#note-44, the blueprint must also document the chosen design to parallelize test suite runs and fire up a clean test runner VM before each test suite run.

#37 Updated by intrigeri over 3 years ago

  • Related to Feature #9486: Support running multiple instances of the test suite in parallel added

#38 Updated by bertagaz over 3 years ago

intrigeri wrote:

As mentioned on #9486#note-44, the blueprint must also document the chosen design to parallelize test suite runs and fire up a clean test runner VM before each test suite run.

Ok, I've tried to clarify this a bit in c7c1623. As the design is the same, I didn't have to change much. I'm not sure what you have in mind, so I hope it fits.

#39 Updated by bertagaz over 3 years ago

  • Assignee changed from bertagaz to intrigeri
  • QA Check changed from Dev Needed to Info Needed

#40 Updated by intrigeri over 3 years ago

  • Assignee changed from intrigeri to bertagaz
  • QA Check changed from Info Needed to Dev Needed

Ok, I've tried to clarify this a bit in c7c1623. As the design is the same, I didn't have to change much.

Thanks, it's a lot clearer now!

Please update the "When we tackle [[!tails_ticket 5288]] ..." paragraph, and then feel free to mark this ticket as resolved.

Meta: in the end, the goals we had were reached in a way quite different from the one described in this ticket. I believe this has been one of the causes of tension here and on some related tickets, due to a desynchronization between my expectations and the way you were self-managing this project. I think it could smooth working together a bit in the future if we kept that description up-to-date. Shall we try that? (I'm not saying it's the only cause of problems, nor the perfect solution, but it sounds like a low-hanging fruit.)

#41 Updated by bertagaz over 3 years ago

  • Status changed from In Progress to Resolved
  • Assignee deleted (bertagaz)
  • % Done changed from 50 to 100
  • QA Check deleted (Dev Needed)

intrigeri wrote:

Thanks, it's a lot clearer now!

\o/

Please update the "When we tackle [[!tails_ticket 5288]] ..." paragraph, and then feel free to mark this ticket as resolved.

Done.

Meta: in the end, the goals we had were reached in a way quite different from the one described in this ticket. I believe this has been one of the causes of tension here and on some related tickets, due to a desynchronization between my expectations and the way you were self-managing this project.

Yes, the difficult interactions through Redmine haven't been helped by my sometimes sloppy synchronization between the tickets and their actual state / my vision of them. Sometimes, in a rush of design/test work on such complex things, it's not that easy to put a big piece of one's mind into understandable words, and it takes precious time. I'm also sometimes not efficient at answering questions, because Redmine is slow compared to those moments. It's sometimes difficult to remember the context when answering questions about something that is a week old. :)

I think it could smooth working together a bit in the future if we kept that description up-to-date. Shall we try that? (I'm not saying it's the only cause of problems, nor the perfect solution, but it sounds like a low-hanging fruit.)

I agree we have to fix this situation. I also think we should try to spend more time on IRC/Jabber, for faster back and forth and a better feel for what's on the other's mind. Mumble, as suggested somewhere, could be an option.

So yes, let's try!

#42 Updated by intrigeri over 3 years ago

Please update the "When we tackle [[!tails_ticket 5288]] ..." paragraph, and then feel free to mark this ticket as resolved.

Done.

Well:

  • it still says "we might want to restart", which sounds a bit too hypothetical given the amount of energy that's been put into making it happen;
  • after this paragraph, I see vague ideas ("If such VMs are Jenkins slave"), and only at the end does one learn that we've actually found and implemented a solution.

Not worth reopening, and the design doc is probably good enough as-is.

So yes, let's try!

:)
