Jason, thanks for your reply.
> Build your own, customized version of the fetch task (as a Shell script, or whatever) that takes a list of 0 or more dependencies as input. Call that script from the template, passing in the appropriate dependencies for each project. The fetch task is essentially just an API call, so you should be able to replicate it without too much trouble. There is a risk here of potentially exposing secrets (the authentication keys required to call the APIs), so you would want to consider how to protect those.

Yes, this also occurred to me after I wrote my email. From the logs, I believe the artifact fetches may be little more than an HTTP GET themselves. If this works, it is probably the best solution (a rough sketch of what I'm picturing is below). Relevant docs:

- GoCD artifact API downloads (the trick here would also be grabbing the pipeline/stage counters): https://api.gocd.org/current/#get-artifact-directory
- SO post on a similar task: https://stackoverflow.com/a/44647409/25282822
- Python GoCD client artifact downloads: https://yagocd.readthedocs.io/en/latest/usage.html#accessing-artifacts
- other GoCD client libraries (one maybe newer than yagocd): https://github.com/grundic/yagocd?tab=readme-ov-file#different-implementations-of-gocd-api

> Add a "publish" step to your workflow so that every project gets published to some centralized location. Have the agents pull their dependencies from there instead of the local "cache". Depending on how this is handled, there may be risks associated with either versioning or partially completed publications.

Our 'package' stage in this template already does this, so that might be an option.

> Add a "publish" step to your workflow that runs on every agent and downloads the latest version to a defined location on the local file system that is outside the agent working directory. This creates a lot of additional jobs for the agents to complete, though. And there is still the issue that a newly minted agent will need a way to get the latest version of everything before it runs its first build.

This is roughly what our workaround/hack does: it forces all the agents to refresh/rebuild everything to current.

--

I also want to have somebody look at the source, both to see how it interprets fetch parameters in the config file (resources can handle multiple values in the same config entry, so perhaps fetch tasks can too) and because I'd imagine it isn't terribly hard to let a fetch take multiple values from the same parameter. That is probably the least-amount-of-code route, but it would require getting a PR approved and then upgrading to the latest GoCD.

The quick and dirty way, with the least code and least testing required, is probably what I originally suggested and you also suggested: adding N fetch parameters to the template. But again, this depends on GoCD ignoring a fetch task whose parameter is undefined. If it does, it might work for our specific configuration, because I don't think many single pipelines have more than 3 or 4 immediate upstream dependencies.

I'm still a little surprised that this isn't cleanly handled by templates, because it seems like something everyone would need to do, and having the "normal" way feel this janky is out of character for GoCD. Perhaps the assumption was that if you are packaging every pipeline anyway, you can just restore from your own package (though that is all external to GoCD, whereas fetch seems like it should be able to take its arguments from pipeline template parameters... but nothing's perfect).
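Here is a rough sketch of the custom fetch script, in Python since we were already looking at yagocd. To be clear, this is only the direction I'm thinking, not something I've run: the directory-as-zip form of the /go/files endpoint, the GO_DEPENDENCY_LOCATOR_* environment variables for resolving the upstream pipeline/stage counters, the build stage/job names and the #{project-name}/build artifact layout, the retry on HTTP 202 while the server assembles the zip, and the GO_API_USER/GO_API_PASSWORD credential variables are all assumptions I'd verify against our server and the docs above first.

```python
#!/usr/bin/env python3
"""Hypothetical multi-upstream fetch task for the pipeline template.

Everything server-specific here is an assumption to verify, not a known-good
recipe: the /go/files/... directory-as-zip endpoint shape, the
GO_DEPENDENCY_LOCATOR_<NAME> agent environment variables, the build stage/job
names, the <project-name>/build artifact layout, and the GO_API_USER /
GO_API_PASSWORD credential variables.
"""
import io
import os
import sys
import time
import zipfile

import requests

GO_SERVER = os.environ.get("GO_SERVER_URL", "https://gocd.example.com:8154/go")
AUTH = (os.environ["GO_API_USER"], os.environ["GO_API_PASSWORD"])  # keep these out of the config


def locator_for(upstream):
    """Resolve [pipeline, counter, stage, stage_counter] for an upstream dependency.

    GoCD should expose the dependency material's locator to the job as an
    environment variable; the exact name mangling needs checking in the job's
    environment dump on the agent.
    """
    key = "GO_DEPENDENCY_LOCATOR_" + upstream.upper().replace("-", "_")
    locator = os.environ.get(key)
    if not locator:
        sys.exit(f"no dependency locator for {upstream!r} (looked for {key})")
    return locator.split("/")


def fetch_build_artifacts(upstream, dest="."):
    pipeline, counter, stage, stage_counter = locator_for(upstream)
    # Assumed layout: the upstream's build stage has a job named "build" that
    # publishes binaries under <pipeline-name>/build.
    url = (f"{GO_SERVER}/files/{pipeline}/{counter}/{stage}/{stage_counter}"
           f"/build/{pipeline}/build.zip")
    for _ in range(10):  # defensive: the server may need a moment to assemble the zip
        resp = requests.get(url, auth=AUTH, timeout=300)
        if resp.status_code != 202:
            break
        time.sleep(5)
    resp.raise_for_status()
    with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
        zf.extractall(dest)
    print(f"fetched {upstream} from {pipeline}/{counter}/{stage}/{stage_counter}")


if __name__ == "__main__":
    # Called from the template as: python fetch_upstreams.py "#{fetch-upstream-list}"
    upstream_list = sys.argv[1] if len(sys.argv) > 1 else ""
    for name in filter(None, (n.strip() for n in upstream_list.split(","))):
        fetch_build_artifacts(name)
```

The template would call it with the #{fetch-upstream-list} parameter as its only argument; an empty or missing list just means the loop does nothing, which is the same "ignore it if it isn't set" behavior I'm hoping for from the N-parameter approach.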
i will try to remember to report back what we find out On Fri, Dec 20, 2024 at 3:59 PM Jason Smyth <[email protected]> wrote: > Hi Josh, > > First, I think you are on the right track as far as understanding the > nature of the issue. > > It seems like the process your builds currently follow assumes that there > is always a "current enough" version of my dependencies present on the > agent. Depending on a number of factors (e.g., number of dependencies, > number of agents, frequency of builds in upstream vs downstream projects, > etc.) this assumption seems to hold true for most of your builds, but > proves false often enough that it is causing you grief. > > You have also, in my opinion, identified the primary "GoCD way" of > addressing the issue: downstream projects should pull the upstream > artifacts from the GoCD artifact store using fetch tasks. The issue of "How > can I make a template that runs an arbitrary number of fetch tasks?" is > rooted in the primary reason we moved away from templates. We found the > constraints too limiting. > > In addition to the ideas you put forth, here are some others that might > work for you: > > - Put X fetch tasks in the template, where X is the largest number of > dependencies across all of your projects. For projects that have fewer than > X upstream projects, use duplicate values for the extra parameters. > (Caveat: I haven't tested this; GoCD may flat out reject it.) > - Create X+1 templates, where X is the number of unique dependency > counts across all of your pipelines (from 0 through the highest count). > Assign each pipeline to the appropriate template based on its number of > dependencies. > - Build your own, customized version of the fetch task (as a Shell > script, or whatever) that takes a list of 0 or more dependencies as input. > Call that script from the template, passing in the appropriate dependencies > for each project. The fetch task is essentially just an API call, so you > should be able to replicate it without too much trouble. There is a risk > here of potentially exposing secrets (the authentication keys required to > call the APIs), so you would want to consider how to protect those. > - Add a "publish" step to your workflow so that every project gets > published to some centralized location. Have the agents pull their > dependencies from there instead of the local "cache". Depending on how this > is handled, there may be risks associated with either versioning or > partially completed publications. > - Add a "publish" step to your workflow that runs on every agent and > downloads the latest version to a defined location on the local file system > that is outside the agent working directory. This creates a lot of > additional jobs for the agents to complete, though. And there is still the > issue that a newly minted agent will need a way to get the latest version > of everything before it runs its first build. > - Move to elastic agents. Change the upstream project flows so that > they push their updates into the elastic agent template(s) that the > downstream projects depend on. > - Move away from GoCD templates and use one of the config repository > plugins. I have no experience with it personally, but I suspect the Groovy > plugin may be able to generate a list of pipelines with an arbitrary number > of fetch tasks, but that are otherwise identical. > - Update the part of the existing process that assumes there is a > "current enough" version. 
Change the assumption to "there is _a_ version", > and do an explicit update and rebuild. This is probably the quickest and > dirtiest solution. I don't recommend it long-term, but, depending on the > impact of the current failure rate, it might be worth looking at until you > can design and implement something better. > > Hope this helps, > Jason Smyth > > > On Tuesday, 17 December 2024 at 04:39:32 UTC-5 pracplay devs wrote: > >> >> this is closer to the config option i think would solve it. not sure if >> it can already do this: >> >> PROJECT-TWO USING TEMPLATE: MULTI-UPSTREAM-PIPELINE-FETCH-ARTIFACTS >> -PARAMETER: fetch-upstream-list // for example, >> fetch-upstream-list:project1,project0 >> >> MULTI-UPSTREAM-PIPELINE-FETCH-ARTIFACTS-TEMPLATE FOR >> JOB: >> FETCH TARGET: source: #{fetch-upstream-list}/build dest: artifacts/ >> >> // and then when project2's pipeline is started, it would fetch each >> artifacts from project1/build and project2/build to dump in project 2's >> artifacts >> >> >> On Tue, Dec 17, 2024 at 3:27 AM Joshua Franta <[email protected]> >> wrote: >> >>> >>> chad thanks again for quick response. >>> >>> TL;DR: I felt I didn't explain this very well and i don't think i've >>> ever done this before but since the support on this forum is very good i >>> recorded myself reading the key parts of this email : >>> https://www.youtube.com/watch?v=m4C0o7u_Iow >>> (ignore tl;dr if you find that pretentious or high maint or whatever, >>> any help appreciated) >>> >>> EMPTY ARTIFACTS >>> >>> i don't think it's downloading an empty artifact. typically when it >>> can't find an artifact it will give a 404 error, but the stage will fail. >>> this happens way, way more rarely and it's almost always because the >>> artifact max size setting got hit and it cleared some artifacts. >>> (i'm guessing this is why you are perhaps asking about size, we've >>> carefully tuned our max artifact size so we probably haven't had missing >>> artifacts for almost a year. ) >>> to answer your question our largest artifact store is about 80M, the >>> smallest ones are around 3-4MB and the mean is probably 20-30M. the disk >>> w/artifact store has about 700-800GB free tho, we don't clean artifacts >>> until we get around 200-300GB free. so more than enough to hold several >>> previous revisions of every pipelines' artifacts. >>> >>> the other reason i don't think this is the issue (also the same reason i >>> don't have any reason to suspect it downloading old artifacts) is because >>> >>> 1. we would get more failed tests (and to lesser extent, failed >>> packages) because usually if a new test was added but run against old >>> binaries this would cause the test failure. this never happens >>> 2. also- the ultimate problem trying to fix here- we never have >>> failed pipelines. 
it's just occasionally a downstream will get packaged >>> with an old upstream that fails on runtime (not in the pipeline, or at >>> least never in these particular pipelines, which don't run code outside >>> of >>> tests) >>> >>> >>> UPSTREAM ARTIFACTS VS SAME-PIPELINE ARTIFACTS (w/variable upstream >>> pipelines) >>> >>> i think i didn't explain the scenario clearly enough that i think is >>> happening >>> >>> project2 depends on project1 >>> (project here is synonymous w/pipelines) >>> >>> two agents: agentA and agentB w/ this directory tree: >>> agent-working/project1-working >>> agent-working/project2-working >>> >>> assume both agents have built both projects most recent revisions >>> (iow both of their 'agent-working' directories are essentially identical) >>> agentA-project1-version=1 >>> agentA-project2-version=1 >>> agentB-project1-version=1 >>> agentB-project2-version=1 >>> >>> then comes a new commit to project1 (commit-aka-version=2) >>> >>> >>> 1. project1 commit#2 build stage/job assigned to agent A by gocd, it >>> builds and uploads it artifacts >>> 2. the rest of the stages complete by fetching the artifacts from >>> commit#2 >>> 3. project2 gets it's second commit, which gets assigned to agentB >>> by gocd >>> 4. agentB builds it fine, but recall that agentB wasn't involved in >>> project1-commit#2 and so it only has built project1-commit#1 >>> 5. because project2 isn't a stage of project1, it can't fetch >>> project1's build artifacts (unless you untemplate everything, or unless i >>> can figure out how to templatize multiple upstream artifact fetches) >>> 6. so in this instance, project2 builds but it uses the builds from >>> "../project1-working", which is commit#1 >>> 7. this works in most cases and gets packaged up, but then fails in >>> runtime because it's still got an old build from project1 mixed in with >>> project2's second commit >>> >>> i'm pretty sure this is whats happening, because when this happens if i >>> go look at the versions of the dlls it's got the old ones. >>> that's also why the workaround of re-pulling and refetching the >>> pipelines periodically works, tho this is messy/inefficient and done >>> outside of gocd. >>> >>> what i want to fix this is to add extra artifact fetches to project2. >>> if i can change project2 to fetch project1's artifacts always pre-build, >>> it should work (this is essentially what the localized >>> refetch-and-rebuild-everything hack does ) >>> with templates tho, i have to parametize what projects is upstream (the >>> "project2 depends on project1 build artifacts" relationship) >>> if it was just one upstream per pipeline, i know how to do this >>> w/templates and parameters. >>> >>> however for almost all the pipelines, there are MULTIPLE upstream >>> pipelines that a given downstream pipeline needs to build. >>> and not just more than one, it's an unknown number (sometimes there are >>> zero, 1, 2, and even one has 7-8) >>> how can i parameterize an unknown number of artifact fetches through a >>> template? >>> >>> if i could do this, then this is what i believe would happen in the >>> above scenario: >>> >>> >>> 1. project1 commit#2 build stage/job assigned to agent A by gocd, it >>> builds and uploads it artifacts >>> 2. the rest of the stages complete by fetching the artifacts from >>> commit#2 >>> 3. project2 gets it's second commit, which gets assigned to agentB >>> by gocd >>> 4. 
agentB looks at it's 'upstream-pipeline-list' and sees it has to >>> pull artifacts from 'project1' so it does this, and then gets the correct >>> upstream version >>> 5. then everything builds and works. >>> >>> is this possible? >>> >>> superficially it seems it's not, but i thought something similiar about >>> having different resource requirements per-pipeline and you and some other >>> people explained how to do it in the config. >>> not sure if it's the same but this seems like it should be possible, and >>> maybe it's not clear through the gocd web pipeline editor. >>> >>> other solutions i can think of are: >>> >>> >>> - (least preferred) stop using pipeline templates, rebuild all >>> pipelines as stand-alone and just put arbitrary #s of 'fetch artifacts' >>> tasks on each pipeline (one for each upstream project as needed) >>> - (still bad but better) hack something to mass rsync directories >>> between agents >>> - (more controlled but still bad and outside of gocd) using our hack >>> script to rebuild pipelines and just automating it further to run on >>> every >>> agent periodically >>> - (still very complicated but at least inside gocd) somehow >>> scheduling/forcing gocd to periodically build all pipelines on all agents >>> (eg so for any given project/pipeline, agentA and agentN all have >>> identical >>> trees) >>> - (hacky but more deterministic and closer to "GOCD WAY") trying to >>> put some fixed # of upstreams (upstream1,upstream2,upstream3, etc) into >>> the >>> template and see if it will properly ignore empty parameters >>> - ("GOCD WAY" as i understand it) being able to somehow create a >>> single parameter that holds multiple upstream pipelines and have gocd >>> fetch >>> them all before it builds/tests a downstream stage) >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> On Tue, Dec 17, 2024 at 1:52 AM Chad Wilson <[email protected]> >>> wrote: >>> >>>> If you're using subversion and you don't have "clean working directory" >>>> checked then the problem I have seen might explain something like this. (I >>>> mentioned git because most folks use git, and the git integration by >>>> default cleans the clone locations every build using git tools independent >>>> of "clean working directory"). >>>> >>>> If you enable "clean working directory" on these stages what I imagine >>>> will probably happen is that "1 in 5" of these builds will now fail at the >>>> "test" or "package" stages due to the DLLs being completely missing - >>>> rather than stale. That's probably better semantics. But cleaning the >>>> working dir might make things slower in other areas of your build so for >>>> some folks that's not ideal. >>>> >>>> What I *suspect* could be happening is that the fetch artifact is >>>> actually downloading an "empty artifact" silently and instead of replacing >>>> the previous binaries/dlls is perhaps using whatever was there from the >>>> last run. is that possible given your agents and build scripting? >>>> >>>> - How big are the artifacts being uploaded? >>>> - Could you share the layout of the artifacts as uploaded by the >>>> "build" stage, and the "fetch" configuration used? 
>>>> >>>> e.g GoCD itself (we use GoCD to build GoCD) has a stage with artifacts >>>> uploaded like so: >>>> >>>> [image: image.png] >>>> >>>> The next stage in the same pipeline (very similar to your setup) does >>>> the following fetch >>>> [image: image.png] >>>> >>>> When specified this way (fetch entire artifact directory, not >>>> individual file) what the server does is zip up all of the things inside >>>> the "dist/zip" folder and send this zip to the agent. The agent then unzips >>>> into the working dir. >>>> >>>> >>>> *Why might this be the issue?* >>>> Something like I describe above can happen due to a design decision in >>>> GoCD which I personally consider a bug and which I have seen (but have >>>> never come across being documented - I should probably dig). >>>> >>>> If I recall correctly the details of what can happen, it is basically >>>> possible for a subsequent stage to trigger, and start fetching artifacts >>>> before the previous stage's uploaded artifacts have actually been processed >>>> properly and are ready to download. I believe when you ask for an artifact >>>> directory to be fetched it might be possible for it to just download an >>>> empty zip rather than "failing fast" because requested directory is >>>> missing. This is much more likely to happen with large artifacts; with slow >>>> artifact uploads or slow GoCD server/networking. >>>> >>>> >>>> 1. Possible workaround if you want to confirm this is the problem >>>> while getting things to "fail fast": I believe if you download an >>>> individual specific zip rather than a directory it will fail at the >>>> fetch >>>> step if the artifact is not there, after retrying. Not always possible >>>> if >>>> the file/zip name is not deterministic (e.g includes a build number or >>>> something) >>>> 2. Possible workaround for the main problem: >>>> - Only worth doing if you confirm the root cause. Add a "sleep" >>>> type of step before the fetch :-( Not 100% reliable unless you make >>>> it >>>> sleep a lot.[image: image.png] >>>> - Use sequential tasks for build/test/package rather than >>>> relying on artifact upload/fetch, if you don't need the intermediary >>>> artifacts on the GoCD server for other reasons. >>>> >>>> >>>> My *main reservations/open question*s as to whether I understand and >>>> this could explains your problem is that >>>> >>>> - you said "it's only between upstream and downstream pipelines, >>>> never in the same pipeline." but then you described a set of stages >>>> inside >>>> a single pipeline for build-->test-->package? What am I missing? >>>> - you implied the "package" step is affected, and that has the >>>> "test" stage in between build(upload) --> test --> package(fetch from >>>> build >>>> stage) , so I'd normally expect the artifacts to be fully ready to >>>> download >>>> by the time the package step runs, unless the test stage is incredibly >>>> fast. >>>> - if I've got it all wrong, probably need a fuller description of >>>> pipelines/stages/tasks that shows how they manifest :-/ There are many >>>> other reasons in pipeline design you could be fetching things wrongly >>>> outside the "bug" I refer to above :-) >>>> >>>> >>>> You can see exactly which pipeline/stage it is fetching the artifacts >>>> from in the console, and thus check which versions *should* have been >>>> there. 
e.g >>>> >>>> [go] Fetching artifact [dist/zip] from [installers/4437/dist/latest/dist] >>>> >>>> >>>> - Chad >>>> >>>> On Tue, Dec 17, 2024 at 2:16 PM Joshua Franta <[email protected]> >>>> wrote: >>>> >>>>> >>>>> > To confirm - "local build directory" in your description is inside >>>>> the normal agent working directory that GoCD creates inside pipelines/ >>>>> rather than somewhere elsewhere on the agent file system? >>>>> >>>>> yes exactly the 'fetch artifact' task pulls the binaries back into the >>>>> agent's working directory for that pipeline (aka local build directory in >>>>> my parlance) >>>>> >>>>> 1. not sure i understand the intent of this question, but most of >>>>> these pipelines use svn/subversion not git. (there maybe 1-2 using git). >>>>> perhaps you mean how a given project/pipeline sources dependencies NOT >>>>> in it's own repo/material? >>>>> if that's the question, the agents each have 'pipelines' folder that >>>>> holds all the agent working directories, so eg: >>>>> >>>>> agent-pipeline-dir >>>>> >>>>> - project1-working-dir/ >>>>> - project2-working-dir/ >>>>> >>>>> to avoid monorepo complexity, projects can assume their non-package >>>>> upstream dependencies live one level up from their working directory. >>>>> so each project is either cloned (git) or checked out (svn) from the >>>>> agent working directory to the project/pipeline working directory (that >>>>> the >>>>> agent is configured to use for each pipeline in template). >>>>> >>>>> 2. i just checked and we do NOT have 'clean working directory' set on >>>>> any stage of this pipeline/templates. >>>>> this would only apply to the project1/2-working-directory in my >>>>> example in #1, yes? >>>>> how would this help making sure the upstream binaries were correct? >>>>> (or maybe it wouldn't and you're just asking to understand, not to >>>>> suggest) >>>>> >>>>> so at least No on 2 i think, not sure about whether #1 is a no for you. >>>>> >>>>> thx again for ur help >>>>> >>>>> On Mon, Dec 16, 2024 at 11:44 PM Chad Wilson <[email protected]> >>>>> wrote: >>>>> >>>>>> This does sound broadly like something that GoCD is designed to >>>>>> handle - ensuring consistent and reproducible artifact and/or material >>>>>> inputs. Using the server to mediate (store and fetch) artifacts between >>>>>> stages or pipelines is also intended usage. >>>>>> >>>>>> To confirm - "local build directory" in your description is inside >>>>>> the normal agent working directory that GoCD creates inside pipelines/ >>>>>> rather than somewhere elsewhere on the agent file system? >>>>>> >>>>>> 1) do the DLLs get put/copied/fetched into a location that is >>>>>> *inside* a Git material repo clone? e.g <working-dir>/test-repo >>>>>> where "test-repo" is a Git material with alternate checkout location or >>>>>> if >>>>>> Git material is cloned directly to <working-dir> >>>>>> 2) if NOT, and they are inside the agent the working area, but >>>>>> OUTSIDE the clone does your pipeline that packages the DLLs clean its >>>>>> workspace from previous runs every time it executes, i.e have you enabled >>>>>> this for the stage? >>>>>> >>>>>> [image: image.png] >>>>>> >>>>>> If "no" to both questions - I possibly know a possibly root cause, as >>>>>> I've seen it myself. :-/ >>>>>> >>>>>> -Chad >>>>>> >>>>>> On Tue, Dec 17, 2024 at 1:08 PM Josh <[email protected]> wrote: >>>>>> >>>>>>> >>>>>>> we've used gocd for many years and it's a great product. 
>>>>>>> been having an occasional issue that is increasing as we increase deployment frequency.
>>>>>>> sometimes pipelines briefly get "stuck" on old dlls, meaning that a downstream pipeline will occasionally fail to run because it was packaged with older dlls. it's only between upstream and downstream pipelines, never in the same pipeline. this occurs infrequently, but perhaps as much as 1 out of every 5 builds.
>>>>>>>
>>>>>>> the workaround fix is to run a script on all the agents that periodically refreshes and rebuilds all the pipelines manually. not sure why this works, but it always does.
>>>>>>>
>>>>>>> haven't been able to figure out the cause; i'm wondering if it's a misunderstanding about artifacts, or otherwise misconfigured artifacts?
>>>>>>>
>>>>>>> here's the situation:
>>>>>>>
>>>>>>> - we have a pipeline template that runs 8 or 9 pipelines
>>>>>>> - the template (and thus every pipeline) has 4 stages: prep, build, test and package
>>>>>>> - prep stage: doesn't do much, mostly just analysis
>>>>>>> - build stage: pulls code from the repo and builds it, builds artifacts from all binaries built and puts them in gocd at: #{project-name}/build
>>>>>>> - test stage: fetches those artifacts stored in #{project-name}/build, puts them in the local build directory and then runs tests, saves a test artifact (not used in build/packaging)
>>>>>>> - package stage: fetches the artifacts stored in #{project-name}/build, puts them in the local build directory and packages them up
>>>>>>>
>>>>>>> as i say, most of the time it works great, but occasionally a previously built upstream pipeline (older version) gets mixed in with a newer pipeline build, and while it compiles, running something with the mismatched versions generates a runtime exception.
>>>>>>>
>>>>>>> as i'm describing this, i believe the cause might be that since we have multiple agents, a given agent might not always be scheduled to build every pipeline stage.
>>>>>>>
>>>>>>> so eg if project2 is downstream from project1:
>>>>>>>
>>>>>>> agentA builds project1.verX
>>>>>>> agentB builds project2.verX
>>>>>>>
>>>>>>> [project 2 changes]
>>>>>>>
>>>>>>> agentA builds project2.verY
>>>>>>> agentA still has project1.verX binaries locally, so these get built against project2.verY
>>>>>>>
>>>>>>> then when the binaries get packaged up, you get the version mismatch.
>>>>>>>
>>>>>>> it seems like what maybe should happen is that pipelines also fetch artifacts from all their upstream dependencies (vs just fetching from their own upstream stages, as i described above).
>>>>>>>
>>>>>>> however i'm not certain how to do this with pipeline templates, since we could have multiple upstream pipelines to fetch from.
>>>>>>>
>>>>>>> so i wanted to add an arbitrary # of 'fetch artifact' tasks to a build stage's pipeline, and then put all its upstream pipelines as parameters... how can i make the pipeline properly fetch all of:
>>>>>>>
>>>>>>> - zero upstream pipelines
>>>>>>> - one upstream pipeline
>>>>>>> - multiple upstream pipelines
>>>>>>>
>>>>>>> ?
>>>>>>>
>>>>>>> Hopefully this makes sense.
>>>>>>> My Idea:
>>>>>>>
>>>>>>> - Is there a way i can somehow create an 'upstream-pipeline-list' parameter, have each pipeline list its upstreams in CSV fashion, and then have gocd fetch EACH of these upstream pipeline builds prior to actually building the stage?
>>>>>>>
>>>>>>> To me, putting #{upstream-pipeline-list} in a single 'fetch artifact' task doesn't seem right, since the task seems to take only one source location, not multiple.
>>>>>>>
>>>>>>> But I misunderstood this before regarding resources, so I figured it was worth asking.
>>>>>>>
>>>>>>> Or maybe there's some other even more obvious thing I'm missing (outside of a monorepo; we can't use a monorepo here, at least not presently). What is the 'GOCD WAY' to handle this properly?
>>>>>>>
>>>>>>> appreciate any assistance
>>>>>>>
>>>>>>> -j
