Releases: concourse/concourse

v0.46.0

20 Apr 18:15
  • Jobs can now be configured with serial_groups, which can be used to
    ensure multiple jobs do not run their builds concurrently. See
    the docs for more information.
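
    A minimal sketch of how this might look in a pipeline config (the job
    and resource names here are just illustrative):

    jobs:
    - name: deploy-staging            # illustrative job names
      serial_groups: [deployments]    # builds in the same group never run concurrently
      plan:
      - get: release-tarball          # hypothetical resource
    - name: deploy-production
      serial_groups: [deployments]
      plan:
      - get: release-tarball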

  • Jobs can now be paused. This prevents newly created builds from
    running until the job is unpaused.

    To pause a job, go to its page, which is now accessible by clicking
    the job name when viewing a build, and click the pause button next to
    its name in the header.

  • The abort button now aborts asynchronously, and also works when
    aborting one-off builds.

  • If multiple template variables are not bound when configuring via a
    pipeline template, all of their names are printed in the error, rather
    than just the last one.

  • Resource source and params can now contain arbitrarily nested
    configuration.
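
    For example, a resource definition like this is now accepted (the
    resource type and all field names below are made up):

    resources:
    - name: my-resource           # hypothetical resource
      type: some-custom-type      # hypothetical resource type
      source:
        endpoint: https://example.com
        options:                  # nested map
          retries: 3
          regions:                # nested list
          - us-east-1
          - us-west-2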

  • We’ve upgraded D3, which now does smoother zooming when
    double-clicking or double-tapping the pipeline view.

  • The ‘started’ indicator in the legend now does its little dance once
    again.

v0.45.0

14 Apr 00:20

Don’t worry, this one’s backwards-compatible.

If you’re reading this and haven’t upgraded to v0.44.0 yet, be sure to
read that guy’s release notes. It’s a doozy.

  • One-off builds can now be viewed in the web UI!

    There is an icon at the top right (in the nav bar) that will take you
    to a page listing all builds that have ever run, including one-off
    builds, with the most recent up top.

  • A resource can now be paused, meaning no new versions will be
    collected or used in jobs until it is unpaused. This is useful for
    cutting off broken upstream dependencies.

  • Pipeline configurations can now be parameterized via
    fly configure. This allows pipeline configurations to be reused,
    or published with the credentials extracted into a separate (private)
    file.
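
    A rough sketch of the idea: the pipeline config references template
    variables, and their values live in a separate private file that is
    supplied when running fly configure (the variable, resource, and file
    names here are illustrative):

    # pipeline.yml - the reusable, publishable template
    resources:
    - name: my-repo
      type: git
      source:
        uri: git@github.com:example/repo.git
        private_key: {{repo-private-key}}

    # credentials.yml - kept private and handed to fly configure
    repo-private-key: |
      -----BEGIN RSA PRIVATE KEY-----
      (key material omitted)
      -----END RSA PRIVATE KEY-----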

  • The Time resource can now be configured to trigger once, or on an
    interval, within a time period. This can be used to e.g. run a build
    that cleans up development environments every night, while no one’s at
    work.
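
    For example, a sketch of a resource that yields one version per night
    (the source field names are assumptions; check the resource’s docs):

    resources:
    - name: nightly          # illustrative name
      type: time
      source:
        start: 1:00 AM       # with no interval configured, one version is
        stop: 5:00 AM        # produced within this window each day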

  • The super verbose and ugly perl warnings while cloning git
    repositories have been fixed!

  • Some pipeline UI quirks have been fixed. Right-clicking no longer
    triggers dragging around, and the zooming has been bounded (no more
    losing your pipeline!).

v0.44.0

07 Apr 01:38

This release is hella backwards incompatible. Read carefully, and ask in
IRC (#concourse) if you need help!

We won’t be making such drastic changes after 1.0, but as long as we’re
still figuring things out, we don’t want to collect tech debt or land on
the wrong set of primitives.

  • Backwards-incompatible: the progression of artifacts through a build
    plan has been made more explicit.

    Previously there was basically a working directory that would be
    streamed from step to step, and aggregate steps were relied on to
    place things under subdirectories, which is how inputs to tasks were
    satisfied.

    Now, as a plan executes, each step’s produced artifact (for example a
    get step’s fetched bits or the result of a task’s execution) is
    stored in a pool, with the source named after the step.

    This change affects many things, but the primary ones you’ll notice
    are as follows:

    • When executing a task step, its inputs are collected from the
      pool, rather than blindly streamed from the previous step. This
      means aggregate is no longer required to satisfy task inputs, and
      can now be removed if it’s only wrapping one step.

      Tasks are now required to list their set of inputs, otherwise no
      inputs will be streamed in. This is backwards-incompatible, but has
      many advantages: it’s more explicit, more efficient, and makes it
      clearer where the dependent inputs will be placed in a task’s
      working directory when it runs.

      When a task completes, its resulting working directory is added to
      the pool, named after the task itself. This is how later steps
      (such as a put) can use artifacts generated by tasks.

    • The file attribute of a task step must now qualify the path with
      the name of the source providing the file.

    • When executing a put step, all sources are fetched from the
      pool. Later on we may introduce a change so that put steps declare
      their dependencies, but for now streaming everything in is the
      simplest path forward.

      The net effect of this is that any params referring to files in
      put steps must now qualify the path with the source name, as
      they’re all fetched into subdirectories.

    • Now that there’s a flat pool of sources, later steps in a build plan
      can refer back to previously fetched (or generated) sources, rather
      than having to fetch them again.

    So, if before you had a plan that looked like this:

    plan:               
    - aggregate:        
      - get: something  
    - task: generate-foo
      file: build.yml   
    - put: foo-bucket   
      params:           
        from: foo       

    ...it would now look like this:

    plan:                      
    - get: something           
    - task: generate-foo       
      file: something/build.yml
    - put: foo-bucket          
      params:                  
        from: generate-foo/foo 

    Notably, the redundant aggregate is gone, the file attribute of
    the task step qualifies the filename with the name of the source
    containing it, and the put step qualifies the path to foo with the
    name of the task that it came from.

    Also, the something/build.yml task would now explicitly list its
    inputs, if it wasn’t already. So that could mean changing:

    platform: linux              
    
    image: docker:///busybox     
    
    run:                         
      path: something/some-script

    ...to...

    platform: linux              
    
    image: docker:///busybox     
    
    inputs:                      
    - name: something            
    
    run:                         
      path: something/some-script

    This has the advantage of making the task config more
    self-documenting, and removes any doubt as to what inputs will be
    placed where when the task starts.

    Note that listing inputs in the task config is not new, and if you
    were already listing them, the semantics haven’t changed. The only
    difference is that they’re now required.

  • Backwards-incompatible: worker registration is now done over SSH,
    using a new component called the
    TSA.

    To upgrade, you’ll have to change your manifest a bit:

    • On your workers, replace the gate job with groundcrew and remove
      the gate properties.

    • The new tsa job template will have to be added somewhere, and
      configured with the atc credentials (the same way gate used to
      be configured).

      Colocating tsa with the atc works out nicely, so that you can
      register its listening port 2222 with your routing layer (e.g.
      ELB), which will already be pointing at the ATC.

    To compare, see the example AWS VPC manifest.

    The main upshot of this change is that it’s much easier to securely
    register an external worker with Concourse. This new model only needs
    the worker to be able to reach the ATC, rather than the other way
    around.
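
    As a rough sketch, the relevant job template lists might end up
    looking like this (job names and any templates beyond those mentioned
    above are illustrative):

    jobs:
    - name: web
      templates:
      - {name: atc, release: concourse}
      - {name: tsa, release: concourse}        # new; colocated with the atc
    - name: worker
      templates:
      - {name: groundcrew, release: concourse} # replaces the old gate template
      # ...plus whatever worker templates you already have (Garden, etc.)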

  • Backwards-incompatible: Consul services are now automatically
    registered based on the jobs being colocated with the agent. For this
    to work, you must edit your deployment manifest and move the
    consul-agent job to the top of each job template list, and remove
    your existing Consul services configuration from your manifest.
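
    In manifest terms, each job’s template list should now begin with the
    agent, along these lines (the other template names are illustrative):

    jobs:
    - name: web
      templates:
      - {name: consul-agent, release: concourse} # must be listed first
      - {name: atc, release: concourse}
      - {name: tsa, release: concourse}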

  • The get and put steps from a build’s execution can now be hijacked
    after they’ve finished or errored. Previously they would be reaped
    immediately; now they stick around for 5 minutes afterwards (same
    semantics as tasks).

  • The S3 resource now
    defaults to the us-east-1 region.

  • The S3 resource no longer
    fails to check when the configured bucket is empty.

  • A new BOSH Deployment resource has been introduced. It can be used to
    deploy a given set of release/stemcell tarballs with a manifest to a
    statically configured BOSH target. The precise versions of the
    releases and stemcells are overridden in the manifest before
    deploying, so the deployment isn’t just always rolling forward to the
    latest versions.
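
    A rough sketch of how it might be wired into a pipeline (the type and
    field names below are our best guess at the resource’s interface;
    check its documentation for the authoritative list):

    resources:
    - name: staging-deployment
      type: bosh-deployment
      source:
        target: https://bosh.example.com:25555   # statically configured director
        username: admin
        password: secret
        deployment: my-deployment

    jobs:
    - name: deploy
      plan:
      - get: my-release               # hypothetical release tarball resource
      - get: my-stemcell              # hypothetical stemcell tarball resource
      - get: my-manifest              # hypothetical repo containing the manifest
      - put: staging-deployment
        params:
          manifest: my-manifest/manifest.yml
          releases: [my-release/*.tgz]
          stemcells: [my-stemcell/*.tgz]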

v0.43.0

27 Mar 23:42
  • Two new resources: bosh.io release and bosh.io stemcell, for consuming
    public BOSH releases and stemcells in a more convenient way (a
    configuration sketch follows this list).
  • The event stream is now GZip compressed, which should speed up build
    logs.
  • The S3 resource now
    supports creating URLs using CloudFront.
  • The Git resource can now
    create tags while pushing.
  • Autoscrolling was broken. It’s fixed now.
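
Here is the configuration sketch mentioned above (the type and field names
reflect our understanding of the new resources; the release and stemcell
named here are just examples):

    resources:
    - name: cf-release
      type: bosh-io-release
      source:
        repository: cloudfoundry/cf-release
    - name: aws-stemcell
      type: bosh-io-stemcell
      source:
        name: bosh-aws-xen-ubuntu-trusty-go_agent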

v0.42.0

24 Mar 18:05
  • Debugging and detecting misconfigured or failing resource checks has
    been improved.

    Resources that are failing to check will now be shown as errored in
    the pipeline view. When viewing the resource, the check status will be
    shown on its page, and the last check error will be shown if the user
    is authenticated.

  • Viewing an already completed build is now much less painful. Rather
    than streaming the events in and drawing the page live, we now process
    all events and then render the build. This greatly improves the
    responsiveness of the UI and cuts the overall rendering time by ~6x.

  • We have fixed a few sources of potential resource leaks in the ATC. If
    you noticed your deployment getting slower over time before, you may
    want to try upgrading. Symptoms include high CPU and high memory use.

  • Blackbox can now deliver metrics from the ATC to Datadog. To configure
    this, colocate the Blackbox job on your web VMs (be sure to list it as
    the first job template), and set the following property:

    blackbox:
      expvar:
        datadog:
          api_key: blahblahblah

    Currently emitted metrics are mainly focused on identifying resource
    leaks. If you’re noticing your deployment get slower over time, having
    metrics would greatly help in reporting the bug.

v0.41.0

23 Mar 03:33

Run fly sync to upgrade Fly after deploying v0.41.0!

  • The containers used for checking for new versions of resources can now
    be hijacked with fly hijack -c resource-name. This should help
    with debugging failing checks; once in the container you can directly
    run the check by running something like this:

    echo '{"source":{...}}' | /opt/resource/check

    Where the source reflects the configuration for the resource in your
    pipeline.

    In future releases there will be better ways to detect and debug
    failing checks; this is just a first step.

  • The Git resource will now start checking from HEAD again when the ref
    it’s checking from becomes invalid, e.g. when a git push -f
    happens.

    This will also fix any stuck resources in deployments affected by the
    bug introduced by v0.39.0 (later fixed in v0.40.0).

v0.40.0

18 Mar 16:48

This release fixes up a regression that affected Git repositories with
plenty of files. Previously the resource would detect bogus versions and
clog up the pipes. This has been fixed by upgrading Git and greatly
simplifying the checking implementation.

v0.39.0

16 Mar 23:07

Run fly sync to upgrade Fly after deploying v0.39.0!

This release adds a new way of configuring jobs called Build Plans.

We’ll be removing support for the old style of job configuration in the
future; to automatically migrate your configuration, just run
fly configure against your instance to update your local
configuration after upgrading to this release.

For more details, read on.

  • The biggest and best feature of this release is support for arbitrary
    Build Plans. Jobs no longer need to follow the fixed progression from
    inputs to a build to outputs (although this is still possible). Jobs
    can now run steps in parallel, aggregate inputs together, and push to
    outputs in the middle of a build. The applications and possibilities
    are too numerous to list. So I’m not going to bother.

    Since there is now more than one stage to hijack into, we’ve added
    new flags for the step name (-n) and step type (-t). You can use
    these to gain shell access to any step of your build.
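
    For example, to get a shell in the task step of a build, something
    like this should work (the step name is illustrative, and the rest of
    your usual hijack arguments are unchanged):

    fly hijack -n generate-foo -t task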

    For more information on how you can start, see Build Plans. We’ve
    found a 43.73% increase in happiness from people who use this feature.

    As part of rolling out build plans, we now automatically translate the
    old configuration format to the new plan-based configuration.

  • We’ve renamed what was formerly known as "builds" (i.e. build.yml)
    to Tasks to disambiguate from builds of a job. Jobs have builds, and
    builds are the result of running tasks and resource actions.

  • In related news, we needed to upgrade the UI to support all of these
    wonderful new flows, so we’ve spruced up the build log page a little.
    There are now individual success/failure markers for each stage, and
    uninteresting (successful) stages will automatically collapse. There
    are also little icons. Totally rad.

  • A few of you noticed that having multiple ignore paths in the
    git-resource wasn’t really working properly. Well, we’ve fixed that.
    We now process the ignore_paths parameter using .gitignore
    semantics.
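
    For example (the repository details are illustrative):

    resources:
    - name: my-repo
      type: git
      source:
        uri: https://github.com/example/repo.git
        branch: master
        ignore_paths:        # matched with .gitignore-style semantics;
        - docs/              # changes touching only these paths no longer
        - "*.md"             # yield new versions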

v0.38.0

04 Mar 22:19

Run fly sync to upgrade Fly after deploying v0.38.0!

  • Extra keys are now detected during config updates and rejected if
    present. This should catch common user errors (e.g. forgetting to nest
    resource config under source) and backwards-incompatible changes
    more safely (e.g. renaming trigger to something else). See the
    example after this list.
  • You can now use the GitHub release resource without an access token
    for everything except publishing a release.
  • The GitHub Release resource now actually supports being used as an
    input. We had previously forgotten to wire in the command. Whoops.
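
As an example of the extra-key detection, a config like the following (with
source fields accidentally placed at the resource’s top level) is now
rejected rather than silently ignored:

    resources:
    - name: my-repo
      type: git
      uri: https://github.com/example/repo.git   # error: should be nested under source
      branch: master                             # error: should be nested under source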

v0.37.0

04 Mar 00:33

Run fly sync to upgrade Fly after deploying v0.37.0!

  • fly configure -c now presents the user with the changes that the
    new configuration applies, and asks the user to confirm. Committing
    the configuration is done atomically, meaning confirmation always
    applies what you expect: the update will be rejected if what you’ve
    compared against has since changed, i.e. another person has run
    fly configure.
  • Concourse is now resilient to the workers’ Garden servers going down
    (or not being up) in the midst of running builds. This fixes EOF and
    connection errors that caused builds to error.
  • fly configure -c is now more helpful when it fails (it actually,
    you know, prints the problem out, rather than silently failing).
  • The S3 resource now includes url metadata when fetched as an input.