- Consolidate the release notes.
- Make sure breaking changes are explicitly called out. Ideally these have been tracked as changes landed, but it does not hurt to ask around and double-check. Running the Python integration tests of the previous release against the latest binaries can serve as an additional check for this.
- Ensure the release notes are complete. Ideally these have been kept up to date as changes landed.
- Based on our semantic versioning strategy, our release numbers look like `MAJOR.MINOR.PATCH`. Determine the release type to come up with what the next version should look like:
  - Breaking changes increment `MAJOR`, and reset `MINOR.PATCH` to `0.0`
  - Added functionality which is backwards compatible increments `MINOR`, and resets `PATCH` to `0`
  - Bug fixes increment `PATCH`
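The version-bump rules above can be sketched in shell. The current version and release type below are illustrative inputs, not part of the procedure:

```shell
# Sketch: compute the next version from the current one and a release type.
current="0.3.1"
type="minor"   # one of: major, minor, patch

IFS='.' read -r major minor patch <<< "$current"
case "$type" in
  major) next="$((major + 1)).0.0" ;;
  minor) next="${major}.$((minor + 1)).0" ;;
  patch) next="${major}.${minor}.$((patch + 1))" ;;
esac
echo "$next"   # → 0.4.0
```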
- Speculatively bump the version in version_info.h to the version you determined earlier. This may result in version gaps if a release attempt fails, but avoids having to freeze merges to master and/or having to work with release branches. In short, it keeps the release procedure lean and mean and eliminates the need to block others while this procedure is in flight.
- Draft a GitHub tagged release. Earlier releases are tagged like `v0.1`, but as of `0.3.0` we are using semantic versioning.
- Perform thorough testing of the targeted revision to double down on stability [1]
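Tagging can also be done from the command line; a minimal sketch below, run in a throwaway repository so it is self-contained (the repository path and version number are illustrative, not part of the procedure):

```shell
# Sketch: creating an annotated release tag. The temp repository and
# the v0.4.0 version below are made-up stand-ins for illustration.
repo=$(mktemp -d)
git init -q "$repo" && cd "$repo"
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"
git tag -a v0.4.0 -m "v0.4.0"   # tag name is 'v' + the semantic version
git tag -l   # → v0.4.0
```

In the real repository, the tag would then be pushed (`git push origin <tag>`) or the release drafted through the GitHub UI.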
- Create an optimized build to compare with the previous release. Changes in performance and/or measurement accuracy may need to be considered breaking. If in doubt, discuss! TODO(#245): Formalize and/or automate this.
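For the comparison itself, a simple threshold check over the measured numbers can help flag a regression; a sketch in shell, where the throughput figures and the 5% threshold are made-up stand-ins rather than project policy:

```shell
# Sketch: flag a performance regression between two releases.
# The numbers below are illustrative stand-ins for measured requests/second.
previous=10500
current=9800
threshold=5   # max allowed regression, in percent

drop=$(awk -v p="$previous" -v c="$current" 'BEGIN { printf "%.1f", (p - c) * 100 / p }')
if awk -v d="$drop" -v t="$threshold" 'BEGIN { exit !(d > t) }'; then
  echo "regression: ${drop}% drop exceeds ${threshold}%"
else
  echo "ok: ${drop}% drop within ${threshold}%"
fi
```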
- If things look good, file an issue to raise a vote on publishing the drafted release, and point contributors to it.
- Allow sufficient time for others to provide feedback before moving on to the next step. If nobody raises blocking issues, it is time to go ahead and publish the drafted release.
- \o/ Announce the new release on Slack.
In case any of the steps above prevent shipping:
- Discard the draft release tag, if any
- Close the voting issue, if any, and share the reason there so everyone has context.
- Once the blocking issue is resolved, re-spin the full procedure from the top.
[1] Consider running the unit tests and integration tests repeatedly and concurrently for a while. It's worth noting that some of the integration tests have not been designed for this, so that part is not tried and proven yet (some of the integration tests are sensitive to timing, and may get unstable when slowed down by concurrent runs). A sample command to do this:
```shell
bazel test \
  --cache_test_results=no \
  --test_env=ENVOY_IP_TEST_VERSIONS=all \
  --runs_per_test=1000 \
  --jobs 50 \
  -c dbg \
  --local_resources 20000,20,0.25 \
  //test:*
```