Compare commits

58 commits

Author SHA1 Message Date
Tan N. Le
db10285368
Update CHANGELOG.md 2022-10-12 21:47:07 -07:00
shivrsrivastava
8454a6ebf3
Adding priority to the task (#140) 2022-10-12 21:46:07 -07:00
Tan N. Le
c318042e96
release 1.24.0 (#139) 2021-11-09 09:00:35 -08:00
Tan N. Le
db9bebb802
enable default sla for slaDrain (#138) 2021-11-01 18:17:49 -07:00
Renán I. Del Valle
fff2c16751
Enabling code analysis 2021-09-01 10:09:11 -07:00
Renán I. Del Valle
c59d01ab51
Changes Travis CI badge to Github Actions badge (#137) 2021-08-06 18:16:06 -07:00
Renán I. Del Valle
62df98a3c8
Bug fix for auto paused update monitor (#136)
Returns success if the update has finished updating successfully.
2021-08-06 16:02:52 -07:00
Renán I. Del Valle
5c39a23eb2
Enable Github Actions for PRs
Run CI on pull requests and when the branch is pushed.
2021-08-06 15:52:46 -07:00
Renán I. Del Valle
dbc396b0db
Disables Travis CI (#135)
Travis CI is no longer needed as we have migrated to Github Actions
2021-08-06 10:58:34 -07:00
Renán I. Del Valle
86eb045808
Adds go.sum and removes dep files (#134) 2021-08-06 10:57:45 -07:00
Renán I. Del Valle
c7e309f421
Actions fix (#133)
* Moving main.yml to the right place.
2021-08-06 10:06:12 -07:00
Renán I. Del Valle
49877b7d41
Adds support for running CI on github actions. (#132) 2021-08-06 10:00:57 -07:00
Renán I. Del Valle
82b40a53f0
Add verification to retry mechanism (#131)
CreateJob, CreateService, and StartJobUpdate now include a rudimentary verification function to check if the call made it to the Aurora Scheduler when the client experiences a timeout.
2021-05-11 13:37:23 -07:00
Renán I. Del Valle
a9d99067ee
Documentation fix (#130)
Fixes documentation so that it is more compliant with godoc format.
2021-04-29 10:48:43 -07:00
Renán Del Valle
e7f9c0cba9 Bumping up version to 1.23.1 2021-02-25 17:59:02 -08:00
Renán Del Valle
fbaf218dfb Preparing release notes for 1.23.0 2021-02-25 17:59:02 -08:00
Renán I. Del Valle
6a1cf25c87
Upgrading Mesos to 1.7.2 and Aurora Scheduler to 0.23.0 (#128) 2021-02-25 17:55:42 -08:00
Renán Del Valle
4aaec87d32 Bumping up version to 1.23.0 2021-02-25 17:34:48 -08:00
Renán Del Valle
895d810b6c Releasing Version 1.22.5 2021-02-25 17:34:48 -08:00
Renán I. Del Valle
511e76a0ac
Upgrading to Thrift 0.14.0 (#126)
Upgrading thrift to 0.14.0 in order to pick up bug fixes, including the fix for trying to write to closed connections.
2021-02-25 16:37:46 -08:00
Renan DelValle
8be66da88a
Bumping up go version to 1.15 and removing v2 tests from Travis CI config file. 2020-09-28 11:13:29 -07:00
Renan DelValle
6d20f347f7
Update latest version gorealis has been tested against. 2020-09-28 10:32:36 -07:00
Renan DelValle
74b12c36b1
Adding version to README and changing badge to point to the right place. 2020-09-28 10:32:13 -07:00
Renan DelValle
269e0208c1
Change travis CI configuration to use main branch. 2020-09-28 10:30:48 -07:00
Suchith Arodi
4acb0d54a9
return response object in case of noop update (#125)
Return response object such that end user can look into what went wrong when the response is nil.

Cases like this occur when there is a no-op update.
2020-07-29 13:30:56 -07:00
Renán I. Del Valle
5f667555dc
Bumping up CI build to go 1.14 (#124)
Bumping up Travis CI build to go 1.14 as well as increasing the timeout for go tests.
2020-05-27 16:04:07 -07:00
Renán I. Del Valle
2b6025e67d
Checking previously ignored error which caused issues. (#123)
Error in monitor was going unchecked which caused some issues when the monitor timed out.
2020-05-27 12:45:53 -07:00
Renán I. Del Valle
5ec22fab98
Restoring location of r.Close() in retry mechanism since the move created a deadlock. (#122)
Moving the r.Close() call in the retry mechanism created a deadlock since r.Close() also uses the client lock to avoid multiple routines closing at the same time.

This commit reverts that change.
2020-05-27 12:36:52 -07:00
Renan DelValle
f196aa9ed7 Fixing some cosmetic issues and a potential race condition. 2020-05-26 20:40:09 -07:00
Renan DelValle
bb5408f5e2 Bumping up Thrift Version to v0.13.2 forked as v0.13.1 contains a bug. 2020-05-26 20:40:09 -07:00
Renán I. Del Valle
ea8e48f3b8
Allow users to define what extensions CA certs will have (#120)
* Allow users to define what extensions CA certs will have. Skip any files that don't have the right extension.
2020-02-26 08:24:41 -08:00
Renán I. Del Valle
3dc3b09a8e
Point to temporary Thrift fork while we wait for 0.14.0 to be released (#118)
* Updating readme to reflect changes made to the Aurora Scheduler project.

* Changing dependency of mod to point to forked version of the Thrift library while 0.14.0 is released.
2020-02-18 14:18:13 -08:00
Renan I. Del Valle
3fa2a20fe4
Thrift Upgrade to v0.13.0 (#117)
* Removing go.sum file as it's no longer required as of go1.13.

* Removing unnecessary client command.

* Bumping up thrift version to v0.13.0
2020-02-12 12:31:56 -08:00
Renan I. Del Valle
c6a2a23ddb
Changing how constraints are handled internally (#115)
* Updating Changelog to reflect what's changing in 1.22.1

* Bug fix: Setting the same constraint multiple times is no longer allowed.

* Constraints map has been added to handle constraints being added to Aurora Jobs.

* Lowering timeout to avoid flaky test for bad payload timeout.

* Adding attributes to Mesos agents in order to test limits by constraint.



* Make two instances schedulable per zone in order to experience flaky behavior.
2020-01-15 08:21:12 -08:00
Renan I. Del Valle
9da3b96b1f Moving future to final 0.22.0 release and Mesos 1.6.2 (#114)
Changes in compose testing setup:
* Upgrading Aurora to 0.22.0
* Upgrading Mesos to 1.6.2
2020-01-14 15:50:10 -08:00
Renan I. Del Valle
976dc26dcc Adding autopause APIs to future (#110)
* Updating thrift definitions to add autopause for batch based update strategies.

* Adding batch calculator utility and test cases for it.

* Adding PauseUpdateMonitor which allows users to poll Aurora for information on an active Update being carried out until it enters the ROLL_FORWARD_PAUSED state.

* Tests for PauseUpdateMonitor and VariableBatchStep added to the end to end tests.

* Adding TerminalUpdateStates function which returns a slice containing all terminal states for an update. Changed signature of JobUpdateStatus from using a map for desired states to a slice. A map is no longer necessary with the new version of thrift and only adds complexity.
2020-01-14 15:50:10 -08:00
Renan DelValle
fe692040aa Variable Batch Update Support (#100)
* Changing generateBinding.sh check to check for thrift 0.12.0 and adding support for Variable Batch updates.

* Adding update strategies change to changelog, changed docker-compose to point to aurora 0.22.0 snapshot. Added test coverage for update strategies.
2020-01-14 15:50:10 -08:00
Renan DelValle
0b2dd44d94 Increasing aurora version for future branch. 2020-01-14 15:50:10 -08:00
Renan DelValle
df8fc2fba1
Documentation and linting improvements (#108)
* Simplifying documentation for getting started: Removed outdated information about installing Golang on different platforms and instead included a link to the official Golang website, which has more up-to-date information. Instructions for installing docker-compose have also been added.

* Added documentation to all exported functions and structs.

* Unexported some structures and functions that were needlessly exported.

* Adding golang CI default configuration which can be useful while developing and may be turned on later in the CI.

* Moving build process in CI to xenial.

* Reducing line size in some files and fixing shadowing in some test cases.
2019-06-12 11:22:59 -07:00
Renan DelValle
6dc4bf93b9
Retry temporary errors by default (#107)
* Adding Aurora URL validator in order to handle scenarios where incomplete information is passed to the client. The client will do its best to guess the missing information such as protocol and port.

* Upgraded to testify 1.3.0.

* Added configuration to fail on a non-temporary error. This is reverting to the original behavior of the retry mechanism. However, this allows the user to opt to fail in a non-temporary error.
2019-06-11 11:47:14 -07:00
Renan DelValle
4ffb509939
Adding go mod files to v1 (#106)
* Declaring dependencies using go mod.
2019-05-06 11:33:14 -07:00
Renan DelValle
1a15c4a5aa
V1 CreateService and StartJobUpdate Timeout signal and cleanup (#105)
* Bumped up version to 1.21.1

* Moving admin functions to a new file. They are still part of the same pointer receiver type.

* Removing dead code and fixing some comments to add space between backslash and comment.

* Adding set up and tear down to run tests script. It sets up a pod, runs all tests, and then tears down the pod.

* Added `--rm` to run tests Mac script.

* Removing cookie jar from transport layer as it's not needed.

* Changing all error messages to start with a lower case letter. Changing some messages around to be more descriptive.

* Adding an argument to allow the retry mechanism to stop if a timeout has been encountered. This is useful for mutating API calls. Only StartUpdate and CreateService have stop-at-timeout enabled by default.

* Added 2 tests for when a call goes through despite the client timing out. One is with a good payload, one is with a bad payload.

* Updating changelog with information about the error type returned.

* Adding test for duplicate metadata.

* Refactored JobUpdateStatus monitor to use a new monitor called JobUpdateQuery. The update monitor will now continue even if it does not find an update to monitor. Furthermore, it has been optimized to reduce the payloads returned from the scheduler as much as possible, by using the GetJobUpdateSummaries API instead of JobUpdateDetails and by including the statuses we're searching for as part of the query.


* Added documentation as to how to handle a timeout on an API request.

* Optimized GetInstancesIds to create a copy of the JobKey being passed down in order to avoid unexpected behavior. Instead of setting every variable name separately, now a JobKey array is being created.
2019-05-05 11:46:22 -07:00
Renan DelValle
e16e390afe
1.21.0 (formerly 1.4.0) release 2019-03-15 15:15:37 -07:00
Renan DelValle
f7bd7cc20f
Bug fix for metadata duplicates as well as un-initialized GPU re… (#103)
* Fix for metadata duplicates as well.
* Fix for un-initialized GPU resource when creating a new job update.
2019-03-15 15:10:31 -07:00
Renan DelValle
c997b90720
Adding future branch to testing. 2019-03-15 12:17:43 -07:00
Renan DelValle
773d842b03
Adding missing GPU to Job interface. 2019-03-05 11:43:50 -08:00
Renan DelValle
1f459dd56a
Adds support for Tier and SlaPolicy to the Job interface (#99)
* Adding parameter for Aurora so that we're able to run SLA aware updates with less than 20 instances. Lowered time it takes to run test by reducing watch time per instance as well.

* Reducing the number of instances and time for SLA aware instances in docker-compose set up.

* Adding another Mesos agent to the docker-compose setup.

* Huge thanks to @zircote for this contribution.
2019-02-20 16:36:50 -08:00
Renan DelValle
79fa7ba16d
Upgrading gorealis v1 to Thrift 0.12.0 code generation. End to end tests cleanup (#96)
* Ported all code from Thrift 0.9.3 to Thrift 0.12.0 while backporting some fixes from gorealis v2

* Removing git.apache.org dependency from Vendor folder as this dependency has migrated to github.

* Adding github.com thrift dependency back but now it points to github.com

* Removing unnecessary files from Thrift Vendor folder and adding them to .gitignore.

* Updating dep dependencies to include Thrift 0.12.0 from github.com

* Adding changelog.

* End to end tests: Adding coverage for killinstances.

*  End to end tests: Deleting instances after partition policy recovers them.

*  End to end tests: Adding more coverage to the realis API.

*  End to end tests: Allowing arguments to be passed to runTestMac so that '-run <test name>' can be passed in.

*  End to end tests: Reducing the resources used by CreateJob test.

*  End to end tests: Adding coverage for Pause and Resume update.

*   End to end tests: Removed checks for Aurora_OK response as that should always be handled by the error returned by the API. Changed names to be less verbose and repetitive.

*  End to end tests: Reducing watch time for instance running when creating service for reducing time it takes to run end to end test.
2019-02-20 11:11:46 -08:00
Renan DelValle
2b7eb3a852
Making abort job synchronous (#95)
* Making abort job synchronous to avoid scenarios where kill is received before job update lock is released.
* Adding missing cases for terminal update statuses to JobUpdate monitor.
* Monitors now return errors which provide context through behavior.
* Adding notes to the doc explaining what happens when AbortJob times out.
2019-01-15 14:55:59 -08:00
Renan DelValle
10c620de7b
Fixing logger not unrolling variadic argument when appending to the front of it. 2019-01-11 12:20:01 -08:00
Renan DelValle
1d3854aa5f
Trace level for logger (#94)
* Add trace level to print out response thrift objects. Allows user to control whether these are printed or not to avoid pollution.

* Using named parameters to be more explicit about what is being set for LevelLogger.

* Adding TracePrint and TracePrintln. Inlined library level prefixes.
2019-01-10 16:58:59 -08:00
Renan DelValle
73e7ab2671
Releasing version 1.3.1 2019-01-08 15:57:19 -08:00
Renan DelValle
22b1d82d88
Bug fix for logger interface. Variadic arguments need to be unrolled when passed to print functions. 2019-01-08 15:37:25 -08:00
Renan DelValle
2f7015571c
Adding support for setting GPU as a resource. (#93)
* Adding support for setting GPU as a resource.
* Refactoring pulse update test.
2019-01-08 15:11:52 -08:00
Robert Allen
296af622d1 This adds the following function to the PartitionPolicy configuration to the Job interface (#91)
* Adding Partition Policy API
2018-12-20 14:38:06 -08:00
Renan DelValle
9a835631b2
Running goimports on all repository to conform to newest goimports. 2018-12-19 15:33:35 -08:00
Renan DelValle
b100158080
Updating Travis CI config file to include running CI on master-v2.0 branch 2018-12-19 15:30:22 -08:00
Renan DelValle
45a4416830
Adding .gitattributes to ignore generated files. 2018-12-03 16:09:46 -08:00
81 changed files with 14542 additions and 10214 deletions

@ -1,5 +0,0 @@
[users]
aurora = secret, admin
[roles]
admin = *

@ -1 +1 @@
0.21.0
0.23.0

.github/main.yml (vendored, new file, 25 lines)

@ -0,0 +1,25 @@
name: CI
on: [push]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Go for use with actions
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Install goimports
run: go get golang.org/x/tools/cmd/goimports
- name: Set env with list of directories in repo containing go code
run: echo GO_USR_DIRS=$(go list -f {{.Dir}} ./... | grep -E -v "/gen-go/|/vendor/") >> $GITHUB_ENV
- name: Run goimports check
run: test -z "`for d in $GO_USR_DIRS; do goimports -d $d/*.go | tee /dev/stderr; done`"
- name: Create aurora/mesos docker cluster
run: docker-compose up -d
- name: Run tests
run: go test -timeout 35m -race -coverprofile=coverage.txt -covermode=atomic -v github.com/paypal/gorealis

.github/workflows/codeql-analysis.yml (vendored, new file, 57 lines)

@ -0,0 +1,57 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
push:
branches: [ main ]
pull_request:
# The branches below must be a subset of the branches above
branches: [ main ]
schedule:
- cron: '34 4 * * 3'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
permissions:
actions: read
contents: read
security-events: write
strategy:
fail-fast: false
matrix:
language: [ 'go' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v2
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v1
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
- run: go build examples/client.go
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v1

.github/workflows/main.yml (vendored, new file, 30 lines)

@ -0,0 +1,30 @@
name: CI
on:
push:
branches:
- main
pull_request:
branches:
- main
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Setup Go for use with actions
uses: actions/setup-go@v2
with:
go-version: 1.16
- name: Install goimports
run: go get golang.org/x/tools/cmd/goimports
- name: Set env with list of directories in repo containing go code
run: echo GO_USR_DIRS=$(go list -f {{.Dir}} ./... | grep -E -v "/gen-go/|/vendor/") >> $GITHUB_ENV
- name: Run goimports check
run: test -z "`for d in $GO_USR_DIRS; do goimports -d $d/*.go | tee /dev/stderr; done`"
- name: Create aurora/mesos docker cluster
run: docker-compose up -d
- name: Run tests
run: go test -timeout 35m -race -coverprofile=coverage.txt -covermode=atomic -v github.com/paypal/gorealis

.golangci.yml (new file, 71 lines)

@ -0,0 +1,71 @@
# This file contains all available configuration options
# with their default values.
# options for analysis running
run:
# default concurrency is a available CPU number
concurrency: 4
# timeout for analysis, e.g. 30s, 5m, default is 1m
deadline: 1m
# exit code when at least one issue was found, default is 1
issues-exit-code: 1
# include test files or not, default is true
tests: true
skip-dirs:
- gen-go/
# output configuration options
output:
# colored-line-number|line-number|json|tab|checkstyle|code-climate, default is "colored-line-number"
format: colored-line-number
# print lines of code with issue, default is true
print-issued-lines: true
# print linter name in the end of issue text, default is true
print-linter-name: true
# all available settings of specific linters
linters-settings:
errcheck:
# report about not checking of errors in type assertions: `a := b.(MyStruct)`;
# default is false: such cases aren't reported by default.
check-type-assertions: true
# report about assignment of errors to blank identifier: `num, _ := strconv.Atoi(numStr)`;
# default is false: such cases aren't reported by default.
check-blank: true
govet:
# report about shadowed variables
check-shadowing: true
goconst:
# minimal length of string constant, 3 by default
min-len: 3
# minimal occurrences count to trigger, 3 by default
min-occurrences: 2
misspell:
# Correct spellings using locale preferences for US or UK.
# Default is to use a neutral variety of English.
# Setting locale to US will correct the British spelling of 'colour' to 'color'.
locale: US
lll:
# max line length, lines longer will be reported. Default is 120.
# '\t' is counted as 1 character by default, and can be changed with the tab-width option
line-length: 120
# tab width in spaces. Default to 1.
tab-width: 4
linters:
enable:
- govet
- goimports
- golint
- lll
- goconst
enable-all: false
fast: false

@ -1,26 +0,0 @@
sudo: required
language: go
go:
- "1.11.x"
env:
global:
- GO_USR_DIRS=$(go list -f {{.Dir}} ./... | grep -E -v "/gen-go/|/vendor/")
services:
- docker
before_install:
- go get golang.org/x/tools/cmd/goimports
- test -z "`for d in $GO_USR_DIRS; do goimports -d $d/*.go | tee /dev/stderr; done`"
install:
- docker-compose up -d
script:
- go test -race -coverprofile=coverage.txt -covermode=atomic -v github.com/paypal/gorealis
after_success:
- bash <(curl -s https://codecov.io/bash)

CHANGELOG.md (new file, 62 lines)

@ -0,0 +1,62 @@
1.25.1 (unreleased)
1.25.0
* Add priority api
1.24.0
* enable default sla for slaDrain
* Changes Travis CI badge to Github Actions badge
* Bug fix for auto paused update monitor
* Adds support for running CI on github actions
1.23.0
* First release tested against Aurora Scheduler 0.23.0
1.22.5
* Upgrading to thrift 0.14.0
1.22.4
* Updates which result in a no-op now return a response value so that the caller may analyze it to determine what happened
1.22.3
* Contains a monitor timeout fix. Previously an error was left unchecked, which meant a specific monitor timing out was not handled properly.
1.22.2
* Bug fix: Change in retry mechanism created a deadlock. This release reverts that particular change.
1.22.1
* Adding safeguards against setting multiple constraints with the same name for a single task.
1.22.0
* CreateService and StartJobUpdate do not continue retrying if a timeout has been encountered
by the HTTP client. Instead they now return an error that conforms to the Timedout interface.
Users can check for a Timedout error by using `realis.IsTimeout(err)`.
* New API function VariableBatchStep has been added which returns the batch a Variable Batch Update configured update is currently in.
* Added new PauseUpdateMonitor which monitors an update until it is in a `ROLL_FORWARD_PAUSED` state.
* Added variableBatchStep command to sample client to be used for testing new VariableBatchStep api.
* JobUpdateStatus has changed function signature from:
`JobUpdateStatus(updateKey aurora.JobUpdateKey, desiredStatuses map[aurora.JobUpdateStatus]bool, interval, timeout time.Duration) (aurora.JobUpdateStatus, error)`
to
`JobUpdateStatus(updateKey aurora.JobUpdateKey, desiredStatuses []aurora.JobUpdateStatus, interval, timeout time.Duration) (aurora.JobUpdateStatus, error)`
* Added TerminalUpdateStates function which returns a slice containing all UpdateStates which are considered terminal states.
1.21.0
* Version numbering change. Future versions will be labeled X.Y.Z where X is the major version, Y is the Aurora version the library has been tested against (e.g. 21 -> 0.21.0), and Z is the minor revision.
* Moved to Thrift 0.12.0 code generator and go library.
* `aurora.ACTIVE_STATES`, `aurora.SLAVE_ASSIGNED_STATES`, `aurora.LIVE_STATES`, `aurora.TERMINAL_STATES`, `aurora.ACTIVE_JOB_UPDATE_STATES`, `aurora.AWAITNG_PULSE_JOB_UPDATE_STATES` are all now generated as slices.
* Please use `realis.ActiveStates`, `realis.SlaveAssignedStates`, `realis.LiveStates`, `realis.TerminalStates`, `realis.ActiveJobUpdateStates`, `realis.AwaitingPulseJobUpdateStates` in their places when map representations are needed.
* `GetInstanceIds(key *aurora.JobKey, states map[aurora.ScheduleStatus]bool) (map[int32]bool, error)` has changed signature to `GetInstanceIds(key *aurora.JobKey, states []aurora.ScheduleStatus) ([]int32, error)`
* Adding support for GPU as resource.
* Changing compose environment to Aurora snapshot in order to support staggered update.
* Adding staggered updates API.
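
The 1.22.0 entry above describes the new slice-based `JobUpdateStatus` signature and the `TerminalUpdateStates` helper. A minimal sketch of a caller, assuming an existing update key `updateKey` and the monitor value used elsewhere in this diff (the changelog itself does not spell out the receiver type):

```go
// Sketch only: wait for an update to reach any terminal state using the
// slice-based signature listed in the 1.22.0 changelog entry above.
// r and updateKey are placeholders created elsewhere.
monitor := &realis.Monitor{Client: r}

status, err := monitor.JobUpdateStatus(
	updateKey,                     // aurora.JobUpdateKey of the running update
	realis.TerminalUpdateStates(), // slice of every terminal JobUpdateStatus
	5*time.Second,                 // polling interval
	10*time.Minute,                // overall timeout
)
if err != nil {
	log.Fatalln(err)
}
fmt.Println("update reached terminal state:", status)
```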

Gopkg.lock (generated, deleted, 60 lines)

@ -1,60 +0,0 @@
# This file is autogenerated, do not edit; changes may be undone by the next 'dep ensure'.
[[projects]]
branch = "0.12.0"
digest = "1:89696c38cec777120b8b1bb5e2d363d655cf2e1e7d8c851919aaa0fd576d9b86"
name = "github.com/apache/thrift"
packages = ["lib/go/thrift"]
pruneopts = ""
revision = "384647d290e2e4a55a14b1b7ef1b7e66293a2c33"
[[projects]]
digest = "1:0deddd908b6b4b768cfc272c16ee61e7088a60f7fe2f06c547bd3d8e1f8b8e77"
name = "github.com/davecgh/go-spew"
packages = ["spew"]
pruneopts = ""
revision = "8991bc29aa16c548c550c7ff78260e27b9ab7c73"
version = "v1.1.1"
[[projects]]
digest = "1:df48fb76fb2a40edea0c9b3d960bc95e326660d82ff1114e1f88001f7a236b40"
name = "github.com/pkg/errors"
packages = ["."]
pruneopts = ""
revision = "e881fd58d78e04cf6d0de1217f8707c8cc2249bc"
[[projects]]
digest = "1:256484dbbcd271f9ecebc6795b2df8cad4c458dd0f5fd82a8c2fa0c29f233411"
name = "github.com/pmezard/go-difflib"
packages = ["difflib"]
pruneopts = ""
revision = "792786c7400a136282c1664665ae0a8db921c6c2"
version = "v1.0.0"
[[projects]]
digest = "1:78bea5e26e82826dacc5fd64a1013a6711b7075ec8072819b89e6ad76cb8196d"
name = "github.com/samuel/go-zookeeper"
packages = ["zk"]
pruneopts = ""
revision = "471cd4e61d7a78ece1791fa5faa0345dc8c7d5a5"
[[projects]]
digest = "1:c587772fb8ad29ad4db67575dad25ba17a51f072ff18a22b4f0257a4d9c24f75"
name = "github.com/stretchr/testify"
packages = ["assert"]
pruneopts = ""
revision = "f35b8ab0b5a2cef36673838d662e249dd9c94686"
version = "v1.2.2"
[solve-meta]
analyzer-name = "dep"
analyzer-version = 1
input-imports = [
"github.com/apache/thrift/lib/go/thrift",
"github.com/pkg/errors",
"github.com/samuel/go-zookeeper/zk",
"github.com/stretchr/testify/assert",
]
solver-name = "gps-cdcl"
solver-version = 1

@ -1,16 +0,0 @@
[[constraint]]
name = "github.com/apache/thrift"
branch = "0.12.0"
[[constraint]]
name = "github.com/pkg/errors"
revision = "e881fd58d78e04cf6d0de1217f8707c8cc2249bc"
[[constraint]]
name = "github.com/samuel/go-zookeeper"
revision = "471cd4e61d7a78ece1791fa5faa0345dc8c7d5a5"
[[constraint]]
name = "github.com/stretchr/testify"
version = "1.2.0"

@ -1,6 +1,8 @@
# gorealis [![GoDoc](https://godoc.org/github.com/paypal/gorealis?status.svg)](https://godoc.org/github.com/paypal/gorealis) [![Build Status](https://travis-ci.org/paypal/gorealis.svg?branch=master)](https://travis-ci.org/paypal/gorealis) [![codecov](https://codecov.io/gh/paypal/gorealis/branch/master-v2.0/graph/badge.svg)](https://codecov.io/gh/paypal/gorealis/branch/master-v2.0)
# gorealis [![GoDoc](https://godoc.org/github.com/paypal/gorealis?status.svg)](https://godoc.org/github.com/paypal/gorealis) ![CI Build Status](https://github.com/paypal/gorealis/actions/workflows/main.yml/badge.svg) [![codecov](https://codecov.io/gh/paypal/gorealis/branch/main/graph/badge.svg)](https://codecov.io/gh/paypal/gorealis)
Go library for interacting with [Apache Aurora](https://github.com/apache/aurora).
Version 1 of Go library for interacting with [Aurora Scheduler](https://github.com/aurora-scheduler/aurora).
Version 2 of this library can be found [here](https://github.com/aurora-scheduler/gorealis).
### Aurora version compatibility
Please see [.auroraversion](./.auroraversion) to see the latest Aurora version against which this
@ -14,7 +16,7 @@ library has been tested.
## Projects using gorealis
* [australis](https://github.com/rdelval/australis)
* [australis](https://github.com/aurora-scheduler/australis)
## Contributions
Contributions are always welcome. Please raise an issue to discuss a contribution before it is made.

@ -716,9 +716,40 @@ struct JobUpdateKey {
2: string id
}
/** Limits the amount of active changes being made to instances to groupSize. */
struct QueueJobUpdateStrategy {
1: i32 groupSize
}
/** Similar to Queue strategy but will not start a new group until all instances in an active
* group have finished updating.
*/
struct BatchJobUpdateStrategy {
1: i32 groupSize
/* Update will pause automatically after each batch completes */
2: bool autopauseAfterBatch
}
/** Same as Batch strategy but each time an active group completes, the size of the next active
* group may change.
*/
struct VariableBatchJobUpdateStrategy {
1: list<i32> groupSizes
/* Update will pause automatically after each batch completes */
2: bool autopauseAfterBatch
}
union JobUpdateStrategy {
1: QueueJobUpdateStrategy queueStrategy
2: BatchJobUpdateStrategy batchStrategy
3: VariableBatchJobUpdateStrategy varBatchStrategy
}
/** Job update thresholds and limits. */
struct JobUpdateSettings {
/** Max number of instances being updated at any given moment. */
/** Deprecated, please set value inside of desired update strategy instead.
* Max number of instances being updated at any given moment.
*/
1: i32 updateGroupSize
/** Max number of instance failures to tolerate before marking instance as FAILED. */
@ -736,7 +767,7 @@ struct JobUpdateSettings {
/** Instance IDs to act on. All instances will be affected if this is not set. */
7: set<Range> updateOnlyTheseInstances
/**
/** Deprecated, please set updateStrategy to the Batch strategy instead.
* If true, use updateGroupSize as strict batching boundaries, and avoid proceeding to another
* batch until the preceding batch finishes updating.
*/
@ -755,6 +786,9 @@ struct JobUpdateSettings {
* differs between the old and new task configurations, updates will use the newest configuration.
*/
10: optional bool slaAware
/** Update strategy to be used for the update. See JobUpdateStrategy for choices. */
11: optional JobUpdateStrategy updateStrategy
}
/** Event marking a state transition in job update lifecycle. */


@ -21,6 +21,8 @@ import (
"github.com/pkg/errors"
)
// Cluster contains the definition of the clusters.json file used by the default Aurora
// client for configuration
type Cluster struct {
Name string `json:"name"`
AgentRoot string `json:"slave_root"`
@ -33,7 +35,8 @@ type Cluster struct {
AuthMechanism string `json:"auth_mechanism"`
}
// Loads clusters.json file traditionally located at /etc/aurora/clusters.json
// LoadClusters loads clusters.json file traditionally located at /etc/aurora/clusters.json
// for use with a gorealis client
func LoadClusters(config string) (map[string]Cluster, error) {
file, err := os.Open(config)
@ -54,14 +57,3 @@ func LoadClusters(config string) (map[string]Cluster, error) {
return m, nil
}
func GetDefaultClusterFromZKUrl(zkURL string) *Cluster {
return &Cluster{
Name: "defaultCluster",
AuthMechanism: "UNAUTHENTICATED",
ZK: zkURL,
SchedZKPath: "/aurora/scheduler",
AgentRunDir: "latest",
AgentRoot: "/var/lib/mesos",
}
}
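
`LoadClusters`, documented above, parses a clusters.json file and returns a map of cluster definitions. A minimal usage sketch, assuming the map is keyed by cluster name and the file sits at its traditional path:

```go
// Sketch only: load cluster definitions and pick one by name.
// The path and the "devcluster" name are placeholders.
clusters, err := realis.LoadClusters("/etc/aurora/clusters.json")
if err != nil {
	log.Fatalln("unable to load clusters.json:", err)
}

cluster, ok := clusters["devcluster"]
if !ok {
	log.Fatalln("cluster not defined in clusters.json")
}
fmt.Println("ZK ensemble for cluster:", cluster.ZK)
```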

@ -18,7 +18,7 @@ import (
"fmt"
"testing"
realis "github.com/paypal/gorealis/v2"
realis "github.com/paypal/gorealis"
"github.com/stretchr/testify/assert"
)

@ -15,31 +15,44 @@
package realis
import (
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
// Container is an interface that defines a single function needed to create
// an Aurora container type. It exists because the code must support both Mesos
// and Docker containers.
type Container interface {
Build() *aurora.Container
}
// MesosContainer is a Mesos style container that can be used by Aurora Jobs.
type MesosContainer struct {
container *aurora.MesosContainer
}
// DockerContainer is a vanilla Docker style container that can be used by Aurora Jobs.
type DockerContainer struct {
container *aurora.DockerContainer
}
func NewDockerContainer() *DockerContainer {
return &DockerContainer{container: aurora.NewDockerContainer()}
// NewDockerContainer creates a new Aurora compatible Docker container configuration.
func NewDockerContainer() DockerContainer {
return DockerContainer{container: aurora.NewDockerContainer()}
}
func (c *DockerContainer) Build() *aurora.Container {
// Build creates an Aurora container based upon the configuration provided.
func (c DockerContainer) Build() *aurora.Container {
return &aurora.Container{Docker: c.container}
}
func (c *DockerContainer) Image(image string) *DockerContainer {
// Image adds the name of a Docker image to be used by the Job when running.
func (c DockerContainer) Image(image string) DockerContainer {
c.container.Image = image
return c
}
func (c *DockerContainer) AddParameter(name, value string) *DockerContainer {
// AddParameter adds a parameter to be passed to Docker when the container is run.
func (c DockerContainer) AddParameter(name, value string) DockerContainer {
c.container.Parameters = append(c.container.Parameters, &aurora.DockerParameter{
Name: name,
Value: value,
@ -47,19 +60,18 @@ func (c *DockerContainer) AddParameter(name, value string) *DockerContainer {
return c
}
type MesosContainer struct {
container *aurora.MesosContainer
// NewMesosContainer creates a Mesos style container to be configured and built for use by an Aurora Job.
func NewMesosContainer() MesosContainer {
return MesosContainer{container: aurora.NewMesosContainer()}
}
func NewMesosContainer() *MesosContainer {
return &MesosContainer{container: aurora.NewMesosContainer()}
}
func (c *MesosContainer) Build() *aurora.Container {
// Build creates a Mesos style Aurora container configuration to be passed on to the Aurora Job.
func (c MesosContainer) Build() *aurora.Container {
return &aurora.Container{Mesos: c.container}
}
func (c *MesosContainer) DockerImage(name, tag string) *MesosContainer {
// DockerImage configures the Mesos container to use a specific Docker image when being run.
func (c MesosContainer) DockerImage(name, tag string) MesosContainer {
if c.container.Image == nil {
c.container.Image = aurora.NewImage()
}
@ -68,20 +80,12 @@ func (c *MesosContainer) DockerImage(name, tag string) *MesosContainer {
return c
}
func (c *MesosContainer) AppcImage(name, imageId string) *MesosContainer {
// AppcImage configures the Mesos container to use an image in the Appc format to run the container.
func (c MesosContainer) AppcImage(name, imageID string) MesosContainer {
if c.container.Image == nil {
c.container.Image = aurora.NewImage()
}
c.container.Image.Appc = &aurora.AppcImage{Name: name, ImageId: imageId}
return c
}
func (c *MesosContainer) AddVolume(hostPath, containerPath string, mode aurora.Mode) *MesosContainer {
c.container.Volumes = append(c.container.Volumes, &aurora.Volume{
HostPath: hostPath,
ContainerPath: containerPath,
Mode: mode})
c.container.Image.Appc = &aurora.AppcImage{Name: name, ImageId: imageID}
return c
}
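
The builders above are consumed through a job's `Container` setter, as the sample client further down in this diff does. A short usage sketch with the new value receivers, where `job` is a placeholder built elsewhere with `realis.NewJob()`:

```go
// Sketch only: attach a Mesos-style container running a Docker image to an
// existing job definition.
container := realis.NewMesosContainer().DockerImage("python", "2.7")
job.Container(container)

// A plain Docker container works the same way, with parameters passed
// through to Docker when the task starts.
dockerContainer := realis.NewDockerContainer().
	Image("python:2.7").
	AddParameter("network", "host")
job.Container(dockerContainer)
```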

@ -14,7 +14,7 @@ services:
ipv4_address: 192.168.33.2
master:
image: rdelvalle/mesos-master:1.5.1
image: aurorascheduler/mesos-master:1.7.2
restart: on-failure
ports:
- "5050:5050"
@ -32,12 +32,13 @@ services:
- zk
agent-one:
image: rdelvalle/mesos-agent:1.5.1
image: aurorascheduler/mesos-agent:1.7.2
pid: host
restart: on-failure
ports:
- "5051:5051"
environment:
MESOS_ATTRIBUTES: 'zone:west'
MESOS_MASTER: zk://192.168.33.2:2181/mesos
MESOS_CONTAINERIZERS: docker,mesos
MESOS_PORT: 5051
@ -55,8 +56,33 @@ services:
depends_on:
- zk
agent-two:
image: aurorascheduler/mesos-agent:1.7.2
pid: host
restart: on-failure
ports:
- "5061:5061"
environment:
MESOS_ATTRIBUTES: 'zone:east'
MESOS_MASTER: zk://192.168.33.2:2181/mesos
MESOS_CONTAINERIZERS: docker,mesos
MESOS_HOSTNAME: localhost
MESOS_PORT: 5061
MESOS_RESOURCES: ports(*):[11000-11999]
MESOS_SYSTEMD_ENABLE_SUPPORT: 'false'
MESOS_WORK_DIR: /tmp/mesos
networks:
aurora_cluster:
ipv4_address: 192.168.33.5
volumes:
- /sys/fs/cgroup:/sys/fs/cgroup
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- zk
aurora-one:
image: rdelvalle/aurora:0.21.0
image: aurorascheduler/scheduler:0.23.0
pid: host
ports:
- "8081:8081"
@ -65,12 +91,7 @@ services:
CLUSTER_NAME: test-cluster
ZK_ENDPOINTS: "192.168.33.2:2181"
MESOS_MASTER: "zk://192.168.33.2:2181/mesos"
EXTRA_SCHEDULER_ARGS: >
-http_authentication_mechanism=BASIC
-shiro_realm_modules=INI_AUTHNZ
-shiro_ini_path=/etc/aurora/security.ini
volumes:
- ./.aurora-config:/etc/aurora
EXTRA_SCHEDULER_ARGS: "-min_required_instances_for_sla_check=1"
networks:
aurora_cluster:
ipv4_address: 192.168.33.7

@ -88,12 +88,6 @@ On Ubuntu, restarting the aurora-scheduler can be achieved by running the follow
$ sudo service aurora-scheduler restart
```
### Using a custom client
Pystachio does not yet support launching tasks using custom executors. Therefore, a custom
client must be used in order to launch tasks using a custom executor. In this case,
we will be using [gorealis](https://github.com/paypal/gorealis) to launch a task with
the compose executor on Aurora.
## Using [dce-go](https://github.com/paypal/dce-go)
Instead of manually configuring Aurora to run the docker-compose executor, one can follow the instructions provided [here](https://github.com/paypal/dce-go/blob/develop/docs/environment.md) to quickly create a DCE environment that would include mesos, aurora, golang1.7, docker, docker-compose and DCE installed.
@ -107,80 +101,12 @@ Mesos endpoint --> http://192.168.33.8:5050
### Installing Go
#### Linux
Follow the instructions at the official golang website: [golang.org/doc/install](https://golang.org/doc/install)
##### Ubuntu
### Installing docker-compose
###### Adding a PPA and install via apt-get
```
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
$ sudo apt-get update
$ sudo apt-get install golang
```
###### Configuring the GOPATH
Configure the environment to be able to compile and run Go code.
```
$ mkdir $HOME/go
$ echo export GOPATH=$HOME/go >> $HOME/.bashrc
$ echo export GOROOT=/usr/lib/go >> $HOME/.bashrc
$ echo export PATH=$PATH:$GOPATH/bin >> $HOME/.bashrc
$ echo export PATH=$PATH:$GOROOT/bin >> $HOME/.bashrc
```
Finally we must reload the .bashrc configuration:
```
$ source $HOME/.bashrc
```
#### OS X
One way to install go on OS X is by using [Homebrew](http://brew.sh/)
##### Installing Homebrew
Run the following command from the terminal to install Hombrew:
```
$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
```
##### Installing Go using Hombrew
Run the following command from the terminal to install Go:
```
$ brew install go
```
##### Configuring the GOPATH
Configure the environment to be able to compile and run Go code.
```
$ mkdir $HOME/go
$ echo export GOPATH=$HOME/go >> $HOME/.profile
$ echo export GOROOT=/usr/local/opt/go/libexec >> $HOME/.profile
$ echo export PATH=$PATH:$GOPATH/bin >> $HOME/.profile
$ echo export PATH=$PATH:$GOROOT/bin >> $HOME/.profile
```
Finally we must reload the .profile configuration:
```
$ source $HOME/.profile
```
#### Windows
Download and run the msi installer from https://golang.org/dl/
## Installing Docker Compose (if manually configured Aurora)
To show Aurora's new multi executor feature, we need to use at least one custom executor.
In this case we will be using the [docker-compose-executor](https://github.com/mesos/docker-compose-executor).
In order to run the docker-compose executor, each agent must have docker-compose installed on it.
This can be done using pip:
```
$ sudo pip install docker-compose
```
Agents which will run dce-go will need docker-compose in order to successfully run the executor.
Instructions for installing docker-compose on various platforms may be found on Docker's website: [docs.docker.com/compose/install/](https://docs.docker.com/compose/install/)
## Downloading gorealis
Finally, we must get `gorealis` using the `go get` command:
@ -192,7 +118,7 @@ go get github.com/paypal/gorealis
# Creating Aurora Jobs
## Creating a thermos job
To demonstrate that we are able to run jobs using different executors on the
To demonstrate that we are able to run jobs using different executors on the
same scheduler, we'll first launch a thermos job using the default Aurora Client.
We can use a sample job for this:
@ -259,8 +185,8 @@ go run $GOPATH/src/github.com/paypal/gorealis/examples/client.go -executor=compo
```
If everything went according to plan, a new job will be shown in the Aurora UI.
We can further investigate inside the Mesos task sandbox. Inside the sandbox, under
the sample-app folder, we can find a docker-compose.yml-generated.yml. If we inspect this file,
We can further investigate inside the Mesos task sandbox. Inside the sandbox, under
the sample-app folder, we can find a docker-compose.yml-generated.yml. If we inspect this file,
we can find the port at which we can find the web server we launched.
Under Web->Ports, we find the port Mesos allocated. We can then navigate to:
@ -269,10 +195,10 @@ Under Web->Ports, we find the port Mesos allocated. We can then navigate to:
A message from the executor should greet us.
## Creating a Thermos job using gorealis
It is also possible to create a thermos job using gorealis. To do this, however,
It is also possible to create a thermos job using gorealis. To do this, however,
a thermos payload is required. A thermos payload consists of a JSON blob that details
the entire task as it exists inside the Aurora Scheduler. *Creating the blob is unfortunately
out of the scope of what gorealis does*, so a thermos payload must be generated beforehand or
out of the scope of what gorealis does*, so a thermos payload must be generated beforehand or
retrieved from the structdump of an existing task for testing purposes.
A sample thermos JSON payload may be found [here](../examples/thermos_payload.json) in the examples folder.

@ -57,4 +57,19 @@ updateJob := realis.NewUpdateJob(job)
updateJob.InstanceCount(1)
updateJob.Ram(128)
msg, err := r.UpdateJob(updateJob, "")
```
```
* Handling a timeout scenario:
When sending an API call to Aurora, the call may timeout at the client side.
This means that the time limit has been reached while waiting for the scheduler
to reply. In such a case it is recommended that the timeout is increased through
the use of the `realis.TimeoutMS()` option.
As these timeouts cannot be totally avoided, there exists a mechanism to mitigate such
scenarios. The `StartJobUpdate` and `CreateService` API will return an error that
implements the Timeout interface.
An error can be checked to see if it is a Timeout error by using the `realis.IsTimeout()`
function.
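
A minimal sketch of that pattern, assuming a connected client `r` and a prepared `updateJob` (both placeholders):

```go
// Sketch only: start an update and separate a client-side timeout from other
// failures using realis.IsTimeout, as described above.
resp, err := r.StartJobUpdate(updateJob, "")
if realis.IsTimeout(err) {
	// The scheduler may still have accepted the update; inspect its state or
	// retry with a larger realis.TimeoutMS value before assuming failure.
	log.Fatalln("client-side timeout waiting for the scheduler:", err)
}
if err != nil {
	log.Fatalln(err)
}
fmt.Println(resp.String())
```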

@ -1,6 +1,6 @@
# Using the Sample client
## Usage:
## Usage:
```
Usage of ./client:
-cluster string

@ -17,12 +17,14 @@ package realis
// Using a pattern described by Dave Cheney to differentiate errors
// https://dave.cheney.net/2016/04/27/dont-just-check-errors-handle-them-gracefully
// Timedout errors are returned when a function is unable to continue executing due
// Timeout errors are returned when a function is unable to continue executing due
// to a time constraint or meeting a set number of retries.
type timeout interface {
Timedout() bool
}
// IsTimeout returns true if the error being passed as an argument implements the Timeout interface
// and the Timedout function returns true.
func IsTimeout(err error) bool {
temp, ok := err.(timeout)
return ok && temp.Timedout()
@ -61,41 +63,42 @@ func (r *retryErr) RetryCount() int {
return r.retryCount
}
// Helper function for testing verification to avoid whitebox testing
// ToRetryCount is a helper function for testing verification to avoid whitebox testing
// as well as keeping retryErr as a private.
// Should NOT be used under any other context.
func ToRetryCount(err error) *retryErr {
if retryErr, ok := err.(*retryErr); ok {
return retryErr
} else {
return nil
}
return nil
}
func newRetryError(err error, retryCount int) *retryErr {
return &retryErr{error: err, timedout: true, retryCount: retryCount}
}
// Temporary errors indicate that the action may and should be retried.
// Temporary errors indicate that the action may or should be retried.
type temporary interface {
Temporary() bool
}
// IsTemporary indicates whether the error passed in as an argument implements the temporary interface
// and if the Temporary function returns true.
func IsTemporary(err error) bool {
temp, ok := err.(temporary)
return ok && temp.Temporary()
}
type TemporaryErr struct {
type temporaryErr struct {
error
temporary bool
}
func (t *TemporaryErr) Temporary() bool {
func (t *temporaryErr) Temporary() bool {
return t.temporary
}
// Retrying after receiving this error is advised
func NewTemporaryError(err error) *TemporaryErr {
return &TemporaryErr{error: err, temporary: true}
// NewTemporaryError creates a new error which satisfies the Temporary interface.
func NewTemporaryError(err error) *temporaryErr {
return &temporaryErr{error: err, temporary: true}
}
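
The helpers above are what the retry mechanism and callers use to classify failures. A small sketch, where `doRequest` is a hypothetical operation (not part of the library) that wraps failures it considers retryable with `realis.NewTemporaryError`:

```go
// Sketch only: retry an operation while its error is classified as temporary.
// doRequest is a hypothetical helper that returns
// realis.NewTemporaryError(err) for failures worth retrying.
err := doRequest()
for attempt := 0; err != nil && realis.IsTemporary(err) && attempt < 3; attempt++ {
	time.Sleep(time.Second) // back off before retrying the temporary failure
	err = doRequest()
}
if err != nil {
	log.Fatalln(err) // permanent failure, or retries exhausted
}
```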

@ -17,22 +17,24 @@ package main
import (
"flag"
"fmt"
"io/ioutil"
"log"
"strings"
"time"
realis "github.com/paypal/gorealis/v2"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
realis "github.com/paypal/gorealis"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/paypal/gorealis/response"
)
var cmd, executor, url, clustersConfig, clusterName, updateId, username, password, zkUrl, hostList, role string
var caCertsPath string
var clientKey, clientCert string
var ConnectionTimeout = 20 * time.Second
var ConnectionTimeout = 20000
func init() {
flag.StringVar(&cmd, "cmd", "", "Aurora Job request type to send to Aurora Scheduler")
flag.StringVar(&cmd, "cmd", "", "Job request type to send to Aurora Scheduler")
flag.StringVar(&executor, "executor", "thermos", "Executor to use")
flag.StringVar(&url, "url", "", "URL at which the Aurora Scheduler exists as [url]:[port]")
flag.StringVar(&clustersConfig, "clusters", "", "Location of the clusters.json file used by aurora.")
@ -72,14 +74,15 @@ func init() {
func main() {
var job *realis.AuroraJob
var job realis.Job
var err error
var r *realis.Client
var monitor *realis.Monitor
var r realis.Realis
clientOptions := []realis.ClientOption{
realis.BasicAuth(username, password),
realis.ThriftJSON(),
realis.Timeout(ConnectionTimeout),
realis.TimeoutMS(ConnectionTimeout),
realis.BackOff(realis.Backoff{
Steps: 2,
Duration: 10 * time.Second,
@ -97,36 +100,39 @@ func main() {
}
if caCertsPath != "" {
clientOptions = append(clientOptions, realis.CertsPath(caCertsPath))
clientOptions = append(clientOptions, realis.Certspath(caCertsPath))
}
if clientKey != "" && clientCert != "" {
clientOptions = append(clientOptions, realis.ClientCerts(clientKey, clientCert))
}
r, err = realis.NewClient(clientOptions...)
r, err = realis.NewRealisClient(clientOptions...)
if err != nil {
log.Fatalln(err)
}
monitor = &realis.Monitor{Client: r}
defer r.Close()
switch executor {
case "thermos":
thermosExec := realis.ThermosExecutor{}
thermosExec.AddProcess(realis.NewThermosProcess("boostrap", "echo bootsrapping")).
AddProcess(realis.NewThermosProcess("hello_gorealis", "while true; do echo hello world from gorealis; sleep 10; done"))
payload, err := ioutil.ReadFile("examples/thermos_payload.json")
if err != nil {
log.Fatalln("Error reading json config file: ", err)
}
job = realis.NewJob().
Environment("prod").
Role("vagrant").
Name("hello_world_from_gorealis").
ExecutorName(aurora.AURORA_EXECUTOR_NAME).
ExecutorData(string(payload)).
CPU(1).
RAM(64).
Disk(100).
IsService(true).
InstanceCount(1).
AddPorts(1).
ThermosExecutor(thermosExec)
AddPorts(1)
case "compose":
job = realis.NewJob().
Environment("prod").
@ -160,13 +166,14 @@ func main() {
switch cmd {
case "create":
fmt.Println("Creating job")
err := r.CreateJob(job)
resp, err := r.CreateJob(job)
if err != nil {
log.Fatalln(err)
}
fmt.Println(resp.String())
if ok, mErr := r.MonitorInstances(job.JobKey(), job.GetInstanceCount(), 5*time.Second, 50*time.Second); !ok || mErr != nil {
err := r.KillJob(job.JobKey())
if ok, mErr := monitor.Instances(job.JobKey(), job.GetInstanceCount(), 5, 50); !ok || mErr != nil {
_, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatalln(err)
}
@ -176,17 +183,18 @@ func main() {
case "createService":
// Create a service with three instances using the update API instead of the createJob API
fmt.Println("Creating service")
settings := realis.JobUpdateFromAuroraTask(job.AuroraTask()).InstanceCount(3)
result, err := r.CreateService(settings)
settings := realis.NewUpdateSettings()
job.InstanceCount(3)
resp, result, err := r.CreateService(job, settings)
if err != nil {
log.Fatal("error: ", err)
log.Println("error: ", err)
log.Fatal("response: ", resp.String())
}
fmt.Println(result.String())
if ok, mErr := r.MonitorJobUpdate(*result.GetKey(), 5*time.Second, 180*time.Second); !ok || mErr != nil {
err := r.AbortJobUpdate(*result.GetKey(), "Monitor timed out")
err = r.KillJob(job.JobKey())
if ok, mErr := monitor.JobUpdate(*result.GetKey(), 5, 180); !ok || mErr != nil {
_, err := r.AbortJobUpdate(*result.GetKey(), "Monitor timed out")
_, err = r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
@ -197,13 +205,14 @@ func main() {
fmt.Println("Creating a docker based job")
container := realis.NewDockerContainer().Image("python:2.7").AddParameter("network", "host")
job.Container(container)
err := r.CreateJob(job)
resp, err := r.CreateJob(job)
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
if ok, err := r.MonitorInstances(job.JobKey(), job.GetInstanceCount(), 10*time.Second, 300*time.Second); !ok || err != nil {
err := r.KillJob(job.JobKey())
if ok, err := monitor.Instances(job.JobKey(), job.GetInstanceCount(), 10, 300); !ok || err != nil {
_, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
@ -213,13 +222,14 @@ func main() {
fmt.Println("Creating a docker based job")
container := realis.NewMesosContainer().DockerImage("python", "2.7")
job.Container(container)
err := r.CreateJob(job)
resp, err := r.CreateJob(job)
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
if ok, err := r.MonitorInstances(job.JobKey(), job.GetInstanceCount(), 10*time.Second, 300*time.Second); !ok || err != nil {
err := r.KillJob(job.JobKey())
if ok, err := monitor.Instances(job.JobKey(), job.GetInstanceCount(), 10, 300); !ok || err != nil {
_, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
@ -230,44 +240,50 @@ func main() {
// Cron config
job.CronSchedule("* * * * *")
job.IsService(false)
err := r.ScheduleCronJob(job)
resp, err := r.ScheduleCronJob(job)
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "startCron":
fmt.Println("Starting a Cron job")
err := r.StartCronJob(job.JobKey())
resp, err := r.StartCronJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "descheduleCron":
fmt.Println("Descheduling a Cron job")
err := r.DescheduleCronJob(job.JobKey())
resp, err := r.DescheduleCronJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "kill":
fmt.Println("Killing job")
err := r.KillJob(job.JobKey())
resp, err := r.KillJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
if ok, err := r.MonitorInstances(job.JobKey(), 0, 5*time.Second, 50*time.Second); !ok || err != nil {
if ok, err := monitor.Instances(job.JobKey(), 0, 5, 50); !ok || err != nil {
log.Fatal("Unable to kill all instances of job")
}
fmt.Println(resp.String())
case "restart":
fmt.Println("Restarting job")
err := r.RestartJob(job.JobKey())
resp, err := r.RestartJob(job.JobKey())
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "liveCount":
fmt.Println("Getting instance count")
@ -286,110 +302,106 @@ func main() {
log.Fatal(err)
}
fmt.Println("Active instances: ", live)
fmt.Println("Number of live instances: ", len(live))
case "flexUp":
fmt.Println("Flexing up job")
numOfInstances := 4
numOfInstances := int32(4)
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
currInstances := len(live)
currInstances := int32(len(live))
fmt.Println("Current num of instances: ", currInstances)
key := job.JobKey()
err = r.AddInstances(aurora.InstanceKey{
JobKey: &key,
resp, err := r.AddInstances(aurora.InstanceKey{
JobKey: job.JobKey(),
InstanceId: live[0],
},
int32(numOfInstances))
numOfInstances)
if err != nil {
log.Fatal(err)
}
if ok, err := r.MonitorInstances(job.JobKey(), int32(currInstances+numOfInstances), 5*time.Second, 50*time.Second); !ok || err != nil {
if ok, err := monitor.Instances(job.JobKey(), currInstances+numOfInstances, 5, 50); !ok || err != nil {
fmt.Println("Flexing up failed")
}
fmt.Println(resp.String())
case "flexDown":
fmt.Println("Flexing down job")
numOfInstances := 2
numOfInstances := int32(2)
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
currInstances := len(live)
currInstances := int32(len(live))
fmt.Println("Current num of instances: ", currInstances)
err = r.RemoveInstances(job.JobKey(), numOfInstances)
resp, err := r.RemoveInstances(job.JobKey(), numOfInstances)
if err != nil {
log.Fatal(err)
}
if ok, err := r.MonitorInstances(job.JobKey(), int32(currInstances-numOfInstances), 5*time.Second, 100*time.Second); !ok || err != nil {
if ok, err := monitor.Instances(job.JobKey(), currInstances-numOfInstances, 5, 100); !ok || err != nil {
fmt.Println("flexDown failed")
}
fmt.Println(resp.String())
case "update":
fmt.Println("Updating a job with with more RAM and to 5 instances")
live, err := r.GetInstanceIds(job.JobKey(), aurora.ACTIVE_STATES)
if err != nil {
log.Fatal(err)
}
key := job.JobKey()
taskConfig, err := r.FetchTaskConfig(aurora.InstanceKey{
JobKey: &key,
JobKey: job.JobKey(),
InstanceId: live[0],
})
if err != nil {
log.Fatal(err)
}
updateJob := realis.JobUpdateFromConfig(taskConfig).InstanceCount(5).RAM(128)
updateJob := realis.NewDefaultUpdateJob(taskConfig)
updateJob.InstanceCount(5).RAM(128)
result, err := r.StartJobUpdate(updateJob, "")
resp, err := r.StartJobUpdate(updateJob, "")
if err != nil {
log.Fatal(err)
}
jobUpdateKey := result.GetKey()
_, err = r.MonitorJobUpdate(*jobUpdateKey, 5*time.Second, 6*time.Minute)
if err != nil {
log.Fatal(err)
}
jobUpdateKey := response.JobUpdateKey(resp)
monitor.JobUpdate(*jobUpdateKey, 5, 500)
case "pauseJobUpdate":
key := job.JobKey()
err := r.PauseJobUpdate(&aurora.JobUpdateKey{
Job: &key,
resp, err := r.PauseJobUpdate(&aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
}, "")
if err != nil {
log.Fatal(err)
}
fmt.Println("PauseJobUpdate response: ", resp.String())
case "resumeJobUpdate":
key := job.JobKey()
err := r.ResumeJobUpdate(aurora.JobUpdateKey{
Job: &key,
resp, err := r.ResumeJobUpdate(&aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
}, "")
if err != nil {
log.Fatal(err)
}
fmt.Println("ResumeJobUpdate response: ", resp.String())
case "pulseJobUpdate":
key := job.JobKey()
resp, err := r.PulseJobUpdate(aurora.JobUpdateKey{
Job: &key,
resp, err := r.PulseJobUpdate(&aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
})
if err != nil {
@ -399,10 +411,9 @@ func main() {
fmt.Println("PulseJobUpdate response: ", resp.String())
case "updateDetails":
key := job.JobKey()
result, err := r.JobUpdateDetails(aurora.JobUpdateQuery{
resp, err := r.JobUpdateDetails(aurora.JobUpdateQuery{
Key: &aurora.JobUpdateKey{
Job: &key,
Job: job.JobKey(),
ID: updateId,
},
Limit: 1,
@ -412,13 +423,12 @@ func main() {
log.Fatal(err)
}
fmt.Println(result)
fmt.Println(response.JobUpdateDetails(resp))
case "abortUpdate":
fmt.Println("Abort update")
key := job.JobKey()
err := r.AbortJobUpdate(aurora.JobUpdateKey{
Job: &key,
resp, err := r.AbortJobUpdate(aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
},
"")
@ -426,12 +436,12 @@ func main() {
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "rollbackUpdate":
fmt.Println("Abort update")
key := job.JobKey()
err := r.RollbackJobUpdate(aurora.JobUpdateKey{
Job: &key,
resp, err := r.RollbackJobUpdate(aurora.JobUpdateKey{
Job: job.JobKey(),
ID: updateId,
},
"")
@ -439,6 +449,7 @@ func main() {
if err != nil {
log.Fatal(err)
}
fmt.Println(resp.String())
case "taskConfig":
fmt.Println("Getting job info")
@ -447,9 +458,8 @@ func main() {
log.Fatal(err)
}
key := job.JobKey()
config, err := r.FetchTaskConfig(aurora.InstanceKey{
JobKey: &key,
JobKey: job.JobKey(),
InstanceId: live[0],
})
@ -461,10 +471,9 @@ func main() {
case "updatesummary":
fmt.Println("Getting job update summary")
key := job.JobKey()
jobquery := &aurora.JobUpdateQuery{
Role: &key.Role,
JobKey: &key,
Role: &job.JobKey().Role,
JobKey: job.JobKey(),
}
updatesummary, err := r.GetJobUpdateSummaries(jobquery)
if err != nil {
@ -475,11 +484,10 @@ func main() {
case "taskStatus":
fmt.Println("Getting task status")
key := job.JobKey()
taskQ := &aurora.TaskQuery{
Role: &key.Role,
Environment: &key.Environment,
JobName: &key.Name,
Role: &job.JobKey().Role,
Environment: &job.JobKey().Environment,
JobName: &job.JobKey().Name,
}
tasks, err := r.GetTaskStatus(taskQ)
if err != nil {
@ -491,11 +499,10 @@ func main() {
case "tasksWithoutConfig":
fmt.Println("Getting task status")
key := job.JobKey()
taskQ := &aurora.TaskQuery{
Role: &key.Role,
Environment: &key.Environment,
JobName: &key.Name,
Role: &job.JobKey().Role,
Environment: &job.JobKey().Environment,
JobName: &job.JobKey().Name,
}
tasks, err := r.GetTasksWithoutConfigs(taskQ)
if err != nil {
@ -511,17 +518,17 @@ func main() {
log.Fatal("No hosts specified to drain")
}
hosts := strings.Split(hostList, ",")
_, err := r.DrainHosts(hosts...)
_, result, err := r.DrainHosts(hosts...)
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
// Monitor change to DRAINING and DRAINED mode
hostResult, err := r.MonitorHostMaintenance(
hostResult, err := monitor.HostMaintenance(
hosts,
[]aurora.MaintenanceMode{aurora.MaintenanceMode_DRAINED, aurora.MaintenanceMode_DRAINING},
5*time.Second,
10*time.Second)
5,
10)
if err != nil {
for host, ok := range hostResult {
if !ok {
@ -531,6 +538,8 @@ func main() {
log.Fatalf("error: %+v\n", err.Error())
}
fmt.Print(result.String())
case "SLADrainHosts":
fmt.Println("Setting hosts to DRAINING using SLA aware draining")
if hostList == "" {
@ -540,17 +549,17 @@ func main() {
policy := aurora.SlaPolicy{PercentageSlaPolicy: &aurora.PercentageSlaPolicy{Percentage: 50.0}}
_, err := r.SLADrainHosts(&policy, 30, hosts...)
result, err := r.SLADrainHosts(&policy, 30, hosts...)
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
// Monitor change to DRAINING and DRAINED mode
hostResult, err := r.MonitorHostMaintenance(
hostResult, err := monitor.HostMaintenance(
hosts,
[]aurora.MaintenanceMode{aurora.MaintenanceMode_DRAINED, aurora.MaintenanceMode_DRAINING},
5*time.Second,
10*time.Second)
5,
10)
if err != nil {
for host, ok := range hostResult {
if !ok {
@ -560,23 +569,25 @@ func main() {
log.Fatalf("error: %+v\n", err.Error())
}
fmt.Print(result.String())
case "endMaintenance":
fmt.Println("Setting hosts to ACTIVE")
if hostList == "" {
log.Fatal("No hosts specified to drain")
}
hosts := strings.Split(hostList, ",")
_, err := r.EndMaintenance(hosts...)
_, result, err := r.EndMaintenance(hosts...)
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
// Monitor change back to ACTIVE (maintenance mode NONE)
hostResult, err := r.MonitorHostMaintenance(
hostResult, err := monitor.HostMaintenance(
hosts,
[]aurora.MaintenanceMode{aurora.MaintenanceMode_NONE},
5*time.Second,
10*time.Second)
5,
10)
if err != nil {
for host, ok := range hostResult {
if !ok {
@ -586,13 +597,14 @@ func main() {
log.Fatalf("error: %+v\n", err.Error())
}
fmt.Print(result.String())
case "getPendingReasons":
fmt.Println("Getting pending reasons")
key := job.JobKey()
taskQ := &aurora.TaskQuery{
Role: &key.Role,
Environment: &key.Environment,
JobName: &key.Name,
Role: &job.JobKey().Role,
Environment: &job.JobKey().Environment,
JobName: &job.JobKey().Name,
}
reasons, err := r.GetPendingReason(taskQ)
if err != nil {
@ -604,7 +616,7 @@ func main() {
case "getJobs":
fmt.Println("GetJobs...role: ", role)
result, err := r.GetJobs(role)
_, result, err := r.GetJobs(role)
if err != nil {
log.Fatalf("error: %+v\n", err.Error())
}
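Because removed and added lines are interleaved in the hunks above, the resulting v1 host-drain flow is easier to read in one piece. The sketch below is assembled only from calls that appear in this diff (DrainHosts returning the raw thrift result, and the standalone realis.Monitor); it assumes the same imports as this example client, and exact signatures may vary between releases.

// drainAndWait mirrors the "hostDrain" case above: it drains the given hosts
// and blocks until they report DRAINING or DRAINED. It assumes r is a v1
// realis.Realis client and monitor is a realis.Monitor wrapping it.
func drainAndWait(r realis.Realis, monitor *realis.Monitor, hostList string) {
	hosts := strings.Split(hostList, ",")

	// In v1, DrainHosts returns the raw thrift response and result alongside any error.
	_, result, err := r.DrainHosts(hosts...)
	if err != nil {
		log.Fatalf("error: %+v\n", err.Error())
	}

	// Poll for the maintenance-mode change; the integer arguments correspond to
	// the 5s interval and 10s timeout used by the v2 code being replaced above.
	hostResult, err := monitor.HostMaintenance(
		hosts,
		[]aurora.MaintenanceMode{aurora.MaintenanceMode_DRAINED, aurora.MaintenanceMode_DRAINING},
		5,
		10)
	if err != nil {
		// Report any host that never reached the desired mode before giving up.
		for host, ok := range hostResult {
			if !ok {
				fmt.Printf("host %s did not enter maintenance in time\n", host)
			}
		}
		log.Fatalf("error: %+v\n", err.Error())
	}
	fmt.Print(result.String())
}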


@ -23,8 +23,8 @@ import (
"os"
"time"
realis "github.com/paypal/gorealis/v2"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
realis "github.com/paypal/gorealis"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/pkg/errors"
)
@ -125,7 +125,7 @@ func init() {
}
}
func CreateRealisClient(config *Config) (*realis.Client, error) {
func CreateRealisClient(config *Config) (realis.Realis, error) {
var transportOption realis.ClientOption
// Configuring transport protocol. If no transport is provided, JSON is used as the
// default transport protocol.
@ -157,7 +157,7 @@ func CreateRealisClient(config *Config) (*realis.Client, error) {
clientOptions = append(clientOptions, realis.Debug())
}
return realis.NewClient(clientOptions...)
return realis.NewRealisClient(clientOptions...)
}
func main() {
@ -165,6 +165,7 @@ func main() {
fmt.Println(clientCreationErr)
os.Exit(1)
} else {
monitor := &realis.Monitor{Client: r}
defer r.Close()
uris := job.URIs
labels := job.Labels
@ -204,18 +205,20 @@ func main() {
}
fmt.Println("Creating Job...")
if jobCreationErr := r.CreateJob(auroraJob); jobCreationErr != nil {
if resp, jobCreationErr := r.CreateJob(auroraJob); jobCreationErr != nil {
fmt.Println("Error creating Aurora job: ", jobCreationErr)
os.Exit(1)
} else {
if ok, monitorErr := r.MonitorInstances(auroraJob.JobKey(), auroraJob.GetInstanceCount(), 5, 50); !ok || monitorErr != nil {
if jobErr := r.KillJob(auroraJob.JobKey()); jobErr !=
nil {
fmt.Println(jobErr)
os.Exit(1)
} else {
fmt.Println("ok: ", ok)
fmt.Println("jobErr: ", jobErr)
if resp.ResponseCode == aurora.ResponseCode_OK {
if ok, monitorErr := monitor.Instances(auroraJob.JobKey(), auroraJob.GetInstanceCount(), 5, 50); !ok || monitorErr != nil {
if _, jobErr := r.KillJob(auroraJob.JobKey()); jobErr !=
nil {
fmt.Println(jobErr)
os.Exit(1)
} else {
fmt.Println("ok: ", ok)
fmt.Println("jobErr: ", jobErr)
}
}
}
}
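Assembled from the hunks above, the v1 job-creation path looks roughly like the following. This is a sketch rather than the full example client: CreateRealisClient, config, and auroraJob are the identifiers defined elsewhere in this file, and the explicit ResponseCode check is what replaces the v2 error-only flow.

	// Sketch of the v1 creation flow shown above. CreateRealisClient, config,
	// and auroraJob are defined elsewhere in this example client.
	r, clientCreationErr := CreateRealisClient(config)
	if clientCreationErr != nil {
		fmt.Println(clientCreationErr)
		os.Exit(1)
	}
	defer r.Close()

	// In v1 the monitor is a separate struct wrapping the client.
	monitor := &realis.Monitor{Client: r}

	fmt.Println("Creating Job...")
	resp, jobCreationErr := r.CreateJob(auroraJob)
	if jobCreationErr != nil {
		fmt.Println("Error creating Aurora job: ", jobCreationErr)
		os.Exit(1)
	}

	// Only wait for instances if the scheduler actually accepted the job.
	if resp.ResponseCode == aurora.ResponseCode_OK {
		if ok, monitorErr := monitor.Instances(auroraJob.JobKey(), auroraJob.GetInstanceCount(), 5, 50); !ok || monitorErr != nil {
			// Clean up a job that never became healthy.
			if _, jobErr := r.KillJob(auroraJob.JobKey()); jobErr != nil {
				fmt.Println(jobErr)
				os.Exit(1)
			}
		}
	}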


@ -0,0 +1,62 @@
{
"environment": "prod",
"health_check_config": {
"initial_interval_secs": 15.0,
"health_checker": {
"http": {
"expected_response_code": 0,
"endpoint": "/health",
"expected_response": "ok"
}
},
"interval_secs": 10.0,
"timeout_secs": 1.0,
"max_consecutive_failures": 0
},
"name": "hello_world_from_gorealis",
"service": false,
"max_task_failures": 1,
"cron_collision_policy": "KILL_EXISTING",
"enable_hooks": false,
"cluster": "devcluster",
"task": {
"processes": [
{
"daemon": false,
"name": "hello",
"ephemeral": false,
"max_failures": 1,
"min_duration": 5,
"cmdline": "\n while true; do\n echo hello world from gorealis\n sleep 10\n done\n ",
"final": false
}
],
"name": "hello",
"finalization_wait": 30,
"max_failures": 1,
"max_concurrency": 0,
"resources": {
"gpu": 0,
"disk": 134217728,
"ram": 134217728,
"cpu": 1.0
},
"constraints": [
{
"order": [
"hello"
]
}
]
},
"production": false,
"role": "vagrant",
"lifecycle": {
"http": {
"graceful_shutdown_endpoint": "/quitquitquit",
"port": "health",
"shutdown_endpoint": "/abortabortabort"
}
},
"priority": 0
}
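This JSON payload is the job definition consumed by the JSON-driven example client above. As a quick sanity check, such a file can be decoded into a generic map; the sketch below is illustrative only — the file name is hypothetical, and the example client unmarshals into its own Config type rather than a map.

package main

import (
	"encoding/json"
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	// Hypothetical path; point this at the JSON payload shown above.
	raw, err := ioutil.ReadFile("hello_world_job.json")
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	// Decode into a generic map for inspection; the real example client
	// uses its own Config struct instead.
	var cfg map[string]interface{}
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}

	fmt.Println("name:", cfg["name"], "role:", cfg["role"], "cluster:", cfg["cluster"])
}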


@ -1,28 +0,0 @@
{
"task": {
"processes": [
{
"daemon": false,
"name": "hello",
"ephemeral": false,
"max_failures": 1,
"min_duration": 5,
"cmdline": "\n while true; do\n echo hello world from gorealis\n sleep 10\n done\n ",
"final": false
}
],
"resources": {
"gpu": 0,
"disk": 134217728,
"ram": 134217728,
"cpu": 1.1
},
"constraints": [
{
"order": [
"hello"
]
}
]
}
}


@ -1,5 +1,4 @@
// Autogenerated by Thrift Compiler (0.12.0)
// DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
// Code generated by Thrift Compiler (0.14.0). DO NOT EDIT.
package aurora


@ -1,13 +1,12 @@
// Autogenerated by Thrift Compiler (0.12.0)
// DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
// Code generated by Thrift Compiler (0.14.0). DO NOT EDIT.
package aurora
import (
import(
"bytes"
"context"
"reflect"
"fmt"
"time"
"github.com/apache/thrift/lib/go/thrift"
)
@ -15,7 +14,7 @@ import (
var _ = thrift.ZERO
var _ = fmt.Printf
var _ = context.Background
var _ = reflect.DeepEqual
var _ = time.Now
var _ = bytes.Equal
const AURORA_EXECUTOR_NAME = "AuroraExecutor"

File diff suppressed because it is too large.

@ -1,22 +1,22 @@
// Autogenerated by Thrift Compiler (0.12.0)
// DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
// Code generated by Thrift Compiler (0.14.0). DO NOT EDIT.
package main
import (
"context"
"flag"
"fmt"
"math"
"net"
"net/url"
"os"
"strconv"
"strings"
"github.com/apache/thrift/lib/go/thrift"
"apache/aurora"
"context"
"flag"
"fmt"
"math"
"net"
"net/url"
"os"
"strconv"
"strings"
"github.com/apache/thrift/lib/go/thrift"
"apache/aurora"
)
var _ = aurora.GoUnusedProtection__
func Usage() {
fmt.Fprintln(os.Stderr, "Usage of ", os.Args[0], " [-h host:port] [-u url] [-f[ramed]] function [arg1 [arg2...]]:")
@ -175,19 +175,19 @@ func main() {
fmt.Fprintln(os.Stderr, "CreateJob requires 1 args")
flag.Usage()
}
arg162 := flag.Arg(1)
mbTrans163 := thrift.NewTMemoryBufferLen(len(arg162))
defer mbTrans163.Close()
_, err164 := mbTrans163.WriteString(arg162)
if err164 != nil {
arg213 := flag.Arg(1)
mbTrans214 := thrift.NewTMemoryBufferLen(len(arg213))
defer mbTrans214.Close()
_, err215 := mbTrans214.WriteString(arg213)
if err215 != nil {
Usage()
return
}
factory165 := thrift.NewTJSONProtocolFactory()
jsProt166 := factory165.GetProtocol(mbTrans163)
factory216 := thrift.NewTJSONProtocolFactory()
jsProt217 := factory216.GetProtocol(mbTrans214)
argvalue0 := aurora.NewJobConfiguration()
err167 := argvalue0.Read(jsProt166)
if err167 != nil {
err218 := argvalue0.Read(context.Background(), jsProt217)
if err218 != nil {
Usage()
return
}
@ -200,19 +200,19 @@ func main() {
fmt.Fprintln(os.Stderr, "ScheduleCronJob requires 1 args")
flag.Usage()
}
arg168 := flag.Arg(1)
mbTrans169 := thrift.NewTMemoryBufferLen(len(arg168))
defer mbTrans169.Close()
_, err170 := mbTrans169.WriteString(arg168)
if err170 != nil {
arg219 := flag.Arg(1)
mbTrans220 := thrift.NewTMemoryBufferLen(len(arg219))
defer mbTrans220.Close()
_, err221 := mbTrans220.WriteString(arg219)
if err221 != nil {
Usage()
return
}
factory171 := thrift.NewTJSONProtocolFactory()
jsProt172 := factory171.GetProtocol(mbTrans169)
factory222 := thrift.NewTJSONProtocolFactory()
jsProt223 := factory222.GetProtocol(mbTrans220)
argvalue0 := aurora.NewJobConfiguration()
err173 := argvalue0.Read(jsProt172)
if err173 != nil {
err224 := argvalue0.Read(context.Background(), jsProt223)
if err224 != nil {
Usage()
return
}
@ -225,19 +225,19 @@ func main() {
fmt.Fprintln(os.Stderr, "DescheduleCronJob requires 1 args")
flag.Usage()
}
arg174 := flag.Arg(1)
mbTrans175 := thrift.NewTMemoryBufferLen(len(arg174))
defer mbTrans175.Close()
_, err176 := mbTrans175.WriteString(arg174)
if err176 != nil {
arg225 := flag.Arg(1)
mbTrans226 := thrift.NewTMemoryBufferLen(len(arg225))
defer mbTrans226.Close()
_, err227 := mbTrans226.WriteString(arg225)
if err227 != nil {
Usage()
return
}
factory177 := thrift.NewTJSONProtocolFactory()
jsProt178 := factory177.GetProtocol(mbTrans175)
factory228 := thrift.NewTJSONProtocolFactory()
jsProt229 := factory228.GetProtocol(mbTrans226)
argvalue0 := aurora.NewJobKey()
err179 := argvalue0.Read(jsProt178)
if err179 != nil {
err230 := argvalue0.Read(context.Background(), jsProt229)
if err230 != nil {
Usage()
return
}
@ -250,19 +250,19 @@ func main() {
fmt.Fprintln(os.Stderr, "StartCronJob requires 1 args")
flag.Usage()
}
arg180 := flag.Arg(1)
mbTrans181 := thrift.NewTMemoryBufferLen(len(arg180))
defer mbTrans181.Close()
_, err182 := mbTrans181.WriteString(arg180)
if err182 != nil {
arg231 := flag.Arg(1)
mbTrans232 := thrift.NewTMemoryBufferLen(len(arg231))
defer mbTrans232.Close()
_, err233 := mbTrans232.WriteString(arg231)
if err233 != nil {
Usage()
return
}
factory183 := thrift.NewTJSONProtocolFactory()
jsProt184 := factory183.GetProtocol(mbTrans181)
factory234 := thrift.NewTJSONProtocolFactory()
jsProt235 := factory234.GetProtocol(mbTrans232)
argvalue0 := aurora.NewJobKey()
err185 := argvalue0.Read(jsProt184)
if err185 != nil {
err236 := argvalue0.Read(context.Background(), jsProt235)
if err236 != nil {
Usage()
return
}
@ -275,36 +275,36 @@ func main() {
fmt.Fprintln(os.Stderr, "RestartShards requires 2 args")
flag.Usage()
}
arg186 := flag.Arg(1)
mbTrans187 := thrift.NewTMemoryBufferLen(len(arg186))
defer mbTrans187.Close()
_, err188 := mbTrans187.WriteString(arg186)
if err188 != nil {
arg237 := flag.Arg(1)
mbTrans238 := thrift.NewTMemoryBufferLen(len(arg237))
defer mbTrans238.Close()
_, err239 := mbTrans238.WriteString(arg237)
if err239 != nil {
Usage()
return
}
factory189 := thrift.NewTJSONProtocolFactory()
jsProt190 := factory189.GetProtocol(mbTrans187)
factory240 := thrift.NewTJSONProtocolFactory()
jsProt241 := factory240.GetProtocol(mbTrans238)
argvalue0 := aurora.NewJobKey()
err191 := argvalue0.Read(jsProt190)
if err191 != nil {
err242 := argvalue0.Read(context.Background(), jsProt241)
if err242 != nil {
Usage()
return
}
value0 := argvalue0
arg192 := flag.Arg(2)
mbTrans193 := thrift.NewTMemoryBufferLen(len(arg192))
defer mbTrans193.Close()
_, err194 := mbTrans193.WriteString(arg192)
if err194 != nil {
arg243 := flag.Arg(2)
mbTrans244 := thrift.NewTMemoryBufferLen(len(arg243))
defer mbTrans244.Close()
_, err245 := mbTrans244.WriteString(arg243)
if err245 != nil {
Usage()
return
}
factory195 := thrift.NewTJSONProtocolFactory()
jsProt196 := factory195.GetProtocol(mbTrans193)
factory246 := thrift.NewTJSONProtocolFactory()
jsProt247 := factory246.GetProtocol(mbTrans244)
containerStruct1 := aurora.NewAuroraSchedulerManagerRestartShardsArgs()
err197 := containerStruct1.ReadField2(jsProt196)
if err197 != nil {
err248 := containerStruct1.ReadField2(context.Background(), jsProt247)
if err248 != nil {
Usage()
return
}
@ -318,36 +318,36 @@ func main() {
fmt.Fprintln(os.Stderr, "KillTasks requires 3 args")
flag.Usage()
}
arg198 := flag.Arg(1)
mbTrans199 := thrift.NewTMemoryBufferLen(len(arg198))
defer mbTrans199.Close()
_, err200 := mbTrans199.WriteString(arg198)
if err200 != nil {
arg249 := flag.Arg(1)
mbTrans250 := thrift.NewTMemoryBufferLen(len(arg249))
defer mbTrans250.Close()
_, err251 := mbTrans250.WriteString(arg249)
if err251 != nil {
Usage()
return
}
factory201 := thrift.NewTJSONProtocolFactory()
jsProt202 := factory201.GetProtocol(mbTrans199)
factory252 := thrift.NewTJSONProtocolFactory()
jsProt253 := factory252.GetProtocol(mbTrans250)
argvalue0 := aurora.NewJobKey()
err203 := argvalue0.Read(jsProt202)
if err203 != nil {
err254 := argvalue0.Read(context.Background(), jsProt253)
if err254 != nil {
Usage()
return
}
value0 := argvalue0
arg204 := flag.Arg(2)
mbTrans205 := thrift.NewTMemoryBufferLen(len(arg204))
defer mbTrans205.Close()
_, err206 := mbTrans205.WriteString(arg204)
if err206 != nil {
arg255 := flag.Arg(2)
mbTrans256 := thrift.NewTMemoryBufferLen(len(arg255))
defer mbTrans256.Close()
_, err257 := mbTrans256.WriteString(arg255)
if err257 != nil {
Usage()
return
}
factory207 := thrift.NewTJSONProtocolFactory()
jsProt208 := factory207.GetProtocol(mbTrans205)
factory258 := thrift.NewTJSONProtocolFactory()
jsProt259 := factory258.GetProtocol(mbTrans256)
containerStruct1 := aurora.NewAuroraSchedulerManagerKillTasksArgs()
err209 := containerStruct1.ReadField2(jsProt208)
if err209 != nil {
err260 := containerStruct1.ReadField2(context.Background(), jsProt259)
if err260 != nil {
Usage()
return
}
@ -363,25 +363,25 @@ func main() {
fmt.Fprintln(os.Stderr, "AddInstances requires 2 args")
flag.Usage()
}
arg211 := flag.Arg(1)
mbTrans212 := thrift.NewTMemoryBufferLen(len(arg211))
defer mbTrans212.Close()
_, err213 := mbTrans212.WriteString(arg211)
if err213 != nil {
arg262 := flag.Arg(1)
mbTrans263 := thrift.NewTMemoryBufferLen(len(arg262))
defer mbTrans263.Close()
_, err264 := mbTrans263.WriteString(arg262)
if err264 != nil {
Usage()
return
}
factory214 := thrift.NewTJSONProtocolFactory()
jsProt215 := factory214.GetProtocol(mbTrans212)
factory265 := thrift.NewTJSONProtocolFactory()
jsProt266 := factory265.GetProtocol(mbTrans263)
argvalue0 := aurora.NewInstanceKey()
err216 := argvalue0.Read(jsProt215)
if err216 != nil {
err267 := argvalue0.Read(context.Background(), jsProt266)
if err267 != nil {
Usage()
return
}
value0 := argvalue0
tmp1, err217 := (strconv.Atoi(flag.Arg(2)))
if err217 != nil {
tmp1, err268 := (strconv.Atoi(flag.Arg(2)))
if err268 != nil {
Usage()
return
}
@ -395,19 +395,19 @@ func main() {
fmt.Fprintln(os.Stderr, "ReplaceCronTemplate requires 1 args")
flag.Usage()
}
arg218 := flag.Arg(1)
mbTrans219 := thrift.NewTMemoryBufferLen(len(arg218))
defer mbTrans219.Close()
_, err220 := mbTrans219.WriteString(arg218)
if err220 != nil {
arg269 := flag.Arg(1)
mbTrans270 := thrift.NewTMemoryBufferLen(len(arg269))
defer mbTrans270.Close()
_, err271 := mbTrans270.WriteString(arg269)
if err271 != nil {
Usage()
return
}
factory221 := thrift.NewTJSONProtocolFactory()
jsProt222 := factory221.GetProtocol(mbTrans219)
factory272 := thrift.NewTJSONProtocolFactory()
jsProt273 := factory272.GetProtocol(mbTrans270)
argvalue0 := aurora.NewJobConfiguration()
err223 := argvalue0.Read(jsProt222)
if err223 != nil {
err274 := argvalue0.Read(context.Background(), jsProt273)
if err274 != nil {
Usage()
return
}
@ -420,19 +420,19 @@ func main() {
fmt.Fprintln(os.Stderr, "StartJobUpdate requires 2 args")
flag.Usage()
}
arg224 := flag.Arg(1)
mbTrans225 := thrift.NewTMemoryBufferLen(len(arg224))
defer mbTrans225.Close()
_, err226 := mbTrans225.WriteString(arg224)
if err226 != nil {
arg275 := flag.Arg(1)
mbTrans276 := thrift.NewTMemoryBufferLen(len(arg275))
defer mbTrans276.Close()
_, err277 := mbTrans276.WriteString(arg275)
if err277 != nil {
Usage()
return
}
factory227 := thrift.NewTJSONProtocolFactory()
jsProt228 := factory227.GetProtocol(mbTrans225)
factory278 := thrift.NewTJSONProtocolFactory()
jsProt279 := factory278.GetProtocol(mbTrans276)
argvalue0 := aurora.NewJobUpdateRequest()
err229 := argvalue0.Read(jsProt228)
if err229 != nil {
err280 := argvalue0.Read(context.Background(), jsProt279)
if err280 != nil {
Usage()
return
}
@ -447,19 +447,19 @@ func main() {
fmt.Fprintln(os.Stderr, "PauseJobUpdate requires 2 args")
flag.Usage()
}
arg231 := flag.Arg(1)
mbTrans232 := thrift.NewTMemoryBufferLen(len(arg231))
defer mbTrans232.Close()
_, err233 := mbTrans232.WriteString(arg231)
if err233 != nil {
arg282 := flag.Arg(1)
mbTrans283 := thrift.NewTMemoryBufferLen(len(arg282))
defer mbTrans283.Close()
_, err284 := mbTrans283.WriteString(arg282)
if err284 != nil {
Usage()
return
}
factory234 := thrift.NewTJSONProtocolFactory()
jsProt235 := factory234.GetProtocol(mbTrans232)
factory285 := thrift.NewTJSONProtocolFactory()
jsProt286 := factory285.GetProtocol(mbTrans283)
argvalue0 := aurora.NewJobUpdateKey()
err236 := argvalue0.Read(jsProt235)
if err236 != nil {
err287 := argvalue0.Read(context.Background(), jsProt286)
if err287 != nil {
Usage()
return
}
@ -474,19 +474,19 @@ func main() {
fmt.Fprintln(os.Stderr, "ResumeJobUpdate requires 2 args")
flag.Usage()
}
arg238 := flag.Arg(1)
mbTrans239 := thrift.NewTMemoryBufferLen(len(arg238))
defer mbTrans239.Close()
_, err240 := mbTrans239.WriteString(arg238)
if err240 != nil {
arg289 := flag.Arg(1)
mbTrans290 := thrift.NewTMemoryBufferLen(len(arg289))
defer mbTrans290.Close()
_, err291 := mbTrans290.WriteString(arg289)
if err291 != nil {
Usage()
return
}
factory241 := thrift.NewTJSONProtocolFactory()
jsProt242 := factory241.GetProtocol(mbTrans239)
factory292 := thrift.NewTJSONProtocolFactory()
jsProt293 := factory292.GetProtocol(mbTrans290)
argvalue0 := aurora.NewJobUpdateKey()
err243 := argvalue0.Read(jsProt242)
if err243 != nil {
err294 := argvalue0.Read(context.Background(), jsProt293)
if err294 != nil {
Usage()
return
}
@ -501,19 +501,19 @@ func main() {
fmt.Fprintln(os.Stderr, "AbortJobUpdate requires 2 args")
flag.Usage()
}
arg245 := flag.Arg(1)
mbTrans246 := thrift.NewTMemoryBufferLen(len(arg245))
defer mbTrans246.Close()
_, err247 := mbTrans246.WriteString(arg245)
if err247 != nil {
arg296 := flag.Arg(1)
mbTrans297 := thrift.NewTMemoryBufferLen(len(arg296))
defer mbTrans297.Close()
_, err298 := mbTrans297.WriteString(arg296)
if err298 != nil {
Usage()
return
}
factory248 := thrift.NewTJSONProtocolFactory()
jsProt249 := factory248.GetProtocol(mbTrans246)
factory299 := thrift.NewTJSONProtocolFactory()
jsProt300 := factory299.GetProtocol(mbTrans297)
argvalue0 := aurora.NewJobUpdateKey()
err250 := argvalue0.Read(jsProt249)
if err250 != nil {
err301 := argvalue0.Read(context.Background(), jsProt300)
if err301 != nil {
Usage()
return
}
@ -528,19 +528,19 @@ func main() {
fmt.Fprintln(os.Stderr, "RollbackJobUpdate requires 2 args")
flag.Usage()
}
arg252 := flag.Arg(1)
mbTrans253 := thrift.NewTMemoryBufferLen(len(arg252))
defer mbTrans253.Close()
_, err254 := mbTrans253.WriteString(arg252)
if err254 != nil {
arg303 := flag.Arg(1)
mbTrans304 := thrift.NewTMemoryBufferLen(len(arg303))
defer mbTrans304.Close()
_, err305 := mbTrans304.WriteString(arg303)
if err305 != nil {
Usage()
return
}
factory255 := thrift.NewTJSONProtocolFactory()
jsProt256 := factory255.GetProtocol(mbTrans253)
factory306 := thrift.NewTJSONProtocolFactory()
jsProt307 := factory306.GetProtocol(mbTrans304)
argvalue0 := aurora.NewJobUpdateKey()
err257 := argvalue0.Read(jsProt256)
if err257 != nil {
err308 := argvalue0.Read(context.Background(), jsProt307)
if err308 != nil {
Usage()
return
}
@ -555,19 +555,19 @@ func main() {
fmt.Fprintln(os.Stderr, "PulseJobUpdate requires 1 args")
flag.Usage()
}
arg259 := flag.Arg(1)
mbTrans260 := thrift.NewTMemoryBufferLen(len(arg259))
defer mbTrans260.Close()
_, err261 := mbTrans260.WriteString(arg259)
if err261 != nil {
arg310 := flag.Arg(1)
mbTrans311 := thrift.NewTMemoryBufferLen(len(arg310))
defer mbTrans311.Close()
_, err312 := mbTrans311.WriteString(arg310)
if err312 != nil {
Usage()
return
}
factory262 := thrift.NewTJSONProtocolFactory()
jsProt263 := factory262.GetProtocol(mbTrans260)
factory313 := thrift.NewTJSONProtocolFactory()
jsProt314 := factory313.GetProtocol(mbTrans311)
argvalue0 := aurora.NewJobUpdateKey()
err264 := argvalue0.Read(jsProt263)
if err264 != nil {
err315 := argvalue0.Read(context.Background(), jsProt314)
if err315 != nil {
Usage()
return
}
@ -598,19 +598,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetTasksStatus requires 1 args")
flag.Usage()
}
arg266 := flag.Arg(1)
mbTrans267 := thrift.NewTMemoryBufferLen(len(arg266))
defer mbTrans267.Close()
_, err268 := mbTrans267.WriteString(arg266)
if err268 != nil {
arg317 := flag.Arg(1)
mbTrans318 := thrift.NewTMemoryBufferLen(len(arg317))
defer mbTrans318.Close()
_, err319 := mbTrans318.WriteString(arg317)
if err319 != nil {
Usage()
return
}
factory269 := thrift.NewTJSONProtocolFactory()
jsProt270 := factory269.GetProtocol(mbTrans267)
factory320 := thrift.NewTJSONProtocolFactory()
jsProt321 := factory320.GetProtocol(mbTrans318)
argvalue0 := aurora.NewTaskQuery()
err271 := argvalue0.Read(jsProt270)
if err271 != nil {
err322 := argvalue0.Read(context.Background(), jsProt321)
if err322 != nil {
Usage()
return
}
@ -623,19 +623,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetTasksWithoutConfigs requires 1 args")
flag.Usage()
}
arg272 := flag.Arg(1)
mbTrans273 := thrift.NewTMemoryBufferLen(len(arg272))
defer mbTrans273.Close()
_, err274 := mbTrans273.WriteString(arg272)
if err274 != nil {
arg323 := flag.Arg(1)
mbTrans324 := thrift.NewTMemoryBufferLen(len(arg323))
defer mbTrans324.Close()
_, err325 := mbTrans324.WriteString(arg323)
if err325 != nil {
Usage()
return
}
factory275 := thrift.NewTJSONProtocolFactory()
jsProt276 := factory275.GetProtocol(mbTrans273)
factory326 := thrift.NewTJSONProtocolFactory()
jsProt327 := factory326.GetProtocol(mbTrans324)
argvalue0 := aurora.NewTaskQuery()
err277 := argvalue0.Read(jsProt276)
if err277 != nil {
err328 := argvalue0.Read(context.Background(), jsProt327)
if err328 != nil {
Usage()
return
}
@ -648,19 +648,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetPendingReason requires 1 args")
flag.Usage()
}
arg278 := flag.Arg(1)
mbTrans279 := thrift.NewTMemoryBufferLen(len(arg278))
defer mbTrans279.Close()
_, err280 := mbTrans279.WriteString(arg278)
if err280 != nil {
arg329 := flag.Arg(1)
mbTrans330 := thrift.NewTMemoryBufferLen(len(arg329))
defer mbTrans330.Close()
_, err331 := mbTrans330.WriteString(arg329)
if err331 != nil {
Usage()
return
}
factory281 := thrift.NewTJSONProtocolFactory()
jsProt282 := factory281.GetProtocol(mbTrans279)
factory332 := thrift.NewTJSONProtocolFactory()
jsProt333 := factory332.GetProtocol(mbTrans330)
argvalue0 := aurora.NewTaskQuery()
err283 := argvalue0.Read(jsProt282)
if err283 != nil {
err334 := argvalue0.Read(context.Background(), jsProt333)
if err334 != nil {
Usage()
return
}
@ -673,19 +673,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetConfigSummary requires 1 args")
flag.Usage()
}
arg284 := flag.Arg(1)
mbTrans285 := thrift.NewTMemoryBufferLen(len(arg284))
defer mbTrans285.Close()
_, err286 := mbTrans285.WriteString(arg284)
if err286 != nil {
arg335 := flag.Arg(1)
mbTrans336 := thrift.NewTMemoryBufferLen(len(arg335))
defer mbTrans336.Close()
_, err337 := mbTrans336.WriteString(arg335)
if err337 != nil {
Usage()
return
}
factory287 := thrift.NewTJSONProtocolFactory()
jsProt288 := factory287.GetProtocol(mbTrans285)
factory338 := thrift.NewTJSONProtocolFactory()
jsProt339 := factory338.GetProtocol(mbTrans336)
argvalue0 := aurora.NewJobKey()
err289 := argvalue0.Read(jsProt288)
if err289 != nil {
err340 := argvalue0.Read(context.Background(), jsProt339)
if err340 != nil {
Usage()
return
}
@ -718,19 +718,19 @@ func main() {
fmt.Fprintln(os.Stderr, "PopulateJobConfig requires 1 args")
flag.Usage()
}
arg292 := flag.Arg(1)
mbTrans293 := thrift.NewTMemoryBufferLen(len(arg292))
defer mbTrans293.Close()
_, err294 := mbTrans293.WriteString(arg292)
if err294 != nil {
arg343 := flag.Arg(1)
mbTrans344 := thrift.NewTMemoryBufferLen(len(arg343))
defer mbTrans344.Close()
_, err345 := mbTrans344.WriteString(arg343)
if err345 != nil {
Usage()
return
}
factory295 := thrift.NewTJSONProtocolFactory()
jsProt296 := factory295.GetProtocol(mbTrans293)
factory346 := thrift.NewTJSONProtocolFactory()
jsProt347 := factory346.GetProtocol(mbTrans344)
argvalue0 := aurora.NewJobConfiguration()
err297 := argvalue0.Read(jsProt296)
if err297 != nil {
err348 := argvalue0.Read(context.Background(), jsProt347)
if err348 != nil {
Usage()
return
}
@ -743,19 +743,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetJobUpdateSummaries requires 1 args")
flag.Usage()
}
arg298 := flag.Arg(1)
mbTrans299 := thrift.NewTMemoryBufferLen(len(arg298))
defer mbTrans299.Close()
_, err300 := mbTrans299.WriteString(arg298)
if err300 != nil {
arg349 := flag.Arg(1)
mbTrans350 := thrift.NewTMemoryBufferLen(len(arg349))
defer mbTrans350.Close()
_, err351 := mbTrans350.WriteString(arg349)
if err351 != nil {
Usage()
return
}
factory301 := thrift.NewTJSONProtocolFactory()
jsProt302 := factory301.GetProtocol(mbTrans299)
factory352 := thrift.NewTJSONProtocolFactory()
jsProt353 := factory352.GetProtocol(mbTrans350)
argvalue0 := aurora.NewJobUpdateQuery()
err303 := argvalue0.Read(jsProt302)
if err303 != nil {
err354 := argvalue0.Read(context.Background(), jsProt353)
if err354 != nil {
Usage()
return
}
@ -768,19 +768,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetJobUpdateDetails requires 1 args")
flag.Usage()
}
arg304 := flag.Arg(1)
mbTrans305 := thrift.NewTMemoryBufferLen(len(arg304))
defer mbTrans305.Close()
_, err306 := mbTrans305.WriteString(arg304)
if err306 != nil {
arg355 := flag.Arg(1)
mbTrans356 := thrift.NewTMemoryBufferLen(len(arg355))
defer mbTrans356.Close()
_, err357 := mbTrans356.WriteString(arg355)
if err357 != nil {
Usage()
return
}
factory307 := thrift.NewTJSONProtocolFactory()
jsProt308 := factory307.GetProtocol(mbTrans305)
factory358 := thrift.NewTJSONProtocolFactory()
jsProt359 := factory358.GetProtocol(mbTrans356)
argvalue0 := aurora.NewJobUpdateQuery()
err309 := argvalue0.Read(jsProt308)
if err309 != nil {
err360 := argvalue0.Read(context.Background(), jsProt359)
if err360 != nil {
Usage()
return
}
@ -793,19 +793,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetJobUpdateDiff requires 1 args")
flag.Usage()
}
arg310 := flag.Arg(1)
mbTrans311 := thrift.NewTMemoryBufferLen(len(arg310))
defer mbTrans311.Close()
_, err312 := mbTrans311.WriteString(arg310)
if err312 != nil {
arg361 := flag.Arg(1)
mbTrans362 := thrift.NewTMemoryBufferLen(len(arg361))
defer mbTrans362.Close()
_, err363 := mbTrans362.WriteString(arg361)
if err363 != nil {
Usage()
return
}
factory313 := thrift.NewTJSONProtocolFactory()
jsProt314 := factory313.GetProtocol(mbTrans311)
factory364 := thrift.NewTJSONProtocolFactory()
jsProt365 := factory364.GetProtocol(mbTrans362)
argvalue0 := aurora.NewJobUpdateRequest()
err315 := argvalue0.Read(jsProt314)
if err315 != nil {
err366 := argvalue0.Read(context.Background(), jsProt365)
if err366 != nil {
Usage()
return
}
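The regenerated remote above changes in two mechanical ways: the temporary identifiers are renumbered, and every generated Read call now takes a context as its first argument because Thrift 0.14's Go serialization API is context-aware. The repeated argument-decoding pattern, reduced to a schematic (identifier suffixes dropped, surrounding main/switch omitted):

	// The CLI argument is a JSON-encoded thrift struct: write it into an
	// in-memory transport and read it back through a TJSONProtocol.
	arg := flag.Arg(1)
	mbTrans := thrift.NewTMemoryBufferLen(len(arg))
	defer mbTrans.Close()
	if _, err := mbTrans.WriteString(arg); err != nil {
		Usage()
		return
	}
	factory := thrift.NewTJSONProtocolFactory()
	jsProt := factory.GetProtocol(mbTrans)
	argvalue0 := aurora.NewJobConfiguration()
	// Thrift 0.12 generated Read(jsProt); 0.14 generates Read(ctx, jsProt).
	if err := argvalue0.Read(context.Background(), jsProt); err != nil {
		Usage()
		return
	}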


@ -1,22 +1,22 @@
// Autogenerated by Thrift Compiler (0.12.0)
// DO NOT EDIT UNLESS YOU ARE SURE THAT YOU KNOW WHAT YOU ARE DOING
// Code generated by Thrift Compiler (0.14.0). DO NOT EDIT.
package main
import (
"context"
"flag"
"fmt"
"math"
"net"
"net/url"
"os"
"strconv"
"strings"
"github.com/apache/thrift/lib/go/thrift"
"apache/aurora"
"context"
"flag"
"fmt"
"math"
"net"
"net/url"
"os"
"strconv"
"strings"
"github.com/apache/thrift/lib/go/thrift"
"apache/aurora"
)
var _ = aurora.GoUnusedProtection__
func Usage() {
fmt.Fprintln(os.Stderr, "Usage of ", os.Args[0], " [-h host:port] [-u url] [-f[ramed]] function [arg1 [arg2...]]:")
@ -179,19 +179,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetTasksStatus requires 1 args")
flag.Usage()
}
arg81 := flag.Arg(1)
mbTrans82 := thrift.NewTMemoryBufferLen(len(arg81))
defer mbTrans82.Close()
_, err83 := mbTrans82.WriteString(arg81)
if err83 != nil {
arg132 := flag.Arg(1)
mbTrans133 := thrift.NewTMemoryBufferLen(len(arg132))
defer mbTrans133.Close()
_, err134 := mbTrans133.WriteString(arg132)
if err134 != nil {
Usage()
return
}
factory84 := thrift.NewTJSONProtocolFactory()
jsProt85 := factory84.GetProtocol(mbTrans82)
factory135 := thrift.NewTJSONProtocolFactory()
jsProt136 := factory135.GetProtocol(mbTrans133)
argvalue0 := aurora.NewTaskQuery()
err86 := argvalue0.Read(jsProt85)
if err86 != nil {
err137 := argvalue0.Read(context.Background(), jsProt136)
if err137 != nil {
Usage()
return
}
@ -204,19 +204,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetTasksWithoutConfigs requires 1 args")
flag.Usage()
}
arg87 := flag.Arg(1)
mbTrans88 := thrift.NewTMemoryBufferLen(len(arg87))
defer mbTrans88.Close()
_, err89 := mbTrans88.WriteString(arg87)
if err89 != nil {
arg138 := flag.Arg(1)
mbTrans139 := thrift.NewTMemoryBufferLen(len(arg138))
defer mbTrans139.Close()
_, err140 := mbTrans139.WriteString(arg138)
if err140 != nil {
Usage()
return
}
factory90 := thrift.NewTJSONProtocolFactory()
jsProt91 := factory90.GetProtocol(mbTrans88)
factory141 := thrift.NewTJSONProtocolFactory()
jsProt142 := factory141.GetProtocol(mbTrans139)
argvalue0 := aurora.NewTaskQuery()
err92 := argvalue0.Read(jsProt91)
if err92 != nil {
err143 := argvalue0.Read(context.Background(), jsProt142)
if err143 != nil {
Usage()
return
}
@ -229,19 +229,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetPendingReason requires 1 args")
flag.Usage()
}
arg93 := flag.Arg(1)
mbTrans94 := thrift.NewTMemoryBufferLen(len(arg93))
defer mbTrans94.Close()
_, err95 := mbTrans94.WriteString(arg93)
if err95 != nil {
arg144 := flag.Arg(1)
mbTrans145 := thrift.NewTMemoryBufferLen(len(arg144))
defer mbTrans145.Close()
_, err146 := mbTrans145.WriteString(arg144)
if err146 != nil {
Usage()
return
}
factory96 := thrift.NewTJSONProtocolFactory()
jsProt97 := factory96.GetProtocol(mbTrans94)
factory147 := thrift.NewTJSONProtocolFactory()
jsProt148 := factory147.GetProtocol(mbTrans145)
argvalue0 := aurora.NewTaskQuery()
err98 := argvalue0.Read(jsProt97)
if err98 != nil {
err149 := argvalue0.Read(context.Background(), jsProt148)
if err149 != nil {
Usage()
return
}
@ -254,19 +254,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetConfigSummary requires 1 args")
flag.Usage()
}
arg99 := flag.Arg(1)
mbTrans100 := thrift.NewTMemoryBufferLen(len(arg99))
defer mbTrans100.Close()
_, err101 := mbTrans100.WriteString(arg99)
if err101 != nil {
arg150 := flag.Arg(1)
mbTrans151 := thrift.NewTMemoryBufferLen(len(arg150))
defer mbTrans151.Close()
_, err152 := mbTrans151.WriteString(arg150)
if err152 != nil {
Usage()
return
}
factory102 := thrift.NewTJSONProtocolFactory()
jsProt103 := factory102.GetProtocol(mbTrans100)
factory153 := thrift.NewTJSONProtocolFactory()
jsProt154 := factory153.GetProtocol(mbTrans151)
argvalue0 := aurora.NewJobKey()
err104 := argvalue0.Read(jsProt103)
if err104 != nil {
err155 := argvalue0.Read(context.Background(), jsProt154)
if err155 != nil {
Usage()
return
}
@ -299,19 +299,19 @@ func main() {
fmt.Fprintln(os.Stderr, "PopulateJobConfig requires 1 args")
flag.Usage()
}
arg107 := flag.Arg(1)
mbTrans108 := thrift.NewTMemoryBufferLen(len(arg107))
defer mbTrans108.Close()
_, err109 := mbTrans108.WriteString(arg107)
if err109 != nil {
arg158 := flag.Arg(1)
mbTrans159 := thrift.NewTMemoryBufferLen(len(arg158))
defer mbTrans159.Close()
_, err160 := mbTrans159.WriteString(arg158)
if err160 != nil {
Usage()
return
}
factory110 := thrift.NewTJSONProtocolFactory()
jsProt111 := factory110.GetProtocol(mbTrans108)
factory161 := thrift.NewTJSONProtocolFactory()
jsProt162 := factory161.GetProtocol(mbTrans159)
argvalue0 := aurora.NewJobConfiguration()
err112 := argvalue0.Read(jsProt111)
if err112 != nil {
err163 := argvalue0.Read(context.Background(), jsProt162)
if err163 != nil {
Usage()
return
}
@ -324,19 +324,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetJobUpdateSummaries requires 1 args")
flag.Usage()
}
arg113 := flag.Arg(1)
mbTrans114 := thrift.NewTMemoryBufferLen(len(arg113))
defer mbTrans114.Close()
_, err115 := mbTrans114.WriteString(arg113)
if err115 != nil {
arg164 := flag.Arg(1)
mbTrans165 := thrift.NewTMemoryBufferLen(len(arg164))
defer mbTrans165.Close()
_, err166 := mbTrans165.WriteString(arg164)
if err166 != nil {
Usage()
return
}
factory116 := thrift.NewTJSONProtocolFactory()
jsProt117 := factory116.GetProtocol(mbTrans114)
factory167 := thrift.NewTJSONProtocolFactory()
jsProt168 := factory167.GetProtocol(mbTrans165)
argvalue0 := aurora.NewJobUpdateQuery()
err118 := argvalue0.Read(jsProt117)
if err118 != nil {
err169 := argvalue0.Read(context.Background(), jsProt168)
if err169 != nil {
Usage()
return
}
@ -349,19 +349,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetJobUpdateDetails requires 1 args")
flag.Usage()
}
arg119 := flag.Arg(1)
mbTrans120 := thrift.NewTMemoryBufferLen(len(arg119))
defer mbTrans120.Close()
_, err121 := mbTrans120.WriteString(arg119)
if err121 != nil {
arg170 := flag.Arg(1)
mbTrans171 := thrift.NewTMemoryBufferLen(len(arg170))
defer mbTrans171.Close()
_, err172 := mbTrans171.WriteString(arg170)
if err172 != nil {
Usage()
return
}
factory122 := thrift.NewTJSONProtocolFactory()
jsProt123 := factory122.GetProtocol(mbTrans120)
factory173 := thrift.NewTJSONProtocolFactory()
jsProt174 := factory173.GetProtocol(mbTrans171)
argvalue0 := aurora.NewJobUpdateQuery()
err124 := argvalue0.Read(jsProt123)
if err124 != nil {
err175 := argvalue0.Read(context.Background(), jsProt174)
if err175 != nil {
Usage()
return
}
@ -374,19 +374,19 @@ func main() {
fmt.Fprintln(os.Stderr, "GetJobUpdateDiff requires 1 args")
flag.Usage()
}
arg125 := flag.Arg(1)
mbTrans126 := thrift.NewTMemoryBufferLen(len(arg125))
defer mbTrans126.Close()
_, err127 := mbTrans126.WriteString(arg125)
if err127 != nil {
arg176 := flag.Arg(1)
mbTrans177 := thrift.NewTMemoryBufferLen(len(arg176))
defer mbTrans177.Close()
_, err178 := mbTrans177.WriteString(arg176)
if err178 != nil {
Usage()
return
}
factory128 := thrift.NewTJSONProtocolFactory()
jsProt129 := factory128.GetProtocol(mbTrans126)
factory179 := thrift.NewTJSONProtocolFactory()
jsProt180 := factory179.GetProtocol(mbTrans177)
argvalue0 := aurora.NewJobUpdateRequest()
err130 := argvalue0.Read(jsProt129)
if err130 != nil {
err181 := argvalue0.Read(context.Background(), jsProt180)
if err181 != nil {
Usage()
return
}


@ -1,6 +1,6 @@
#! /bin/bash
THRIFT_VER=0.12.0
THRIFT_VER=0.14.0
if [[ $(thrift -version | grep -e $THRIFT_VER -c) -ne 1 ]]; then
echo "Warning: This wrapper has only been tested with version" $THRIFT_VER;

go.mod

@ -1,10 +1,12 @@
module github.com/paypal/gorealis/v2
module github.com/paypal/gorealis
go 1.13
require (
github.com/apache/thrift v0.12.0
github.com/apache/thrift v0.14.0
github.com/davecgh/go-spew v1.1.0 // indirect
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e
github.com/pkg/errors v0.9.1
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/samuel/go-zookeeper v0.0.0-20171117190445-471cd4e61d7a
github.com/stretchr/testify v1.2.0
github.com/stretchr/testify v1.7.0
)

go.sum (new file)

@ -0,0 +1,30 @@
github.com/apache/thrift v0.13.0 h1:5hryIiq9gtn+MiLVn0wP37kb/uTeRZgN08WoCsAhIhI=
github.com/apache/thrift v0.13.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/apache/thrift v0.14.0 h1:vqZ2DP42i8th2OsgCcYZkirtbzvpZEFx53LiWDJXIAs=
github.com/apache/thrift v0.14.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e h1:+RHxT/gm0O3UF7nLJbdNzAmULvCFt4XfXHWzh3XI/zs=
github.com/pkg/errors v0.0.0-20171216070316-e881fd58d78e/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/ridv/thrift v0.12.1 h1:b80V1Oa2Mbd++jrlJZbJsIybO5/MCfbXKzd1A5v4aSo=
github.com/ridv/thrift v0.12.1/go.mod h1:yTMRF94RCZjO1fY1xt69yncvMbQCPdRL8BhbwIrjPx8=
github.com/ridv/thrift v0.13.1 h1:/8XnTRUqJJeiuqoL7mfnJQmXQa4GJn9tUCiP7+i6Y9o=
github.com/ridv/thrift v0.13.1/go.mod h1:yTMRF94RCZjO1fY1xt69yncvMbQCPdRL8BhbwIrjPx8=
github.com/ridv/thrift v0.13.2 h1:Q3Smr8poXd7VkWZPHvdJZzlQCJO+b5W37ECfoUL4qHc=
github.com/ridv/thrift v0.13.2/go.mod h1:yTMRF94RCZjO1fY1xt69yncvMbQCPdRL8BhbwIrjPx8=
github.com/samuel/go-zookeeper v0.0.0-20171117190445-471cd4e61d7a h1:EYL2xz/Zdo0hyqdZMXR4lmT2O11jDLTPCEqIe/FR6W4=
github.com/samuel/go-zookeeper v0.0.0-20171117190445-471cd4e61d7a/go.mod h1:gi+0XIa01GRL2eRQVjQkKGqKF3SF9vZR/HnPullcV2E=
github.com/stretchr/objx v0.1.0 h1:4G4v2dO3VZwixGIRoQ5Lfboy6nUhCyYzaqnIAPPhYs4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.0 h1:LThGCOvhuJic9Gyd1VBCkhyUXmO8vKaBFvBsJ2k03rg=
github.com/stretchr/testify v1.2.0/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=

helpers.go (new file)

@ -0,0 +1,21 @@
package realis
import (
"context"
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
func (r *realisClient) jobExists(key aurora.JobKey) (bool, error) {
resp, err := r.client.GetConfigSummary(context.TODO(), &key)
if err != nil {
return false, err
}
return resp == nil ||
resp.GetResult_() == nil ||
resp.GetResult_().GetConfigSummaryResult_() == nil ||
resp.GetResult_().GetConfigSummaryResult_().GetSummary() == nil ||
resp.GetResponseCode() != aurora.ResponseCode_OK,
nil
}

job.go

@ -15,203 +15,377 @@
package realis
import (
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"strconv"
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
// Structure to collect all information pertaining to an Aurora job.
type AuroraJob struct {
jobConfig *aurora.JobConfiguration
task *AuroraTask
// Job interface is used to define a set of functions an Aurora Job object
// must implement.
// TODO(rdelvalle): Consider getting rid of the Job interface
type Job interface {
// Set Job Key environment.
Environment(env string) Job
Role(role string) Job
Name(name string) Job
CronSchedule(cron string) Job
CronCollisionPolicy(policy aurora.CronCollisionPolicy) Job
CPU(cpus float64) Job
Disk(disk int64) Job
RAM(ram int64) Job
GPU(gpu int64) Job
ExecutorName(name string) Job
ExecutorData(data string) Job
AddPorts(num int) Job
AddLabel(key string, value string) Job
AddNamedPorts(names ...string) Job
AddLimitConstraint(name string, limit int32) Job
AddValueConstraint(name string, negated bool, values ...string) Job
// From Aurora Docs:
// dedicated attribute. Aurora treats this specially, and only allows matching jobs
// to run on these machines, and will only schedule matching jobs on these machines.
// When a job is created, the scheduler requires that the $role component matches
// the role field in the job configuration, and will reject the job creation otherwise.
// A wildcard (*) may be used for the role portion of the dedicated attribute, which
// will allow any owner to elect for a job to run on the host(s)
AddDedicatedConstraint(role, name string) Job
AddURIs(extract bool, cache bool, values ...string) Job
JobKey() *aurora.JobKey
JobConfig() *aurora.JobConfiguration
TaskConfig() *aurora.TaskConfig
IsService(isService bool) Job
InstanceCount(instCount int32) Job
GetInstanceCount() int32
MaxFailure(maxFail int32) Job
Container(container Container) Job
PartitionPolicy(policy *aurora.PartitionPolicy) Job
Tier(tier string) Job
SlaPolicy(policy *aurora.SlaPolicy) Job
Priority(priority int32) Job
}
// Create a AuroraJob object with everything initialized.
func NewJob() *AuroraJob {
type resourceType int
jobKey := &aurora.JobKey{}
const (
CPU resourceType = iota
RAM
DISK
GPU
)
// AuroraTask clientConfig
task := NewTask()
task.task.Job = jobKey
const portNamePrefix = "org.apache.aurora.port."
// AuroraJob clientConfig
jobConfig := &aurora.JobConfiguration{
Key: jobKey,
TaskConfig: task.TaskConfig(),
}
// AuroraJob is a structure to collect all information pertaining to an Aurora job.
type AuroraJob struct {
jobConfig *aurora.JobConfiguration
resources map[resourceType]*aurora.Resource
metadata map[string]*aurora.Metadata
constraints map[string]*aurora.Constraint
portCount int
}
// NewJob is used to create a Job object with everything initialized.
func NewJob() Job {
jobConfig := aurora.NewJobConfiguration()
taskConfig := aurora.NewTaskConfig()
jobKey := aurora.NewJobKey()
// Job Config
jobConfig.Key = jobKey
jobConfig.TaskConfig = taskConfig
// Task Config
taskConfig.Job = jobKey
taskConfig.Container = aurora.NewContainer()
taskConfig.Container.Mesos = aurora.NewMesosContainer()
// Resources
numCpus := aurora.NewResource()
ramMb := aurora.NewResource()
diskMb := aurora.NewResource()
resources := map[resourceType]*aurora.Resource{CPU: numCpus, RAM: ramMb, DISK: diskMb}
taskConfig.Resources = []*aurora.Resource{numCpus, ramMb, diskMb}
numCpus.NumCpus = new(float64)
ramMb.RamMb = new(int64)
diskMb.DiskMb = new(int64)
return &AuroraJob{
jobConfig: jobConfig,
task: task,
jobConfig: jobConfig,
resources: resources,
metadata: make(map[string]*aurora.Metadata),
constraints: make(map[string]*aurora.Constraint),
portCount: 0,
}
}
// Set AuroraJob Key environment. Explicit changes to AuroraTask's job key are not needed
// because they share a pointer to the same JobKey.
func (j *AuroraJob) Environment(env string) *AuroraJob {
// Environment sets the Job Key environment.
func (j *AuroraJob) Environment(env string) Job {
j.jobConfig.Key.Environment = env
return j
}
// Set AuroraJob Key Role.
func (j *AuroraJob) Role(role string) *AuroraJob {
// Role sets the Job Key role.
func (j *AuroraJob) Role(role string) Job {
j.jobConfig.Key.Role = role
// Will be deprecated
identity := &aurora.Identity{User: role}
j.jobConfig.Owner = identity
j.jobConfig.TaskConfig.Owner = identity
return j
}
// Set AuroraJob Key Name.
func (j *AuroraJob) Name(name string) *AuroraJob {
// Name sets the Job Key Name.
func (j *AuroraJob) Name(name string) Job {
j.jobConfig.Key.Name = name
return j
}
// How many instances of the job to run
func (j *AuroraJob) InstanceCount(instCount int32) *AuroraJob {
// ExecutorName sets the name of the executor that the task will be configured to use.
func (j *AuroraJob) ExecutorName(name string) Job {
if j.jobConfig.TaskConfig.ExecutorConfig == nil {
j.jobConfig.TaskConfig.ExecutorConfig = aurora.NewExecutorConfig()
}
j.jobConfig.TaskConfig.ExecutorConfig.Name = name
return j
}
// ExecutorData sets the data blob that will be passed to the Mesos executor.
func (j *AuroraJob) ExecutorData(data string) Job {
if j.jobConfig.TaskConfig.ExecutorConfig == nil {
j.jobConfig.TaskConfig.ExecutorConfig = aurora.NewExecutorConfig()
}
j.jobConfig.TaskConfig.ExecutorConfig.Data = data
return j
}
// CPU sets the amount of CPU each task will use in an Aurora Job.
func (j *AuroraJob) CPU(cpus float64) Job {
*j.resources[CPU].NumCpus = cpus
return j
}
// RAM sets the amount of RAM each task will use in an Aurora Job.
func (j *AuroraJob) RAM(ram int64) Job {
*j.resources[RAM].RamMb = ram
return j
}
// Disk sets the amount of Disk each task will use in an Aurora Job.
func (j *AuroraJob) Disk(disk int64) Job {
*j.resources[DISK].DiskMb = disk
return j
}
// GPU sets the amount of GPU each task will use in an Aurora Job.
func (j *AuroraJob) GPU(gpu int64) Job {
// GPU resource must be set explicitly since the scheduler by default
// rejects jobs with GPU resources attached to it.
if _, ok := j.resources[GPU]; !ok {
j.resources[GPU] = &aurora.Resource{}
j.JobConfig().GetTaskConfig().Resources = append(
j.JobConfig().GetTaskConfig().Resources,
j.resources[GPU])
}
j.resources[GPU].NumGpus = &gpu
return j
}
// MaxFailure sets how many failures to tolerate before giving up per Job.
func (j *AuroraJob) MaxFailure(maxFail int32) Job {
j.jobConfig.TaskConfig.MaxTaskFailures = maxFail
return j
}
// InstanceCount sets how many instances of the task to run for this Job.
func (j *AuroraJob) InstanceCount(instCount int32) Job {
j.jobConfig.InstanceCount = instCount
return j
}
func (j *AuroraJob) CronSchedule(cron string) *AuroraJob {
// CronSchedule allows the user to configure a cron schedule for this job to run in.
func (j *AuroraJob) CronSchedule(cron string) Job {
j.jobConfig.CronSchedule = &cron
return j
}
func (j *AuroraJob) CronCollisionPolicy(policy aurora.CronCollisionPolicy) *AuroraJob {
// CronCollisionPolicy allows the user to decide what happens if two or more instances
// of the same Cron job need to run.
func (j *AuroraJob) CronCollisionPolicy(policy aurora.CronCollisionPolicy) Job {
j.jobConfig.CronCollisionPolicy = policy
return j
}
// How many instances of the job to run
// GetInstanceCount returns how many tasks this Job contains.
func (j *AuroraJob) GetInstanceCount() int32 {
return j.jobConfig.InstanceCount
}
// Get the current job configurations key to use for some realis calls.
func (j *AuroraJob) JobKey() aurora.JobKey {
return *j.jobConfig.Key
// IsService marks the job as a long-running service (true) or an ad-hoc job (false).
func (j *AuroraJob) IsService(isService bool) Job {
j.jobConfig.TaskConfig.IsService = isService
return j
}
// Get the current job configurations key to use for some realis calls.
// JobKey returns the job's configuration key.
func (j *AuroraJob) JobKey() *aurora.JobKey {
return j.jobConfig.Key
}
// JobConfig returns the job's configuration.
func (j *AuroraJob) JobConfig() *aurora.JobConfiguration {
return j.jobConfig
}
// Get the current job configurations key to use for some realis calls.
func (j *AuroraJob) AuroraTask() *AuroraTask {
return j.task
}
/*
AuroraTask specific API, see task.go for further documentation.
These functions are provided for the convenience of chaining API calls.
*/
func (j *AuroraJob) ExecutorName(name string) *AuroraJob {
j.task.ExecutorName(name)
return j
}
func (j *AuroraJob) ExecutorData(data string) *AuroraJob {
j.task.ExecutorData(data)
return j
}
func (j *AuroraJob) CPU(cpus float64) *AuroraJob {
j.task.CPU(cpus)
return j
}
func (j *AuroraJob) RAM(ram int64) *AuroraJob {
j.task.RAM(ram)
return j
}
func (j *AuroraJob) Disk(disk int64) *AuroraJob {
j.task.Disk(disk)
return j
}
func (j *AuroraJob) GPU(gpu int64) *AuroraJob {
j.task.GPU(gpu)
return j
}
func (j *AuroraJob) Tier(tier string) *AuroraJob {
j.task.Tier(tier)
return j
}
func (j *AuroraJob) MaxFailure(maxFail int32) *AuroraJob {
j.task.MaxFailure(maxFail)
return j
}
func (j *AuroraJob) IsService(isService bool) *AuroraJob {
j.task.IsService(isService)
return j
}
// TaskConfig returns the job's task(shard) configuration.
func (j *AuroraJob) TaskConfig() *aurora.TaskConfig {
return j.task.TaskConfig()
return j.jobConfig.TaskConfig
}
func (j *AuroraJob) AddURIs(extract bool, cache bool, values ...string) *AuroraJob {
j.task.AddURIs(extract, cache, values...)
// AddURIs adds a list of URIs with the same extract and cache configuration. Scheduler must have
// --enable_mesos_fetcher flag enabled. Currently there is no duplicate detection.
func (j *AuroraJob) AddURIs(extract bool, cache bool, values ...string) Job {
for _, value := range values {
j.jobConfig.TaskConfig.MesosFetcherUris = append(j.jobConfig.TaskConfig.MesosFetcherUris,
&aurora.MesosFetcherURI{Value: value, Extract: &extract, Cache: &cache})
}
return j
}
func (j *AuroraJob) AddLabel(key string, value string) *AuroraJob {
j.task.AddLabel(key, value)
// AddLabel adds a Mesos label to the job. Note that Aurora will add the
// prefix "org.apache.aurora.metadata." to the beginning of each key.
func (j *AuroraJob) AddLabel(key string, value string) Job {
if _, ok := j.metadata[key]; !ok {
j.metadata[key] = &aurora.Metadata{Key: key}
j.jobConfig.TaskConfig.Metadata = append(j.jobConfig.TaskConfig.Metadata, j.metadata[key])
}
j.metadata[key].Value = value
return j
}
func (j *AuroraJob) AddNamedPorts(names ...string) *AuroraJob {
j.task.AddNamedPorts(names...)
// AddNamedPorts adds one or more named ports to the job configuration. These are random
// ports, as it's not currently possible to request specific ports using Aurora.
func (j *AuroraJob) AddNamedPorts(names ...string) Job {
j.portCount += len(names)
for _, name := range names {
j.jobConfig.TaskConfig.Resources = append(
j.jobConfig.TaskConfig.Resources,
&aurora.Resource{NamedPort: &name})
}
return j
}
func (j *AuroraJob) AddPorts(num int) *AuroraJob {
j.task.AddPorts(num)
return j
}
func (j *AuroraJob) AddValueConstraint(name string, negated bool, values ...string) *AuroraJob {
j.task.AddValueConstraint(name, negated, values...)
// AddPorts adds a request for a number of ports to the job configuration. The names chosen for these ports
// will be org.apache.aurora.port.X, where X is the current port count for the job configuration
// starting at 0. These are random ports as it's not currently possible to request
// specific ports using Aurora.
func (j *AuroraJob) AddPorts(num int) Job {
start := j.portCount
j.portCount += num
for i := start; i < j.portCount; i++ {
portName := portNamePrefix + strconv.Itoa(i)
j.jobConfig.TaskConfig.Resources = append(
j.jobConfig.TaskConfig.Resources,
&aurora.Resource{NamedPort: &portName})
}
return j
}
func (j *AuroraJob) AddLimitConstraint(name string, limit int32) *AuroraJob {
j.task.AddLimitConstraint(name, limit)
// AddValueConstraint allows the user to add a value constraint to the job to limit which agents the job's
// tasks can be run on. If the name matches a constraint that was previously set, the previous value will be
// overwritten. In case the previous constraint attached to the name was of type limit, the constraint will be clobbered
// by this new Value constraint.
// From Aurora Docs:
// Add a Value constraint
// name - Mesos slave attribute that the constraint is matched against.
// If negated = true , treat this as a 'not' - to avoid specific values.
// Values - list of values we look for in attribute name
func (j *AuroraJob) AddValueConstraint(name string, negated bool, values ...string) Job {
if _, ok := j.constraints[name]; !ok {
j.constraints[name] = &aurora.Constraint{Name: name}
j.jobConfig.TaskConfig.Constraints = append(j.jobConfig.TaskConfig.Constraints, j.constraints[name])
}
j.constraints[name].Constraint = &aurora.TaskConstraint{
Value: &aurora.ValueConstraint{
Negated: negated,
Values: values,
},
Limit: nil,
}
return j
}
func (j *AuroraJob) AddDedicatedConstraint(role, name string) *AuroraJob {
j.task.AddDedicatedConstraint(role, name)
// AddLimitConstraint allows the user to limit how many tasks from the same Job are run on a single host.
// If the name matches a constraint that was previously set, the previous value will be
// overwritten. In case the previous constraint attached to the name was of type Value, the constraint will be clobbered
// by this new Limit constraint.
// From Aurora Docs:
// A constraint that specifies the maximum number of active tasks on a host with
// a matching attribute that may be scheduled simultaneously.
func (j *AuroraJob) AddLimitConstraint(name string, limit int32) Job {
if _, ok := j.constraints[name]; !ok {
j.constraints[name] = &aurora.Constraint{Name: name}
j.jobConfig.TaskConfig.Constraints = append(j.jobConfig.TaskConfig.Constraints, j.constraints[name])
}
j.constraints[name].Constraint = &aurora.TaskConstraint{
Value: nil,
Limit: &aurora.LimitConstraint{Limit: limit},
}
return j
}
func (j *AuroraJob) Container(container Container) *AuroraJob {
j.task.Container(container)
// AddDedicatedConstraint is a convenience function that allows the user to
// add a dedicated constraint to a Job configuration.
// In case a previous dedicated constraint was set, it will be clobbered by this new value.
func (j *AuroraJob) AddDedicatedConstraint(role, name string) Job {
j.AddValueConstraint("dedicated", false, role+"/"+name)
return j
}
func (j *AuroraJob) ThermosExecutor(thermos ThermosExecutor) *AuroraJob {
j.task.ThermosExecutor(thermos)
// Container sets a container for the job configuration to run.
func (j *AuroraJob) Container(container Container) Job {
j.jobConfig.TaskConfig.Container = container.Build()
return j
}
func (j *AuroraJob) BuildThermosPayload() error {
return j.task.BuildThermosPayload()
}
func (j *AuroraJob) PartitionPolicy(reschedule bool, delay int64) *AuroraJob {
j.task.PartitionPolicy(aurora.PartitionPolicy{
Reschedule: reschedule,
DelaySecs: &delay,
})
// PartitionPolicy sets a partition policy for the job configuration to implement.
func (j *AuroraJob) PartitionPolicy(policy *aurora.PartitionPolicy) Job {
j.jobConfig.TaskConfig.PartitionPolicy = policy
return j
}
// Tier sets the Tier for the Job.
func (j *AuroraJob) Tier(tier string) Job {
j.jobConfig.TaskConfig.Tier = &tier
return j
}
// SlaPolicy sets an SlaPolicy for the Job.
func (j *AuroraJob) SlaPolicy(policy *aurora.SlaPolicy) Job {
j.jobConfig.TaskConfig.SlaPolicy = policy
return j
}
func (j *AuroraJob) Priority(priority int32) Job {
j.jobConfig.TaskConfig.Priority = priority
return j
}
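With the Job interface restored above, a v1 job is normally assembled as a single builder chain. Below is a minimal, self-contained sketch using only methods defined in this file; the resource values and key fields are arbitrary examples, and ExecutorData would normally carry a serialized Thermos payload.

package main

import (
	"fmt"

	realis "github.com/paypal/gorealis"
)

func main() {
	// Every builder method used here is defined in job.go above; the concrete
	// values are arbitrary examples.
	job := realis.NewJob().
		Environment("prod").
		Role("vagrant").
		Name("hello_world_from_gorealis").
		ExecutorName("AuroraExecutor").
		ExecutorData("{...}"). // placeholder; normally a serialized Thermos payload
		CPU(1.0).
		RAM(64).
		Disk(100).
		IsService(true).
		InstanceCount(1).
		AddPorts(1)

	// JobKey and GetInstanceCount read back what the builder assembled.
	fmt.Println(job.JobKey(), job.GetInstanceCount())
}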


@ -1,249 +0,0 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"time"
"github.com/apache/thrift/lib/go/thrift"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
)
// Structure to collect all information required to create job update
type JobUpdate struct {
task *AuroraTask
request *aurora.JobUpdateRequest
}
// Create a default JobUpdate object with an empty task and no fields filled in.
func NewJobUpdate() *JobUpdate {
newTask := NewTask()
req := aurora.JobUpdateRequest{}
req.TaskConfig = newTask.TaskConfig()
req.Settings = newUpdateSettings()
return &JobUpdate{task: newTask, request: &req}
}
func JobUpdateFromAuroraTask(task *AuroraTask) *JobUpdate {
newTask := task.Clone()
req := aurora.JobUpdateRequest{}
req.TaskConfig = newTask.TaskConfig()
req.Settings = newUpdateSettings()
return &JobUpdate{task: newTask, request: &req}
}
func JobUpdateFromConfig(task *aurora.TaskConfig) *JobUpdate {
// Perform a deep copy to avoid unexpected behavior
newTask := TaskFromThrift(task)
req := aurora.JobUpdateRequest{}
req.TaskConfig = newTask.TaskConfig()
req.Settings = newUpdateSettings()
return &JobUpdate{task: newTask, request: &req}
}
// Set instance count the job will have after the update.
func (j *JobUpdate) InstanceCount(inst int32) *JobUpdate {
j.request.InstanceCount = inst
return j
}
// Max number of instances being updated at any given moment.
func (j *JobUpdate) BatchSize(size int32) *JobUpdate {
j.request.Settings.UpdateGroupSize = size
return j
}
// Minimum number of seconds a shard must remain in RUNNING state before considered a success.
func (j *JobUpdate) WatchTime(timeout time.Duration) *JobUpdate {
j.request.Settings.MinWaitInInstanceRunningMs = int32(timeout.Seconds() * 1000)
return j
}
// Wait for all instances in a group to be done before moving on.
func (j *JobUpdate) WaitForBatchCompletion(batchWait bool) *JobUpdate {
j.request.Settings.WaitForBatchCompletion = batchWait
return j
}
// Max number of instance failures to tolerate before marking instance as FAILED.
func (j *JobUpdate) MaxPerInstanceFailures(inst int32) *JobUpdate {
j.request.Settings.MaxPerInstanceFailures = inst
return j
}
// Max number of FAILED instances to tolerate before terminating the update.
func (j *JobUpdate) MaxFailedInstances(inst int32) *JobUpdate {
j.request.Settings.MaxFailedInstances = inst
return j
}
// When False, prevents auto rollback of a failed update.
func (j *JobUpdate) RollbackOnFail(rollback bool) *JobUpdate {
j.request.Settings.RollbackOnFailure = rollback
return j
}
// Sets the interval at which pulses should be received by the job update before timing out.
func (j *JobUpdate) PulseIntervalTimeout(timeout time.Duration) *JobUpdate {
j.request.Settings.BlockIfNoPulsesAfterMs = thrift.Int32Ptr(int32(timeout.Seconds() * 1000))
return j
}
func newUpdateSettings() *aurora.JobUpdateSettings {
us := aurora.JobUpdateSettings{}
// Mirrors defaults set by Pystachio
us.UpdateOnlyTheseInstances = []*aurora.Range{}
us.UpdateGroupSize = 1
us.WaitForBatchCompletion = false
us.MinWaitInInstanceRunningMs = 45000
us.MaxPerInstanceFailures = 0
us.MaxFailedInstances = 0
us.RollbackOnFailure = true
return &us
}
/*
These methods are provided for user convenience so that configuration
calls can be chained.
The APIs below are wrappers around modifying an AuroraTask instance.
See task.go for further documentation.
*/
func (j *JobUpdate) Environment(env string) *JobUpdate {
j.task.Environment(env)
return j
}
func (j *JobUpdate) Role(role string) *JobUpdate {
j.task.Role(role)
return j
}
func (j *JobUpdate) Name(name string) *JobUpdate {
j.task.Name(name)
return j
}
func (j *JobUpdate) ExecutorName(name string) *JobUpdate {
j.task.ExecutorName(name)
return j
}
func (j *JobUpdate) ExecutorData(data string) *JobUpdate {
j.task.ExecutorData(data)
return j
}
func (j *JobUpdate) CPU(cpus float64) *JobUpdate {
j.task.CPU(cpus)
return j
}
func (j *JobUpdate) RAM(ram int64) *JobUpdate {
j.task.RAM(ram)
return j
}
func (j *JobUpdate) Disk(disk int64) *JobUpdate {
j.task.Disk(disk)
return j
}
func (j *JobUpdate) Tier(tier string) *JobUpdate {
j.task.Tier(tier)
return j
}
func (j *JobUpdate) MaxFailure(maxFail int32) *JobUpdate {
j.task.MaxFailure(maxFail)
return j
}
func (j *JobUpdate) IsService(isService bool) *JobUpdate {
j.task.IsService(isService)
return j
}
func (j *JobUpdate) TaskConfig() *aurora.TaskConfig {
return j.task.TaskConfig()
}
func (j *JobUpdate) AddURIs(extract bool, cache bool, values ...string) *JobUpdate {
j.task.AddURIs(extract, cache, values...)
return j
}
func (j *JobUpdate) AddLabel(key string, value string) *JobUpdate {
j.task.AddLabel(key, value)
return j
}
func (j *JobUpdate) AddNamedPorts(names ...string) *JobUpdate {
j.task.AddNamedPorts(names...)
return j
}
func (j *JobUpdate) AddPorts(num int) *JobUpdate {
j.task.AddPorts(num)
return j
}
func (j *JobUpdate) AddValueConstraint(name string, negated bool, values ...string) *JobUpdate {
j.task.AddValueConstraint(name, negated, values...)
return j
}
func (j *JobUpdate) AddLimitConstraint(name string, limit int32) *JobUpdate {
j.task.AddLimitConstraint(name, limit)
return j
}
func (j *JobUpdate) AddDedicatedConstraint(role, name string) *JobUpdate {
j.task.AddDedicatedConstraint(role, name)
return j
}
func (j *JobUpdate) Container(container Container) *JobUpdate {
j.task.Container(container)
return j
}
func (j *JobUpdate) JobKey() aurora.JobKey {
return j.task.JobKey()
}
func (j *JobUpdate) ThermosExecutor(thermos ThermosExecutor) *JobUpdate {
j.task.ThermosExecutor(thermos)
return j
}
func (j *JobUpdate) BuildThermosPayload() error {
return j.task.BuildThermosPayload()
}
func (j *JobUpdate) PartitionPolicy(reschedule bool, delay int64) *JobUpdate {
j.task.PartitionPolicy(aurora.PartitionPolicy{
Reschedule: reschedule,
DelaySecs: &delay,
})
return j
}
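
For context on the deleted builder above, here is a hedged sketch of how a JobUpdate was typically assembled with these chained calls in the v2 API; the client call that consumes it (StartJobUpdate) is assumed and not part of this file.

package examples

import (
	"time"

	realis "github.com/paypal/gorealis/v2"
)

// sketchUpdate shows the chained configuration style described above: job key
// and task fields are forwarded to the embedded AuroraTask, while the batch
// settings land on the JobUpdateRequest.
func sketchUpdate() *realis.JobUpdate {
	return realis.NewJobUpdate().
		Environment("prod").
		Role("vagrant").
		Name("hello_world").
		CPU(0.25).
		RAM(64).
		Disk(128).
		InstanceCount(6).
		BatchSize(2).
		WatchTime(45 * time.Second).
		WaitForBatchCompletion(true).
		MaxPerInstanceFailures(1).
		MaxFailedInstances(0).
		RollbackOnFail(true)
}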


@ -14,65 +14,73 @@
package realis
type Logger interface {
type logger interface {
Println(v ...interface{})
Printf(format string, v ...interface{})
Print(v ...interface{})
}
// NoopLogger is a logger that can be attached to the client which will not print anything.
type NoopLogger struct{}
// Printf is a NOOP function here.
func (NoopLogger) Printf(format string, a ...interface{}) {}
// Print is a NOOP function here.
func (NoopLogger) Print(a ...interface{}) {}
// Println is a NOOP function here.
func (NoopLogger) Println(a ...interface{}) {}
// LevelLogger is a logger that can be configured to output different levels of information: Debug and Trace.
// Trace should only be enabled when very detailed information about the sequence of events a function took is needed.
type LevelLogger struct {
Logger
logger
debug bool
trace bool
}
// EnableDebug enables debug level logging for the LevelLogger
func (l *LevelLogger) EnableDebug(enable bool) {
l.debug = enable
}
// EnableTrace enables trace level logging for the LevelLogger
func (l *LevelLogger) EnableTrace(enable bool) {
l.trace = enable
}
func (l LevelLogger) DebugPrintf(format string, a ...interface{}) {
func (l LevelLogger) debugPrintf(format string, a ...interface{}) {
if l.debug {
l.Printf("[DEBUG] "+format, a...)
}
}
func (l LevelLogger) DebugPrint(a ...interface{}) {
func (l LevelLogger) debugPrint(a ...interface{}) {
if l.debug {
l.Print(append([]interface{}{"[DEBUG] "}, a...)...)
}
}
func (l LevelLogger) DebugPrintln(a ...interface{}) {
func (l LevelLogger) debugPrintln(a ...interface{}) {
if l.debug {
l.Println(append([]interface{}{"[DEBUG] "}, a...)...)
}
}
func (l LevelLogger) TracePrintf(format string, a ...interface{}) {
func (l LevelLogger) tracePrintf(format string, a ...interface{}) {
if l.trace {
l.Printf("[TRACE] "+format, a...)
}
}
func (l LevelLogger) TracePrint(a ...interface{}) {
func (l LevelLogger) tracePrint(a ...interface{}) {
if l.trace {
l.Print(append([]interface{}{"[TRACE] "}, a...)...)
}
}
func (l LevelLogger) TracePrintln(a ...interface{}) {
func (l LevelLogger) tracePrintln(a ...interface{}) {
if l.trace {
l.Println(append([]interface{}{"[TRACE] "}, a...)...)
}
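
For reference, the interface above only asks for the three std-log style methods, so a small adapter such as the following satisfies it; how it is attached to a client differs between the two sides of this compare, so treat the wiring as an assumption.

package examples

import "log"

// stdLogger satisfies the Print/Printf/Println interface shown above by
// delegating to the standard library logger.
type stdLogger struct{}

func (stdLogger) Print(v ...interface{})                 { log.Print(v...) }
func (stdLogger) Printf(format string, v ...interface{}) { log.Printf(format, v...) }
func (stdLogger) Println(v ...interface{})               { log.Println(v...) }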


@ -12,48 +12,46 @@
* limitations under the License.
*/
// Collection of monitors to create synchronicity
package realis
import (
"time"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/pkg/errors"
)
// MonitorJobUpdate polls the scheduler at the given interval to see if the update has succeeded.
// If the update entered a terminal update state but it is not ROLLED_FORWARD, this function will return an error.
func (c *Client) MonitorJobUpdate(updateKey aurora.JobUpdateKey, interval, timeout time.Duration) (bool, error) {
if interval < 1*time.Second {
interval = interval * time.Second
}
// Monitor is a wrapper for the Realis client which allows us to have functions
// with the same name for Monitoring purposes.
// TODO(rdelvalle): Deprecate monitors and instead add prefix Monitor to
// all functions in this file like it is done in V2.
type Monitor struct {
Client Realis
}
if timeout < 1*time.Second {
timeout = timeout * time.Second
// JobUpdate polls the scheduler at the given interval to see if the update has entered a terminal state.
func (m *Monitor) JobUpdate(
updateKey aurora.JobUpdateKey,
interval int,
timeout int) (bool, error) {
updateQ := aurora.JobUpdateQuery{
Key: &updateKey,
Limit: 1,
UpdateStatuses: TerminalUpdateStates(),
}
updateSummaries, err := c.MonitorJobUpdateQuery(
aurora.JobUpdateQuery{
Key: &updateKey,
Limit: 1,
UpdateStatuses: []aurora.JobUpdateStatus{
aurora.JobUpdateStatus_ROLLED_FORWARD,
aurora.JobUpdateStatus_ROLLED_BACK,
aurora.JobUpdateStatus_ABORTED,
aurora.JobUpdateStatus_ERROR,
aurora.JobUpdateStatus_FAILED,
},
},
interval,
timeout)
updateSummaries, err := m.JobUpdateQuery(
updateQ,
time.Duration(interval)*time.Second,
time.Duration(timeout)*time.Second)
status := updateSummaries[0].State.Status
if err != nil {
return false, err
}
status := updateSummaries[0].State.Status
c.RealisConfig().logger.Printf("job update status: %v\n", status)
m.Client.RealisConfig().logger.Printf("job update status: %v\n", status)
// Rolled forward is the only state in which an update has been successfully updated
// if we encounter an inactive state and it is not ROLLED_FORWARD, the update failed
@ -70,41 +68,25 @@ func (c *Client) MonitorJobUpdate(updateKey aurora.JobUpdateKey, interval, timeo
}
}
// MonitorJobUpdateStatus polls the scheduler for information about an update until the update enters one of the
// desired states or until the function times out.
func (c *Client) MonitorJobUpdateStatus(updateKey aurora.JobUpdateKey,
// JobUpdateStatus polls the scheduler at the given interval to see if the update has entered a specified state.
func (m *Monitor) JobUpdateStatus(updateKey aurora.JobUpdateKey,
desiredStatuses []aurora.JobUpdateStatus,
interval, timeout time.Duration) (aurora.JobUpdateStatus, error) {
if len(desiredStatuses) == 0 {
return aurora.JobUpdateStatus(-1), errors.New("no desired statuses provided")
}
// Make deep local copy to avoid side effects from job key being manipulated externally.
updateKeyLocal := &aurora.JobUpdateKey{
Job: &aurora.JobKey{
Role: updateKey.Job.GetRole(),
Environment: updateKey.Job.GetEnvironment(),
Name: updateKey.Job.GetName(),
},
ID: updateKey.GetID(),
}
updateQ := aurora.JobUpdateQuery{
Key: updateKeyLocal,
Key: &updateKey,
Limit: 1,
UpdateStatuses: desiredStatuses,
}
summary, err := c.MonitorJobUpdateQuery(updateQ, interval, timeout)
if len(summary) > 0 {
return summary[0].State.Status, err
summary, err := m.JobUpdateQuery(updateQ, interval, timeout)
if err != nil {
return 0, err
}
return aurora.JobUpdateStatus(-1), err
return summary[0].State.Status, nil
}
func (c *Client) MonitorJobUpdateQuery(
// JobUpdateQuery polls the scheduler at the given interval to see if the query call returns any results.
func (m *Monitor) JobUpdateQuery(
updateQuery aurora.JobUpdateQuery,
interval time.Duration,
timeout time.Duration) ([]*aurora.JobUpdateSummary, error) {
@ -113,16 +95,20 @@ func (c *Client) MonitorJobUpdateQuery(
defer ticker.Stop()
timer := time.NewTimer(timeout)
defer timer.Stop()
var cliErr error
var respDetail *aurora.Response
for {
select {
case <-ticker.C:
updateSummaryResults, cliErr := c.GetJobUpdateSummaries(&updateQuery)
respDetail, cliErr = m.Client.GetJobUpdateSummaries(&updateQuery)
if cliErr != nil {
return nil, cliErr
}
if len(updateSummaryResults.GetUpdateSummaries()) >= 1 {
return updateSummaryResults.GetUpdateSummaries(), nil
updateSummaries := respDetail.Result_.GetJobUpdateSummariesResult_.UpdateSummaries
if len(updateSummaries) >= 1 {
return updateSummaries, nil
}
case <-timer.C:
@ -131,37 +117,105 @@ func (c *Client) MonitorJobUpdateQuery(
}
}
// Monitor an AuroraJob until all instances enter one of the LiveStates
func (c *Client) MonitorInstances(key aurora.JobKey, instances int32, interval, timeout time.Duration) (bool, error) {
return c.MonitorScheduleStatus(key, instances, aurora.LIVE_STATES, interval, timeout)
// AutoPausedUpdateMonitor is a special monitor for auto pause enabled batch updates. This monitor ensures that the update
// being monitored is capable of auto pausing and has auto pausing enabled. After verifying this information,
// the monitor watches for the job to enter the ROLL_FORWARD_PAUSED state and calculates the current batch
// the update is in using information from the update configuration.
func (m *Monitor) AutoPausedUpdateMonitor(key aurora.JobUpdateKey, interval, timeout time.Duration) (int, error) {
key.Job = &aurora.JobKey{
Role: key.Job.Role,
Environment: key.Job.Environment,
Name: key.Job.Name,
}
query := aurora.JobUpdateQuery{
UpdateStatuses: aurora.ACTIVE_JOB_UPDATE_STATES,
Limit: 1,
Key: &key,
}
response, err := m.Client.JobUpdateDetails(query)
if err != nil {
return -1, errors.Wrap(err, "unable to get information about update")
}
// TODO (rdelvalle): check for possible nil values when going down the list of structs
updateDetails := response.Result_.GetJobUpdateDetailsResult_.DetailsList
if len(updateDetails) == 0 {
return -1, errors.Errorf("details for update could not be found")
}
updateStrategy := updateDetails[0].Update.Instructions.Settings.UpdateStrategy
var batchSizes []int32
switch {
case updateStrategy.IsSetVarBatchStrategy():
batchSizes = updateStrategy.VarBatchStrategy.GroupSizes
if !updateStrategy.VarBatchStrategy.AutopauseAfterBatch {
return -1, errors.Errorf("update does not have auto pause enabled")
}
case updateStrategy.IsSetBatchStrategy():
batchSizes = []int32{updateStrategy.BatchStrategy.GroupSize}
if !updateStrategy.BatchStrategy.AutopauseAfterBatch {
return -1, errors.Errorf("update does not have auto pause enabled")
}
default:
return -1, errors.Errorf("update is not using a batch update strategy")
}
query.UpdateStatuses = append(TerminalUpdateStates(), aurora.JobUpdateStatus_ROLL_FORWARD_PAUSED)
summary, err := m.JobUpdateQuery(query, interval, timeout)
if err != nil {
return -1, err
}
if !(summary[0].State.Status == aurora.JobUpdateStatus_ROLL_FORWARD_PAUSED ||
summary[0].State.Status == aurora.JobUpdateStatus_ROLLED_FORWARD) {
return -1, errors.Errorf("update is in a terminal state %v", summary[0].State.Status)
}
updatingInstances := make(map[int32]struct{})
for _, e := range updateDetails[0].InstanceEvents {
// We only care about INSTANCE_UPDATING actions because we only care that they've been attempted
if e != nil && e.GetAction() == aurora.JobUpdateAction_INSTANCE_UPDATING {
updatingInstances[e.GetInstanceId()] = struct{}{}
}
}
return calculateCurrentBatch(int32(len(updatingInstances)), batchSizes), nil
}
// Monitor an AuroraJob until all instances enter a desired status.
// Instances will monitor a Job until all instances enter one of the LIVE_STATES
func (m *Monitor) Instances(key *aurora.JobKey, instances int32, interval, timeout int) (bool, error) {
return m.ScheduleStatus(key, instances, LiveStates, interval, timeout)
}
// ScheduleStatus will monitor a Job until all instances enter a desired status.
// Default sets of desired statuses provided by the thrift API include:
// ActiveStates, SlaveAssignedStates, LiveStates, and TerminalStates
func (c *Client) MonitorScheduleStatus(key aurora.JobKey,
// ACTIVE_STATES, SLAVE_ASSIGNED_STATES, LIVE_STATES, and TERMINAL_STATES
func (m *Monitor) ScheduleStatus(
key *aurora.JobKey,
instanceCount int32,
desiredStatuses []aurora.ScheduleStatus,
interval, timeout time.Duration) (bool, error) {
if interval < 1*time.Second {
interval = interval * time.Second
}
desiredStatuses map[aurora.ScheduleStatus]bool,
interval int,
timeout int) (bool, error) {
if timeout < 1*time.Second {
timeout = timeout * time.Second
}
ticker := time.NewTicker(interval)
ticker := time.NewTicker(time.Second * time.Duration(interval))
defer ticker.Stop()
timer := time.NewTimer(timeout)
timer := time.NewTimer(time.Second * time.Duration(timeout))
defer timer.Stop()
wantedStatuses := make([]aurora.ScheduleStatus, 0)
for status := range desiredStatuses {
wantedStatuses = append(wantedStatuses, status)
}
for {
select {
case <-ticker.C:
// Query Aurora for the state of the job key every interval
instCount, cliErr := c.GetInstanceIds(key, desiredStatuses)
instCount, cliErr := m.Client.GetInstanceIds(key, wantedStatuses)
if cliErr != nil {
return false, errors.Wrap(cliErr, "Unable to communicate with Aurora")
}
@ -171,23 +225,18 @@ func (c *Client) MonitorScheduleStatus(key aurora.JobKey,
case <-timer.C:
// If the timer runs out, return a timeout error to user
return false, newTimedoutError(errors.New("schedule status monitor timedout"))
return false, newTimedoutError(errors.New("schedule status monitor timed out"))
}
}
}
// Monitor host status until all hosts match the status provided. Returns a map where the value is true if the host
// HostMaintenance will monitor host status until all hosts match the status provided.
// Returns a map where the value is true if the host
// is in one of the desired mode(s) or false if it is not as of the time when the monitor exited.
func (c *Client) MonitorHostMaintenance(hosts []string,
func (m *Monitor) HostMaintenance(
hosts []string,
modes []aurora.MaintenanceMode,
interval, timeout time.Duration) (map[string]bool, error) {
if interval < 1*time.Second {
interval = interval * time.Second
}
if timeout < 1*time.Second {
timeout = timeout * time.Second
}
interval, timeout int) (map[string]bool, error) {
// Transform modes to monitor for into a set for easy lookup
desiredMode := make(map[aurora.MaintenanceMode]struct{})
@ -196,7 +245,8 @@ func (c *Client) MonitorHostMaintenance(hosts []string,
}
// Turn slice into a host set to eliminate duplicates.
// We also can't use a simple count because multiple modes means we can have multiple matches for a single host.
// We also can't use a simple count because multiple modes means
// we can have multiple matches for a single host.
// I.e. host A transitions from ACTIVE to DRAINING to DRAINED while monitored
remainingHosts := make(map[string]struct{})
for _, host := range hosts {
@ -205,16 +255,16 @@ func (c *Client) MonitorHostMaintenance(hosts []string,
hostResult := make(map[string]bool)
ticker := time.NewTicker(interval)
ticker := time.NewTicker(time.Second * time.Duration(interval))
defer ticker.Stop()
timer := time.NewTimer(timeout)
timer := time.NewTimer(time.Second * time.Duration(timeout))
defer timer.Stop()
for {
select {
case <-ticker.C:
// Client call has multiple retries internally
result, err := c.MaintenanceStatus(hosts...)
_, result, err := m.Client.MaintenanceStatus(hosts...)
if err != nil {
// Error is either a payload error or a severe connection error
for host := range remainingHosts {
@ -240,7 +290,7 @@ func (c *Client) MonitorHostMaintenance(hosts []string,
hostResult[host] = false
}
return hostResult, newTimedoutError(errors.New("host maintenance monitor timedout"))
return hostResult, newTimedoutError(errors.New("host maintenance monitor timed out"))
}
}
}
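
A hedged sketch of the v1-style monitor usage these changes restore; the client is assumed to come from the package's client constructor and the update key from a prior StartJobUpdate call, and both are illustrative.

package examples

import (
	"fmt"

	realis "github.com/paypal/gorealis"
	"github.com/paypal/gorealis/gen-go/apache/aurora"
)

// waitForUpdate polls every 5 seconds, for up to 10 minutes, until the update
// reaches a terminal state, then reports whether it rolled forward.
func waitForUpdate(client realis.Realis, key aurora.JobUpdateKey) error {
	monitor := &realis.Monitor{Client: client}

	ok, err := monitor.JobUpdate(key, 5, 600)
	if err != nil {
		return err
	}
	if !ok {
		return fmt.Errorf("update %s did not roll forward", key.GetID())
	}
	return nil
}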

realis.go: 1110 changed lines (file diff suppressed because it is too large)


@ -1,267 +1,308 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"context"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/pkg/errors"
)
// TODO(rdelvalle): Consider moving these functions to another interface. It would be a backwards incompatible change,
// but would add safety.
// Set a list of nodes to DRAINING. This means nothing will be able to be scheduled on them and any existing
// tasks will be killed and re-scheduled elsewhere in the cluster. Tasks from DRAINING nodes are not guaranteed
// to return to running unless there is enough capacity in the cluster to run them.
func (c *Client) DrainHosts(hosts ...string) ([]*aurora.HostStatus, error) {
func (r *realisClient) DrainHosts(hosts ...string) (*aurora.Response, *aurora.DrainHostsResult_, error) {
var result *aurora.DrainHostsResult_
if len(hosts) == 0 {
return nil, errors.New("no hosts provided to drain")
return nil, nil, errors.New("no hosts provided to drain")
}
drainList := aurora.NewHosts()
drainList.HostNames = hosts
c.logger.DebugPrintf("DrainHosts Thrift Payload: %v\n", drainList)
r.logger.debugPrintf("DrainHosts Thrift Payload: %v\n", drainList)
resp, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.DrainHosts(context.TODO(), drainList)
})
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.DrainHosts(context.TODO(), drainList)
},
nil,
)
if retryErr != nil {
return nil, errors.Wrap(retryErr, "unable to recover connection")
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil && resp.GetResult_().GetDrainHostsResult_() != nil {
return resp.GetResult_().GetDrainHostsResult_().GetStatuses(), nil
} else {
return nil, errors.New("thrift error: Field in response is nil unexpectedly.")
if resp.GetResult_() != nil {
result = resp.GetResult_().GetDrainHostsResult_()
}
return resp, result, nil
}
// Start SLA Aware Drain.
// defaultSlaPolicy is the fallback SlaPolicy to use if a task does not have an SlaPolicy.
// After timeoutSecs, tasks will be forcefully drained without checking SLA.
func (c *Client) SLADrainHosts(policy *aurora.SlaPolicy, timeout int64, hosts ...string) ([]*aurora.HostStatus, error) {
func (r *realisClient) SLADrainHosts(
policy *aurora.SlaPolicy,
timeout int64,
hosts ...string) (*aurora.DrainHostsResult_, error) {
var result *aurora.DrainHostsResult_
if len(hosts) == 0 {
return nil, errors.New("no hosts provided to drain")
}
if policy == nil || policy.CountSetFieldsSlaPolicy() == 0 {
policy = &defaultSlaPolicy
r.logger.Printf("Warning: start draining with default sla policy %v", policy)
}
if timeout < 0 {
r.logger.Printf("Warning: timeout %d secs is invalid, draining with default timeout %d secs",
timeout,
defaultSlaDrainTimeoutSecs)
timeout = defaultSlaDrainTimeoutSecs
}
drainList := aurora.NewHosts()
drainList.HostNames = hosts
c.logger.DebugPrintf("SLADrainHosts Thrift Payload: %v\n", drainList)
r.logger.debugPrintf("SLADrainHosts Thrift Payload: %v\n", drainList)
resp, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.SlaDrainHosts(context.TODO(), drainList, policy, timeout)
})
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.SlaDrainHosts(context.TODO(), drainList, policy, timeout)
},
nil,
)
if retryErr != nil {
return nil, errors.Wrap(retryErr, "unable to recover connection")
return result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil && resp.GetResult_().GetDrainHostsResult_() != nil {
return resp.GetResult_().GetDrainHostsResult_().GetStatuses(), nil
} else {
return nil, errors.New("thrift error: Field in response is nil unexpectedly.")
if resp.GetResult_() != nil {
result = resp.GetResult_().GetDrainHostsResult_()
}
return result, nil
}
func (c *Client) StartMaintenance(hosts ...string) ([]*aurora.HostStatus, error) {
func (r *realisClient) StartMaintenance(hosts ...string) (*aurora.Response, *aurora.StartMaintenanceResult_, error) {
var result *aurora.StartMaintenanceResult_
if len(hosts) == 0 {
return nil, errors.New("no hosts provided to start maintenance on")
return nil, nil, errors.New("no hosts provided to start maintenance on")
}
hostList := aurora.NewHosts()
hostList.HostNames = hosts
c.logger.DebugPrintf("StartMaintenance Thrift Payload: %v\n", hostList)
r.logger.debugPrintf("StartMaintenance Thrift Payload: %v\n", hostList)
resp, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.StartMaintenance(context.TODO(), hostList)
})
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.StartMaintenance(context.TODO(), hostList)
},
nil,
)
if retryErr != nil {
return nil, errors.Wrap(retryErr, "unable to recover connection")
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil && resp.GetResult_().GetStartMaintenanceResult_() != nil {
return resp.GetResult_().GetStartMaintenanceResult_().GetStatuses(), nil
} else {
return nil, errors.New("thrift error: Field in response is nil unexpectedly.")
if resp.GetResult_() != nil {
result = resp.GetResult_().GetStartMaintenanceResult_()
}
return resp, result, nil
}
func (c *Client) EndMaintenance(hosts ...string) ([]*aurora.HostStatus, error) {
func (r *realisClient) EndMaintenance(hosts ...string) (*aurora.Response, *aurora.EndMaintenanceResult_, error) {
var result *aurora.EndMaintenanceResult_
if len(hosts) == 0 {
return nil, errors.New("no hosts provided to end maintenance on")
return nil, nil, errors.New("no hosts provided to end maintenance on")
}
hostList := aurora.NewHosts()
hostList.HostNames = hosts
c.logger.DebugPrintf("EndMaintenance Thrift Payload: %v\n", hostList)
r.logger.debugPrintf("EndMaintenance Thrift Payload: %v\n", hostList)
resp, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.EndMaintenance(context.TODO(), hostList)
})
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.EndMaintenance(context.TODO(), hostList)
},
nil,
)
if retryErr != nil {
return nil, errors.Wrap(retryErr, "unable to recover connection")
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil && resp.GetResult_().GetEndMaintenanceResult_() != nil {
return resp.GetResult_().GetEndMaintenanceResult_().GetStatuses(), nil
} else {
return nil, errors.New("thrift error: Field in response is nil unexpectedly.")
if resp.GetResult_() != nil {
result = resp.GetResult_().GetEndMaintenanceResult_()
}
return resp, result, nil
}
func (c *Client) MaintenanceStatus(hosts ...string) (*aurora.MaintenanceStatusResult_, error) {
func (r *realisClient) MaintenanceStatus(hosts ...string) (*aurora.Response, *aurora.MaintenanceStatusResult_, error) {
var result *aurora.MaintenanceStatusResult_
if len(hosts) == 0 {
return nil, errors.New("no hosts provided to get maintenance status from")
return nil, nil, errors.New("no hosts provided to get maintenance status from")
}
hostList := aurora.NewHosts()
hostList.HostNames = hosts
c.logger.DebugPrintf("MaintenanceStatus Thrift Payload: %v\n", hostList)
r.logger.debugPrintf("MaintenanceStatus Thrift Payload: %v\n", hostList)
// Make thrift call. If we encounter an error sending the call, attempt to reconnect
// and continue trying to resend command until we run out of retries.
resp, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.MaintenanceStatus(context.TODO(), hostList)
})
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.MaintenanceStatus(context.TODO(), hostList)
},
nil,
)
if retryErr != nil {
return result, errors.Wrap(retryErr, "unable to recover connection")
return resp, result, errors.Wrap(retryErr, "Unable to recover connection")
}
if resp.GetResult_() != nil {
result = resp.GetResult_().GetMaintenanceStatusResult_()
}
return result, nil
return resp, result, nil
}
// SetQuota sets a quota aggregate for the given role
// TODO(zircote) Currently investigating an error that is returned from thrift calls that include resources for `NamedPort` and `NumGpu`
func (c *Client) SetQuota(role string, cpu *float64, ramMb *int64, diskMb *int64) error {
ramResource := aurora.NewResource()
ramResource.RamMb = ramMb
cpuResource := aurora.NewResource()
cpuResource.NumCpus = cpu
diskResource := aurora.NewResource()
diskResource.DiskMb = diskMb
// TODO(zircote) Currently investigating an error that is returned
// from thrift calls that include resources for `NamedPort` and `NumGpu`
func (r *realisClient) SetQuota(role string, cpu *float64, ramMb *int64, diskMb *int64) (*aurora.Response, error) {
quota := &aurora.ResourceAggregate{
Resources: []*aurora.Resource{{NumCpus: cpu}, {RamMb: ramMb}, {DiskMb: diskMb}},
}
quota := aurora.NewResourceAggregate()
quota.Resources = []*aurora.Resource{ramResource, cpuResource, diskResource}
_, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.SetQuota(context.TODO(), role, quota)
})
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.SetQuota(context.TODO(), role, quota)
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "unable to set role quota")
return resp, errors.Wrap(retryErr, "Unable to set role quota")
}
return retryErr
return resp, retryErr
}
// GetQuota returns the resource aggregate for the given role
func (c *Client) GetQuota(role string) (*aurora.GetQuotaResult_, error) {
func (r *realisClient) GetQuota(role string) (*aurora.Response, error) {
resp, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.GetQuota(context.TODO(), role)
})
resp, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.GetQuota(context.TODO(), role)
},
nil,
)
if retryErr != nil {
return nil, errors.Wrap(retryErr, "unable to get role quota")
}
if resp.GetResult_() != nil {
return resp.GetResult_().GetGetQuotaResult_(), nil
} else {
return nil, errors.New("thrift error: Field in response is nil unexpectedly.")
return resp, errors.Wrap(retryErr, "Unable to get role quota")
}
return resp, retryErr
}
// Force Aurora Scheduler to perform a snapshot and write to Mesos log
func (c *Client) Snapshot() error {
func (r *realisClient) Snapshot() error {
_, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.Snapshot(context.TODO())
})
_, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.Snapshot(context.TODO())
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "unable to recover connection")
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
}
// Force Aurora Scheduler to write a backup to a file in the backup directory
func (c *Client) PerformBackup() error {
func (r *realisClient) PerformBackup() error {
_, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.PerformBackup(context.TODO())
})
_, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.PerformBackup(context.TODO())
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "unable to recover connection")
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
}
// Force an Implicit reconciliation between Mesos and Aurora
func (c *Client) ForceImplicitTaskReconciliation() error {
func (r *realisClient) ForceImplicitTaskReconciliation() error {
_, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.TriggerImplicitTaskReconciliation(context.TODO())
})
_, retryErr := r.thriftCallWithRetries(
false,
func() (*aurora.Response, error) {
return r.adminClient.TriggerImplicitTaskReconciliation(context.TODO())
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "unable to recover connection")
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
}
// Force an Explicit reconciliation between Mesos and Aurora
func (c *Client) ForceExplicitTaskReconciliation(batchSize *int32) error {
func (r *realisClient) ForceExplicitTaskReconciliation(batchSize *int32) error {
if batchSize != nil && *batchSize < 1 {
return errors.New("invalid batch size.")
return errors.New("invalid batch size")
}
settings := aurora.NewExplicitReconciliationSettings()
settings.BatchSize = batchSize
_, retryErr := c.thriftCallWithRetries(false, func() (*aurora.Response, error) {
return c.adminClient.TriggerExplicitTaskReconciliation(context.TODO(), settings)
})
_, retryErr := r.thriftCallWithRetries(false,
func() (*aurora.Response, error) {
return r.adminClient.TriggerExplicitTaskReconciliation(context.TODO(), settings)
},
nil,
)
if retryErr != nil {
return errors.Wrap(retryErr, "unable to recover connection")
return errors.Wrap(retryErr, "Unable to recover connection")
}
return nil
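
Putting the admin calls above together with the monitor from monitors.go, here is a minimal hedged sketch of draining hosts and waiting for them to reach a drained mode; it assumes DrainHosts is part of the Realis interface as on the v1 side of this diff, and the intervals are placeholders.

package examples

import (
	realis "github.com/paypal/gorealis"
	"github.com/paypal/gorealis/gen-go/apache/aurora"
)

// drainAndWait drains the given hosts and then waits until each of them
// reports either DRAINING or DRAINED.
func drainAndWait(client realis.Realis, hosts ...string) (map[string]bool, error) {
	// The v1 signature returns the raw thrift response alongside the typed result.
	_, _, err := client.DrainHosts(hosts...)
	if err != nil {
		return nil, err
	}

	monitor := &realis.Monitor{Client: client}
	return monitor.HostMaintenance(
		hosts,
		[]aurora.MaintenanceMode{aurora.MaintenanceMode_DRAINING, aurora.MaintenanceMode_DRAINED},
		5,   // poll every 5 seconds
		120, // give up after 2 minutes
	)
}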


@ -1,171 +0,0 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"strings"
"time"
"github.com/apache/thrift/lib/go/thrift"
)
type clientConfig struct {
username, password string
url string
timeout time.Duration
transportProtocol TransportProtocol
cluster *Cluster
backoff Backoff
transport thrift.TTransport
protoFactory thrift.TProtocolFactory
logger *LevelLogger
insecureSkipVerify bool
certsPath string
clientKey, clientCert string
options []ClientOption
debug bool
trace bool
zkOptions []ZKOpt
failOnPermanentErrors bool
}
var defaultBackoff = Backoff{
Steps: 3,
Duration: 10 * time.Second,
Factor: 5.0,
Jitter: 0.1,
}
type TransportProtocol int
const (
unsetProtocol TransportProtocol = iota
jsonProtocol
binaryProtocol
)
type ClientOption func(*clientConfig)
// A ClientOption is a function that sets an option on a clientConfig.
func BasicAuth(username, password string) ClientOption {
return func(config *clientConfig) {
config.username = username
config.password = password
}
}
func SchedulerUrl(url string) ClientOption {
return func(config *clientConfig) {
config.url = url
}
}
func Timeout(timeout time.Duration) ClientOption {
return func(config *clientConfig) {
config.timeout = timeout
}
}
func ZKCluster(cluster *Cluster) ClientOption {
return func(config *clientConfig) {
config.cluster = cluster
}
}
func ZKUrl(url string) ClientOption {
opts := []ZKOpt{ZKEndpoints(strings.Split(url, ",")...), ZKPath("/aurora/scheduler")}
return func(config *clientConfig) {
if config.zkOptions == nil {
config.zkOptions = opts
} else {
config.zkOptions = append(config.zkOptions, opts...)
}
}
}
func ThriftJSON() ClientOption {
return func(config *clientConfig) {
config.transportProtocol = jsonProtocol
}
}
func ThriftBinary() ClientOption {
return func(config *clientConfig) {
config.transportProtocol = binaryProtocol
}
}
func BackOff(b Backoff) ClientOption {
return func(config *clientConfig) {
config.backoff = b
}
}
func InsecureSkipVerify(InsecureSkipVerify bool) ClientOption {
return func(config *clientConfig) {
config.insecureSkipVerify = InsecureSkipVerify
}
}
func CertsPath(certspath string) ClientOption {
return func(config *clientConfig) {
config.certsPath = certspath
}
}
func ClientCerts(clientKey, clientCert string) ClientOption {
return func(config *clientConfig) {
config.clientKey, config.clientCert = clientKey, clientCert
}
}
// Use this option if you'd like to override default settings for connecting to Zookeeper.
// See zk.go for what is possible to set as an option.
func ZookeeperOptions(opts ...ZKOpt) ClientOption {
return func(config *clientConfig) {
config.zkOptions = opts
}
}
// The Set prefix is used to avoid a name collision with the Logger interface.
func SetLogger(l Logger) ClientOption {
return func(config *clientConfig) {
config.logger = &LevelLogger{Logger: l}
}
}
// Enable debug statements.
func Debug() ClientOption {
return func(config *clientConfig) {
config.debug = true
}
}
// Enable trace statements.
func Trace() ClientOption {
return func(config *clientConfig) {
config.trace = true
}
}
// FailOnPermanentErrors - If the client encounters a connection error the standard library
// considers permanent, stop retrying and return an error to the user.
func FailOnPermanentErrors() ClientOption {
return func(config *clientConfig) {
config.failOnPermanentErrors = true
}
}
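
For orientation, a hedged sketch of how these options were combined in the v2 API; the NewClient constructor lives in client.go, which is not shown in this compare, so its exact signature is an assumption.

package examples

import (
	"time"

	realis "github.com/paypal/gorealis/v2"
)

// newClient combines a few of the options defined above; endpoint and
// credentials are placeholders, and NewClient is assumed from client.go.
func newClient() (*realis.Client, error) {
	return realis.NewClient(
		realis.SchedulerUrl("http://localhost:8081"),
		realis.BasicAuth("aurora", "secret"),
		realis.Timeout(20*time.Second),
		realis.BackOff(realis.Backoff{
			Steps:    3,
			Duration: 10 * time.Second,
			Factor:   2.0,
			Jitter:   0.1,
		}),
		realis.Debug(),
	)
}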

File diff suppressed because it is too large


@ -1,72 +0,0 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestGetCACerts(t *testing.T) {
certs, err := GetCerts("./examples/certs")
require.NoError(t, err)
assert.Equal(t, len(certs.Subjects()), 2)
}
func TestAuroraURLValidator(t *testing.T) {
t.Run("badURL", func(t *testing.T) {
url, err := validateAuroraAddress("http://badurl.com/badpath")
assert.Empty(t, url)
assert.Error(t, err)
})
t.Run("URLHttp", func(t *testing.T) {
url, err := validateAuroraAddress("http://goodurl.com:8081/api")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLHttps", func(t *testing.T) {
url, err := validateAuroraAddress("https://goodurl.com:8081/api")
assert.Equal(t, "https://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoPath", func(t *testing.T) {
url, err := validateAuroraAddress("http://goodurl.com:8081")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("ipAddrNoPath", func(t *testing.T) {
url, err := validateAuroraAddress("http://192.168.1.33:8081")
assert.Equal(t, "http://192.168.1.33:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoProtocol", func(t *testing.T) {
url, err := validateAuroraAddress("goodurl.com:8081/api")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoProtocolNoPathNoPort", func(t *testing.T) {
url, err := validateAuroraAddress("goodurl.com")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
}


@ -17,8 +17,9 @@ package response
import (
"bytes"
"errors"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
// Get key from a response created by a StartJobUpdate call
@ -35,9 +36,25 @@ func ScheduleStatusResult(resp *aurora.Response) *aurora.ScheduleStatusResult_ {
}
func JobUpdateSummaries(resp *aurora.Response) []*aurora.JobUpdateSummary {
if resp.GetResult_() == nil || resp.GetResult_().GetGetJobUpdateSummariesResult_() == nil {
return nil
}
return resp.GetResult_().GetGetJobUpdateSummariesResult_().GetUpdateSummaries()
}
// Deprecated: Replaced by checks done inside of thriftCallHelper
func ResponseCodeCheck(resp *aurora.Response) (*aurora.Response, error) {
if resp == nil {
return resp, errors.New("Response is nil")
}
if resp.GetResponseCode() != aurora.ResponseCode_OK {
return resp, errors.New(CombineMessage(resp))
}
return resp, nil
}
// Based on aurora client: src/main/python/apache/aurora/client/base.py
func CombineMessage(resp *aurora.Response) string {
var buffer bytes.Buffer

retry.go: 239 changed lines

@ -17,21 +17,20 @@ package realis
import (
"io"
"math/rand"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/apache/thrift/lib/go/thrift"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"github.com/paypal/gorealis/v2/response"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/paypal/gorealis/response"
"github.com/pkg/errors"
)
// Backoff determines how the retry mechanism should react after each failure and how many failures it should
// tolerate.
type Backoff struct {
Duration time.Duration // the base duration
Factor float64 // Duration is multipled by factor each iteration
Factor float64 // Duration is multiplied by a factor each iteration
Jitter float64 // The amount of jitter applied each iteration
Steps int // Exit with error after this many steps
}
@ -53,23 +52,19 @@ func Jitter(duration time.Duration, maxFactor float64) time.Duration {
// if the loop should be aborted.
type ConditionFunc func() (done bool, err error)
// Modified version of the Kubernetes exponential-backoff code.
// ExponentialBackoff repeats a condition check with exponential backoff.
//
// It checks the condition up to Steps times, increasing the wait by multiplying
// the previous duration by Factor.
// ExponentialBackoff is a modified version of the Kubernetes exponential-backoff code.
// It repeats a condition check with exponential backoff and checks the condition up to
// Steps times, increasing the wait by multiplying the previous duration by Factor.
//
// If Jitter is greater than zero, a random amount of each duration is added
// (between duration and duration*(1+jitter)).
//
// If the condition never returns true, ErrWaitTimeout is returned. Errors
// do not cause the function to return.
func ExponentialBackoff(backoff Backoff, logger Logger, condition ConditionFunc) error {
func ExponentialBackoff(backoff Backoff, logger logger, condition ConditionFunc) error {
var err error
var ok bool
var curStep int
duration := backoff.Duration
for curStep = 0; curStep < backoff.Steps; curStep++ {
@ -81,7 +76,8 @@ func ExponentialBackoff(backoff Backoff, logger Logger, condition ConditionFunc)
adjusted = Jitter(duration, backoff.Jitter)
}
logger.Printf("A retryable error occurred during function call, backing off for %v before retrying\n", adjusted)
logger.Printf(
"A retryable error occurred during function call, backing off for %v before retrying\n", adjusted)
time.Sleep(adjusted)
duration = time.Duration(float64(duration) * backoff.Factor)
}
@ -98,10 +94,9 @@ func ExponentialBackoff(backoff Backoff, logger Logger, condition ConditionFunc)
// If the error is temporary, continue retrying.
if !IsTemporary(err) {
return err
} else {
// Print out the temporary error we experienced.
logger.Println(err)
}
// Print out the temporary error we experienced.
logger.Println(err)
}
}
@ -112,21 +107,33 @@ func ExponentialBackoff(backoff Backoff, logger Logger, condition ConditionFunc)
// Provide more information to the user wherever possible
if err != nil {
return newRetryError(errors.Wrap(err, "ran out of retries"), curStep)
} else {
return newRetryError(errors.New("ran out of retries"), curStep)
}
return newRetryError(errors.New("ran out of retries"), curStep)
}
type auroraThriftCall func() (resp *aurora.Response, err error)
// verifyOnTimeout defines the type of function that will be used to verify whether a Thrift call to the Scheduler
// made it to the scheduler or not. In general, these types of functions will have to interact with the scheduler
// through the very same Thrift API which previously encountered a time out from the client.
// This means that the functions themselves should be kept to a minimum number of Thrift calls.
// It should also be noted that this is a best effort mechanism and
// is likely to fail for the same reasons that the original call failed.
type verifyOnTimeout func() (*aurora.Response, bool)
// Duplicates the functionality of ExponentialBackoff but is specifically targeted towards Thrift calls.
func (c *Client) thriftCallWithRetries(returnOnTimeout bool, thriftCall auroraThriftCall) (*aurora.Response, error) {
func (r *realisClient) thriftCallWithRetries(
returnOnTimeout bool,
thriftCall auroraThriftCall,
verifyOnTimeout verifyOnTimeout) (*aurora.Response, error) {
var resp *aurora.Response
var clientErr error
var curStep int
var timeouts int
timeouts := 0
backoff := c.config.backoff
backoff := r.config.backoff
duration := backoff.Duration
for curStep = 0; curStep < backoff.Steps; curStep++ {
@ -138,7 +145,10 @@ func (c *Client) thriftCallWithRetries(returnOnTimeout bool, thriftCall auroraTh
adjusted = Jitter(duration, backoff.Jitter)
}
c.logger.Printf("A retryable error occurred during thrift call, backing off for %v before retry %v\n", adjusted, curStep)
r.logger.Printf(
"A retryable error occurred during thrift call, backing off for %v before retry %v",
adjusted,
curStep)
time.Sleep(adjusted)
duration = time.Duration(float64(duration) * backoff.Factor)
@ -148,110 +158,137 @@ func (c *Client) thriftCallWithRetries(returnOnTimeout bool, thriftCall auroraTh
// Placing this in an anonymous function in order to create a new, short-lived stack allowing unlock
// to be run in case of a panic inside of thriftCall.
func() {
c.lock.Lock()
defer c.lock.Unlock()
r.lock.Lock()
defer r.lock.Unlock()
resp, clientErr = thriftCall()
c.logger.TracePrintf("Aurora Thrift Call ended resp: %v clientErr: %v\n", resp, clientErr)
r.logger.tracePrintf("Aurora Thrift Call ended resp: %v clientErr: %v", resp, clientErr)
}()
// Check if our thrift call is returning an error. This is a retryable event as we don't know
// if it was caused by network issues.
// Check if our thrift call is returning an error.
if clientErr != nil {
// Print out the error to the user
c.logger.Printf("Client Error: %v\n", clientErr)
r.logger.Printf("Client Error: %v", clientErr)
// Determine if error is a temporary URL error by going up the stack
e, ok := clientErr.(thrift.TTransportException)
if ok {
c.logger.DebugPrint("Encountered a transport exception")
temporary, timedout := isConnectionError(clientErr)
if !temporary && r.RealisConfig().failOnPermanentErrors {
return nil, errors.Wrap(clientErr, "permanent connection error")
}
// TODO(rdelvalle): Figure out a better way to obtain the error code as this is a very brittle solution
// 401 Unauthorized means the wrong username and password were provided
if strings.Contains(e.Error(), strconv.Itoa(http.StatusUnauthorized)) {
return nil, errors.Wrap(clientErr, "wrong username or password provided")
}
e, ok := e.Err().(*url.Error)
if ok {
// EOF error occurs when the server closes the read buffer of the client. This is common
// when the server is overloaded and should be retried. All other errors that are permanent
// will not be retried.
if e.Err != io.EOF && !e.Temporary() && c.RealisConfig().failOnPermanentErrors {
return nil, errors.Wrap(clientErr, "permanent connection error")
}
// Corner case where the thrift payload was received by Aurora but the connection timed out before Aurora
// was able to reply. In this case we will return whatever response was received along with an error that
// behaves like a timeout. Users can take special action on a timeout by using IsTimedout and reacting accordingly.
if e.Timeout() {
timeouts++
c.logger.DebugPrintf(
"Client closed connection (timedout) %d times before server responded,"+
" consider increasing connection timeout",
timeouts)
if returnOnTimeout {
return resp,
newTimedoutError(errors.New("client connection closed before server answer"))
}
}
}
// There exists a corner case where thrift payload was received by Aurora but
// connection timed out before Aurora was able to reply.
// Users can take special action on a timeout by using IsTimedout and reacting accordingly
// if they have configured the client to return on a timeout.
if timedout && returnOnTimeout {
return resp, newTimedoutError(errors.New("client connection closed before server answer"))
}
// In the future, reestablish connection should be able to check if it is actually possible
// to make a thrift call to Aurora. For now, a reconnect should always lead to a retry.
// Ignoring error due to the fact that an error should be retried regardless
_ = c.ReestablishConn()
} else {
// If there was no client error, but the response is nil, something went wrong.
// Ideally, we'll never encounter this but we're placing a safeguard here.
if resp == nil {
return nil, errors.New("response from aurora is nil")
reestablishErr := r.ReestablishConn()
if reestablishErr != nil {
r.logger.debugPrintf("error re-establishing connection ", reestablishErr)
}
// Check Response Code from thrift and make a decision to continue retrying or not.
switch responseCode := resp.GetResponseCode(); responseCode {
// If users did not opt for a return on timeout in order to react to a timedout error,
// attempt to verify that the call made it to the scheduler after the connection was re-established.
if timedout {
timeouts++
r.logger.debugPrintf(
"Client closed connection %d times before server responded, "+
"consider increasing connection timeout",
timeouts)
// If the thrift call succeeded, stop retrying
case aurora.ResponseCode_OK:
return resp, nil
// If the response code is transient, continue retrying
case aurora.ResponseCode_ERROR_TRANSIENT:
c.logger.Println("Aurora replied with Transient error code, retrying")
continue
// Failure scenarios, these indicate a bad payload or a bad clientConfig. Stop retrying.
case aurora.ResponseCode_INVALID_REQUEST,
aurora.ResponseCode_ERROR,
aurora.ResponseCode_AUTH_FAILED,
aurora.ResponseCode_JOB_UPDATING_ERROR:
c.logger.Printf("Terminal Response Code %v from Aurora, won't retry\n", resp.GetResponseCode().String())
return resp, errors.New(response.CombineMessage(resp))
// The only case that should fall down to here is a WARNING response code.
// It is currently not used as a response in the scheduler so it is unknown how to handle it.
default:
c.logger.DebugPrintf("unhandled response code %v received from Aurora\n", responseCode)
return nil, errors.Errorf("unhandled response code from Aurora %v", responseCode.String())
// Allow caller to provide a function which checks if the original call was successful before
// it timed out.
if verifyOnTimeout != nil {
if verifyResp, ok := verifyOnTimeout(); ok {
r.logger.Print("verified that the call went through successfully after a client timeout")
// Response here might be different than the original as it is no longer constructed
// by the scheduler but mimicked.
// This is OK since the scheduler is very unlikely to change responses at this point in its
// development cycle but we must be careful to not return an incorrectly constructed response.
return verifyResp, nil
}
}
}
// Retry the thrift payload
continue
}
// If there was no client error, but the response is nil, something went wrong.
// Ideally, we'll never encounter this but we're placing a safeguard here.
if resp == nil {
return nil, errors.New("response from aurora is nil")
}
// Check Response Code from thrift and make a decision to continue retrying or not.
switch responseCode := resp.GetResponseCode(); responseCode {
// If the thrift call succeeded, stop retrying
case aurora.ResponseCode_OK:
return resp, nil
// If the response code is transient, continue retrying
case aurora.ResponseCode_ERROR_TRANSIENT:
r.logger.Println("Aurora replied with Transient error code, retrying")
continue
// Failure scenarios, these indicate a bad payload or a bad config. Stop retrying.
case aurora.ResponseCode_INVALID_REQUEST,
aurora.ResponseCode_ERROR,
aurora.ResponseCode_AUTH_FAILED,
aurora.ResponseCode_JOB_UPDATING_ERROR:
r.logger.Printf("Terminal Response Code %v from Aurora, won't retry\n", resp.GetResponseCode().String())
return resp, errors.New(response.CombineMessage(resp))
// The only case that should fall down to here is a WARNING response code.
// It is currently not used as a response in the scheduler so it is unknown how to handle it.
default:
r.logger.debugPrintf("unhandled response code %v received from Aurora\n", responseCode)
return nil, errors.Errorf("unhandled response code from Aurora %v", responseCode.String())
}
}
c.logger.DebugPrintf("it took %v retries to complete this operation\n", curStep)
if curStep > 1 {
c.config.logger.Printf("retried this thrift call %d time(s)", curStep)
r.config.logger.Printf("this thrift call was retried %d time(s)", curStep)
}
// Provide more information to the user wherever possible.
if clientErr != nil {
return nil, newRetryError(errors.Wrap(clientErr, "ran out of retries, including latest error"), curStep)
} else {
return nil, newRetryError(errors.New("ran out of retries"), curStep)
}
return nil, newRetryError(errors.New("ran out of retries"), curStep)
}
// isConnectionError processes the error received by the client.
// The return values indicate whether this was determined to be a temporary error
// and whether it was determined to be a timeout error
func isConnectionError(err error) (bool, bool) {
// Determine if error is a temporary URL error by going up the stack
transportException, ok := err.(thrift.TTransportException)
if !ok {
return false, false
}
urlError, ok := transportException.Err().(*url.Error)
if !ok {
return false, false
}
// EOF error occurs when the server closes the read buffer of the client. This is common
// when the server is overloaded and we consider it temporary.
// All other errors that are not temporary, as reported by the error's Temporary() method,
// are considered permanent.
if urlError.Err != io.EOF && !urlError.Temporary() {
return false, false
}
return true, urlError.Timeout()
}
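
A small hedged sketch of the exported backoff helper in this file; checkOnce stands in for any operation that reports whether it is done and whether its failure was permanent, and the backoff values are placeholders.

package examples

import (
	"time"

	realis "github.com/paypal/gorealis"
)

// retryPing wraps an arbitrary condition in ExponentialBackoff, using the
// package's NoopLogger to stay silent between attempts.
func retryPing(checkOnce func() (bool, error)) error {
	backoff := realis.Backoff{
		Steps:    4,
		Duration: 2 * time.Second,
		Factor:   2.0,
		Jitter:   0.1,
	}
	return realis.ExponentialBackoff(backoff, realis.NoopLogger{}, realis.ConditionFunc(checkOnce))
}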

runTests.sh: new executable file, 13 lines

@ -0,0 +1,13 @@
#!/bin/bash
docker-compose up -d
# If running docker-compose up gives any error, don't do anything.
if [ $? -ne 0 ]; then
exit
fi
# Since we run our docker compose setup in bridge mode to be able to run on MacOS, we have to launch a Docker container within the bridge network in order to avoid any routing issues.
docker run --rm -t -v $(pwd):/go/src/github.com/paypal/gorealis --network gorealis_aurora_cluster golang:1.10-stretch go test -v github.com/paypal/gorealis $@
docker-compose down


@ -1,4 +1,4 @@
#!/bin/bash
# Since we run our docker compose setup in bridge mode to be able to run on MacOS, we have to launch a Docker container within the bridge network in order to avoid any routing issues.
docker run --rm -t -v $(pwd):/go/src/github.com/paypal/gorealis --network gorealis_aurora_cluster golang:1.10-stretch go test -v github.com/paypal/gorealis $@
docker run --rm -t -w /gorealis -v $GOPATH/pkg:/go/pkg -v $(pwd):/gorealis --network gorealis_aurora_cluster golang:1.16-buster go test -v github.com/paypal/gorealis $@

task.go: 449 changed lines

@ -1,449 +0,0 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"encoding/json"
"strconv"
"github.com/apache/thrift/lib/go/thrift"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
)
type ResourceType int
const (
CPU ResourceType = iota
RAM
DISK
GPU
)
const (
dedicated = "dedicated"
portPrefix = "org.apache.aurora.port."
)
type AuroraTask struct {
task *aurora.TaskConfig
resources map[ResourceType]*aurora.Resource
portCount int
thermos *ThermosExecutor
}
func NewTask() *AuroraTask {
numCpus := &aurora.Resource{}
ramMb := &aurora.Resource{}
diskMb := &aurora.Resource{}
numCpus.NumCpus = new(float64)
ramMb.RamMb = new(int64)
diskMb.DiskMb = new(int64)
resources := map[ResourceType]*aurora.Resource{CPU: numCpus, RAM: ramMb, DISK: diskMb}
return &AuroraTask{task: &aurora.TaskConfig{
Job: &aurora.JobKey{},
MesosFetcherUris: make([]*aurora.MesosFetcherURI, 0),
Metadata: make([]*aurora.Metadata, 0),
Constraints: make([]*aurora.Constraint, 0),
// Container is a Union so one container field must be set. Set Mesos by default.
Container: NewMesosContainer().Build(),
Resources: []*aurora.Resource{numCpus, ramMb, diskMb},
},
resources: resources,
portCount: 0}
}
// Helper method to convert aurora.TaskConfig to gorealis AuroraTask type
func TaskFromThrift(config *aurora.TaskConfig) *AuroraTask {
newTask := NewTask()
// Pass values using receivers as much as possible
newTask.
Environment(config.Job.Environment).
Role(config.Job.Role).
Name(config.Job.Name).
MaxFailure(config.MaxTaskFailures).
IsService(config.IsService)
if config.Tier != nil {
newTask.Tier(*config.Tier)
}
if config.ExecutorConfig != nil {
newTask.
ExecutorName(config.ExecutorConfig.Name).
ExecutorData(config.ExecutorConfig.Data)
}
if config.PartitionPolicy != nil {
newTask.PartitionPolicy(
aurora.PartitionPolicy{
Reschedule: config.PartitionPolicy.Reschedule,
DelaySecs: thrift.Int64Ptr(*config.PartitionPolicy.DelaySecs),
})
}
// Make a deep copy of the task's container
if config.Container != nil {
if config.Container.Mesos != nil {
mesosContainer := NewMesosContainer()
if config.Container.Mesos.Image != nil {
if config.Container.Mesos.Image.Appc != nil {
mesosContainer.AppcImage(config.Container.Mesos.Image.Appc.Name, config.Container.Mesos.Image.Appc.ImageId)
} else if config.Container.Mesos.Image.Docker != nil {
mesosContainer.DockerImage(config.Container.Mesos.Image.Docker.Name, config.Container.Mesos.Image.Docker.Tag)
}
}
for _, vol := range config.Container.Mesos.Volumes {
mesosContainer.AddVolume(vol.ContainerPath, vol.HostPath, vol.Mode)
}
newTask.Container(mesosContainer)
} else if config.Container.Docker != nil {
dockerContainer := NewDockerContainer()
dockerContainer.Image(config.Container.Docker.Image)
for _, param := range config.Container.Docker.Parameters {
dockerContainer.AddParameter(param.Name, param.Value)
}
newTask.Container(dockerContainer)
}
}
// Copy all resources
for _, resource := range config.Resources {
// Named ports are copied directly; CPU, RAM, DISK, and GPU are set through the builder
if resource != nil {
if resource.NamedPort != nil {
newTask.task.Resources = append(
newTask.task.Resources,
&aurora.Resource{NamedPort: thrift.StringPtr(*resource.NamedPort)},
)
newTask.portCount++
}
if resource.RamMb != nil {
newTask.RAM(*resource.RamMb)
}
if resource.NumCpus != nil {
newTask.CPU(*resource.NumCpus)
}
if resource.DiskMb != nil {
newTask.Disk(*resource.DiskMb)
}
if resource.NumGpus != nil {
newTask.GPU(*resource.NumGpus)
}
}
}
// Copy constraints
for _, constraint := range config.Constraints {
if constraint != nil && constraint.Constraint != nil {
newConstraint := aurora.Constraint{Name: constraint.Name}
taskConstraint := constraint.Constraint
if taskConstraint.Limit != nil {
newConstraint.Constraint = &aurora.TaskConstraint{
Limit: &aurora.LimitConstraint{Limit: taskConstraint.Limit.Limit},
}
newTask.task.Constraints = append(newTask.task.Constraints, &newConstraint)
} else if taskConstraint.Value != nil {
values := make([]string, 0)
for _, val := range taskConstraint.Value.Values {
values = append(values, val)
}
newConstraint.Constraint = &aurora.TaskConstraint{
Value: &aurora.ValueConstraint{Negated: taskConstraint.Value.Negated, Values: values}}
newTask.task.Constraints = append(newTask.task.Constraints, &newConstraint)
}
}
}
// Copy labels
for _, label := range config.Metadata {
newTask.task.Metadata = append(newTask.task.Metadata, &aurora.Metadata{Key: label.Key, Value: label.Value})
}
// Copy Mesos fetcher URIs
for _, uri := range config.MesosFetcherUris {
newTask.task.MesosFetcherUris = append(
newTask.task.MesosFetcherUris,
&aurora.MesosFetcherURI{
Value: uri.Value,
Extract: thrift.BoolPtr(*uri.Extract),
Cache: thrift.BoolPtr(*uri.Cache),
})
}
return newTask
}
// Set AuroraTask Key environment.
func (t *AuroraTask) Environment(env string) *AuroraTask {
t.task.Job.Environment = env
return t
}
// Set AuroraTask Key Role.
func (t *AuroraTask) Role(role string) *AuroraTask {
t.task.Job.Role = role
return t
}
// Set AuroraTask Key Name.
func (t *AuroraTask) Name(name string) *AuroraTask {
t.task.Job.Name = name
return t
}
// ExecutorName sets the name of the executor the task will be configured to run with.
func (t *AuroraTask) ExecutorName(name string) *AuroraTask {
if t.task.ExecutorConfig == nil {
t.task.ExecutorConfig = aurora.NewExecutorConfig()
}
t.task.ExecutorConfig.Name = name
return t
}
// ExecutorData sets the data payload that is serialized and sent to the scheduler as part of the task.
func (t *AuroraTask) ExecutorData(data string) *AuroraTask {
if t.task.ExecutorConfig == nil {
t.task.ExecutorConfig = aurora.NewExecutorConfig()
}
t.task.ExecutorConfig.Data = data
return t
}
func (t *AuroraTask) CPU(cpus float64) *AuroraTask {
*t.resources[CPU].NumCpus = cpus
return t
}
func (t *AuroraTask) RAM(ram int64) *AuroraTask {
*t.resources[RAM].RamMb = ram
return t
}
func (t *AuroraTask) Disk(disk int64) *AuroraTask {
*t.resources[DISK].DiskMb = disk
return t
}
func (t *AuroraTask) GPU(gpu int64) *AuroraTask {
// GPU resource must be set explicitly since the scheduler by default
// rejects jobs with GPU resources attached to it.
if _, ok := t.resources[GPU]; !ok {
t.resources[GPU] = &aurora.Resource{}
t.task.Resources = append(t.task.Resources, t.resources[GPU])
}
t.resources[GPU].NumGpus = &gpu
return t
}
func (t *AuroraTask) Tier(tier string) *AuroraTask {
t.task.Tier = &tier
return t
}
// How many failures to tolerate before giving up.
func (t *AuroraTask) MaxFailure(maxFail int32) *AuroraTask {
t.task.MaxTaskFailures = maxFail
return t
}
// Restart the job's tasks if they fail
func (t *AuroraTask) IsService(isService bool) *AuroraTask {
t.task.IsService = isService
return t
}
// Add a list of URIs with the same extract and cache configuration. Scheduler must have
// --enable_mesos_fetcher flag enabled. Currently there is no duplicate detection.
func (t *AuroraTask) AddURIs(extract bool, cache bool, values ...string) *AuroraTask {
for _, value := range values {
t.task.MesosFetcherUris = append(
t.task.MesosFetcherUris,
&aurora.MesosFetcherURI{Value: value, Extract: &extract, Cache: &cache})
}
return t
}
// Adds a Mesos label to the job. Note that Aurora will add the
// prefix "org.apache.aurora.metadata." to the beginning of each key.
func (t *AuroraTask) AddLabel(key string, value string) *AuroraTask {
t.task.Metadata = append(t.task.Metadata, &aurora.Metadata{Key: key, Value: value})
return t
}
// AddNamedPorts adds one or more named ports to the job configuration. The underlying ports are
// assigned randomly, as it is not currently possible to request specific port numbers from Aurora.
func (t *AuroraTask) AddNamedPorts(names ...string) *AuroraTask {
t.portCount += len(names)
for _, name := range names {
t.task.Resources = append(t.task.Resources, &aurora.Resource{NamedPort: &name})
}
return t
}
// Adds a request for a number of ports to the job configuration. The names chosen for these ports
// will be org.apache.aurora.port.X, where X is the current port count for the job configuration
// starting at 0. These are random ports as it's not currently possible to request
// specific ports using Aurora.
func (t *AuroraTask) AddPorts(num int) *AuroraTask {
start := t.portCount
t.portCount += num
for i := start; i < t.portCount; i++ {
portName := portPrefix + strconv.Itoa(i)
t.task.Resources = append(t.task.Resources, &aurora.Resource{NamedPort: &portName})
}
return t
}
// AddValueConstraint adds a Value constraint. From the Aurora docs:
// name - Mesos slave attribute that the constraint is matched against.
// If negated = true, treat this as a 'not' - avoid the specified values.
// values - list of values we look for in the attribute name.
func (t *AuroraTask) AddValueConstraint(name string, negated bool, values ...string) *AuroraTask {
t.task.Constraints = append(t.task.Constraints,
&aurora.Constraint{
Name: name,
Constraint: &aurora.TaskConstraint{
Value: &aurora.ValueConstraint{
Negated: negated,
Values: values,
},
Limit: nil,
},
})
return t
}
// From Aurora Docs:
// A constraint that specifies the maximum number of active tasks on a host with
// a matching attribute that may be scheduled simultaneously.
func (t *AuroraTask) AddLimitConstraint(name string, limit int32) *AuroraTask {
t.task.Constraints = append(t.task.Constraints,
&aurora.Constraint{
Name: name,
Constraint: &aurora.TaskConstraint{
Value: nil,
Limit: &aurora.LimitConstraint{Limit: limit},
},
})
return t
}
// AddDedicatedConstraint adds a constraint on the dedicated attribute. From the Aurora docs:
// Aurora treats this attribute specially: only jobs with a matching dedicated constraint may be
// scheduled on these machines. When a job is created, the scheduler requires that the $role
// component matches the role field in the job configuration, and rejects the job creation otherwise.
// A wildcard (*) may be used for the role portion of the dedicated attribute, which allows any
// owner to elect for a job to run on the host(s)
func (t *AuroraTask) AddDedicatedConstraint(role, name string) *AuroraTask {
t.AddValueConstraint(dedicated, false, role+"/"+name)
return t
}
// Container sets the container the job configuration will run in.
func (t *AuroraTask) Container(container Container) *AuroraTask {
t.task.Container = container.Build()
return t
}
func (t *AuroraTask) TaskConfig() *aurora.TaskConfig {
return t.task
}
func (t *AuroraTask) JobKey() aurora.JobKey {
return *t.task.Job
}
func (t *AuroraTask) Clone() *AuroraTask {
newTask := TaskFromThrift(t.task)
if t.thermos != nil {
newTask.ThermosExecutor(*t.thermos.Clone())
}
return newTask
}
func (t *AuroraTask) ThermosExecutor(thermos ThermosExecutor) *AuroraTask {
t.thermos = &thermos
return t
}
func (t *AuroraTask) BuildThermosPayload() error {
if t.thermos != nil {
// Set the correct resources
if t.resources[CPU].NumCpus != nil {
t.thermos.cpu(*t.resources[CPU].NumCpus)
}
if t.resources[RAM].RamMb != nil {
t.thermos.ram(*t.resources[RAM].RamMb)
}
if t.resources[DISK].DiskMb != nil {
t.thermos.disk(*t.resources[DISK].DiskMb)
}
if t.resources[GPU] != nil && t.resources[GPU].NumGpus != nil {
t.thermos.gpu(*t.resources[GPU].NumGpus)
}
payload, err := json.Marshal(t.thermos)
if err != nil {
return err
}
t.ExecutorName(aurora.AURORA_EXECUTOR_NAME)
t.ExecutorData(string(payload))
}
return nil
}
// Set a partition policy for the job configuration to implement.
func (t *AuroraTask) PartitionPolicy(policy aurora.PartitionPolicy) *AuroraTask {
t.task.PartitionPolicy = &policy
return t
}
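
As a usage sketch of the builder above (the environment, role, and name values are placeholders, and the gorealis v2 import path used by this file is assumed), a task can be composed by chaining the receivers and then extracting the Thrift TaskConfig:

package main

import (
	"fmt"

	realis "github.com/paypal/gorealis/v2"
)

func main() {
	// Each receiver returns *AuroraTask, so calls compose into a single chain.
	task := realis.NewTask().
		Environment("prod"). // JobKey environment (placeholder)
		Role("vagrant").     // JobKey role (placeholder)
		Name("hello_world"). // JobKey name (placeholder)
		CPU(1).              // cores
		RAM(64).             // MiB
		Disk(100).           // MiB
		IsService(true).     // reschedule the task if it exits
		AddPorts(1)          // one auto-named port: org.apache.aurora.port.0

	fmt.Printf("resources requested: %d\n", len(task.TaskConfig().Resources))
}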


@ -1,57 +0,0 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis_test
import (
"testing"
realis "github.com/paypal/gorealis/v2"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"github.com/stretchr/testify/assert"
)
func TestAuroraTask_Clone(t *testing.T) {
task0 := realis.NewTask().
Environment("development").
Role("ubuntu").
Name("this_is_a_test").
ExecutorName(aurora.AURORA_EXECUTOR_NAME).
ExecutorData("{fake:payload}").
CPU(10).
RAM(643).
Disk(1000).
IsService(true).
AddPorts(10).
Tier("preferred").
MaxFailure(23).
AddURIs(true, true, "testURI").
AddLabel("Test", "Value").
AddNamedPorts("test").
AddValueConstraint("test", false, "testing").
AddLimitConstraint("test_limit", 1).
AddDedicatedConstraint("ubuntu", "name").
Container(realis.NewDockerContainer().AddParameter("hello", "world").Image("testImg"))
task1 := task0.Clone()
assert.EqualValues(t, task0, task1, "Clone does not return the correct deep copy of AuroraTask")
task0.Container(realis.NewMesosContainer().
AppcImage("test", "testing").
AddVolume("test", "test", aurora.Mode_RW))
task2 := task0.Clone()
assert.EqualValues(t, task0, task2, "Clone does not return the correct deep copy of AuroraTask")
}


@ -1,195 +0,0 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import "encoding/json"
type ThermosExecutor struct {
Task ThermosTask `json:"task"`
order *ThermosConstraint `json:"-"`
}
type ThermosTask struct {
Processes map[string]*ThermosProcess `json:"processes"`
Constraints []*ThermosConstraint `json:"constraints"`
Resources thermosResources `json:"resources"`
}
type ThermosConstraint struct {
Order []string `json:"order,omitempty"`
}
// This struct should always be controlled by the Aurora job struct.
// Therefore it is private.
type thermosResources struct {
CPU *float64 `json:"cpu,omitempty"`
Disk *int64 `json:"disk,omitempty"`
RAM *int64 `json:"ram,omitempty"`
GPU *int64 `json:"gpu,omitempty"`
}
type ThermosProcess struct {
Name string `json:"name"`
Cmdline string `json:"cmdline"`
Daemon bool `json:"daemon"`
Ephemeral bool `json:"ephemeral"`
MaxFailures int `json:"max_failures"`
MinDuration int `json:"min_duration"`
Final bool `json:"final"`
}
func NewThermosProcess(name, command string) ThermosProcess {
return ThermosProcess{
Name: name,
Cmdline: command,
MaxFailures: 1,
Daemon: false,
Ephemeral: false,
MinDuration: 5,
Final: false}
}
// Processes must have unique names. Adding a process whose name already exists will
// result in overwriting the previous version of the process.
func (t *ThermosExecutor) AddProcess(process ThermosProcess) *ThermosExecutor {
if len(t.Task.Processes) == 0 {
t.Task.Processes = make(map[string]*ThermosProcess, 0)
}
t.Task.Processes[process.Name] = &process
// Add Process to order
t.addToOrder(process.Name)
return t
}
// The only constraint that should be added for now is the order of execution, therefore this
// receiver is private.
func (t *ThermosExecutor) addConstraint(constraint *ThermosConstraint) *ThermosExecutor {
if len(t.Task.Constraints) == 0 {
t.Task.Constraints = make([]*ThermosConstraint, 0)
}
t.Task.Constraints = append(t.Task.Constraints, constraint)
return t
}
// Order in which the Processes should be executed. Index 0 will be executed first, index N will be executed last.
func (t *ThermosExecutor) ProcessOrder(order ...string) *ThermosExecutor {
if t.order == nil {
t.order = &ThermosConstraint{}
t.addConstraint(t.order)
}
t.order.Order = order
return t
}
// addToOrder appends a process to the execution order. By default this yields FIFO ordering; a
// custom order can be set by overriding it with ProcessOrder.
func (t *ThermosExecutor) addToOrder(name string) {
if t.order == nil {
t.order = &ThermosConstraint{Order: make([]string, 0)}
t.addConstraint(t.order)
}
t.order.Order = append(t.order.Order, name)
}
// Ram is determined by the job object.
func (t *ThermosExecutor) ram(ram int64) {
// Convert from MiB to bytes
ram *= 1024 * 1024
t.Task.Resources.RAM = &ram
}
// Disk is determined by the job object.
func (t *ThermosExecutor) disk(disk int64) {
// Convert from MiB to bytes
disk *= 1024 * 1024
t.Task.Resources.Disk = &disk
}
// CPU is determined by the job object.
func (t *ThermosExecutor) cpu(cpu float64) {
t.Task.Resources.CPU = &cpu
}
// GPU is determined by the job object.
func (t *ThermosExecutor) gpu(gpu int64) {
t.Task.Resources.GPU = &gpu
}
// Deep copy of Thermos executor
func (t *ThermosExecutor) Clone() *ThermosExecutor {
tNew := ThermosExecutor{}
if t.order != nil {
tNew.order = &ThermosConstraint{Order: t.order.Order}
tNew.addConstraint(tNew.order)
}
tNew.Task.Processes = make(map[string]*ThermosProcess)
for name, process := range t.Task.Processes {
newProcess := *process
tNew.Task.Processes[name] = &newProcess
}
tNew.Task.Resources = t.Task.Resources
return &tNew
}
type thermosTaskJSON struct {
Processes []*ThermosProcess `json:"processes"`
Constraints []*ThermosConstraint `json:"constraints"`
Resources thermosResources `json:"resources"`
}
// Custom Marshaling for Thermos Task to match what Thermos expects
func (t *ThermosTask) MarshalJSON() ([]byte, error) {
// Convert map to array to match what Thermos expects
processes := make([]*ThermosProcess, 0)
for _, process := range t.Processes {
processes = append(processes, process)
}
return json.Marshal(&thermosTaskJSON{
Processes: processes,
Constraints: t.Constraints,
Resources: t.Resources,
})
}
// Custom Unmarshaling to match what Thermos would contain
func (t *ThermosTask) UnmarshalJSON(data []byte) error {
// Thermos format
aux := &thermosTaskJSON{}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
processes := make(map[string]*ThermosProcess)
for _, process := range aux.Processes {
processes[process.Name] = process
}
t.Processes = processes
t.Constraints = aux.Constraints
t.Resources = aux.Resources
return nil
}
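
To connect this executor to the AuroraTask builder shown earlier, here is a minimal sketch; the process names, commands, and job key values are placeholders, and the v2 import path used by the neighboring test file is assumed:

package main

import (
	"fmt"

	realis "github.com/paypal/gorealis/v2"
)

func main() {
	// Two processes run in FIFO order: AddProcess appends each process name to the
	// implicit order constraint, so "fetch" executes before "serve".
	executor := realis.ThermosExecutor{}
	executor.AddProcess(realis.NewThermosProcess("fetch", "curl -O https://example.com/app.tar.gz")).
		AddProcess(realis.NewThermosProcess("serve", "tar xzf app.tar.gz && ./app"))

	task := realis.NewTask().
		Environment("prod").
		Role("vagrant").
		Name("thermos_example").
		CPU(1).
		RAM(128).
		Disk(256).
		ThermosExecutor(executor)

	// BuildThermosPayload copies the task's resources into the executor, marshals it
	// to JSON, and stores the result as the task's ExecutorData.
	if err := task.BuildThermosPayload(); err != nil {
		fmt.Println("failed to marshal Thermos payload:", err)
	}
}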


@ -1,71 +0,0 @@
package realis
import (
"encoding/json"
"testing"
"github.com/apache/thrift/lib/go/thrift"
"github.com/stretchr/testify/assert"
)
func TestThermosTask(t *testing.T) {
// Test that we can successfully deserialize a minimum subset of an Aurora generated thermos payload
thermosJSON := []byte(
`{
"task": {
"processes": [
{
"daemon": false,
"name": "hello",
"ephemeral": false,
"max_failures": 1,
"min_duration": 5,
"cmdline": "\n while true; do\n echo hello world from gorealis\n sleep 10\n done\n ",
"final": false
}
],
"resources": {
"gpu": 0,
"disk": 134217728,
"ram": 134217728,
"cpu": 1.1
},
"constraints": [
{
"order": [
"hello"
]
}
]
}
}`)
thermos := ThermosExecutor{}
err := json.Unmarshal(thermosJSON, &thermos)
assert.NoError(t, err)
process := &ThermosProcess{
Daemon: false,
Name: "hello",
Ephemeral: false,
MaxFailures: 1,
MinDuration: 5,
Cmdline: "\n while true; do\n echo hello world from gorealis\n sleep 10\n done\n ",
Final: false,
}
constraint := &ThermosConstraint{Order: []string{process.Name}}
thermosExpected := ThermosExecutor{
Task: ThermosTask{
Processes: map[string]*ThermosProcess{process.Name: process},
Constraints: []*ThermosConstraint{constraint},
Resources: thermosResources{CPU: thrift.Float64Ptr(1.1),
Disk: thrift.Int64Ptr(134217728),
RAM: thrift.Int64Ptr(134217728),
GPU: thrift.Int64Ptr(0)}}}
assert.EqualValues(t, thermosExpected, thermos)
}

188
updatejob.go Normal file

@ -0,0 +1,188 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"github.com/paypal/gorealis/gen-go/apache/aurora"
)
// UpdateJob is a structure to collect all information required to create job update.
type UpdateJob struct {
Job // SetInstanceCount for job is hidden, access via full qualifier
req *aurora.JobUpdateRequest
}
// NewDefaultUpdateJob creates an UpdateJob object with opinionated default settings.
func NewDefaultUpdateJob(config *aurora.TaskConfig) *UpdateJob {
req := aurora.NewJobUpdateRequest()
req.TaskConfig = config
req.Settings = NewUpdateSettings()
job, ok := NewJob().(*AuroraJob)
if !ok {
// This should never happen but it is here as a safeguard
return nil
}
job.jobConfig.TaskConfig = config
// Rebuild resource map from TaskConfig
for _, ptr := range config.Resources {
if ptr.NumCpus != nil {
job.resources[CPU].NumCpus = ptr.NumCpus
continue // Guard against Union violations that Go won't enforce
}
if ptr.RamMb != nil {
job.resources[RAM].RamMb = ptr.RamMb
continue
}
if ptr.DiskMb != nil {
job.resources[DISK].DiskMb = ptr.DiskMb
continue
}
if ptr.NumGpus != nil {
job.resources[GPU] = &aurora.Resource{NumGpus: ptr.NumGpus}
continue
}
}
// Mirrors defaults set by Pystachio
req.Settings.UpdateGroupSize = 1
req.Settings.WaitForBatchCompletion = false
req.Settings.MinWaitInInstanceRunningMs = 45000
req.Settings.MaxPerInstanceFailures = 0
req.Settings.MaxFailedInstances = 0
req.Settings.RollbackOnFailure = true
//TODO(rdelvalle): Deep copy job struct to avoid unexpected behavior
return &UpdateJob{Job: job, req: req}
}
// NewUpdateJob creates an UpdateJob object without default settings.
func NewUpdateJob(config *aurora.TaskConfig, settings *aurora.JobUpdateSettings) *UpdateJob {
req := aurora.NewJobUpdateRequest()
req.TaskConfig = config
req.Settings = settings
job, ok := NewJob().(*AuroraJob)
if !ok {
// This should never happen but it is here as a safeguard
return nil
}
job.jobConfig.TaskConfig = config
// Rebuild resource map from TaskConfig
for _, ptr := range config.Resources {
if ptr.NumCpus != nil {
job.resources[CPU].NumCpus = ptr.NumCpus
continue // Guard against Union violations that Go won't enforce
}
if ptr.RamMb != nil {
job.resources[RAM].RamMb = ptr.RamMb
continue
}
if ptr.DiskMb != nil {
job.resources[DISK].DiskMb = ptr.DiskMb
continue
}
if ptr.NumGpus != nil {
job.resources[GPU] = &aurora.Resource{}
job.resources[GPU].NumGpus = ptr.NumGpus
continue // Guard against Union violations that Go won't enforce
}
}
//TODO(rdelvalle): Deep copy job struct to avoid unexpected behavior
return &UpdateJob{Job: job, req: req}
}
// InstanceCount sets the instance count the job will have after the update.
func (u *UpdateJob) InstanceCount(inst int32) *UpdateJob {
u.req.InstanceCount = inst
return u
}
// BatchSize sets the max number of instances being updated at any given moment.
func (u *UpdateJob) BatchSize(size int32) *UpdateJob {
u.req.Settings.UpdateGroupSize = size
return u
}
// WatchTime sets the minimum amount of time, in milliseconds, an instance must remain in the RUNNING state before it is considered a success.
func (u *UpdateJob) WatchTime(ms int32) *UpdateJob {
u.req.Settings.MinWaitInInstanceRunningMs = ms
return u
}
// WaitForBatchCompletion configures the job update to wait for all instances in a group to be done before moving on.
func (u *UpdateJob) WaitForBatchCompletion(batchWait bool) *UpdateJob {
u.req.Settings.WaitForBatchCompletion = batchWait
return u
}
// MaxPerInstanceFailures sets the max number of instance failures to tolerate before marking the instance as FAILED.
func (u *UpdateJob) MaxPerInstanceFailures(inst int32) *UpdateJob {
u.req.Settings.MaxPerInstanceFailures = inst
return u
}
// MaxFailedInstances sets the max number of FAILED instances to tolerate before terminating the update.
func (u *UpdateJob) MaxFailedInstances(inst int32) *UpdateJob {
u.req.Settings.MaxFailedInstances = inst
return u
}
// RollbackOnFail configures the job to roll back automatically after a job update fails.
func (u *UpdateJob) RollbackOnFail(rollback bool) *UpdateJob {
u.req.Settings.RollbackOnFailure = rollback
return u
}
// BatchUpdateStrategy sets the update strategy to a batch strategy with the given configuration.
func (u *UpdateJob) BatchUpdateStrategy(strategy aurora.BatchJobUpdateStrategy) *UpdateJob {
u.req.Settings.UpdateStrategy = &aurora.JobUpdateStrategy{BatchStrategy: &strategy}
return u
}
func (u *UpdateJob) QueueUpdateStrategy(strategy aurora.QueueJobUpdateStrategy) *UpdateJob {
u.req.Settings.UpdateStrategy = &aurora.JobUpdateStrategy{QueueStrategy: &strategy}
return u
}
func (u *UpdateJob) VariableBatchStrategy(strategy aurora.VariableBatchJobUpdateStrategy) *UpdateJob {
u.req.Settings.UpdateStrategy = &aurora.JobUpdateStrategy{VarBatchStrategy: &strategy}
return u
}
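// NewUpdateSettings returns an opinionated set of job update settings mirroring the defaults set by Pystachio.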
func NewUpdateSettings() *aurora.JobUpdateSettings {
us := new(aurora.JobUpdateSettings)
// Mirrors defaults set by Pystachio
us.UpdateGroupSize = 1
us.WaitForBatchCompletion = false
us.MinWaitInInstanceRunningMs = 45000
us.MaxPerInstanceFailures = 0
us.MaxFailedInstances = 0
us.RollbackOnFailure = true
return us
}
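
A brief sketch of how these update builders chain together; the empty TaskConfig and the chosen settings below are illustrative placeholders, and the non-v2 import path used by this file is assumed:

package main

import (
	"fmt"

	realis "github.com/paypal/gorealis"
	"github.com/paypal/gorealis/gen-go/apache/aurora"
)

func main() {
	// An empty TaskConfig is used purely as a placeholder; in practice it comes
	// from an existing job or from a Job builder's TaskConfig().
	config := aurora.NewTaskConfig()

	update := realis.NewDefaultUpdateJob(config).
		InstanceCount(6).      // desired instance count once the update finishes
		BatchSize(2).          // update two instances at a time
		WatchTime(30000).      // ms an instance must stay RUNNING to count as healthy
		MaxFailedInstances(1). // abort after one FAILED instance
		RollbackOnFail(true)   // roll back automatically on failure

	// The resulting request would normally be passed to the client's StartJobUpdate.
	fmt.Printf("update prepared: %T\n", update)
}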

108
util.go

@ -1,18 +1,47 @@
package realis
import (
"crypto/x509"
"io/ioutil"
"net/url"
"os"
"path/filepath"
"strings"
"github.com/paypal/gorealis/v2/gen-go/apache/aurora"
"github.com/paypal/gorealis/gen-go/apache/aurora"
"github.com/pkg/errors"
)
const apiPath = "/api"
// ActiveStates - States a task may be in when active.
var ActiveStates = make(map[aurora.ScheduleStatus]bool)
// SlaveAssignedStates - States a task may be in when it has already been assigned to a Mesos agent.
var SlaveAssignedStates = make(map[aurora.ScheduleStatus]bool)
// LiveStates - States a task may be in when it is live (e.g. able to take traffic)
var LiveStates = make(map[aurora.ScheduleStatus]bool)
// TerminalStates - Set of states a task may not transition away from.
var TerminalStates = make(map[aurora.ScheduleStatus]bool)
// ActiveJobUpdateStates - States a Job Update may be in where it is considered active.
var ActiveJobUpdateStates = make(map[aurora.JobUpdateStatus]bool)
// TerminalUpdateStates returns a slice containing all the terminal states an update may be in.
// This is a function in order to avoid having a slice that can be accidentally mutated.
func TerminalUpdateStates() []aurora.JobUpdateStatus {
return []aurora.JobUpdateStatus{
aurora.JobUpdateStatus_ROLLED_FORWARD,
aurora.JobUpdateStatus_ROLLED_BACK,
aurora.JobUpdateStatus_ABORTED,
aurora.JobUpdateStatus_ERROR,
aurora.JobUpdateStatus_FAILED,
}
}
// AwaitingPulseJobUpdateStates - States a job update may be in where it is waiting for a pulse.
var AwaitingPulseJobUpdateStates = make(map[aurora.JobUpdateStatus]bool)
func init() {
@ -40,14 +69,57 @@ func init() {
}
}
func validateAuroraAddress(address string) (string, error) {
// If no protocol defined, assume http
if !strings.Contains(address, "://") {
address = "http://" + address
// createCertPool will attempt to load certificates into a certificate pool from a given directory.
// Only files with an extension contained in the extension map are considered.
// This function ignores any files that cannot be read successfully or cannot be added to the certPool
// successfully.
func createCertPool(path string, extensions map[string]struct{}) (*x509.CertPool, error) {
_, err := os.Stat(path)
if err != nil {
return nil, errors.Wrap(err, "unable to load certificates")
}
u, err := url.Parse(address)
caFiles, err := ioutil.ReadDir(path)
if err != nil {
return nil, err
}
certPool := x509.NewCertPool()
loadedCerts := 0
for _, cert := range caFiles {
// Skip directories
if cert.IsDir() {
continue
}
// Skip any files that do not contain the right extension
if _, ok := extensions[filepath.Ext(cert.Name())]; !ok {
continue
}
pem, err := ioutil.ReadFile(filepath.Join(path, cert.Name()))
if err != nil {
continue
}
if certPool.AppendCertsFromPEM(pem) {
loadedCerts++
}
}
if loadedCerts == 0 {
return nil, errors.New("no certificates were able to be successfully loaded")
}
return certPool, nil
}
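
As a usage sketch, the pool produced above would typically feed a TLS client configuration. newTLSConfig below is a hypothetical package-internal helper, and the extension set is an assumption:

package realis

import "crypto/tls"

// newTLSConfig is a hypothetical helper illustrating how the pool returned by
// createCertPool could be consumed; only .crt and .pem files are considered.
func newTLSConfig(certDir string) (*tls.Config, error) {
	pool, err := createCertPool(certDir, map[string]struct{}{".crt": {}, ".pem": {}})
	if err != nil {
		return nil, err
	}
	return &tls.Config{RootCAs: pool}, nil
}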
func validateAuroraURL(location string) (string, error) {
// If no protocol defined, assume http
if !strings.Contains(location, "://") {
location = "http://" + location
}
u, err := url.Parse(location)
if err != nil {
return "", errors.Wrap(err, "error parsing url")
@ -67,9 +139,29 @@ func validateAuroraAddress(address string) (string, error) {
return "", errors.Errorf("only protocols http and https are supported %v\n", u.Scheme)
}
if u.Path != "/api" {
// This could theoretically be elsewhere but we'll be strict for the sake of simplicity
if u.Path != apiPath {
return "", errors.Errorf("expected /api path %v\n", u.Path)
}
return u.String(), nil
}
func calculateCurrentBatch(updatingInstances int32, batchSizes []int32) int {
for i, size := range batchSizes {
updatingInstances -= size
if updatingInstances <= 0 {
return i
}
}
// Overflow batches
batchCount := len(batchSizes) - 1
lastBatchIndex := len(batchSizes) - 1
batchCount += int(updatingInstances / batchSizes[lastBatchIndex])
if updatingInstances%batchSizes[lastBatchIndex] != 0 {
batchCount++
}
return batchCount
}
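As a worked example: with batchSizes = [1, 2] and 5 instances still updating, the loop subtracts 1 and then 2 (leaving 2 without an early return), the overflow logic then starts at index 1 and lets the last batch size absorb the remaining 2 instances in one more batch, so the function returns 2, matching the moreInstancesThanBatches case in the tests below.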

114
util_test.go Normal file

@ -0,0 +1,114 @@
/**
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package realis
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestAuroraURLValidator(t *testing.T) {
t.Run("badURL", func(t *testing.T) {
url, err := validateAuroraURL("http://badurl.com/badpath")
assert.Empty(t, url)
assert.Error(t, err)
})
t.Run("URLHttp", func(t *testing.T) {
url, err := validateAuroraURL("http://goodurl.com:8081/api")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLHttps", func(t *testing.T) {
url, err := validateAuroraURL("https://goodurl.com:8081/api")
assert.Equal(t, "https://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoPath", func(t *testing.T) {
url, err := validateAuroraURL("http://goodurl.com:8081")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("ipAddrNoPath", func(t *testing.T) {
url, err := validateAuroraURL("http://192.168.1.33:8081")
assert.Equal(t, "http://192.168.1.33:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoProtocol", func(t *testing.T) {
url, err := validateAuroraURL("goodurl.com:8081/api")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
t.Run("URLNoProtocolNoPathNoPort", func(t *testing.T) {
url, err := validateAuroraURL("goodurl.com")
assert.Equal(t, "http://goodurl.com:8081/api", url)
assert.NoError(t, err)
})
}
func TestCurrentBatchCalculator(t *testing.T) {
t.Run("singleBatchOverflow", func(t *testing.T) {
curBatch := calculateCurrentBatch(10, []int32{2})
assert.Equal(t, 4, curBatch)
})
t.Run("noInstancesUpdating", func(t *testing.T) {
curBatch := calculateCurrentBatch(0, []int32{2})
assert.Equal(t, 0, curBatch)
})
t.Run("evenMatchSingleBatch", func(t *testing.T) {
curBatch := calculateCurrentBatch(2, []int32{2})
assert.Equal(t, 0, curBatch)
})
t.Run("moreInstancesThanBatches", func(t *testing.T) {
curBatch := calculateCurrentBatch(5, []int32{1, 2})
assert.Equal(t, 2, curBatch)
})
t.Run("moreInstancesThanBatchesDecreasing", func(t *testing.T) {
curBatch := calculateCurrentBatch(5, []int32{2, 1})
assert.Equal(t, 3, curBatch)
})
t.Run("unevenFit", func(t *testing.T) {
curBatch := calculateCurrentBatch(2, []int32{1, 2})
assert.Equal(t, 1, curBatch)
})
t.Run("halfWay", func(t *testing.T) {
curBatch := calculateCurrentBatch(1, []int32{1, 2})
assert.Equal(t, 0, curBatch)
})
}
func TestCertPoolCreator(t *testing.T) {
extensions := map[string]struct{}{".crt": {}}
_, err := createCertPool("examples/certs", extensions)
assert.NoError(t, err)
t.Run("badDir", func(t *testing.T) {
_, err := createCertPool("idontexist", extensions)
assert.Error(t, err)
})
}


@ -1,28 +1,14 @@
language: go
go_import_path: github.com/davecgh/go-spew
go:
- 1.6.x
- 1.7.x
- 1.8.x
- 1.9.x
- 1.10.x
- tip
sudo: false
- 1.5.4
- 1.6.3
- 1.7
install:
- go get -v github.com/alecthomas/gometalinter
- gometalinter --install
- go get -v golang.org/x/tools/cmd/cover
script:
- export PATH=$PATH:$HOME/gopath/bin
- export GORACE="halt_on_error=1"
- test -z "$(gometalinter --disable-all
--enable=gofmt
--enable=golint
--enable=vet
--enable=gosimple
--enable=unconvert
--deadline=4m ./spew | tee /dev/stderr)"
- go test -v -race -tags safe ./spew
- go test -v -race -tags testcgo ./spew -covermode=atomic -coverprofile=profile.cov
- go test -v -tags=safe ./spew
- go test -v -tags=testcgo ./spew -covermode=count -coverprofile=profile.cov
after_success:
- go get -v github.com/mattn/goveralls
- export PATH=$PATH:$HOME/gopath/bin
- goveralls -coverprofile=profile.cov -service=travis-ci


@ -2,7 +2,7 @@ ISC License
Copyright (c) 2012-2016 Dave Collins <dave@davec.name>
Permission to use, copy, modify, and/or distribute this software for any
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.


@ -1,9 +1,12 @@
go-spew
=======
[![Build Status](https://img.shields.io/travis/davecgh/go-spew.svg)](https://travis-ci.org/davecgh/go-spew)
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![Coverage Status](https://img.shields.io/coveralls/davecgh/go-spew.svg)](https://coveralls.io/r/davecgh/go-spew?branch=master)
[![Build Status](https://img.shields.io/travis/davecgh/go-spew.svg)]
(https://travis-ci.org/davecgh/go-spew) [![ISC License]
(http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org) [![Coverage Status]
(https://img.shields.io/coveralls/davecgh/go-spew.svg)]
(https://coveralls.io/r/davecgh/go-spew?branch=master)
Go-spew implements a deep pretty printer for Go data structures to aid in
debugging. A comprehensive suite of tests with 100% test coverage is provided
@ -18,7 +21,8 @@ post about it
## Documentation
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/davecgh/go-spew/spew)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)]
(http://godoc.org/github.com/davecgh/go-spew/spew)
Full `go doc` style documentation for the project can be viewed online without
installing this package by using the excellent GoDoc site here:


@ -16,9 +16,7 @@
// when the code is not running on Google App Engine, compiled by GopherJS, and
// "-tags safe" is not added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// Go versions prior to 1.4 are disabled because they use a different layout
// for interfaces which make the implementation of unsafeReflectValue more complex.
// +build !js,!appengine,!safe,!disableunsafe,go1.4
// +build !js,!appengine,!safe,!disableunsafe
package spew
@ -36,49 +34,80 @@ const (
ptrSize = unsafe.Sizeof((*byte)(nil))
)
type flag uintptr
var (
// flagRO indicates whether the value field of a reflect.Value
// is read-only.
flagRO flag
// offsetPtr, offsetScalar, and offsetFlag are the offsets for the
// internal reflect.Value fields. These values are valid before golang
// commit ecccf07e7f9d which changed the format. The are also valid
// after commit 82f48826c6c7 which changed the format again to mirror
// the original format. Code in the init function updates these offsets
// as necessary.
offsetPtr = uintptr(ptrSize)
offsetScalar = uintptr(0)
offsetFlag = uintptr(ptrSize * 2)
// flagAddr indicates whether the address of the reflect.Value's
// value may be taken.
flagAddr flag
// flagKindWidth and flagKindShift indicate various bits that the
// reflect package uses internally to track kind information.
//
// flagRO indicates whether or not the value field of a reflect.Value is
// read-only.
//
// flagIndir indicates whether the value field of a reflect.Value is
// the actual data or a pointer to the data.
//
// These values are valid before golang commit 90a7c3c86944 which
// changed their positions. Code in the init function updates these
// flags as necessary.
flagKindWidth = uintptr(5)
flagKindShift = uintptr(flagKindWidth - 1)
flagRO = uintptr(1 << 0)
flagIndir = uintptr(1 << 1)
)
// flagKindMask holds the bits that make up the kind
// part of the flags field. In all the supported versions,
// it is in the lower 5 bits.
const flagKindMask = flag(0x1f)
// Different versions of Go have used different
// bit layouts for the flags type. This table
// records the known combinations.
var okFlags = []struct {
ro, addr flag
}{{
// From Go 1.4 to 1.5
ro: 1 << 5,
addr: 1 << 7,
}, {
// Up to Go tip.
ro: 1<<5 | 1<<6,
addr: 1 << 8,
}}
var flagValOffset = func() uintptr {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
func init() {
// Older versions of reflect.Value stored small integers directly in the
// ptr field (which is named val in the older versions). Versions
// between commits ecccf07e7f9d and 82f48826c6c7 added a new field named
// scalar for this purpose which unfortunately came before the flag
// field, so the offset of the flag field is different for those
// versions.
//
// This code constructs a new reflect.Value from a known small integer
// and checks if the size of the reflect.Value struct indicates it has
// the scalar field. When it does, the offsets are updated accordingly.
vv := reflect.ValueOf(0xf00)
if unsafe.Sizeof(vv) == (ptrSize * 4) {
offsetScalar = ptrSize * 2
offsetFlag = ptrSize * 3
}
return field.Offset
}()
// flagField returns a pointer to the flag field of a reflect.Value.
func flagField(v *reflect.Value) *flag {
return (*flag)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + flagValOffset))
// Commit 90a7c3c86944 changed the flag positions such that the low
// order bits are the kind. This code extracts the kind from the flags
// field and ensures it's the correct type. When it's not, the flag
// order has been changed to the newer format, so the flags are updated
// accordingly.
upf := unsafe.Pointer(uintptr(unsafe.Pointer(&vv)) + offsetFlag)
upfv := *(*uintptr)(upf)
flagKindMask := uintptr((1<<flagKindWidth - 1) << flagKindShift)
if (upfv&flagKindMask)>>flagKindShift != uintptr(reflect.Int) {
flagKindShift = 0
flagRO = 1 << 5
flagIndir = 1 << 6
// Commit adf9b30e5594 modified the flags to separate the
// flagRO flag into two bits which specifies whether or not the
// field is embedded. This causes flagIndir to move over a bit
// and means that flagRO is the combination of either of the
// original flagRO bit and the new bit.
//
// This code detects the change by extracting what used to be
// the indirect bit to ensure it's set. When it's not, the flag
// order has been changed to the newer format, so the flags are
// updated accordingly.
if upfv&flagIndir == 0 {
flagRO = 3 << 5
flagIndir = 1 << 7
}
}
}
// unsafeReflectValue converts the passed reflect.Value into a one that bypasses
@ -90,56 +119,34 @@ func flagField(v *reflect.Value) *flag {
// This allows us to check for implementations of the Stringer and error
// interfaces to be used for pretty printing ordinarily unaddressable and
// inaccessible values such as unexported struct fields.
func unsafeReflectValue(v reflect.Value) reflect.Value {
if !v.IsValid() || (v.CanInterface() && v.CanAddr()) {
return v
}
flagFieldPtr := flagField(&v)
*flagFieldPtr &^= flagRO
*flagFieldPtr |= flagAddr
return v
}
// Sanity checks against future reflect package changes
// to the type or semantics of the Value.flag field.
func init() {
field, ok := reflect.TypeOf(reflect.Value{}).FieldByName("flag")
if !ok {
panic("reflect.Value has no flag field")
}
if field.Type.Kind() != reflect.TypeOf(flag(0)).Kind() {
panic("reflect.Value flag field has changed kind")
}
type t0 int
var t struct {
A t0
// t0 will have flagEmbedRO set.
t0
// a will have flagStickyRO set
a t0
}
vA := reflect.ValueOf(t).FieldByName("A")
va := reflect.ValueOf(t).FieldByName("a")
vt0 := reflect.ValueOf(t).FieldByName("t0")
// Infer flagRO from the difference between the flags
// for the (otherwise identical) fields in t.
flagPublic := *flagField(&vA)
flagWithRO := *flagField(&va) | *flagField(&vt0)
flagRO = flagPublic ^ flagWithRO
// Infer flagAddr from the difference between a value
// taken from a pointer and not.
vPtrA := reflect.ValueOf(&t).Elem().FieldByName("A")
flagNoPtr := *flagField(&vA)
flagPtr := *flagField(&vPtrA)
flagAddr = flagNoPtr ^ flagPtr
// Check that the inferred flags tally with one of the known versions.
for _, f := range okFlags {
if flagRO == f.ro && flagAddr == f.addr {
return
func unsafeReflectValue(v reflect.Value) (rv reflect.Value) {
indirects := 1
vt := v.Type()
upv := unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetPtr)
rvf := *(*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(&v)) + offsetFlag))
if rvf&flagIndir != 0 {
vt = reflect.PtrTo(v.Type())
indirects++
} else if offsetScalar != 0 {
// The value is in the scalar field when it's not one of the
// reference types.
switch vt.Kind() {
case reflect.Uintptr:
case reflect.Chan:
case reflect.Func:
case reflect.Map:
case reflect.Ptr:
case reflect.UnsafePointer:
default:
upv = unsafe.Pointer(uintptr(unsafe.Pointer(&v)) +
offsetScalar)
}
}
panic("reflect.Value read-only flag has changed semantics")
pv := reflect.NewAt(vt, upv)
rv = pv
for i := 0; i < indirects; i++ {
rv = rv.Elem()
}
return rv
}


@ -16,7 +16,7 @@
// when the code is running on Google App Engine, compiled by GopherJS, or
// "-tags safe" is added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build js appengine safe disableunsafe !go1.4
// +build js appengine safe disableunsafe
package spew


@ -180,7 +180,7 @@ func printComplex(w io.Writer, c complex128, floatPrecision int) {
w.Write(closeParenBytes)
}
// printHexPtr outputs a uintptr formatted as hexadecimal with a leading '0x'
// printHexPtr outputs a uintptr formatted as hexidecimal with a leading '0x'
// prefix to Writer w.
func printHexPtr(w io.Writer, p uintptr) {
// Null pointer.


@ -35,16 +35,16 @@ var (
// cCharRE is a regular expression that matches a cgo char.
// It is used to detect character arrays to hexdump them.
cCharRE = regexp.MustCompile(`^.*\._Ctype_char$`)
cCharRE = regexp.MustCompile("^.*\\._Ctype_char$")
// cUnsignedCharRE is a regular expression that matches a cgo unsigned
// char. It is used to detect unsigned character arrays to hexdump
// them.
cUnsignedCharRE = regexp.MustCompile(`^.*\._Ctype_unsignedchar$`)
cUnsignedCharRE = regexp.MustCompile("^.*\\._Ctype_unsignedchar$")
// cUint8tCharRE is a regular expression that matches a cgo uint8_t.
// It is used to detect uint8_t arrays to hexdump them.
cUint8tCharRE = regexp.MustCompile(`^.*\._Ctype_uint8_t$`)
cUint8tCharRE = regexp.MustCompile("^.*\\._Ctype_uint8_t$")
)
// dumpState contains information about the state of a dump operation.
@ -143,10 +143,10 @@ func (d *dumpState) dumpPtr(v reflect.Value) {
// Display dereferenced value.
d.w.Write(openParenBytes)
switch {
case nilFound:
case nilFound == true:
d.w.Write(nilAngleBytes)
case cycleFound:
case cycleFound == true:
d.w.Write(circularBytes)
default:


@ -768,7 +768,7 @@ func addUintptrDumpTests() {
func addUnsafePointerDumpTests() {
// Null pointer.
v := unsafe.Pointer(nil)
v := unsafe.Pointer(uintptr(0))
nv := (*unsafe.Pointer)(nil)
pv := &v
vAddr := fmt.Sprintf("%p", pv)


@ -82,20 +82,18 @@ func addCgoDumpTests() {
v5Len := fmt.Sprintf("%d", v5l)
v5Cap := fmt.Sprintf("%d", v5c)
v5t := "[6]testdata._Ctype_uint8_t"
v5t2 := "[6]testdata._Ctype_uchar"
v5s := "(len=" + v5Len + " cap=" + v5Cap + ") " +
"{\n 00000000 74 65 73 74 35 00 " +
" |test5.|\n}"
addDumpTest(v5, "("+v5t+") "+v5s+"\n", "("+v5t2+") "+v5s+"\n")
addDumpTest(v5, "("+v5t+") "+v5s+"\n")
// C typedefed unsigned char array.
v6, v6l, v6c := testdata.GetCgoTypdefedUnsignedCharArray()
v6Len := fmt.Sprintf("%d", v6l)
v6Cap := fmt.Sprintf("%d", v6c)
v6t := "[6]testdata._Ctype_custom_uchar_t"
v6t2 := "[6]testdata._Ctype_uchar"
v6s := "(len=" + v6Len + " cap=" + v6Cap + ") " +
"{\n 00000000 74 65 73 74 36 00 " +
" |test6.|\n}"
addDumpTest(v6, "("+v6t+") "+v6s+"\n", "("+v6t2+") "+v6s+"\n")
addDumpTest(v6, "("+v6t+") "+v6s+"\n")
}


@ -182,10 +182,10 @@ func (f *formatState) formatPtr(v reflect.Value) {
// Display dereferenced value.
switch {
case nilFound:
case nilFound == true:
f.fs.Write(nilAngleBytes)
case cycleFound:
case cycleFound == true:
f.fs.Write(circularShortBytes)
default:


@ -1083,7 +1083,7 @@ func addUintptrFormatterTests() {
func addUnsafePointerFormatterTests() {
// Null pointer.
v := unsafe.Pointer(nil)
v := unsafe.Pointer(uintptr(0))
nv := (*unsafe.Pointer)(nil)
pv := &v
vAddr := fmt.Sprintf("%p", pv)
@ -1536,14 +1536,14 @@ func TestPrintSortedKeys(t *testing.T) {
t.Errorf("Sorted keys mismatch 3:\n %v %v", s, expected)
}
s = cfg.Sprint(map[testStruct]int{{1}: 1, {3}: 3, {2}: 2})
s = cfg.Sprint(map[testStruct]int{testStruct{1}: 1, testStruct{3}: 3, testStruct{2}: 2})
expected = "map[ts.1:1 ts.2:2 ts.3:3]"
if s != expected {
t.Errorf("Sorted keys mismatch 4:\n %v %v", s, expected)
}
if !spew.UnsafeDisabled {
s = cfg.Sprint(map[testStructP]int{{1}: 1, {3}: 3, {2}: 2})
s = cfg.Sprint(map[testStructP]int{testStructP{1}: 1, testStructP{3}: 3, testStructP{2}: 2})
expected = "map[ts.1:1 ts.2:2 ts.3:3]"
if s != expected {
t.Errorf("Sorted keys mismatch 5:\n %v %v", s, expected)


@ -36,7 +36,10 @@ type dummyFmtState struct {
}
func (dfs *dummyFmtState) Flag(f int) bool {
return f == int('+')
if f == int('+') {
return true
}
return false
}
func (dfs *dummyFmtState) Precision() (int, bool) {


@ -16,7 +16,7 @@
// when the code is not running on Google App Engine, compiled by GopherJS, and
// "-tags safe" is not added to the go build command line. The "disableunsafe"
// tag is deprecated and thus should not be used.
// +build !js,!appengine,!safe,!disableunsafe,go1.4
// +build !js,!appengine,!safe,!disableunsafe
/*
This test file is part of the spew package rather than than the spew_test
@ -30,6 +30,7 @@ import (
"bytes"
"reflect"
"testing"
"unsafe"
)
// changeKind uses unsafe to intentionally change the kind of a reflect.Value to
@ -37,13 +38,13 @@ import (
// fallback code which punts to the standard fmt library for new types that
// might get added to the language.
func changeKind(v *reflect.Value, readOnly bool) {
flags := flagField(v)
rvf := (*uintptr)(unsafe.Pointer(uintptr(unsafe.Pointer(v)) + offsetFlag))
*rvf = *rvf | ((1<<flagKindWidth - 1) << flagKindShift)
if readOnly {
*flags |= flagRO
*rvf |= flagRO
} else {
*flags &^= flagRO
*rvf &= ^uintptr(flagRO)
}
*flags |= flagKindMask
}
// TestAddedReflectValue tests functionaly of the dump and formatter code which


@ -2,14 +2,18 @@ language: go
sudo: false
go:
- "1.8"
- "1.9"
- "1.10"
- tip
script:
- ./.travis.gogenerate.sh
- ./.travis.gofmt.sh
- ./.travis.govet.sh
- go test -v -race $(go list ./... | grep -v vendor)
matrix:
include:
- go: "1.8.x"
- go: "1.9.x"
- go: "1.10.x"
- go: "1.11.x"
env: GO111MODULE=off
- go: "1.11.x"
env: GO111MODULE=on
- go: tip
script:
- ./.travis.gogenerate.sh
- ./.travis.gofmt.sh
- ./.travis.govet.sh
- go test -v -race $(go list ./... | grep -v vendor)


@ -1,22 +1,21 @@
Copyright (c) 2012 - 2013 Mat Ryer and Tyler Bunnell
MIT License
Please consider promoting this project if you find it useful.
Copyright (c) 2012-2018 Mat Ryer and Tyler Bunnell
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without restriction,
including without limitation the rights to use, copy, modify, merge,
publish, distribute, sublicense, and/or sell copies of the Software,
and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT
OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE
OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@ -287,8 +287,10 @@ To install Testify, use `go get`:
This will then make the following packages available to you:
github.com/stretchr/testify/assert
github.com/stretchr/testify/require
github.com/stretchr/testify/mock
github.com/stretchr/testify/http
github.com/stretchr/testify/suite
github.com/stretchr/testify/http (deprecated)
Import the `testify/assert` package into your code using this template:
@ -319,7 +321,7 @@ To update Testify to the latest version, use `go get -u github.com/stretchr/test
Supported go versions
==================
We support the three major Go versions, which are 1.8, 1.9 and 1.10 at the moment.
We support the three major Go versions, which are 1.9, 1.10, and 1.11 at the moment.
------
@ -329,3 +331,10 @@ Contributing
Please feel free to submit issues, fork the repository and send pull requests!
When submitting an issue, we ask that you please include a complete test function that demonstrates the issue. Extra credit for those using Testify to write the test code that demonstrates it.
------
License
=======
This project is licensed under the terms of the MIT license.


@ -39,7 +39,7 @@ type ValueAssertionFunc func(TestingT, interface{}, ...interface{}) bool
// for table driven tests.
type BoolAssertionFunc func(TestingT, bool, ...interface{}) bool
// ValuesAssertionFunc is a common function prototype when validating an error value. Can be useful
// ErrorAssertionFunc is a common function prototype when validating an error value. Can be useful
// for table driven tests.
type ErrorAssertionFunc func(TestingT, error, ...interface{}) bool
@ -179,7 +179,11 @@ func messageFromMsgAndArgs(msgAndArgs ...interface{}) string {
return ""
}
if len(msgAndArgs) == 1 {
return msgAndArgs[0].(string)
msg := msgAndArgs[0]
if msgAsStr, ok := msg.(string); ok {
return msgAsStr
}
return fmt.Sprintf("%+v", msg)
}
if len(msgAndArgs) > 1 {
return fmt.Sprintf(msgAndArgs[0].(string), msgAndArgs[1:]...)
@ -415,6 +419,17 @@ func NotNil(t TestingT, object interface{}, msgAndArgs ...interface{}) bool {
return Fail(t, "Expected value not to be nil.", msgAndArgs...)
}
// containsKind checks if a specified kind in the slice of kinds.
func containsKind(kinds []reflect.Kind, kind reflect.Kind) bool {
for i := 0; i < len(kinds); i++ {
if kind == kinds[i] {
return true
}
}
return false
}
// isNil checks if a specified object is nil or not, without Failing.
func isNil(object interface{}) bool {
if object == nil {
@ -423,7 +438,14 @@ func isNil(object interface{}) bool {
value := reflect.ValueOf(object)
kind := value.Kind()
if kind >= reflect.Chan && kind <= reflect.Slice && value.IsNil() {
isNilableKind := containsKind(
[]reflect.Kind{
reflect.Chan, reflect.Func,
reflect.Interface, reflect.Map,
reflect.Ptr, reflect.Slice},
kind)
if isNilableKind && value.IsNil() {
return true
}
@ -1327,7 +1349,7 @@ func typeAndKind(v interface{}) (reflect.Type, reflect.Kind) {
}
// diff returns a diff of both values as long as both are of the same type and
// are a struct, map, slice or array. Otherwise it returns an empty string.
// are a struct, map, slice, array or string. Otherwise it returns an empty string.
func diff(expected interface{}, actual interface{}) string {
if expected == nil || actual == nil {
return ""
@ -1345,7 +1367,7 @@ func diff(expected interface{}, actual interface{}) string {
}
var e, a string
if ek != reflect.String {
if et != reflect.TypeOf("") {
e = spewConfig.Sdump(expected)
a = spewConfig.Sdump(actual)
} else {


@ -175,6 +175,8 @@ func TestIsType(t *testing.T) {
}
type myType string
func TestEqual(t *testing.T) {
mockT := new(testing.T)
@ -200,6 +202,9 @@ func TestEqual(t *testing.T) {
if !Equal(mockT, uint64(123), uint64(123)) {
t.Error("Equal should return true")
}
if !Equal(mockT, myType("1"), myType("1")) {
t.Error("Equal should return true")
}
if !Equal(mockT, &struct{}{}, &struct{}{}) {
t.Error("Equal should return true (pointer equality is based on equality of underlying value)")
}
@ -207,6 +212,9 @@ func TestEqual(t *testing.T) {
if Equal(mockT, m["bar"], "something") {
t.Error("Equal should return false")
}
if Equal(mockT, myType("1"), myType("2")) {
t.Error("Equal should return false")
}
}
// bufferT implements TestingT. Its implementation of Errorf writes the output that would be produced by
@ -275,6 +283,8 @@ func TestEqualFormatting(t *testing.T) {
}{
{equalWant: "want", equalGot: "got", want: "\tassertions.go:\\d+: \n\t+Error Trace:\t\n\t+Error:\\s+Not equal:\\s+\n\\s+expected: \"want\"\n\\s+actual\\s+: \"got\"\n\\s+Diff:\n\\s+-+ Expected\n\\s+\\++ Actual\n\\s+@@ -1 \\+1 @@\n\\s+-want\n\\s+\\+got\n"},
{equalWant: "want", equalGot: "got", msgAndArgs: []interface{}{"hello, %v!", "world"}, want: "\tassertions.go:[0-9]+: \n\t+Error Trace:\t\n\t+Error:\\s+Not equal:\\s+\n\\s+expected: \"want\"\n\\s+actual\\s+: \"got\"\n\\s+Diff:\n\\s+-+ Expected\n\\s+\\++ Actual\n\\s+@@ -1 \\+1 @@\n\\s+-want\n\\s+\\+got\n\\s+Messages:\\s+hello, world!\n"},
{equalWant: "want", equalGot: "got", msgAndArgs: []interface{}{123}, want: "\tassertions.go:[0-9]+: \n\t+Error Trace:\t\n\t+Error:\\s+Not equal:\\s+\n\\s+expected: \"want\"\n\\s+actual\\s+: \"got\"\n\\s+Diff:\n\\s+-+ Expected\n\\s+\\++ Actual\n\\s+@@ -1 \\+1 @@\n\\s+-want\n\\s+\\+got\n\\s+Messages:\\s+123\n"},
{equalWant: "want", equalGot: "got", msgAndArgs: []interface{}{struct{ a string }{"hello"}}, want: "\tassertions.go:[0-9]+: \n\t+Error Trace:\t\n\t+Error:\\s+Not equal:\\s+\n\\s+expected: \"want\"\n\\s+actual\\s+: \"got\"\n\\s+Diff:\n\\s+-+ Expected\n\\s+\\++ Actual\n\\s+@@ -1 \\+1 @@\n\\s+-want\n\\s+\\+got\n\\s+Messages:\\s+{a:hello}\n"},
} {
mockT := &bufferT{}
Equal(mockT, currCase.equalWant, currCase.equalGot, currCase.msgAndArgs...)

7
vendor/github.com/stretchr/testify/go.mod generated vendored Normal file

@ -0,0 +1,7 @@
module github.com/stretchr/testify
require (
github.com/davecgh/go-spew v1.1.0
github.com/pmezard/go-difflib v1.0.0
github.com/stretchr/objx v0.1.0
)

6
vendor/github.com/stretchr/testify/go.sum generated vendored Normal file

@ -0,0 +1,6 @@
github.com/davecgh/go-spew v1.1.0 h1:ZDRjVQ15GmhC3fiQ8ni8+OwkZQO4DARzQgrnXU1Liz8=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/stretchr/objx v0.1.0 h1:4G4v2dO3VZwixGIRoQ5Lfboy6nUhCyYzaqnIAPPhYs4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=


@ -176,6 +176,7 @@ func (c *Call) Maybe() *Call {
// Mock.
// On("MyMethod", 1).Return(nil).
// On("MyOtherMethod", 'a', 'b', 'c').Return(errors.New("Some Error"))
//go:noinline
func (c *Call) On(methodName string, arguments ...interface{}) *Call {
return c.Parent.On(methodName, arguments...)
}
@ -691,7 +692,7 @@ func (args Arguments) Diff(objects []interface{}) (string, int) {
output = fmt.Sprintf("%s\t%d: PASS: %s matched by %s\n", output, i, actualFmt, matcher)
} else {
differences++
output = fmt.Sprintf("%s\t%d: PASS: %s not matched by %s\n", output, i, actualFmt, matcher)
output = fmt.Sprintf("%s\t%d: FAIL: %s not matched by %s\n", output, i, actualFmt, matcher)
}
} else if reflect.TypeOf(expected) == reflect.TypeOf((*AnythingOfTypeArgument)(nil)).Elem() {


@ -32,6 +32,7 @@ func (i *TestExampleImplementation) TheExampleMethod(a, b, c int) (int, error) {
return args.Int(0), errors.New("Whoops")
}
//go:noinline
func (i *TestExampleImplementation) TheExampleMethod2(yesorno bool) {
i.Called(yesorno)
}
@ -1492,6 +1493,7 @@ func unexpectedCallRegex(method, calledArg, expectedArg, diff string) string {
rMethod, calledArg, rMethod, expectedArg, diff)
}
//go:noinline
func ConcurrencyTestMethod(m *Mock) {
m.Called()
}


@ -22,7 +22,7 @@ type ValueAssertionFunc func(TestingT, interface{}, ...interface{})
// for table driven tests.
type BoolAssertionFunc func(TestingT, bool, ...interface{})
// ValuesAssertionFunc is a common function prototype when validating an error value. Can be useful
// ErrorAssertionFunc is a common function prototype when validating an error value. Can be useful
// for table driven tests.
type ErrorAssertionFunc func(TestingT, error, ...interface{})


@ -55,10 +55,32 @@ func (suite *Suite) Assert() *assert.Assertions {
return suite.Assertions
}
func failOnPanic(t *testing.T) {
r := recover()
if r != nil {
t.Errorf("test panicked: %v", r)
t.FailNow()
}
}
// Run provides suite functionality around golang subtests. It should be
// called in place of t.Run(name, func(t *testing.T)) in test suite code.
// The passed-in func will be executed as a subtest with a fresh instance of t.
// Provides compatibility with go test pkg -run TestSuite/TestName/SubTestName.
func (suite *Suite) Run(name string, subtest func()) bool {
oldT := suite.T()
defer suite.SetT(oldT)
return oldT.Run(name, func(t *testing.T) {
suite.SetT(t)
subtest()
})
}
// Run takes a testing suite and runs all of the tests attached
// to it.
func Run(t *testing.T, suite TestingSuite) {
suite.SetT(t)
defer failOnPanic(t)
if setupAllSuite, ok := suite.(SetupAllSuite); ok {
setupAllSuite.SetupSuite()
@ -84,6 +106,8 @@ func Run(t *testing.T, suite TestingSuite) {
F: func(t *testing.T) {
parentT := suite.T()
suite.SetT(t)
defer failOnPanic(t)
if setupTestSuite, ok := suite.(SetupTestSuite); ok {
setupTestSuite.SetupTest()
}


@ -42,6 +42,99 @@ func (s *SuiteRequireTwice) TestRequireTwo() {
r.Equal(1, 2)
}
type panickingSuite struct {
Suite
panicInSetupSuite bool
panicInSetupTest bool
panicInBeforeTest bool
panicInTest bool
panicInAfterTest bool
panicInTearDownTest bool
panicInTearDownSuite bool
}
func (s *panickingSuite) SetupSuite() {
if s.panicInSetupSuite {
panic("oops in setup suite")
}
}
func (s *panickingSuite) SetupTest() {
if s.panicInSetupTest {
panic("oops in setup test")
}
}
func (s *panickingSuite) BeforeTest(_, _ string) {
if s.panicInBeforeTest {
panic("oops in before test")
}
}
func (s *panickingSuite) Test() {
if s.panicInTest {
panic("oops in test")
}
}
func (s *panickingSuite) AfterTest(_, _ string) {
if s.panicInAfterTest {
panic("oops in after test")
}
}
func (s *panickingSuite) TearDownTest() {
if s.panicInTearDownTest {
panic("oops in tear down test")
}
}
func (s *panickingSuite) TearDownSuite() {
if s.panicInTearDownSuite {
panic("oops in tear down suite")
}
}
func TestSuiteRecoverPanic(t *testing.T) {
ok := true
panickingTests := []testing.InternalTest{
{
Name: "TestPanicInSetupSuite",
F: func(t *testing.T) { Run(t, &panickingSuite{panicInSetupSuite: true}) },
},
{
Name: "TestPanicInSetupTest",
F: func(t *testing.T) { Run(t, &panickingSuite{panicInSetupTest: true}) },
},
{
Name: "TestPanicInBeforeTest",
F: func(t *testing.T) { Run(t, &panickingSuite{panicInBeforeTest: true}) },
},
{
Name: "TestPanicInTest",
F: func(t *testing.T) { Run(t, &panickingSuite{panicInTest: true}) },
},
{
Name: "TestPanicInAfterTest",
F: func(t *testing.T) { Run(t, &panickingSuite{panicInAfterTest: true}) },
},
{
Name: "TestPanicInTearDownTest",
F: func(t *testing.T) { Run(t, &panickingSuite{panicInTearDownTest: true}) },
},
{
Name: "TestPanicInTearDownSuite",
F: func(t *testing.T) { Run(t, &panickingSuite{panicInTearDownSuite: true}) },
},
}
require.NotPanics(t, func() {
ok = testing.RunTests(allTestsFilter, panickingTests)
})
assert.False(t, ok)
}
// This suite is intended to store values to make sure that only
// testing-suite-related methods are run. It's also a fully
// functional example of a testing suite, using setup/teardown methods
@@ -59,6 +152,7 @@ type SuiteTester struct {
TearDownTestRunCount int
TestOneRunCount int
TestTwoRunCount int
TestSubtestRunCount int
NonTestMethodRunCount int
SuiteNameBefore []string
@@ -153,6 +247,27 @@ func (suite *SuiteTester) NonTestMethod() {
suite.NonTestMethodRunCount++
}
func (suite *SuiteTester) TestSubtest() {
suite.TestSubtestRunCount++
for _, t := range []struct {
testName string
}{
{"first"},
{"second"},
} {
suiteT := suite.T()
suite.Run(t.testName, func() {
// We should get a different *testing.T for subtests, so that
// go test recognizes them as proper subtests for output formatting
// and running individual subtests
subTestT := suite.T()
suite.NotEqual(subTestT, suiteT)
})
suite.Equal(suiteT, suite.T())
}
}
// TestRunSuite will be run by the 'go test' command, so within it, we
// can run our suite using the Run(*testing.T, TestingSuite) function.
func TestRunSuite(t *testing.T) {
@@ -168,18 +283,20 @@ func TestRunSuite(t *testing.T) {
assert.Equal(t, suiteTester.SetupSuiteRunCount, 1)
assert.Equal(t, suiteTester.TearDownSuiteRunCount, 1)
assert.Equal(t, len(suiteTester.SuiteNameAfter), 3)
assert.Equal(t, len(suiteTester.SuiteNameBefore), 3)
assert.Equal(t, len(suiteTester.TestNameAfter), 3)
assert.Equal(t, len(suiteTester.TestNameBefore), 3)
assert.Equal(t, len(suiteTester.SuiteNameAfter), 4)
assert.Equal(t, len(suiteTester.SuiteNameBefore), 4)
assert.Equal(t, len(suiteTester.TestNameAfter), 4)
assert.Equal(t, len(suiteTester.TestNameBefore), 4)
assert.Contains(t, suiteTester.TestNameAfter, "TestOne")
assert.Contains(t, suiteTester.TestNameAfter, "TestTwo")
assert.Contains(t, suiteTester.TestNameAfter, "TestSkip")
assert.Contains(t, suiteTester.TestNameAfter, "TestSubtest")
assert.Contains(t, suiteTester.TestNameBefore, "TestOne")
assert.Contains(t, suiteTester.TestNameBefore, "TestTwo")
assert.Contains(t, suiteTester.TestNameBefore, "TestSkip")
assert.Contains(t, suiteTester.TestNameBefore, "TestSubtest")
for _, suiteName := range suiteTester.SuiteNameAfter {
assert.Equal(t, "SuiteTester", suiteName)
@@ -197,15 +314,16 @@ func TestRunSuite(t *testing.T) {
assert.False(t, when.IsZero())
}
// There are three test methods (TestOne, TestTwo, and TestSkip), so
// There are four test methods (TestOne, TestTwo, TestSkip, and TestSubtest), so
// the SetupTest and TearDownTest methods (which should be run once for
// each test) should have been run three times.
assert.Equal(t, suiteTester.SetupTestRunCount, 3)
assert.Equal(t, suiteTester.TearDownTestRunCount, 3)
// each test) should have been run four times.
assert.Equal(t, suiteTester.SetupTestRunCount, 4)
assert.Equal(t, suiteTester.TearDownTestRunCount, 4)
// Each test should have been run once.
assert.Equal(t, suiteTester.TestOneRunCount, 1)
assert.Equal(t, suiteTester.TestTwoRunCount, 1)
assert.Equal(t, suiteTester.TestSubtestRunCount, 1)
// Methods that don't match the test method identifier shouldn't
// have been run at all.

zk.go

@@ -24,14 +24,14 @@ import (
"github.com/samuel/go-zookeeper/zk"
)
type Endpoint struct {
type endpoint struct {
Host string `json:"host"`
Port int `json:"port"`
}
type ServiceInstance struct {
Service Endpoint `json:"serviceEndpoint"`
AdditionalEndpoints map[string]Endpoint `json:"additionalEndpoints"`
type serviceInstance struct {
Service endpoint `json:"serviceEndpoint"`
AdditionalEndpoints map[string]endpoint `json:"additionalEndpoints"`
Status string `json:"status"`
}
@@ -40,47 +40,54 @@ type zkConfig struct {
path string
backoff Backoff
timeout time.Duration
logger Logger
logger logger
}
// ZKOpt - Configuration option for the Zookeeper client used.
type ZKOpt func(z *zkConfig)
// ZKEndpoints - Endpoints on which a Zookeeper instance is running to be used by the client.
func ZKEndpoints(endpoints ...string) ZKOpt {
return func(z *zkConfig) {
z.endpoints = endpoints
}
}
// ZKPath - Path to look for information in when connected to Zookeeper.
func ZKPath(path string) ZKOpt {
return func(z *zkConfig) {
z.path = path
}
}
// ZKBackoff - Configuration for Retry mechanism used when connecting to Zookeeper.
// TODO(rdelvalle): Determine if this is really necessary as the ZK library already has a retry built in.
func ZKBackoff(b Backoff) ZKOpt {
return func(z *zkConfig) {
z.backoff = b
}
}
// ZKTimeout - How long to wait on a response from the Zookeeper instance before considering it dead.
func ZKTimeout(d time.Duration) ZKOpt {
return func(z *zkConfig) {
z.timeout = d
}
}
func ZKLogger(l Logger) ZKOpt {
// ZKLogger - Attach a logger to the Zookeeper client in order to debug issues.
func ZKLogger(l logger) ZKOpt {
return func(z *zkConfig) {
z.logger = l
}
}
// Retrieves current Aurora leader from ZK.
// LeaderFromZK - Retrieves current Aurora leader from ZK.
func LeaderFromZK(cluster Cluster) (string, error) {
return LeaderFromZKOpts(ZKEndpoints(strings.Split(cluster.ZK, ",")...), ZKPath(cluster.SchedZKPath))
}
// Retrieves current Aurora leader from ZK with a custom configuration.
// LeaderFromZKOpts - Retrieves current Aurora leader from ZK with a custom configuration.
func LeaderFromZKOpts(options ...ZKOpt) (string, error) {
var leaderURL string
@@ -103,7 +110,7 @@ func LeaderFromZKOpts(options ...ZKOpt) (string, error) {
c, _, err := zk.Connect(config.endpoints, config.timeout, func(c *zk.Conn) { c.SetLogger(config.logger) })
if err != nil {
return false, NewTemporaryError(errors.Wrap(err, "Failed to connect to Zookeeper"))
return false, NewTemporaryError(errors.Wrap(err, "failed to connect to Zookeeper"))
}
defer c.Close()
@@ -117,12 +124,10 @@ func LeaderFromZKOpts(options ...ZKOpt) (string, error) {
return false, errors.Wrapf(err, "path %s is an invalid Zookeeper path", config.path)
}
return false,
NewTemporaryError(errors.Wrapf(err, "path %s doesn't exist on Zookeeper ", config.path))
return false, NewTemporaryError(errors.Wrapf(err, "path %s doesn't exist on Zookeeper ", config.path))
}
// Search for the leader through all the children in the given path
serviceInst := new(ServiceInstance)
for _, child := range children {
// Only the leader will start with member_
@@ -135,13 +140,13 @@ func LeaderFromZKOpts(options ...ZKOpt) (string, error) {
return false, errors.Wrapf(err, "path %s is an invalid Zookeeper path", childPath)
}
return false, NewTemporaryError(errors.Wrap(err, "error fetching contents of leader"))
return false, NewTemporaryError(errors.Wrap(err, "unable to fetch contents of leader"))
}
err = json.Unmarshal([]byte(data), serviceInst)
var serviceInst serviceInstance
err = json.Unmarshal([]byte(data), &serviceInst)
if err != nil {
return false,
NewTemporaryError(errors.Wrap(err, "unable to unmarshal contents of leader"))
return false, NewTemporaryError(errors.Wrap(err, "unable to unmarshal contents of leader"))
}
// Should only be one endpoint.
@@ -149,8 +154,7 @@ func LeaderFromZKOpts(options ...ZKOpt) (string, error) {
// writing bad info into Zookeeper but is kept here as a safety net.
if len(serviceInst.AdditionalEndpoints) > 1 {
return false,
NewTemporaryError(
errors.New("ambiguous endpoints in json blob, Aurora wrote bad info to ZK"))
NewTemporaryError(errors.New("ambiguous endpoints in json blob, Aurora wrote bad info to ZK"))
}
var scheme, host, port string
@@ -170,7 +174,7 @@ func LeaderFromZKOpts(options ...ZKOpt) (string, error) {
})
if retryErr != nil {
config.logger.Printf("Failed to determine leader after %v attempts", config.backoff.Steps)
config.logger.Printf("failed to determine leader after %v attempts", config.backoff.Steps)
return "", retryErr
}
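Tying the options above together, here is a rough sketch of resolving the Aurora leader with a custom configuration via LeaderFromZKOpts; the Zookeeper addresses and scheduler path are placeholders, while the option constructors (ZKEndpoints, ZKPath, ZKTimeout, ZKBackoff) are the ones defined in this file.

package main

import (
	"fmt"
	"log"
	"time"

	realis "github.com/paypal/gorealis"
)

func main() {
	// Placeholder Zookeeper members and scheduler path; substitute real values.
	leaderURL, err := realis.LeaderFromZKOpts(
		realis.ZKEndpoints("192.168.33.2:2181", "192.168.33.3:2181"),
		realis.ZKPath("/aurora/scheduler"),
		realis.ZKTimeout(5*time.Second),
		realis.ZKBackoff(realis.Backoff{Steps: 3, Duration: 1 * time.Second, Factor: 2.0, Jitter: 0.1}),
	)
	if err != nil {
		log.Fatalf("could not determine Aurora leader: %v", err)
	}
	fmt.Println("Aurora leader:", leaderURL)
}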


@@ -20,15 +20,16 @@ import (
"testing"
"time"
realis "github.com/paypal/gorealis/v2"
realis "github.com/paypal/gorealis"
"github.com/stretchr/testify/assert"
)
var backoff realis.Backoff = realis.Backoff{ // Reduce penalties for this test to make it quick
var backoff = realis.Backoff{ // Reduce penalties for this test to make it quick
Steps: 5,
Duration: 1 * time.Second,
Factor: 1.0,
Jitter: 0.1}
Jitter: 0.1,
}
// Test for behavior when no endpoints are given to the ZK leader finding function.
func TestZKNoEndpoints(t *testing.T) {